Thursday, 20 August 2015

Hadoop Installation on Ubuntu




Hadoop 2.6.0 Installation on Ubuntu 15.04 for a Single-Node Cluster

1. Installing Java

lavanya@lavanya:~$ sudo apt-get update
lavanya@lavanya:~$ sudo apt-get install default-jdk

To check whether Java is installed on your machine, enter the command below:

java -version
        
You will get output similar to:  java version "1.7.0_65"
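If you want to script this check, the version string can be parsed with a small helper. This is only a sketch: parse_java_version is a hypothetical function name, and the sample string mirrors the output shown above.

```shell
# Pull the major.minor Java version out of a `java -version` style string.
# parse_java_version is a hypothetical helper, not part of any toolchain.
parse_java_version() {
  printf '%s\n' "$1" | sed -n 's/.*"\([0-9][0-9]*\.[0-9][0-9]*\).*/\1/p'
}

# In a real check you would feed it live output:
#   parse_java_version "$(java -version 2>&1 | head -n 1)"
parse_java_version 'java version "1.7.0_65"'   # prints 1.7
```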
 
2. Add a dedicated Hadoop user
 
lavanya@lavanya:~$ sudo addgroup hadoop
lavanya@lavanya:~$ sudo adduser --ingroup hadoop hduser
 
You will see output like the following:
Adding user `hduser' ...
Adding new user `hduser' (1001) with group `hadoop' ...
Creating home directory `/home/hduser' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully
Changing the user information for hduser
Enter the new value, or press ENTER for the default
        Full Name []: 
        Room Number []: 
        Work Phone []: 
        Home Phone []: 
        Other []: 
Is the information correct? [Y/n] Y
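To confirm that hduser really landed in the hadoop group, you can parse the output of id -Gn. A minimal sketch (in_group is a hypothetical helper name):

```shell
# Check whether a group name appears in a space-separated group list,
# as produced by `id -Gn <user>`. in_group is a hypothetical helper.
in_group() {
  groups_list="$1"; wanted="$2"
  case " $groups_list " in
    *" $wanted "*) echo yes ;;
    *)             echo no  ;;
  esac
}

# Real usage would be: in_group "$(id -Gn hduser)" hadoop
in_group "hadoop sudo" hadoop   # prints yes
```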
 
3. SSH Installation
 
SSH has two components:
 
ssh : the client, used to connect to remote machines.
sshd : the server daemon, which accepts incoming SSH connections.
lavanya@lavanya:~$ sudo apt-get install ssh
 
The above command installs SSH on your system. If the which commands below
locate both binaries, the SSH setup is done properly.
 
lavanya@lavanya:~$ which ssh
/usr/bin/ssh
 
lavanya@lavanya:~$ which sshd
/usr/sbin/sshd
 
Create and Set Up SSH Keys

lavanya@lavanya:~$ su hduser
Password: 
lavanya@lavanya:~$ ssh-keygen -t rsa -P ""
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hduser/.ssh/id_rsa): 
Created directory '/home/hduser/.ssh'.
Your identification has been saved in /home/hduser/.ssh/id_rsa.
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.
The key fingerprint is:
50:6b:f3:fc:0f:32:bf:30:79:c2:41:71:26:cc:7d:e3 hduser@lavanya
The key's randomart image is:
+--[ RSA 2048]----+
|        .oo.o    |
|       . .o=. o  |
|      . + .  o . |
|       o =    K  |
|        L +      |
|         . +     |
|          O +    |
|           O o   |
|            o..  |
+-----------------+
 
 
hduser@lavanya:/home/lavanya$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
 
We can check if ssh works:
 
hduser@lavanya:/home/lavanya$ ssh localhost
The authenticity of host 'localhost (127.0.0.1)' can't be established.
ECDSA key fingerprint is e1:8b:a0:a5:75:gh:f4:b4:5e:a9:ks:be:64:be:5c:2f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 15.04 LTS (GNU/Linux 3.13.0-40-generic x86_64)
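Before relying on passwordless login, it can help to confirm that the public key actually made it into authorized_keys. A sketch (key_authorized is a hypothetical helper; the demo uses temporary files standing in for ~/.ssh/id_rsa.pub and ~/.ssh/authorized_keys):

```shell
# Confirm a public key line is present in an authorized_keys file.
# key_authorized is a hypothetical helper.
key_authorized() {
  pubkey_file="$1"; auth_file="$2"
  if grep -qxF "$(cat "$pubkey_file")" "$auth_file"; then
    echo authorized
  else
    echo missing
  fi
}

# Demo with temp files instead of the real ~/.ssh paths:
pub=$(mktemp); auth=$(mktemp)
echo "ssh-rsa AAAAB3...demo hduser@lavanya" > "$pub"
cat "$pub" >> "$auth"
key_authorized "$pub" "$auth"   # prints authorized
rm -f "$pub" "$auth"
```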
 
4.Hadoop Installation



Download Hadoop
 

hduser@lavanya:~$ wget http://mirrors.sonic.net/apache/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
  
hduser@lavanya:~$ tar xvzf hadoop-2.6.0.tar.gz
 
Next, we need to move the extracted Hadoop 2.6.0 files to the
/usr/local/hadoop directory using the following command:
 
hduser@lavanya:~/hadoop-2.6.0$ sudo mv * /usr/local/hadoop
[sudo] password for hduser: 
hduser is not in the sudoers file.  This incident will be reported.
 
You get this error because hduser is not a sudo user. You can resolve it by
switching to a user with sudo rights and adding hduser to the sudo group:
 
hduser@lavanya:~/hadoop-2.6.0$ su lavanya
Password: 
 
lavanya@lavanya:/home/hduser$ sudo adduser hduser sudo
[sudo] password for lavanya: 
Adding user `hduser' to group `sudo' ...
Adding user hduser to group sudo
Done.
 
 
Now that hduser has sudo privileges, you can move Hadoop 2.6.0 to the
/usr/local/hadoop directory without any problem:
 
lavanya@lavanya:/home/hduser$ sudo su hduser
 
hduser@lavanya:~/hadoop-2.6.0$ sudo mv * /usr/local/hadoop 
hduser@lavanya:~/hadoop-2.6.0$ sudo chown -R hduser:hadoop /usr/local/hadoop
 
Hadoop Setup Configuration Files

The following files have to be modified to complete the Hadoop setup:
  1. ~/.bashrc
  2. /usr/local/hadoop/etc/hadoop/hadoop-env.sh
  3. /usr/local/hadoop/etc/hadoop/core-site.xml
  4. /usr/local/hadoop/etc/hadoop/mapred-site.xml.template
  5. /usr/local/hadoop/etc/hadoop/hdfs-site.xml
I  ~/.bashrc:
Before editing the .bashrc file, you need to find the java path where Java 
has been installed to set the JAVA_HOME environment variable using the following
command
 
hduser@lavanya:~$ update-alternatives --config java
       
There is only one alternative in link group java (providing /usr/bin/java): /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java
Nothing to configure.
 
Now you can append the following to the end of ~/.bashrc:
 
hduser@lavanya:~$ vi ~/.bashrc
 
#HADOOP VARIABLES START
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
#HADOOP VARIABLES END
 
hduser@lavanya:~$ source ~/.bashrc
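If you rerun the setup, appending the same block to ~/.bashrc twice is easy to do by accident. A sketch of an idempotent append (append_hadoop_vars is a hypothetical helper, the variable block inside it is abbreviated, and the demo targets a temp file instead of the real ~/.bashrc):

```shell
# Append the Hadoop variable block to an rc file only if it is not
# already there. append_hadoop_vars is a hypothetical helper; pass it
# "$HOME/.bashrc" to use it for real.
append_hadoop_vars() {
  rcfile="$1"
  grep -q '#HADOOP VARIABLES START' "$rcfile" 2>/dev/null && return 0
  cat >> "$rcfile" <<'EOF'
#HADOOP VARIABLES START
# (abbreviated; include the full variable block from above)
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin
#HADOOP VARIABLES END
EOF
}

# Demo against a temp file rather than the real ~/.bashrc:
rc=$(mktemp)
append_hadoop_vars "$rc"
append_hadoop_vars "$rc"   # second call is a no-op
grep -c '#HADOOP VARIABLES START' "$rc"   # prints 1
rm -f "$rc"
```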
  
II  Next, configure the /usr/local/hadoop/etc/hadoop/hadoop-env.sh file (a shell script, not XML):
 
hduser@lavanya:~$ vi /usr/local/hadoop/etc/hadoop/hadoop-env.sh
        
Set JAVA_HOME by adding (or updating) the following line:
 
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
 
III  /usr/local/hadoop/etc/hadoop/core-site.xml:
 
 
hduser@lavanya:~$ sudo mkdir -p /app/hadoop/tmp
hduser@lavanya:~$ sudo chown hduser:hadoop /app/hadoop/tmp
 
hduser@lavanya:~$ vi /usr/local/hadoop/etc/hadoop/core-site.xml
 
We need to enter the following content in between the
<configuration></configuration> tags:
 
 
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>

  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
    <description>The name of the default file system.  A URI whose
    scheme and authority determine the FileSystem implementation.  The
    uri's scheme determines the config property (fs.SCHEME.impl) naming
    the FileSystem implementation class.  The uri's authority is used to
    determine the host, port, etc. for a filesystem.</description>
  </property>
</configuration>
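These <property> blocks all share the same shape, so they are easy to generate from a script when you set up several machines. A minimal sketch (hprop is a hypothetical helper name):

```shell
# Emit one Hadoop <property> element; handy when scripting the site XMLs.
# hprop is a hypothetical helper, not a Hadoop tool.
hprop() {
  printf '<property>\n  <name>%s</name>\n  <value>%s</value>\n</property>\n' "$1" "$2"
}

hprop fs.default.name hdfs://localhost:54310
```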
 
IV  /usr/local/hadoop/etc/hadoop/mapred-site.xml
 
hduser@lavanya:~$ cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml
 
hduser@lavanya:~$ vi /usr/local/hadoop/etc/hadoop/mapred-site.xml
 
We need to enter the following content in between the
<configuration></configuration> tags:
 
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
    <description>The host and port that the MapReduce job tracker runs
    at.  If "local", then jobs are run in-process as a single map
    and reduce task.
    </description>
  </property>
</configuration>
  
V  /usr/local/hadoop/etc/hadoop/hdfs-site.xml
 
hduser@lavanya:~$ sudo mkdir -p /usr/local/hadoop_store/hdfs/namenode
hduser@lavanya:~$ sudo mkdir -p /usr/local/hadoop_store/hdfs/datanode
hduser@lavanya:~$ sudo chown -R hduser:hadoop /usr/local/hadoop_store
 
hduser@lavanya:~$ vi /usr/local/hadoop/etc/hadoop/hdfs-site.xml

 
Note: we need to enter the following content in between the <configuration></configuration> tags:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>Default block replication.
    The actual number of replications can be specified when the file is
    created. The default is used if replication is not specified in create time.
    </description>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop_store/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop_store/hdfs/datanode</value>
  </property>
</configuration>
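The storage directories created with the mkdir commands above can also be scripted. A sketch with a configurable base directory (make_hdfs_dirs is a hypothetical helper; for the real run the base is /usr/local/hadoop_store and you would still chown -R hduser:hadoop afterwards):

```shell
# Create the namenode/datanode storage layout under a configurable base.
# make_hdfs_dirs is a hypothetical helper.
make_hdfs_dirs() {
  base="$1"
  mkdir -p "$base/hdfs/namenode" "$base/hdfs/datanode"
}

# Demo under a temp dir rather than /usr/local:
store=$(mktemp -d)
make_hdfs_dirs "$store"
ls "$store/hdfs"
rm -rf "$store"
```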

Format the New Hadoop Filesystem

hduser@lavanya:~$ hadoop namenode -format
 
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
 
15/04/18 14:43:03 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = laptop/192.168.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.0
STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop
...
STARTUP_MSG:   java = 1.7.0_65
************************************************************/
15/04/18 14:43:03 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
15/04/18 14:43:03 INFO namenode.NameNode: createNameNode [-format]
15/04/18 14:43:07 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-e2f515ac-33da-45bc-8466-5b1100a2bf7f
15/04/18 14:43:09 INFO namenode.FSNamesystem: No KeyProvider found.
15/04/18 14:43:09 INFO namenode.FSNamesystem: fsLock is fair:true
15/04/18 14:43:10 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
15/04/18 14:43:10 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
15/04/18 14:43:10 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
15/04/18 14:43:10 INFO blockmanagement.BlockManager: The block deletion will start around 2015 Apr 18 14:43:10
15/04/18 14:43:10 INFO util.GSet: Computing capacity for map BlocksMap
15/04/18 14:43:10 INFO util.GSet: VM type       = 64-bit
15/04/18 14:43:10 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
15/04/18 14:43:10 INFO util.GSet: capacity      = 2^21 = 2097152 entries
15/04/18 14:43:10 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
15/04/18 14:43:10 INFO blockmanagement.BlockManager: defaultReplication         = 1
15/04/18 14:43:10 INFO blockmanagement.BlockManager: maxReplication             = 512
15/04/18 14:43:10 INFO blockmanagement.BlockManager: minReplication             = 1
15/04/18 14:43:10 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
15/04/18 14:43:10 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
15/04/18 14:43:10 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
15/04/18 14:43:10 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
15/04/18 14:43:10 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
15/04/18 14:43:10 INFO namenode.FSNamesystem: fsOwner             = hduser (auth:SIMPLE)
15/04/18 14:43:10 INFO namenode.FSNamesystem: supergroup          = supergroup
15/04/18 14:43:10 INFO namenode.FSNamesystem: isPermissionEnabled = true
15/04/18 14:43:10 INFO namenode.FSNamesystem: HA Enabled: false
15/04/18 14:43:10 INFO namenode.FSNamesystem: Append Enabled: true
15/04/18 14:43:11 INFO util.GSet: Computing capacity for map INodeMap
15/04/18 14:43:11 INFO util.GSet: VM type       = 64-bit
15/04/18 14:43:11 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
15/04/18 14:43:11 INFO util.GSet: capacity      = 2^20 = 1048576 entries
15/04/18 14:43:11 INFO namenode.NameNode: Caching file names occuring more than 10 times
15/04/18 14:43:11 INFO util.GSet: Computing capacity for map cachedBlocks
15/04/18 14:43:11 INFO util.GSet: VM type       = 64-bit
15/04/18 14:43:11 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
15/04/18 14:43:11 INFO util.GSet: capacity      = 2^18 = 262144 entries
15/04/18 14:43:11 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
15/04/18 14:43:11 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
15/04/18 14:43:11 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
15/04/18 14:43:11 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
15/04/18 14:43:11 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
15/04/18 14:43:11 INFO util.GSet: Computing capacity for map NameNodeRetryCache
15/04/18 14:43:11 INFO util.GSet: VM type       = 64-bit
15/04/18 14:43:11 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
15/04/18 14:43:11 INFO util.GSet: capacity      = 2^15 = 32768 entries
15/04/18 14:43:11 INFO namenode.NNConf: ACLs enabled? false
15/04/18 14:43:11 INFO namenode.NNConf: XAttrs enabled? true
15/04/18 14:43:11 INFO namenode.NNConf: Maximum size of an xattr: 16384
15/04/18 14:43:12 INFO namenode.FSImage: Allocated new BlockPoolId: BP-130729900-192.168.1.1-1429393391595
15/04/18 14:43:12 INFO common.Storage: Storage directory /usr/local/hadoop_store/hdfs/namenode has been successfully formatted.
15/04/18 14:43:12 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
15/04/18 14:43:12 INFO util.ExitUtil: Exiting with status 0
15/04/18 14:43:12 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at lavanya/192.168.1.1
************************************************************/
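The key line in all of that output is "has been successfully formatted". If you capture the format output in a script, you can test for it; a sketch (format_ok is a hypothetical helper):

```shell
# Scan captured `namenode -format` output for the success line.
# format_ok is a hypothetical helper.
format_ok() {
  printf '%s\n' "$1" | grep -q 'has been successfully formatted' \
    && echo ok || echo failed
}

# Real usage: format_ok "$(hadoop namenode -format 2>&1)"
format_ok "INFO common.Storage: Storage directory /usr/local/hadoop_store/hdfs/namenode has been successfully formatted."   # prints ok
```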


Starting Hadoop
lavanya@lavanya:~$ cd /usr/local/hadoop/sbin
 
lavanya@lavanya:/usr/local/hadoop/sbin$ ls
distribute-exclude.sh    start-all.cmd        stop-balancer.sh
hadoop-daemon.sh         start-all.sh         stop-dfs.cmd
hadoop-daemons.sh        start-balancer.sh    stop-dfs.sh
hdfs-config.cmd          start-dfs.cmd        stop-secure-dns.sh
hdfs-config.sh           start-dfs.sh         stop-yarn.cmd
httpfs.sh                start-secure-dns.sh  stop-yarn.sh
kms.sh                   start-yarn.cmd       yarn-daemon.sh
mr-jobhistory-daemon.sh  start-yarn.sh        yarn-daemons.sh
refresh-namenodes.sh     stop-all.cmd
slaves.sh                stop-all.sh
 
lavanya@lavanya:/usr/local/hadoop/sbin$ sudo su hduser
 
hduser@lavanya:/usr/local/hadoop/sbin$ start-all.sh
 
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
15/04/18 16:43:13 WARN util.NativeCodeLoader: Unable to 
load native-hadoop library for your platform... 
using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to 
/usr/local/hadoop/logs/hadoop-hduser-namenode-laptop.out
localhost: starting datanode, logging to 
/usr/local/hadoop/logs/hadoop-hduser-datanode-laptop.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to 
/usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-laptop.out
15/04/18 16:43:58 WARN util.NativeCodeLoader: 
Unable to load native-hadoop library for your platform... 
using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to 
/usr/local/hadoop/logs/yarn-hduser-resourcemanager-laptop.out
localhost: starting nodemanager,
logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-laptop.out

We can check whether the daemons are running with jps:

hduser@lavanya:/usr/local/hadoop/sbin$ jps
9026 NodeManager
7348 NameNode
9766 Jps
8887 ResourceManager
7507 DataNode
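If some daemons fail to come up, comparing the jps output against the expected set makes the gap obvious. A sketch (missing_daemons is a hypothetical helper; note this single-node list omits SecondaryNameNode, which start-dfs.sh also launches):

```shell
# Given `jps` output, list any expected Hadoop daemons that are absent.
# missing_daemons is a hypothetical helper.
missing_daemons() {
  jps_out="$1"
  for d in NameNode DataNode ResourceManager NodeManager; do
    printf '%s\n' "$jps_out" | grep -qw "$d" || echo "$d"
  done
}

# Real usage: missing_daemons "$(jps)"
# Prints nothing when all four daemons are present:
missing_daemons "7348 NameNode
7507 DataNode
8887 ResourceManager
9026 NodeManager"
```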
