Introduction
Apache Hadoop is an open-source software framework for data-intensive distributed applications, licensed under the Apache v2 license. It supports running applications on large clusters of commodity hardware. Hadoop was derived from Google's MapReduce and Google File System (GFS) papers.
Hadoop development needs a Hadoop cluster. A trial Hadoop cluster can be set up in minutes; in my previous blog post, I described setting up a Hadoop cluster using tarballs.
For production environments, however, tarball installation is not a good approach, because it becomes complex once other Hadoop ecosystem components have to be installed. If we have a high-speed internet connection, we can install Hadoop directly from the internet using yum install in minutes. But most production environments are isolated, so an internet connection may not be available. There we can perform the yum install by creating a local yum repository.
Creating a local yum repository is explained in my previous blog post "Creating A Local YUM Repository". Yum install works with Red Hat or CentOS Linux distributions. Here I explain the installation of the Cloudera Distribution of Hadoop.
Prerequisites
OPERATING SYSTEM
Red Hat or CentOS (32-bit or 64-bit)
PORTS
The ports necessary for Hadoop should be opened, so we need to set appropriate firewall rules. The ports used by Hadoop are listed in the last part of this post.
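As a rough sketch, a rule like the following opens a single port (50070, the namenode web UI, is used here only as an example; repeat for each port your cluster needs and then save the rules):
iptables -A INPUT -p tcp --dport 50070 -j ACCEPT
service iptables save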
If you are not interested in configuring firewall rules, you can simply switch off the firewall. The command for turning off the firewall is
service iptables stop
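This stops the firewall only for the current session. If you also want it to stay off after a reboot (assuming a SysV-init based Red Hat/CentOS system), disable the service as well:
chkconfig iptables off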
JAVA
Sun Java is required. Download Java from the Oracle website (32-bit or 64-bit, depending on the OS) and install it.
Simply installing Java and setting JAVA_HOME may not make the newly installed Java the default; the system may still point to OpenJDK if it is present. To point to the new Java, do the following steps.
alternatives --config java
This will show a list of the Java installations on the machine and which one it is currently pointing to. It will ask you to choose a Java from the list; exit by pressing Ctrl+C. To add our Sun Java to this list, do the following step.
/usr/sbin/alternatives --install /usr/bin/java java <JAVA_HOME>/bin/java 2
This will add our newly installed java to the list.
Then do
alternatives --config java
and choose the newly installed Java. Now java -version will show Sun Java.
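To make JAVA_HOME and the new Java binary available in every shell, a minimal sketch is to drop a profile script (the JDK path /usr/java/jdk1.6.0_45 below is only an assumed example; use the path of your own installation):
cat > /etc/profile.d/java.sh <<'EOF'
export JAVA_HOME=/usr/java/jdk1.6.0_45
export PATH=$JAVA_HOME/bin:$PATH
EOF
source /etc/profile.d/java.sh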
SETTING UP THE LOCAL HADOOP REPOSITORY
Download the Cloudera rpm repository from a place where you have internet access.
Download the repository corresponding to your OS version.
The repo files corresponding to the different operating systems are listed below. Copy the repo file and download the repository.
For OS Version | Repo File
---|---
Red Hat/CentOS/Oracle 5 | Red Hat/CentOS/Oracle 5
Red Hat/CentOS 6 (32-bit) | Red Hat/CentOS/Oracle 6
Red Hat/CentOS 6 (64-bit) | Red Hat/CentOS/Oracle 6
You can download the packages rpm by rpm or do a reposync. Reposync is explained in my previous post "Creating A Local YUM Repository".
Once this is done, create a local repository on one of the machines. Then create a repo file pointing to the newly created repository and add that repo file to all the cluster machines; a sample repo file is sketched below. After this we can do a yum install just like on a machine with internet access. Now do a yum clean all on all the cluster machines.
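A minimal sketch of such a repo file, placed in /etc/yum.repos.d/ on every node (the server name cloudera-repo-server and the path /cdh are assumptions; point baseurl at wherever you host the local repository):
[cloudera-local]
name=Cloudera CDH local repository
baseurl=http://cloudera-repo-server/cdh/
enabled=1
gpgcheck=0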
Do the following steps on the corresponding nodes. The following steps explain the installation of MRv1 only.
Installation
NAMENODE MACHINE
yum install hadoop-hdfs-namenode
SECONDARY NAMENODE
yum install hadoop-hdfs-secondarynamenode
DATANODE
yum install hadoop-hdfs-datanode
JOBTRACKER
yum install hadoop-0.20-mapreduce-jobtracker
TASKTRACKER
yum install hadoop-0.20-mapreduce-tasktracker
IN ALL CLIENT MACHINES
yum install hadoop-client
Normally we run the datanode and tasktracker on the same node, i.e., they are co-located for data locality.
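To verify which Hadoop packages ended up on a node, you can simply list the installed rpms (a generic check, nothing CDH-specific):
rpm -qa | grep hadoop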
Now edit core-site.xml, mapred-site.xml and hdfs-site.xml on all the machines.
Configurations
Sample configurations are given below.
But for a production setup, you may need to set other properties as well.
core-site.xml
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
    <description>A Base dir for storing other temp directories</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://<namenode-hostname>:9000</value>
    <description>The name of default file system</description>
  </property>
</configuration>
mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value><jobtracker-hostname>:9001</value>
    <description>Job Tracker port</description>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/app/hadoop/mapred_local</value>
    <description>local dir for mapreduce jobs</description>
  </property>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>6</value>
    <description>The maximum number of map tasks that will be run simultaneously by a task tracker.</description>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>2</value>
    <description>The maximum number of reduce tasks that will be run simultaneously by a task tracker.</description>
  </property>
</configuration>
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
    <description>replication factor</description>
  </property>
</configuration>
Then create the hadoop.tmp.dir and mapred.local.dir directories on all the machines; Hadoop stores its files in these folders. Here we are using the locations /app/hadoop/tmp and /app/hadoop/mapred_local.
mkdir -p /app/hadoop/tmp
mkdir /app/hadoop/mapred_local
This type of installation automatically creates two users:
1) hdfs
2) mapred
The directories should be owned by the right users, so we need to change the ownership and permissions:
chown -R hdfs:hadoop /app/hadoop/tmp
chmod -R 777 /app/hadoop/tmp
chown mapred:hadoop /app/hadoop/mapred_local
The properties:
mapred.tasktracker.map.tasks.maximum: sets the number of map slots on each node.
mapred.tasktracker.reduce.tasks.maximum: sets the number of reduce slots on each node.
These numbers are set by a calculation based on the available RAM and the JVM size of each task slot. The default size of a task slot is 200 MB. So if you have 4 GB of RAM free after accounting for the OS and other processes, that node can have 4*1024 MB / 200 MB task slots:
4*1024/200 = 20 (approximately)
So we can have about 20 task slots, which we can divide into map slots and reduce slots.
Usually we give a higher number of map slots than reduce slots, as in the example split below.
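For instance, one possible split of those 20 slots in mapred-site.xml (the 14/6 split is only an assumed example; tune it to your workload):
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>14</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>6</value>
</property>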
In hdfs-site.xml we set the replication factor; the default value is 3.
Formatting Namenode
Now go to the namenode machine and log in as the root user. Then, from the CLI, switch to the hdfs user.
su - hdfs
Then format the namenode.
hadoop namenode -format
Starting Services
STARTING NAMENODE
On the namenode machine, execute the following command as the root user.
/etc/init.d/hadoop-hdfs-namenode start
You can check whether the service is running or not by using the command jps.
jps will work only if Sun Java is installed and added to the PATH.
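For example, if the daemon came up, running jps on the namenode machine should list a process named NameNode:
jps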
STARTING SECONDARY NAMENODE
On the secondary namenode machine, execute the following command as the root user.
/etc/init.d/hadoop-hdfs-secondarynamenode start
STARTING DATANODE
On the datanode machines, execute the following command as the root user.
/etc/init.d/hadoop-hdfs-datanode start
Now HDFS is started and we will be able to execute HDFS tasks.
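As a quick smoke test (a generic HDFS command, nothing specific to this setup), listing the root directory as the hdfs user should succeed:
su - hdfs
hadoop fs -ls /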
STARTING JOBTRACKER
On the jobtracker machine, log in as the root user and then switch to the hdfs user.
su - hdfs
Then create the following HDFS directory structure and set the permissions.
hadoop fs -mkdir /app
hadoop fs -mkdir /app/hadoop
hadoop fs -mkdir /app/hadoop/tmp
hadoop fs -mkdir /app/hadoop/tmp/mapred
hadoop fs -mkdir /app/hadoop/tmp/mapred/staging
hadoop fs -chmod -R 1777 /app/hadoop/tmp/mapred/staging
hadoop fs -chown mapred:hadoop /app/hadoop/tmp/mapred
After doing this, start the jobtracker.
/etc/init.d/hadoop-0.20-mapreduce-jobtracker start
STARTING TASKTRACKER
On the tasktracker nodes, start the tasktracker by executing the following command
/etc/init.d/hadoop-0.20-mapreduce-tasktracker start
You can check the namenode web UI using a browser.
The URL is
http://<namenode-hostname>:50070
The Jobtracker web UI is
http://<jobtracker-hostname>:50030
If hostname resolution has not happened correctly, hostname:port may not work. In these situations you can use http://ip-address:port, or fix the resolution as sketched below.
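A common fix is to map the cluster nodes in /etc/hosts on every machine; the addresses and names below are placeholders for illustration only:
192.168.1.10   namenode-hostname
192.168.1.11   jobtracker-hostname
192.168.1.12   datanode01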
Now our Hadoop cluster is ready for use. With this method, we can create a Hadoop cluster of any size within a short time. Since we have the entire Hadoop ecosystem repository, installing other components such as Hive, Pig, HBase, Sqoop, etc. can be done very easily.