How to Change the Hostname of an Ubuntu Server?

We can change the hostname of a machine in several ways. Here I am describing one way to change the hostname of an Ubuntu server.

  1. Open the terminal, or log in with putty.exe as the root user (if you are working remotely)
  2. Go to /etc/
  3. Type nano hostname
  4. Replace the existing name with your preferred machine name
  5. Press Ctrl+X
  6. Save the configuration by pressing Y
  7. Log off or reboot

Note: Add the new hostname and IP address to the /etc/hosts file as well. A short example of the whole procedure is given below.
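
For example, the whole procedure from a root shell might look like this ("newname" is just a placeholder):

nano /etc/hostname      # replace the old name with newname, press Ctrl+X and then Y
nano /etc/hosts         # map your IP address (or 127.0.1.1) to newname
reboot                  # or simply log off and log in again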


How to Change the Hostname of CentOS or RedHat Linux systems?

We can change the hostname of a machine in several ways. Here I am describing two ways to change the hostname.

Method 1: Editing the /etc/sysconfig/network file

  1. Open the terminal, or log in with putty.exe as the root user (if you are working remotely)
  2. Go to /etc/sysconfig/
  3. Type nano network
  4. Change the HOSTNAME entry to your preferred machine name
  5. Press Ctrl+X
  6. Save the configuration by pressing Y
  7. Log off or reboot

Method 2: Editing the /proc/sys/kernel/hostname file

  1. Open the terminal, or log in with putty.exe as the root user (if you are working remotely)
  2. Go to /proc/sys/kernel/
  3. Type nano hostname
  4. Change the existing name to your preferred machine name
  5. Press Ctrl+X
  6. Save the configuration by pressing Y
  7. Close the terminal and log in again

With this method, no reboot is required for the change to take effect. Keep in mind, though, that a value written under /proc does not persist across reboots.

Note: Add the IP address and the new hostname to the /etc/hosts file as well, as illustrated below.
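
As an illustration, the relevant files would end up looking like this (newhostname and 192.168.1.10 are placeholders):

# /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=newhostname

# /etc/hosts
192.168.1.10    newhostname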

Hadoop commands in the hive command line interface

We can execute hadoop commands from the hive CLI. It is very simple:
just put an exclamation mark (!) before your hadoop command and a semicolon (;) after it.

Example:

hive> !hadoop fs -ls / ;

drwxr-xr-x   - hdfs supergroup          0 2013-03-20 12:44 /app
drwxrwxrwx   - hdfs supergroup          0 2013-05-23 11:54 /tmp
drwxr-xr-x   - hdfs supergroup          0 2013-05-08 18:47 /user
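
Any other hadoop filesystem command can be run the same way. For example (the paths below are just illustrations):

hive> !hadoop fs -mkdir /user/test ;
hive> !hadoop fs -copyFromLocal sample.txt /user/test ;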

Very simple.. 🙂

Making hive usable by multiple users in a hadoop cluster

By default, hive operations are limited to the superuser. If you are using CDH, the superuser is hdfs.
The reason for this is the permission on the hive warehouse directory.
By default, read/write permission on this directory is given only to the superuser.
So if we want to use hive from multiple users, we have to change the permission of this directory accordingly.
If you want to make hive usable by all users, run the following commands.

hadoop fs -chmod -R 777 /user/hive/warehouse

hadoop fs -chmod -R 777 /tmp

If you organize the users into specific groups, you can instead grant read/write permission only to the group, i.e. mode 775, as in the sketch below.
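
For example, if all hive users belong to a group named hiveusers (the group name here is just an assumption), it would look like this:

hadoop fs -chgrp -R hiveusers /user/hive/warehouse
hadoop fs -chmod -R 775 /user/hive/warehouse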

Changing the Hive Warehouse Directory

By default, the hive warehouse directory is located at the HDFS path /user/hive/warehouse.

If you want to change this location, you can add the following property to hive-site.xml.

Everyone using hive should have appropriate read/write permissions to this warehouse directory.

<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hivestore/warehouse</value>
  <description>location of the warehouse directory</description>
</property>
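
After adding the property, the new directory has to exist in HDFS with suitable permissions. A minimal sketch, assuming the same path as in the property above:

hadoop fs -mkdir /user/hivestore/warehouse      # add -p on newer Hadoop releases if parent directories are missing
hadoop fs -chmod -R 775 /user/hivestore/warehouse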

Custom Text Input Format Record Delimiter for Hadoop

By default, a mapreduce program accepts a text file and reads it line by line. Technically speaking, the default input format is TextInputFormat and the default record delimiter is '\n' (newline).

In several cases we need to override this behaviour. For example, suppose you have a large text file and you want each read to return the content between '.' characters, i.e. one sentence per record.

In this case, using the default delimiter will be difficult.

For instance, consider a file with the following content:


Ice cream (derived from earlier iced cream or cream ice) is a frozen dessert usually made from dairy products, such as milk and cream and often combined with fruits or other ingredients and flavours. Most varieties contain sugar, although some are made with other sweeteners. In some cases, artificial flavourings and colourings are used in addition to, or instead of, the natural ingredients. The mixture of chosen ingredients is stirred slowly while cooling, in order to incorporate air and to prevent large ice crystals from forming. The result is a smoothly textured semi-solid foam that is malleable and can be scooped.

And we want each record to be:


1) Ice cream (derived from earlier iced cream or cream ice) is a frozen dessert usually made from dairy products, such as milk and cream and often combined with fruits or other ingredients and flavours

2) Most varieties contain sugar, although some are made with other sweeteners

3) In some cases, artificial flavourings and colourings are used in addition to, or instead of, the natural ingredients

4) The mixture of chosen ingredients is stirred slowly while cooling, in order to incorporate air and to prevent large ice crystals from forming

5) The result is a smoothly textured semi-solid foam that is malleable and can be scooped

We can do this by overriding a single property: textinputformat.record.delimiter.

We can either set this property in the driver class or change the value of the delimiter directly in the TextInputFormat class.

The first method is the easier of the two.

Setting the textinputformat.record.delimiter in Driver class

The format for setting it in the program (driver class) is:


conf.set("textinputformat.record.delimiter", "delimiter");

The value you set by this method ultimately goes into the TextInputFormat class. This is explained below.
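
As an illustration, a minimal driver sketch that uses this property could look like the following. The class name DelimiterDriver and the commented mapper/reducer lines are just placeholders, and it assumes a Hadoop version whose TextInputFormat honours this property (like the one shown below):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class DelimiterDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Records will now end at '.' instead of the default newline.
        // This must be set before the Job object is created.
        conf.set("textinputformat.record.delimiter", ".");

        Job job = new Job(conf, "custom record delimiter example");
        job.setJarByClass(DelimiterDriver.class);
        job.setInputFormatClass(TextInputFormat.class);
        // job.setMapperClass(MyMapper.class);   // your mapper here
        // job.setReducerClass(MyReducer.class); // your reducer here
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}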

Editing the TextInputFormat class


Default TextInputFormat class

public class TextInputFormat extends FileInputFormat<LongWritable, Text> {

  @Override
  public RecordReader<LongWritable, Text>
    createRecordReader(InputSplit split,
                       TaskAttemptContext context) {
// By default, textinputformat.record.delimiter = '\n' (set in the configuration file)
    String delimiter = context.getConfiguration().get(
        "textinputformat.record.delimiter");
    byte[] recordDelimiterBytes = null;
    if (null != delimiter)
      recordDelimiterBytes = delimiter.getBytes();
    return new LineRecordReader(recordDelimiterBytes);
  }

  @Override
  protected boolean isSplitable(JobContext context, Path file) {
    CompressionCodec codec =
      new CompressionCodecFactory(context.getConfiguration()).getCodec(file);
    return codec == null;
  }
}

Edited TextInputFormat class


public class TextInputFormat extends FileInputFormat<LongWritable, Text> {

  @Override
  public RecordReader<LongWritable, Text>
    createRecordReader(InputSplit split,
                       TaskAttemptContext context) {

// Hardcoding the delimiter value as "."
// You can use any delimiter as per your requirement

    String delimiter = ".";
    byte[] recordDelimiterBytes = null;
    if (null != delimiter)
      recordDelimiterBytes = delimiter.getBytes();
    return new LineRecordReader(recordDelimiterBytes);
  }

  @Override
  protected boolean isSplitable(JobContext context, Path file) {
    CompressionCodec codec =
      new CompressionCodecFactory(context.getConfiguration()).getCodec(file);
    return codec == null;
  }
}

Simple Sentence Detector and Tokenizer Using OpenNLP

Machine learning is a branch of artificial intelligence in which we create and study systems that can learn from data. We all learn from our own experience or the experience of others. In machine learning, the system also learns from experience, which we feed to it as data.

So, to get an inference about something, we first train the system with some set of data. With that data the system learns, and it becomes capable of giving inferences for new data. This is the basic principle behind machine learning.

There are a lot of machine learning toolkits available. Here I am explaining a simple program using Apache OpenNLP. The OpenNLP library is a machine learning based toolkit made for text processing. A lot of components are available in this toolkit. Here I am explaining a simple sentence detector and a tokenizer using OpenNLP.

Sentence Detector

Download the en-sent.bin from the Apache OpenNLP website and add this to the class path.


// Required imports (assuming OpenNLP 1.5.x):
// import java.io.IOException;
// import java.io.InputStream;
// import opennlp.tools.sentdetect.SentenceDetector;
// import opennlp.tools.sentdetect.SentenceDetectorME;
// import opennlp.tools.sentdetect.SentenceModel;

public void SentenceSplitter() {
    SentenceDetector sentenceDetector = null;
    InputStream modelIn = null;

    try {
        // Load the sentence detection model (en-sent.bin) from the classpath
        modelIn = getClass().getResourceAsStream("en-sent.bin");
        final SentenceModel sentenceModel = new SentenceModel(modelIn);
        modelIn.close();
        sentenceDetector = new SentenceDetectorME(sentenceModel);
    } catch (final IOException ioe) {
        ioe.printStackTrace();
    } finally {
        if (modelIn != null) {
            try {
                modelIn.close();
            } catch (final IOException e) {
            }
        }
    }
    // Split the text into sentences and print them one per line
    String sentences[] = sentenceDetector.sentDetect(
            "I am Amal. I am engineer. I like travelling and driving");
    for (int i = 0; i < sentences.length; i++) {
        System.out.println(sentences[i]);
    }
}

Instead of giving sentence inside the program, you can give it as an input file.
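
For instance, the text could first be read from a file and then passed to the same detector (input.txt is just a placeholder file name; this assumes Java 7 or later):

// Read the whole input file into a String and detect sentences in it
String content = new String(java.nio.file.Files.readAllBytes(
        java.nio.file.Paths.get("input.txt")));
String sentences[] = sentenceDetector.sentDetect(content);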

Tokenizer

Download the en-token.bin from the Apache OpenNLP website and add this to the class path.

// Required imports (assuming OpenNLP 1.5.x):
// import java.io.FileNotFoundException;
// import java.io.IOException;
// import java.io.InputStream;
// import opennlp.tools.tokenize.Tokenizer;
// import opennlp.tools.tokenize.TokenizerME;
// import opennlp.tools.tokenize.TokenizerModel;

public void Tokenizer() throws FileNotFoundException {
    // The model can also be loaded from a file on disk:
    // InputStream modelIn = new FileInputStream("en-token.bin");
    InputStream modelIn = getClass().getResourceAsStream("en-token.bin");
    try {
        // Load the tokenizer model (en-token.bin) and tokenize a sample sentence
        TokenizerModel model = new TokenizerModel(modelIn);
        Tokenizer tokenizer = new TokenizerME(model);
        String tokens[] = tokenizer.tokenize("Sample tokenizer program using java");

        for (int i = 0; i < tokens.length; i++) {
            System.out.println(tokens[i]);
        }
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if (modelIn != null) {
            try {
                modelIn.close();
            } catch (IOException e) {
            }
        }
    }
}

RStudio Installation

RStudio is an IDE for R. It makes R programming more user friendly.

Here I am explaining how to install RStudio on Linux machines.
RStudio needs R, so before installing RStudio we have to install R. Installation of R is explained in my previous post.
R-3.0.0 needs RStudio-0.97 or above.
We can download the RStudio Server rpm from http://www.rstudio.com/ide/download/
Then install it using the command

rpm -ivh <rpm-name>

Sometimes it may show dependency issues.
Install the necessary dependencies and go ahead.

After that, start RStudio Server.

/etc/init.d/rstudio-server start

If the installation is done correctly it will start.
You can verify the installation using the command

sudo rstudio-server verify-installation

Then open a web browser (Firefox, Chrome, etc.) and type the URL
http://<ip-address>:8787

8787 is the default port; you can change it by editing the configuration file, as shown below.
All Linux users except the system users (those with a user ID lower than 100) can use RStudio.
The credentials are the same as the Linux credentials.
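
For example, to move RStudio Server to another port, the usual approach is to set the port in /etc/rstudio/rserver.conf and restart the service (8888 below is just an example value):

# /etc/rstudio/rserver.conf
www-port=8888

/etc/init.d/rstudio-server restart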

HDFS Operations Using Java Program

We are familiar with Hadoop Distributed File System operations such as copyFromLocal, copyToLocal, mv, cp, rmr etc.
Here I am explaining the method to do these operations using Java API. Currently I am explaining the programs to do copyFromLocal and copyToLocal functions only.

Here I used the Eclipse IDE for programming, installed on my Windows desktop machine.
I have a hadoop cluster, and the cluster machines and my desktop machine are on the same network.

First create a java project and inside that create a folder named conf. Copy the hadoop configuration files (core-site.xml, mapred-site.xml, hdfs-site.xml) from your hadoop installation to this conf folder.

Create another folder named source, which we will use as the input location, and put a text file inside that source folder.
One thing to remember is that the source and destination locations must have appropriate read/write permissions; otherwise the read/write will be blocked.

Copying a File from Local to HDFS

The command is
hadoop fs -copyFromLocal

package com.amal.hadoop;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/**
 * @author amalgjose
 *
 */
public class CopyFromLocal {

	public static void main(String[] args) throws IOException {
		
		Configuration conf = new Configuration();
		conf.addResource(new Path("conf/core-site.xml"));
		conf.addResource(new Path("conf/mapred-site.xml"));
		conf.addResource(new Path("conf/hdfs-site.xml"));
		FileSystem fs = FileSystem.get(conf);
		Path sourcePath = new Path("source");
		Path destPath = new Path("/user/training");
		if(!(fs.exists(destPath)))
		{
			System.out.println("No Such destination exists :"+destPath);
			return;
		}
		
		fs.copyFromLocalFile(sourcePath, destPath);
		
	}
}

Copying a File from HDFS to Local

The command is
hadoop fs -copyToLocal

package com.amal.hadoop;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
/**
 * @author amalgjose
 *
 */
public class CopyToLocal {
public static void main(String[] args) throws IOException {
		
		Configuration conf = new Configuration();
		conf.addResource(new Path("conf/core-site.xml"));
		conf.addResource(new Path("conf/mapred-site.xml"));
		conf.addResource(new Path("conf/hdfs-site.xml"));
		FileSystem fs = FileSystem.get(conf);
		Path sourcePath = new Path("/user/training");
		Path destPath = new Path("destination");
		if(!(fs.exists(sourcePath)))
		{
			System.out.println("No Such Source exists :"+sourcePath);
			return;
		}
		
		fs.copyToLocalFile(sourcePath, destPath);
		
	}
}
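
The other shell operations mentioned at the beginning (mv, rm/rmr, etc.) map to methods on the same FileSystem object. Below is a small sketch under the same conf folder setup; the class name and the HDFS paths are only placeholders:

package com.amal.hadoop;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OtherHdfsOperations {
	public static void main(String[] args) throws IOException {
		Configuration conf = new Configuration();
		conf.addResource(new Path("conf/core-site.xml"));
		conf.addResource(new Path("conf/mapred-site.xml"));
		conf.addResource(new Path("conf/hdfs-site.xml"));
		FileSystem fs = FileSystem.get(conf);
		// mv: rename or move a file within HDFS (paths are placeholders)
		fs.rename(new Path("/user/training/old.txt"), new Path("/user/training/new.txt"));
		// rmr: delete a directory recursively (second argument = recursive)
		fs.delete(new Path("/user/training/tmp"), true);
	}
}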

Rhipe Installation

Rhipe was first developed by Saptarshi Guha.
Rhipe needs R and Hadoop, so first install R and hadoop. Installation of R and hadoop is well explained in my previous posts. The latest version of Rhipe as of now is Rhipe-0.73.1, and the latest available version of R is R-3.0.0. If you are using CDH4 (the Cloudera distribution of hadoop), use Rhipe-0.73 or a later version, because older versions may not work with CDH4.
Rhipe (R and Hadoop Integrated Programming Environment) integrates R and Hadoop. It is very good for statistical and analytical calculations on very large data: because R is integrated with hadoop, the processing happens in distributed mode, i.e. as mapreduce jobs.
Further explanation of Rhipe is available at http://www.datadr.org/

Prerequisites

Hadoop, R, protocol buffers and rJava should be installed before installing Rhipe.
We are installing Rhipe in a hadoop cluster, so a submitted job may execute on any of the tasktracker nodes. Therefore we have to install R and Rhipe on all the tasktracker nodes; otherwise you will face an exception such as "Cannot find R" or something similar.

Installing Protocol Buffer

Download protocol buffers 2.4.1 from the link below:

http://protobuf.googlecode.com/files/protobuf-2.4.1.tar.gz

tar -xzvf protobuf-2.4.1.tar.gz

cd protobuf-2.4.1

chmod -R 755 protobuf-2.4.1

./configure

make

make install

Set the environment variable PKG_CONFIG_PATH

nano /etc/bashrc

export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig

save and exit

Then execute the following commands to check the installation.

pkg-config --modversion protobuf

This will show the version number 2.4.1
Then execute

pkg-config --libs protobuf

This will display the following things

-pthread -L/usr/local/lib -lprotobuf -lz -lpthread

If these two commands work fine, it means that protobuf is properly installed.

Set the environment variables for hadoop

For example

nano /etc/bashrc

export HADOOP_HOME=/usr/lib/hadoop

export HADOOP_BIN=/usr/lib/hadoop/bin

export HADOOP_CONF_DIR=/etc/hadoop/conf

save and exit

Then


cd /etc/ld.so.conf.d/

nano Protobuf-x86.conf

/usr/local/lib   # add this value as the content of Protobuf-x86.conf

Save and exit

/sbin/ldconfig

Installing rJava

Download the rJava tarball from the below link.

http://cran.r-project.org/web/packages/rJava/index.html

The latest version of rJava available as of now is rJava_0.9-4.tar.gz

Install rJava using the following command:

R CMD INSTALL rJava_0.9-4.tar.gz

Installing Rhipe

Rhipe can be downloaded from the following link
https://github.com/saptarshiguha/RHIPE/blob/master/code/Rhipe_0.73.1.tar.gz

R CMD INSTALL Rhipe_0.73.1.tar.gz

This will install Rhipe

After this, type R in the terminal.

You will enter the R prompt.

Then type

library(Rhipe)

#This will display

------------------------------------------------

| Please call rhinit() else RHIPE will not run |

------------------------------------------------

rhinit()

#This will display

Rhipe: Detected CDH4 jar files, using RhipeCDH4.jar
Initializing Rhipe v0.73
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/client-0.20/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/client/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
Initializing mapfile caches

Now you can execute your Rhipe scripts.