
How to check the Java architecture from the command line?

To check whether the installed Java is 32 bit or 64 bit, the following commands are helpful.

Execute the following commands in the command line and check the results:

java -d32 -version

java -d64 -version

If one of the above commands gives an error message similar to “Error: This Java instance does not support a xx-bit JVM.”, the architecture can be read off from which one failed: if the 32-bit check fails, it is a 64-bit Java; if the 64-bit check fails, it is a 32-bit Java. A screenshot of this is attached below.

[Screenshot: output of java -d32 -version and java -d64 -version]
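If you prefer checking from within Java itself, the small snippet below reads two JVM system properties. Note that sun.arch.data.model is specific to Sun/Oracle HotSpot JVMs and may be absent on other JVMs, so treat this as a sketch under that assumption.

public class ArchCheck {
    public static void main(String[] args) {
        // "sun.arch.data.model" returns "32" or "64" on HotSpot JVMs (may be null elsewhere)
        System.out.println("JVM data model : " + System.getProperty("sun.arch.data.model"));
        // "os.arch" reports the architecture the JVM was built for, e.g. amd64 or x86
        System.out.println("Architecture   : " + System.getProperty("os.arch"));
    }
}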

Hope this will help. 🙂


Sample program with Hadoop Counters and Distributed Cache

Counters are a very useful feature in Hadoop. They help us track global events in a job, i.e. across the map and reduce phases.
When we execute a mapreduce job, we can see a lot of counters listed in the logs. Besides the default built-in counters, we can create our own custom counters, which are listed along with the built-in ones.
This helps us in several ways. Here I am explaining a scenario where I am using a custom counter to count the number of good words and stop words in the given text files. The stop words in this program are provided at run time using the distributed cache.
This is a mapper-only job: setting job.setNumReduceTasks(0) makes it a mapper-only job.

Here I am introducing another feature in Hadoop called the distributed cache.
The distributed cache distributes application-specific, read-only files efficiently across the nodes running the application.
My requirement is to filter the stop words from the input text files. The stop word list may vary, so if I hard-code the list in my program, I have to update the code every time the list changes. This is not a good practice. Instead, I used the distributed cache: the file containing the stop words is loaded into the distributed cache, which makes it available to the mappers as well as the reducers. In this program, we don't require any reducer.

The code is given below. You can also get the complete code from GitHub.
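A minimal sketch of the mapper is given below. The class name SkipMapper, the counter names and the word-splitting logic are illustrative choices of mine; the complete version is the one on GitHub.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Illustrative mapper: counts good words and stop words using custom counters.
public class SkipMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    // Custom counters show up in the job logs under this enum's group name.
    public enum WordCounter { GOOD_WORDS, STOP_WORDS }

    private final Set<String> stopWords = new HashSet<String>();
    private final static IntWritable one = new IntWritable(1);

    @Override
    protected void setup(Context context) throws IOException {
        // Files pushed to the distributed cache are available as local files on each node.
        Path[] cacheFiles = DistributedCache.getLocalCacheFiles(context.getConfiguration());
        if (cacheFiles != null) {
            for (Path cacheFile : cacheFiles) {
                BufferedReader reader = new BufferedReader(new FileReader(cacheFile.toString()));
                String line;
                while ((line = reader.readLine()) != null) {
                    stopWords.add(line.trim());   // one stop word per line
                }
                reader.close();
            }
        }
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        for (String word : value.toString().split("\\s+")) {
            if (word.isEmpty()) {
                continue;
            }
            if (stopWords.contains(word)) {
                context.getCounter(WordCounter.STOP_WORDS).increment(1);
            } else {
                context.getCounter(WordCounter.GOOD_WORDS).increment(1);
                context.write(new Text(word), one);
            }
        }
    }
}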

Create a Java project with the above Java classes and add the dependent Java libraries (the libraries are present in your Hadoop installation). Export the project as a runnable jar and execute it. The file containing the stop words should be present in HDFS, with one stop word per line. A sample format is given below.

is

the

am

are

with

was

were

A sample command to execute the program is given below.

hadoop jar <jar-name> -skip <stop-word-file-in-hdfs> <input-data-location> <output-location>

E.g.: hadoop jar Skipper.jar -skip /user/hadoop/skip/skip.txt /user/hadoop/input /user/hadoop/output
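To show how the -skip argument ties in with the distributed cache and the mapper-only setting, here is a rough sketch of the driver. The argument handling is simplified and assumes the exact argument order shown in the command above; the GitHub version may differ.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class Skipper {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "skipper");
        job.setJarByClass(Skipper.class);
        job.setMapperClass(SkipMapper.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // No reducers: this is what makes it a mapper-only job.
        job.setNumReduceTasks(0);

        // args: -skip <stop-word-file-in-hdfs> <input> <output>
        // The stop word file in HDFS is pushed to the distributed cache.
        DistributedCache.addCacheFile(new Path(args[1]).toUri(), job.getConfiguration());
        FileInputFormat.addInputPath(job, new Path(args[2]));
        FileOutputFormat.setOutputPath(job, new Path(args[3]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}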

In the job logs, you can see the custom counters as well. A sample log is attached below.

[Screenshot: job log with the custom counters listed under the Counters section]

Program to compress a file in Snappy format

Hadoop supports various compression formats, and Snappy is one of them. I created a Snappy-compressed file using the Google Snappy library and used it in Hadoop, but it gave me an error saying the file is missing the Snappy identifier. I did a little research on this and found a workaround. The method I followed to find the solution was as follows.
I compressed the same file with Snappy twice, once using the Google Snappy library and once using the Snappy codec present in Hadoop. I compared the file sizes and checksums of the two outputs and found that they differ: the file created with Hadoop's Snappy codec is a few bytes larger than the one created with Google Snappy. The difference is some extra metadata that Hadoop writes.
The code shown below will help you create a Snappy-compressed file that works perfectly in Hadoop. It requires the following dependent jars, all of which are available in your Hadoop installation.
1) hadoop-common.jar

2) guava-xx.jar

3) log4j.jar

4) commons-collections.jar

5) commons-logging.x.x.x.jar

You can download the code directly from GitHub.
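If you only need the gist of it, a minimal sketch is given below. The input file path comes from the command line, the output name is just the input name with a .snappy suffix (my choice), and it assumes the Hadoop native Snappy libraries are available on the machine.

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.SnappyCodec;
import org.apache.hadoop.util.ReflectionUtils;

public class SnappyCompress {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Instantiate Hadoop's Snappy codec; it writes the extra framing metadata
        // that Hadoop expects but plain Google Snappy output lacks.
        CompressionCodec codec = ReflectionUtils.newInstance(SnappyCodec.class, conf);

        InputStream in = new FileInputStream(args[0]);   // file to compress
        OutputStream out = codec.createOutputStream(new FileOutputStream(args[0] + ".snappy"));
        IOUtils.copyBytes(in, out, 4096, true);          // copy and close both streams
    }
}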

“Missing artifact jdk.tools:jdk.tools:jar:1.6”

While using Maven, we may face an error like
“Missing artifact jdk.tools:jdk.tools:jar:1.6”

This problem can be fixed by adding the lines below to your pom.xml file.
Replace ${JAVA_HOME} in the XML with the absolute path of your JAVA_HOME.

<dependency>
  <groupId>jdk.tools</groupId>
  <artifactId>jdk.tools</artifactId>
  <scope>system</scope>
  <version>1.6</version>
  <systemPath>${JAVA_HOME}/lib/tools.jar</systemPath>
</dependency>

Simple Hive JDBC Client

Here I am explaining a sample Hive JDBC client. With this, we can fire Hive queries from Java programs. The only prerequisite is that the Hive server must be running; by default, the Hive server listens on port 10000. The sample program is given below. The program is self-explanatory, and you can rewrite it to execute any type of Hive query. For this program you need the Hive JDBC driver and its dependent jars (shipped with your Hive installation) in the classpath.

import java.sql.SQLException;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.DriverManager;

  /*
   * @author
   * Amal G Jose
   * 
   */

public class HiveJdbc {
  private static String driver = "org.apache.hadoop.hive.jdbc.HiveDriver";

  /**
   * @param args
   * @throws SQLException
   */
  public static void main(String[] args) throws SQLException {
    try {
      Class.forName(driver);
    } catch (ClassNotFoundException e) {
      e.printStackTrace();
      System.exit(1);
    }

    // Replace localhost with the host where the Hive server is running
    Connection connect = DriverManager.getConnection("jdbc:hive://localhost:10000/default", "", "");
    Statement state = connect.createStatement();
    String tableName = "test";
    // Drop the table if it already exists, then recreate it
    state.executeQuery("drop table " + tableName);
    ResultSet res = state.executeQuery("create table " + tableName + " (key int, value string)");
   
   // Query to show tables
    String show = "show tables";
    System.out.println("Running: " + show);
    res = state.executeQuery(show);
    if (res.next()) {
      System.out.println(res.getString(1));
    }

    // Query to describe table
    String describe = "describe " + tableName;
    System.out.println("Running: " + describe);
    res = state.executeQuery(describe);
    while (res.next()) {
      System.out.println(res.getString(1) + "\t" + res.getString(2));
    }

  }
}

Swapping Two numbers without using a third variable

This is a simple method for swapping the values of two numeric variables without using a third variable. It works because after a = a + b, the value a - b gives back the original a, and subtracting that from the sum gives back the original b. The sample Java code is given below.

public void swap(int a, int b)
{
    System.out.println("Values Before Swapping");
    System.out.println(a + " , " + b);
    a = a + b;   // a now holds the sum of both values
    b = a - b;   // (a + b) - b == original a
    a = a - b;   // (a + b) - original a == original b
    System.out.println("Values After Swapping");
    System.out.println(a + " , " + b);
}
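As a side note, a well-known variant (not part of the method above, just another common trick) achieves the same swap with XOR instead of addition and subtraction:

public void swapXor(int a, int b)
{
    System.out.println("Values Before Swapping");
    System.out.println(a + " , " + b);
    a = a ^ b;   // a holds the XOR of both values
    b = a ^ b;   // (a ^ b) ^ b == original a
    a = a ^ b;   // (a ^ b) ^ original a == original b
    System.out.println("Values After Swapping");
    System.out.println(a + " , " + b);
}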

Checking for Odd or Even without using any Conditional Statements

The other day, a friend asked me to write a program that tells whether a given number is odd or even without using any conditional statements. It is very simple, and there can be several solutions. Two of them are given below.

Using Array


public void oddEven(int num)
{
    String[] store = {"even", "odd"};
    // Math.abs guards against negative inputs, where num % 2 would be -1 in Java
    System.out.println("The number is " + store[Math.abs(num % 2)]);
}

Using try-catch

public void evenOdd(int num)
{
    int temp = num % 2;
    try {
        int ans = 10 / temp;   // throws ArithmeticException when temp is 0, i.e. num is even
        System.out.println("Number is odd");
    } catch (ArithmeticException e) {
        System.out.println("Number is even");
    }
}
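A third variant, my own addition along the same lines, uses a bitwise AND instead of the modulo operator. Unlike num % 2, which is -1 for negative odd numbers in Java, num & 1 is always 0 or 1:

public void oddEvenBitwise(int num)
{
    String[] store = {"even", "odd"};
    // num & 1 is 0 for even and 1 for odd, even for negative numbers
    System.out.println("The number is " + store[num & 1]);
}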