What is Big Data? Why Big Data?

Big data has become a very hot topic in the industry, and many companies are trying to establish themselves by providing big data solutions.

Why has big data become so popular?

Technology has advanced to a point where data can speak for itself. Previously, we used data to analyse events that had already happened; now we use data to predict events that are going to happen.

Consider a doctor who has learned everything through years of education and practice; he does his consulting based on his experience. If we build a system that learns from historical data and predicts events based on that learning, it will be very helpful in a lot of areas. As the size of useful data (data that contains details) increases, the accuracy of the prediction also increases.

Once the data grows beyond a certain size, it becomes very difficult to process with conventional data processing technologies, and this is where the big data problem arises. Nowadays the demand for insight generation and prediction using big data is very high, because almost every system that interacts with people now gives recommendations. These recommendations are based on analysis of previous data, and their accuracy increases with the size of that data. For example, we get item recommendations from Amazon, Flipkart and eBay, friend suggestions from Facebook, advertisements from Google, and so on. All of these are produced by processing large amounts of data. This increases sales in the case of businesses, and usability in the case of other systems.

To solve these big data problems, frameworks such as Hadoop, Storm and Spark have evolved.

Most big data solutions are implementations of distributed storage, distributed processing, in-memory processing and similar techniques.
A lot of analytics now happens on social media data. Remember, you can't do a Ctrl+Z on things you upload to social media.


Facebook Open-Sourced Presto

Facebook open-sourced its data processing engine, Presto, to the world. Presto is a distributed query engine based on ANSI SQL. It is highly optimised and currently runs against more than 300 petabytes of data, which may make it one of the largest big data processing systems. Presto is totally different from mapreduce: it is an in-memory data processing engine, and it is heavily optimised. According to the Facebook announcement and the Presto website, it is about ten times faster than Hive. Hive itself came from Facebook, and since Hive queries ultimately run as multiple mapreduce jobs, they take more time; so Presto beating Hive is no surprise. From my point of view, the real competition may be between Cloudera Impala and Presto. Impala's performance on huge datasets is not yet proven in any production environment, because it is a budding technology from the Cloudera family, whereas Presto is already tested and running against huge datasets in production. Another interesting fact about Presto is that we can deploy it on our already existing infrastructure and hadoop cluster, because Presto supports HDFS as its underlying data storage. It supports other storage systems as well, so it is flexible. Leading internet companies including Airbnb and Dropbox are using Presto. The Presto code and further details are available in this link
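To give a feel for what this looks like in practice, here is a hypothetical sketch of querying a Hive-backed table through Presto's Hive connector. The catalog, schema and table names below are assumptions for illustration, not from any real deployment.

```sql
-- Hypothetical example: Presto reading an existing Hive table in place.
-- 'hive' is the catalog, 'default' the schema; word_counts is assumed
-- to already exist in the Hive warehouse on HDFS.
SELECT word, count
FROM hive.default.word_counts
ORDER BY count DESC
LIMIT 10;
```

The interesting point is that no data movement is needed: Presto queries the same files in HDFS that Hive already manages.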

I have deployed Presto and Impala on a small cluster of 8 nodes. I haven't got enough time yet to explore Presto further; I am planning to do so in the coming days. 🙂

An Introduction to Apache Hive

Hive is an important member of the hadoop ecosystem. It runs on top of hadoop and uses an SQL-like query language to process data in HDFS. Hive is very simple compared to writing several lines of mapreduce code in a programming language such as Java. Hive was developed at Facebook with the vision of letting their SQL experts handle big data without much difficulty. Hive queries are easy to learn for people who don't know any programming language, and people with SQL experience can pick up hive queries straight away. The queries fired into hive ultimately run as mapreduce.

Hive runs in two execution modes: local and distributed.

In local mode, a hive query runs as a single process and uses the local file system. In distributed mode, the mappers and reducers run as separate processes and use the hadoop distributed file system.
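As a rough sketch, these modes can be switched from within a hive session itself. The property names below are from stock Hive and Hadoop; exact behaviour may vary by version.

```sql
-- Let Hive decide automatically when a query is small enough to run
-- locally instead of submitting mapreduce jobs to the cluster.
SET hive.exec.mode.local.auto=true;

-- Or force local execution explicitly (Hadoop 1.x style property).
SET mapred.job.tracker=local;
```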

The installation of hive was explained well in my previous post Hive Installation.

Hive stores its data in HDFS and its table details (metadata) in a database. By default the metadata is stored in a Derby database, but this is only suitable for play-around setups and cannot be used in multiuser environments. For multiuser environments, we can use databases such as MySQL, PostgreSQL or Oracle to store the hive metadata. The data itself is stored in HDFS, in a location called the hive warehouse directory, which is defined by the property hive.metastore.warehouse.dir and defaults to /user/hive/warehouse.
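For example, pointing the metastore at MySQL is typically a matter of a few properties in hive-site.xml. This is only a sketch: the hostname, database name and credentials below are placeholders, and the MySQL JDBC driver jar must be on Hive's classpath.

```xml
<!-- hive-site.xml: store the metastore in MySQL instead of Derby.
     Host, database name, user and password are placeholders. -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://metastore-host:3306/hive_metastore</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hiveuser</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hivepassword</value>
</property>
```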

We can fire queries into hive using the command line interface or using clients written in different programming languages. The Hive server exposes a Thrift service, making hive accessible from various programming languages.

The simplicity and power of hive can be seen by comparing the word count program written in Java with the equivalent hive query.

The word count program written in Java is explained in my previous post A Simple Mapreduce Program – Wordcount. It requires a lot of lines of code, takes time to write, and needs some good programming knowledge as well.

The same word count can be done in a few lines of hive query:

CREATE TABLE word_counts AS
SELECT word, count(1) AS count FROM
(SELECT explode(split(line, '\\s')) AS word FROM docs) w
GROUP BY word
ORDER BY word;
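To actually run this, the docs table has to exist first. A minimal setup might look like the following; the input path is purely illustrative.

```sql
-- Create a one-column table where each row holds one line of text,
-- then load a text file from HDFS into it. The path is illustrative only.
CREATE TABLE docs (line STRING);
LOAD DATA INPATH '/user/hive/input/docs.txt' OVERWRITE INTO TABLE docs;

-- After the word count query has run, inspect the most frequent words.
SELECT word, count FROM word_counts ORDER BY count DESC LIMIT 10;
```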

Hadoop Mapreduce – A Good Presentation…

This is a video I found on youtube that explains hadoop mapreduce very clearly using the wordcount example.

Deployment and Management of Hadoop Clusters