Heterogeneous storage in HDFS

From Hadoop 2.3.0 onwards, HDFS supports heterogeneous storage. What is heterogeneous storage, and what are the advantages of using it?

Hadoop started as a scalable batch system for storing and processing huge volumes of data, but it has since become the platform of choice for enterprise data lakes. In large enterprises, many types of data need to be stored and processed for advanced analytics. Some of this data is accessed frequently, some infrequently, and some very rarely. If we keep all of it on the same platform or hardware, the cost goes up unnecessarily. Take a cluster in AWS as an example: the cluster nodes are EC2 instances, which use EBS and ephemeral storage, and the cost varies with the storage type. S3 is cheaper than EBS, but access is slower; Glacier is cheaper than S3, but data retrieval takes even longer. In the same way, if we want to keep data on different storage types depending on priority and access patterns, we can use this feature in Hadoop. It was not available in earlier versions and is present from Hadoop 2.3.0 onwards. A datanode can now be defined as a collection of storages, and the storage policies available in Hadoop are Hot, Warm, Cold, All_SSD, One_SSD and Lazy_Persist.

  • Hot – for both storage and compute. The data that is popular and still being used for processing will stay in this policy. When a block is hot, all replicas are stored in DISK.
  • Cold – only for storage with limited compute. The data that is no longer being used, or data that needs to be archived is moved from hot storage to cold storage. When a block is cold, all replicas are stored in ARCHIVE.
  • Warm – partially hot and partially cold. When a block is warm, some of its replicas are stored in DISK and the remaining replicas are stored in ARCHIVE.
  • All_SSD – for storing all replicas in SSD.
  • One_SSD – for storing one of the replicas in SSD. The remaining replicas are stored in DISK.
  • Lazy_Persist – for writing blocks with a single replica in memory. The replica is first written to RAM_DISK and then lazily persisted to DISK.
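
A policy is applied to an HDFS path, and existing data can then be migrated to match it. Below is a minimal sketch using the hdfs command line (the paths are placeholders; the storagepolicies subcommand and the mover tool are available in recent Hadoop releases):

# list the storage policies supported by the cluster
hdfs storagepolicies -listPolicies

# mark an archival directory as COLD and a working directory as HOT
hdfs storagepolicies -setStoragePolicy -path /data/archive -policy COLD
hdfs storagepolicies -setStoragePolicy -path /data/active -policy HOT

# verify the policy, then run the mover to migrate existing blocks
hdfs storagepolicies -getStoragePolicy -path /data/archive
hdfs mover -p /data/archive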

Recovering a corrupted EC2 instance

Amazon Web Services is one of the most popular cloud service providers. I am a customer of Amazon and I like the services it provides very much. Compared to other cloud service providers, AWS is simple, secure and advanced. I use EC2 machines for my project related activities as well as my personal experiments. Since I mostly work on open source software, 99.99% of my EC2 instances are Linux instances, and the only way to access them is through ssh. I use PuTTY as the ssh client. If something happens to the ssh server, we will not be able to access the instance. Sometimes the ssh server crashes due to overload; this can be resolved by rebooting the instance.

Sometimes, because of wrong configs in the sshd config file, the ssh server may stop. It will not start until we fix that file and restart the service, but to make these changes we have to access the machine.

By default we don’t have direct root login to the machine. We usually log in as a sudo user and access root using sudo privileges. If something happens to the sudoers file, or if a wrong entry is made in it, the root access is lost.

These are some of the common situations where users lose access or super user privileges on an EC2 machine. Most users simply terminate the instance and move on in this situation.

If the instance is EBS backed, we don’t have to terminate it in these situations. We can recover the instance; it is simple and can be done in a few steps. If the instance has only ephemeral storage, we cannot do anything, because shutting it down will clear all the data on the instance.

  1. Start a new instance in the same availability zone as the EBS volume of the broken machine. A micro or nano instance type is fine. If you already have an instance there, you don’t need a new one.
  2. Stop the broken machine and note down the device/mount location of its EBS volume.
  3. Detach the EBS from the broken instance.
  4. Attach the EBS to the second EC2 instance (the newly launched one).
  5. Mount the EBS to some directory in the second EC2 instance.
  6. Navigate through the files and directories and make the required changes.
  7. Unmount the EBS.
  8. Detach the EBS from the second instance.
  9. Attach the EBS back to the first instance.
  10. Use the same mount location (device name) as that of the original.
  11. Start the instance.

This should fix the problem.
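
For reference, here is a rough sketch of steps 2–9 using the AWS CLI; all instance IDs, volume IDs, device names and paths below are placeholders, and the device naming on your instance may differ:

# stop the broken instance and move its EBS volume to a helper instance
aws ec2 stop-instances --instance-ids i-0aaaabbbbcccc1111
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0ddddeeeeffff2222 --device /dev/sdf

# on the helper instance: mount the volume, fix the files, then unmount
sudo mkdir -p /mnt/recovery
sudo mount /dev/xvdf1 /mnt/recovery
sudo vi /mnt/recovery/etc/ssh/sshd_config    # or /mnt/recovery/etc/sudoers
sudo umount /mnt/recovery

# move the volume back and start the original instance
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0aaaabbbbcccc1111 --device /dev/sda1
aws ec2 start-instances --instance-ids i-0aaaabbbbcccc1111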

Raspberry Pi 3 released

Raspberry Pi 3 got released this week. I still remember the moment three years ago when a colleague introduced me to this magic device; I then googled it and read up on the details. I have been using these devices for the past year. I have used the Raspberry Pi B+ and Pi 2, and I am now waiting for the delivery of a Pi 3 from element14. It is one of the best devices I have ever used, because it lets me put both my electronics knowledge and my computer science knowledge to use. The Pi 3 comes with built-in WiFi and Bluetooth modules, so it can connect to a WiFi network or a Bluetooth device without any external peripheral. The physical appearance is the same as the Pi 2, but this version is more powerful. Now we can say goodbye to WiFi adapter modules..!!! This will be a very big hit in the IoT market. Expecting more wonders from element14. 🙂

Increasing the inodes in the disk

I faced an issue while storing a large number of small files on a disk. On my Linux machine, I was unable to store more data because the inodes were getting exhausted before the storage reached its maximum limit. This issue was annoying, wasted a lot of storage, and happened to me several times. Initially I just worked around it temporarily by clearing old files, but since it became a frequent problem, I searched for a proper solution and finally figured out a workaround: choosing a different usage type while formatting the file system. There is also an option to specify the number of inodes explicitly while formatting the disk, but I am not sure about the optimal number of inodes to specify, and I saw threads in some forums about issues caused by an improper number of inodes.

The /etc/mke2fs.conf file lists the available usage types along with their inode ratios. The lower the inode ratio, the more files you can create in the file system.

The syntax is given below

mkfs.ext4 -T <usage-type> /dev/<device>

The usage type that gives the largest number of inodes is news. I used the news usage type for my requirement; it creates far more inodes than the default ext4 settings. After formatting, mount the drive and run df -i, which shows the inode counts for the new disk.
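
As a quick sketch, formatting and checking a new disk would look like this (the device name and mount point are placeholders):

# format with the "news" usage type, which has the smallest inode ratio in /etc/mke2fs.conf
mkfs.ext4 -T news /dev/sdb1

# mount it and check the inode counts
mount /dev/sdb1 /data
df -i /data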

Hue Error – DatabaseError: database is locked

You may face this error in Hue while using Impala or Hive. It is caused by a lock on the backend database used by Hue. Hue uses a backend database to store all its metadata and history; by default this is SQLite, which is not suitable for multi-user environments, and it is this SQLite usage that causes the issue.

We can resolve this by using a MySQL, PostgreSQL or Oracle database as the backend database for Hue.

The detailed steps are explained in this document
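
As a rough sketch, pointing Hue at MySQL means editing the [[database]] section of hue.ini (the host, credentials and database name below are placeholders) and then migrating the existing data as described in that document:

# hue.ini -- example backend database configuration
[desktop]
  [[database]]
    engine=mysql
    host=mysql.example.com
    port=3306
    user=hue
    password=secret
    name=hue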

Changing the python version in pyspark

pyspark picks one of the Python versions when multiple versions are installed on the machine. In my case, I had Python 3, 2.7 and 2.6 installed, and pyspark was picking Python 3 by default. To change the Python version used by pyspark, set the following environment variable and then run pyspark.

export PYSPARK_PYTHON=python2.6

Similarly, we can configure any version of Python with pyspark. Ensure that python2.6, or whichever interpreter you specify, is actually available on the machine.
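
To make the change permanent, the same variable can be exported from Spark's conf/spark-env.sh (assuming a standard Spark installation layout):

# $SPARK_HOME/conf/spark-env.sh -- picked up every time pyspark starts
export PYSPARK_PYTHON=python2.6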

Programmatic way to reboot EC2 instances

Sometimes we might have to reboot EC2 instances. If the requirement is to restart EC2 instances regularly, we can achieve it by writing a small piece of code; I came across a similar requirement and handled it with a short script.
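
A minimal sketch of that approach uses the AWS CLI together with cron; the instance ID, region and schedule below are placeholders, not the original script:

#!/bin/bash
# reboot-instance.sh -- reboots the given EC2 instance (requires configured AWS CLI credentials)
aws ec2 reboot-instances --instance-ids i-0123456789abcdef0 --region us-east-1

# example crontab entry to run the script every Sunday at 2 AM:
# 0 2 * * 0 /usr/local/bin/reboot-instance.sh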

 

bashrc file not loading automatically

Recently I faced an issue on my CentOS Linux machine. When I logged in to the machine, the bashrc file was not getting loaded, and because of this, the environment variables defined in the bashrc file were also not getting loaded.

The solution for this issue is given below.

Create a file with the name .profile in the user’s home directory and add the following content to the file.

# ~/.profile is read by bash login shells when ~/.bash_profile and ~/.bash_login do not exist
if [ -f ~/.bashrc ]; then
    source ~/.bashrc
fi

ORA-01045:user name lacks CREATE SESSION privilege; logon denied

After creating a user in the Oracle database, I tried to log in using SQL Developer and got the error “ORA-01045: user name lacks CREATE SESSION privilege; logon denied”.

The reason for this error was insufficient privileges.

I solved this issue by connecting as a privileged user (such as SYSTEM) and granting the following privilege.

grant create session to "<user-name>";

Unauthorized connection for super-user: hue from IP “x.x.x.x”

If you are getting the following error in hue,

Unauthorized connection for super-user: hue from IP “x.x.x.x”

Add the following properties to the core-site.xml of your Hadoop cluster and restart the cluster.

<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>

<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>

You may face a similar error with Oozie also. In that case, add a similar configuration for the oozie user in core-site.xml:

<property>
  <name>hadoop.proxyuser.oozie.groups</name>
  <value>*</value>
</property>

<property>
  <name>hadoop.proxyuser.oozie.hosts</name>
  <value>*</value>
</property>
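
Depending on the setup, a full cluster restart may not even be necessary; the proxy user settings can usually be reloaded on a running cluster with the following commands (a sketch, assuming the hdfs and yarn admin tools are available):

# reload the proxyuser (superuser) settings without restarting the daemons
hdfs dfsadmin -refreshSuperUserGroupsConfiguration
yarn rmadmin -refreshSuperUserGroupsConfiguration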