How to check fstab entries without a system reboot

/etc/fstab contains information about the system's disks. It specifies where partitions and storage devices should be mounted, and it is where we usually configure automounting, disk quotas, mount points, etc.
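For reference, a typical fstab line looks like the following (the device and mount point here are just examples):

# <device>   <mount point>   <type>   <options>   <dump>   <fsck order>
/dev/sda1    /               ext4     defaults    1        1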

In order to test entries or modifications in fstab without a restart, the following commands are helpful.

mount -a

The above command mounts all the filesystems mentioned in fstab. It acts like a refresh command that activates the entries in fstab.

mount -fav

The above command helps if you only want to validate the entries without applying the modifications in fstab. The -f flag fakes the mounts, -a processes all entries, and -v prints what would be done, so the entries are checked without any changes being applied. This is a very useful command.
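For example, suppose we added a hypothetical new entry for a /data partition to fstab. A dry run would look like this:

# hypothetical new line in /etc/fstab
# /dev/sdb1   /data   ext4   defaults   0 2

# fake-mount all entries verbosely; a typo in the new line shows up here
mount -fav

If there is a syntax error or an unknown filesystem type in the new line, mount reports it here instead of failing at the next boot.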

 

 


Disable SELinux without a reboot

To disable SELinux by modifying the /etc/sysconfig/selinux file, we have to perform a reboot. In some cases we may not be able to reboot because it involves downtime for the system. In such situations we can disable SELinux with a simple command. This does not disable SELinux permanently; the effect lasts only until the next reboot, but you also have the option to edit the selinux file so that it stays disabled after the reboot. The steps for disabling SELinux permanently are explained in my previous post.

The command to check the status of SELinux is given below.

sestatus

This may show enforcing, permissive, or disabled. In permissive mode SELinux does not block anything but merely warns you; it shows enforcing when it is actually blocking.

To disable SELinux temporarily we can use the following command. This has to be executed as root or using sudo.

setenforce 0

After executing this command we can check the status of SELinux using the sestatus command. If it shows permissive, we are good to go. 🙂
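The output should look something like this (the exact fields vary by release):

sestatus
SELinux status:                 enabled
Current mode:                   permissive
Mode from config file:          enforcing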

Disable SELinux in CentOS and RHEL

Security-Enhanced Linux (SELinux) is a security architecture integrated into the 2.6.x kernel using the Linux Security Modules. It is a project of the United States National Security Agency (NSA) and the SELinux community. SELinux integration into Red Hat Enterprise Linux was a joint effort between the NSA and Red Hat.

Many applications need SELinux to be turned off. Turning off SELinux is simple. You can use the following steps to turn off SELinux in RHEL or CentOS 6 and 7 operating systems.

Open the file /etc/sysconfig/selinux. The contents will be similar to those below.

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - SELinux is fully disabled.
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
# targeted - Only targeted network daemons are protected.
# strict - Full SELinux protection.
SELINUXTYPE=targeted

 

The contents are self-explanatory. Change the value of SELINUX to disabled, save the file, and then reboot the system.
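If you prefer doing the edit from the command line, a one-liner like the following should work, assuming the current value is enforcing (/etc/sysconfig/selinux is usually a symlink to /etc/selinux/config):

# change enforcing to disabled in the SELinux config (run as root)
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config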

Utility to get the complete details of a Linux system

This is a small shell script that captures almost all the necessary details of a Linux system. I tested this script on CentOS and Red Hat operating systems. You can access the script directly from GitHub.
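The actual script is on GitHub, but as a minimal sketch, the kind of details it captures can be gathered with standard commands like these:

#!/bin/bash
# minimal sketch: print basic details of a CentOS/RHEL system
echo "Hostname : $(hostname)"
echo "OS       : $(cat /etc/redhat-release)"
echo "Kernel   : $(uname -r)"
echo "CPU      : $(grep -m1 'model name' /proc/cpuinfo | cut -d: -f2)"
echo "Memory   : $(grep MemTotal /proc/meminfo)"
echo "Disk usage:"
df -h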

How to add EPEL Repository in Linux?

Linux is my favourite operating system. I like Windows for multimedia activities, but when it comes to work and experiments, I like Linux. Linux gives us the flexibility to perform all operations, and it is a vast ocean to explore. Most of us might have heard about EPEL; we download a lot of packages from it.

But did anyone know what EPEL is?
EPEL stands for Extra Packages for Enterprise Linux. It is an open-source repository maintained by the community, which contains lots of useful software packages for Red Hat, CentOS, and Scientific Linux. We can find packages for almost everything we need in this repository.

  • The EPEL repository is 100% open source and free to use.
  • No extra effort is required to install these packages.
  • Version-specific packages are available for each OS version, so they will not conflict with existing packages in the OS.
  • Packages can be installed simply using yum.

By default the EPEL repository is not added in Linux; we have to add it explicitly by downloading the epel-release RPM and installing it. The following steps help you add the EPEL repository to your CentOS/Red Hat machine.

RHEL/CentOS 7 64-Bit

## RHEL/CentOS 7 64-Bit ##
# wget http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
# rpm -ivh epel-release-7-5.noarch.rpm

RHEL/CentOS 6 32-Bit

## RHEL/CentOS 6 32-Bit ##
# wget http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
# rpm -ivh epel-release-6-8.noarch.rpm

RHEL/CentOS 6 64-Bit

## RHEL/CentOS 6 64-Bit ##
# wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
# rpm -ivh epel-release-6-8.noarch.rpm

RHEL/CentOS 5 32-Bit

## RHEL/CentOS 5 32-Bit ##
# wget http://download.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm
# rpm -ivh epel-release-5-4.noarch.rpm

RHEL/CentOS 5 64-Bit

## RHEL/CentOS 5 64-Bit ##
# wget http://download.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm
# rpm -ivh epel-release-5-4.noarch.rpm

RHEL/CentOS 4 32-Bit

## RHEL/CentOS 4 32-Bit ##
# wget http://download.fedoraproject.org/pub/epel/4/i386/epel-release-4-10.noarch.rpm
# rpm -ivh epel-release-4-10.noarch.rpm

RHEL/CentOS 4 64-Bit

## RHEL/CentOS 4 64-Bit ##
# wget http://download.fedoraproject.org/pub/epel/4/x86_64/epel-release-4-10.noarch.rpm
# rpm -ivh epel-release-4-10.noarch.rpm
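After installing the RPM, you can verify that the repository is active; the package queried below is just an example of something that lives in EPEL:

# check that epel shows up in the repository list
yum repolist | grep -i epel

# try querying a package from EPEL
yum --enablerepo=epel info htop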

Monitoring Tools for Hadoop Clusters

To get the exact status of server machines, we use monitoring tools. Nowadays a lot of monitoring tools are available with good monitoring and alerting capabilities. I was searching for a good monitoring and alerting tool for Hadoop clusters, and from my observation I found some free tools:

1) Ganglia
2) Nagios
3) Zabbix
4) Cloudera Manager
5) Apache Ambari

My observations are as follows.

For using Ambari or Cloudera Manager, the cluster should have been installed using that tool itself; that means we cannot monitor an existing cluster using these tools.

Ganglia provides good metrics, and we can capture custom metrics using Ganglia; it is very flexible. Hadoop ships with a set of configurations that can be used for capturing Hadoop metrics with Ganglia, and these properties can be seen in the hadoop-metrics.properties file (a sample is shown below). The new Ganglia web UI is very good, and we can export the metrics as CSV or JSON files, which is a very useful feature. But Ganglia lacks alerting capability, such as sending mails when issues occur. Here Nagios helps: a Nagios-Ganglia integration is a good setup for monitoring Hadoop clusters, because we get good metrics capturing as well as alerting.
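As a rough sketch, the Ganglia entries in hadoop-metrics.properties look something like the following; gangliahost:8649 is a placeholder for your own Ganglia host and port:

# send HDFS and MapReduce metrics to ganglia (host/port are placeholders)
dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
dfs.period=10
dfs.servers=gangliahost:8649
mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
mapred.period=10
mapred.servers=gangliahost:8649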

Ganglia is free, and the base version of Nagios is free; the base version of Nagios serves our needs.

Zabbix is also a good tool; a lot of production clusters run with Zabbix as the monitoring tool.

Decommissioning a Datanode in a Hadoop cluster

Sometimes we may need to remove a node from a Hadoop cluster without losing data.
For this we have to follow the decommissioning procedure.
Decommissioning excludes a node from the cluster after replicating the data present on that node to the other active nodes.

The decommissioning is very simple; the steps are explained below.
First, stop the TaskTracker on the node to be decommissioned.
Then, on the namenode machine, add the below property to hdfs-site.xml.

<property>
  <name>dfs.hosts.exclude</name>
  <value>/etc/hadoop/conf/dfs.exclude</value>
</property>

where dfs.exclude is a file that we have to create and place in a safe location. It is better to keep it in HADOOP_CONF_DIR (/etc/hadoop/conf).

Create a file named dfs.exclude and add the hostnames of the machines to be decommissioned, one per line.

Eg: dfs.exclude

hostname1
hostname2
hostname3

After doing this, execute the following command as the superuser on the namenode machine.

hadoop dfsadmin -refreshNodes

After this, check the namenode UI at http://namenode:50070.
You will be able to see the machines under decommissioning nodes.
The decommissioning process will take some time; you can also watch the progress from the command line, as shown below.
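The dfsadmin report prints a Decommission Status line for every datanode, so a quick way to track it from the shell is:

hadoop dfsadmin -report | grep "Decommission Status"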
After the re-replication completes, the machine will be added to the decommissioned nodes list.
After this, the decommissioned node can be safely removed from the cluster. 🙂

Upgrading Hadoop Clusters

The other day my friends and I tried a Hadoop cluster upgrade.

We tried two upgrades and both were successful.

One was from a CDH3u1 cluster to CDH 4.3.0 and the other from CDH 4.1.2 to CDH 4.3.0.
For upgrading, we need to upgrade both the Hadoop installation and the filesystem.
It was a nice experience.

The steps we followed are listed below.

First we checked the filesystem for missing blocks and created a report of the entire filesystem.

As the superuser (hdfs), we executed the commands

hadoop dfsadmin -report > reportold.log

hadoop fsck / > fsckold.log

With this we get the reports and status of the entire filesystem.

We can keep these for future comparison.

If the report shows any issues, take the necessary actions to fix them.

If everything is fine, we can move further with our upgrade process.

After this we stopped all the processes.

To ensure no accidental data loss, we backed up our namenode and datanode storage, i.e. the directories pointed to by dfs.name.dir and dfs.data.dir.
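As a sketch, a backup can be as simple as the following; /data/dfs/nn is a hypothetical path, so use the locations configured in your hdfs-site.xml:

# back up the namenode metadata directory (path is an example)
tar -czf namenode-backup.tar.gz /data/dfs/nn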

After that we copied the Hadoop configuration files and saved them in a different location for later use.

Then we uninstalled the entire old Hadoop installation.

Care should be taken to keep the contents of dfs.name.dir and dfs.data.dir secure.

We created a CDH 4.3.0 local repository and installed CDH 4.3.0 on all the machines, just like the old version. The installation steps are mentioned in my previous posts.

Creating A Local YUM Repository

Hadoop Installation

Then we added the configuration files that we had copied from the older installation.

We pointed dfs.name.dir and dfs.data.dir to the correct locations.
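For reference, the relevant entries in hdfs-site.xml look like this; the paths are placeholders for your actual storage locations:

<property>
  <name>dfs.name.dir</name>
  <value>/data/dfs/nn</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/data/dfs/dn</value>
</property>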

After doing this, execute the following command on the namenode machine.

/etc/init.d/hadoop-hdfs-namenode upgrade

Or

service hadoop-hdfs-namenode upgrade

This will start the namenode and upgrade the Hadoop filesystem to the newer version.

After this, start all the other daemons and check whether everything is working fine.

Check the filesystem using the commands below (execute these as the superuser).

hadoop dfsadmin -report > reportnew.log

hadoop fsck / > fscknew.log

Compare reportnew.log and fscknew.log with reportold.log and fsckold.log.
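A simple way to do the comparison, using the four log files created above:

diff reportold.log reportnew.log
diff fsckold.log fscknew.log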

Note: If we are not satisfied with the upgrade, we can roll back to the previous version. This can be done by uninstalling the newer version, installing the older version, and executing the command

/etc/init.d/hadoop-hdfs-namenode rollback

This can be done only once, and it cannot be done after the upgrade is finalized.

If both the reports match and there is no problem of missing blocks, we can finalize our upgrade.

Stop all the daemons and execute the following command on the namenode machine

/etc/init.d/hadoop-hdfs-namenode finalizeUpgrade

Once the upgrade is finalized, we cannot roll back.

Note: From our experience we found that the CDH 4.1.2 and CDH 4.3.0 filesystems are compatible, i.e. we found CDH 4.3.0 working properly with CDH 4.1.2's filesystem without executing the upgrade command.

Linux Filesystem colour codes

When we run ls --all in the Linux CLI, files may be listed in different colours.

The colour codes of the files are as follows:

Blue: Directory file

White: Normal file

Green: Executable file

Yellow: Device file

Magenta: Picture file

Cyan: Link file

Red: Compressed file
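These colours come from the LS_COLORS environment variable, which is normally set up by dircolors; you can inspect the active mapping like this:

# show the raw colour map used by ls
echo $LS_COLORS

# print the default colour database in readable form
dircolors -p | head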

File Symbols

-(Hyphen) = Normal file

d = Directory

l = Link file

b = Block device file

c = Character device file
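For example, the first character of each line in ls -ld output shows the file type; the sizes and dates below are made up:

$ ls -ld /dev/sda /etc/hosts /tmp
brw-rw----  1 root disk 8, 0 Jan  1 10:00 /dev/sda
-rw-r--r--  1 root root  158 Jan  1 10:00 /etc/hosts
drwxrwxrwt 12 root root 4096 Jan  1 10:00 /tmp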

Accessing a Unix Server through PuTTY using a Private Key

Type the hostname or IP address.


Then, in the left-hand pane of PuTTY, click on SSH to expand it.


Then you can see a section named Auth.

Click on Auth.


There you will get a window with a Browse button.


Load your private key file (.ppk) and press Open.

Then enter the username and key passphrase (if set) and log in.

This is the method we usually use to log in to Unix cloud instances.
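On Linux or macOS, the OpenSSH equivalent is a single command; note that OpenSSH does not read .ppk files directly, so the key may need to be exported to OpenSSH/PEM format with PuTTYgen first (the paths and names below are examples):

ssh -i ~/.ssh/mykey.pem username@hostname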