htop: command not found in CentOS 7

On a freshly installed CentOS or RHEL server, we may get a "command not found" error while trying to use htop. To install htop, we need the EPEL repository. Follow the steps below to install htop.

yum clean all

yum install epel-release

yum install htop

Cassandra not getting started in CentOS 7 and RHEL 7

After a recent update on CentOS 7 and RHEL 7, the Cassandra daemon stopped working. I was getting the following error while trying to start Cassandra using systemd. Similar installations had been working fine in the recent past, and suddenly this one stopped working.

Mar 20 13:22:34 localhost systemd[1]: New main PID 72596 does not belong to service, and PID file is not owned by root. Refusing.
Mar 20 13:22:34 localhost systemd[1]: New main PID 72596 does not belong to service, and PID file is not owned by root. Refusing.
Mar 20 13:22:34 localhost systemd[1]: Failed to start LSB: distributed storage system for structured data.

Root Cause

Cassandra starts, but systemd cannot control it. The cause is that Cassandra is started through the old SysV init script, in which it is not possible to specify the user and group that run the service.

This relates to the user/group options for systemd:
—————–
[Service]
User=cassandra
Group=cassandra
—————–

But since the PID file is created with the permissions of the cassandra user, and the user and group are not specified in the init script, systemd assumes that the service is started as root (the default) and refuses to accept a PID file created with the cassandra user's permissions.
——————
systemd[1]: New main PID 2545 does not belong to service, and PID file is not owned by root. Refusing.
——————

More details are available in CVE-2018-16888 (https://access.redhat.com/security/cve/cve-2018-16888):
——————
It was discovered systemd does not correctly check the content of PIDFile files before using it to kill processes. When a service is run from an unprivileged user (e.g. User field set in the service file), a local attacker who is able to write to the PIDFile of the mentioned service may use this flaw to trick systemd into killing other services and/or privileged processes.
——————

Solution

Update the /etc/rc.d/init.d/cassandra file. Either apply the patch manually or replace the entire file with the one provided below.

Option: 1 – Manual Patch

Open the /etc/rc.d/init.d/cassandra file and make the modifications described in the comments in the script below. The snippet below is not the complete script; it is only the portion that needs to be updated. Do not copy-paste this snippet as a replacement for the whole file.

case "$1" in
    start)
        # Cassandra startup
        echo -n "Starting Cassandra: "
        [ -d `dirname "$pid_file"` ] || \
            install -m 755 -o $CASSANDRA_OWNR -g $CASSANDRA_OWNR -d `dirname $pid_file`
        # -Commented for fix
        #su $CASSANDRA_OWNR -c "$CASSANDRA_PROG -p $pid_file" > $log_file 2>&1
        # +Added for fix
        runuser -u $CASSANDRA_OWNR -- $CASSANDRA_PROG -p $pid_file > $log_file 2>&1
        retval=$?
        # +Added new
        chown root.root $pid_file
        [ $retval -eq 0 ] && touch $lock_file
        echo "OK"
        ;;
    stop)
        # Cassandra shutdown
        echo -n "Shutdown Cassandra: "
        # -Commented as per the fix
        #su $CASSANDRA_OWNR -c "kill `cat $pid_file`"
        # +Added for fixing the issue
        runuser -u $CASSANDRA_OWNR -- kill `cat $pid_file`
        retval=$?
        [ $retval -eq 0 ] && rm -f $lock_file
        for t in `seq 40`; do
            status -p $pid_file cassandra > /dev/null 2>&1
            retval=$?
            if [ $retval -eq 3 ]; then
                echo "OK"
                exit 0
            else
                sleep 0.5
            fi
        done

Option:2 – Replace the file

Replace the /etc/rc.d/init.d/cassandra file with the file present in the following link. This patch was made as per the JIRA issue CASSANDRA-15273.

Steps to replace the file are given below.

mv /etc/rc.d/init.d/cassandra /etc/rc.d/init.d/cassandra.old
curl -o /etc/rc.d/init.d/cassandra https://gist.githubusercontent.com/amalgjose/74cf98e0110c27b6124b0adbb698d372/raw/c08ce3481e9cb0601e79e127c78a65bf82080e5f/cassandra
systemctl daemon-reload
systemctl restart cassandra

 
The full patched script is available in the Gist linked above.

This solution helped me. I hope this will help someone else also.

How to connect a CentOS computer to the Internet using a USB Wi-Fi adapter?

I have an old desktop computer with the CentOS 7 operating system installed without a GUI. I wanted to connect it to the internet using a USB Wi-Fi adapter. My internet router was located in a different room and I did not have a LAN cable, so I used a Netgear USB Wi-Fi adapter to establish the internet connection. This post is about troubleshooting and fixing the connectivity issue.

The model that I used is the Netgear WNA3100M Wireless-N300 USB Mini Adapter.

I checked the network interfaces and IP addresses using the ifconfig command. It listed an interface named wlp18s0b1, but no IP address was assigned to it.

Then I tried the ifup command, but it gave me the following error.

ifup wlp18s0b1

/sbin/ifup: configuration for wlp18s0b1 not found.
Usage: ifup 

Then I listed the USB devices using the lsusb command, and the network adapter appeared in the list. This means the device is being detected.

The next steps that I tried are using the nmcli command.

The following command lists all the saved network connection profiles.

nmcli connection show

To connect to a Wi-Fi network, use the command below, passing your Wi-Fi SSID and password as arguments.

nmcli dev wifi connect your-wifi-ssid password wifi-password

My desktop got connected to the internet after running the above command. After this, whenever I start the computer and it does not automatically connect to the internet, I issue the following commands.

nmcli connection show

nmcli connection up your-wifi-connection

This solution helped me. Hope this will help someone else also :).

ImportError: libSM.so.6: cannot open shared object file: No such file or directory

This error typically appears on minimal server installations when importing a library such as OpenCV, because some X11-related shared libraries are missing. Install the missing packages as shown below.

For CentOS users

yum install libXext libSM libXrender

For Ubuntu users

apt-get update && apt-get install -y libsm6 libxext6 libxrender1 libfontconfig1

 

Stream Processing Framework in Python – Faust

I was looking for a highly scalable streaming framework in Python. Until now I had been using Spark Streaming for reading data from streams with heavy throughput, but Spark felt a little heavy because its minimum system requirements are high.

Recently I was researching this and found a framework called Faust. I started exploring it, and my initial impression is very good.

This framework is capable of running in a distributed way, so we can run the same program on multiple machines, which enhances performance.

I tried executing the sample program from their website and it worked properly. The same program is pasted below. I used CDH Kafka 4.1.0, and the program worked seamlessly.
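The program below is a minimal sketch in the spirit of the Faust quickstart example. The broker address (localhost:9092) and the topic name sample-topic are assumptions; adjust them to match your own Kafka setup.

```python
import faust

# Assumed broker address; change it to point at your Kafka cluster.
app = faust.App('sample_faust', broker='kafka://localhost:9092')

# Assumed topic name; messages are treated as plain strings.
topic = app.topic('sample-topic', value_type=str)

@app.agent(topic)
async def process(messages):
    # Iterate over the stream and print each message as it arrives
    async for message in messages:
        print(f'Received: {message}')

if __name__ == '__main__':
    app.main()
```

Saving this as sample_faust.py makes it runnable with the worker command shown below; the agent keeps consuming from the topic until the worker is stopped.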

To execute the program, I have used the following command.

python sample_faust.py worker -l info

The above program reads data from Kafka and prints the messages. This framework is not just about reading messages in parallel from streaming sources; it also integrates with RocksDB, an embedded key-value data store that was open-sourced by Facebook and is written in C++.

Merge two dictionaries in Python

This is the simplest way to merge or combine two dictionaries in Python. This operation is supported in Python versions 3.5 and above.
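A minimal sketch using the dictionary-unpacking syntax introduced in Python 3.5; the dictionary names and values are illustrative, chosen to match the sample output below.

```python
dict_one = {'p': 2, 'q': 4}
dict_two = {'r': 6, 's': 8}

# Unpack both dictionaries into a new one; for keys present in
# both, the value from the right-hand dictionary wins.
merged = {**dict_one, **dict_two}
print(merged)
```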

 

Sample Output

{'p': 2, 'q': 4, 'r': 6, 's': 8}

Convert csv to json using pandas

The following sample programs explain how to read a csv file and convert it into json data. Two programs are explained in this blog post: the first expects the column names in the csv file, and the second does not need column names in the file.

The first program expects the headers in the first line of the csv. In case of missing headers, we have to pass them explicitly in the program.

Sample Input

EMPID,FirstName,LastName,Salary
1001,Amal,Jose,100000
1002,Edward,Joe,100001
1003,Sabitha,Sunny,210000
1004,John,P,50000
1005,Mohammad,S,75000

Here the first line of the csv data is the header

Sample Output

[{"EMPID":1001,"FirstName":"Amal","LastName":"Jose","Salary":100000},{"EMPID":1002,"FirstName":"Edward","LastName":"Joe","Salary":100001},{"EMPID":1003,"FirstName":"Sabitha","LastName":"Sunny","Salary":210000},{"EMPID":1004,"FirstName":"John","LastName":"P","Salary":50000},{"EMPID":1005,"FirstName":"Mohammad","LastName":"S","Salary":75000}]
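A sketch of the first program is given below. pandas read_csv treats the first line as the header by default; io.StringIO holds the sample data inline to keep the sketch self-contained, and with a real file you would pass the file path instead.

```python
import io

import pandas as pd

# The sample input from above; with a real file, replace
# io.StringIO(csv_data) with the path to the csv file.
csv_data = """EMPID,FirstName,LastName,Salary
1001,Amal,Jose,100000
1002,Edward,Joe,100001
1003,Sabitha,Sunny,210000
1004,John,P,50000
1005,Mohammad,S,75000
"""

# The first line is used as the header by default
df = pd.read_csv(io.StringIO(csv_data))

# orient='records' produces a json array with one object per row
json_data = df.to_json(orient='records')
print(json_data)
```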

 

If the csv file contains a header row, you should explicitly pass header=0 to override the column names. If headers are not present in the csv file, we have to explicitly pass the field names as a list to the names argument; duplicates in this list are not allowed. A sample implementation is given below.
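A sketch of the second program, with the field names passed through the names argument. The column list mirrors the earlier sample, and io.StringIO again stands in for a real file.

```python
import io

import pandas as pd

# Header-less data; with a real file, replace io.StringIO(csv_data)
# with the path to the csv file.
csv_data = """1001,Amal,Jose,100000
1002,Edward,Joe,100001
"""

# header=None tells pandas there is no header row; the column
# names are supplied through the 'names' argument instead.
columns = ['EMPID', 'FirstName', 'LastName', 'Salary']
df = pd.read_csv(io.StringIO(csv_data), header=None, names=columns)

json_data = df.to_json(orient='records')
print(json_data)
```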

 

How to convert a csv file to a json file?

Sometimes we may get a dataset in csv format that needs to be converted to json format. We can achieve this conversion through multiple approaches; one of them is detailed below. The following program helps you convert a csv file into a multiline json file. Based on your requirement, you can modify the field names and reuse this program.

The sample input is given below.

1001,Amal,Jose,100000
1002,Edward,Joe,100001
1003,Sabitha,Sunny,210000
1004,John,P,50000
1005,Mohammad,S,75000

 

Output multiline json is given below.

{"EmpID": "1001", "FirstName": "Amal", "LastName": "Jose", "Salary": "100000"}
{"EmpID": "1002", "FirstName": "Edward", "LastName": "Joe", "Salary": "100001"}
{"EmpID": "1003", "FirstName": "Sabitha", "LastName": "Sunny", "Salary": "210000"}
{"EmpID": "1004", "FirstName": "John", "LastName": "P", "Salary": "50000"}
{"EmpID": "1005", "FirstName": "Mohammad", "LastName": "S", "Salary": "75000"}
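The conversion above can be sketched with Python's standard csv and json modules. The field names follow the keys in the sample output, and io.StringIO stands in for the input file; with a real file you would open the csv path instead.

```python
import csv
import io
import json

# The sample input from above; replace io.StringIO(csv_data)
# with an open file handle for a real csv file.
csv_data = """1001,Amal,Jose,100000
1002,Edward,Joe,100001
1003,Sabitha,Sunny,210000
1004,John,P,50000
1005,Mohammad,S,75000
"""

# Field names correspond to the keys in the sample output
field_names = ['EmpID', 'FirstName', 'LastName', 'Salary']

# DictReader maps each row to a dict using the supplied field
# names; each dict becomes one json object per output line.
lines = []
for row in csv.DictReader(io.StringIO(csv_data), fieldnames=field_names):
    lines.append(json.dumps(row))

multiline_json = '\n'.join(lines)
print(multiline_json)
```

To produce a file, write multiline_json to the output path instead of printing it.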