Production deployment of a Python Web Service (Flask / Tornado Application)

Flask and Tornado are two of the most popular Python frameworks for developing RESTful services.

Do you know how to develop and deploy a production-grade Python application?

A sample Flask web service, sample_flask.py, is given below. It has a single endpoint, /requestme, which is a GET method. I am not focusing on coding standards here; my goal is to show the production deployment of a Python application.
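A minimal sketch of such a service is shown below; the handler name and response body are illustrative, and only the /requestme GET route and port 9090 follow the description above.

# sample_flask.py -- a minimal Flask service with a single GET endpoint
from flask import Flask, jsonify

app = Flask(__name__)

# The only endpoint: GET /requestme
@app.route("/requestme", methods=["GET"])
def request_me():
    # Illustrative response body; the real handler can return anything
    return jsonify({"message": "hello from the sample flask service"})

if __name__ == "__main__":
    # Flask's built-in development server, listening on port 9090
    app.run(host="0.0.0.0", port=9090)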

We can run this program from the command line by executing the following command.

> python sample_flask.py

The service will be up and running on port 9090. You will be able to make requests to the application using the URL http://ipaddress:9090/requestme.
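For a quick local check, you can call the endpoint from Python (this assumes the requests library is installed and the service is running on the same machine):

import requests

# Call the single endpoint exposed by the sample service
response = requests.get("http://localhost:9090/requestme")
print(response.status_code)
print(response.text)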

How many requests can this Python web service handle?

10, or 20, or 100? … Any guess?

Definitely this is not going to handle many requests. Flask's built-in development server is good for development trials and experiments, but we cannot deploy something like this in a production environment.

How to scale Python applications?

Refer to the diagram below. It shows multiple instances of the Flask application running under the Gunicorn WSGI server, reverse proxied and load balanced through the Nginx web server.

[Diagram: Production Deployment of a Python Flask Application]

A sample Nginx configuration that implements the reverse proxy and load balancing is given below. This is a minimal configuration and does not include any advanced parameters.

server {
    listen 80;
    server_name myserverdomain;

    location / {
        # Forward incoming requests to the Gunicorn backends defined below
        proxy_pass http://backend;
    }
}

upstream backend {
    # Requests are distributed across these servers in round-robin fashion
    server gunicornapplication1:8080;
    server gunicornapplication2:8080;
}

The upstream section routes the requests to the two Gunicorn backends in a round-robin manner. We can add as many backend servers as we need, based on the load.

How to run the Python application with Gunicorn?

First, let's install Gunicorn.

> pip install gunicorn

Now it is simple; run the following command.

> gunicorn -w 4 -b 0.0.0.0:8080 sample_flask:app

Now our application will run with 4 workers, bound to port 8080 (the port the Nginx upstream above points to). Each worker is a separate process and can handle requests on its own; Gunicorn takes care of distributing the incoming requests among the workers.
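The same settings can also be kept in a Gunicorn configuration file, which is just a Python module. A minimal sketch is below; the file name gunicorn_conf.py and the values are only examples.

# gunicorn_conf.py -- example Gunicorn settings
# Start with: gunicorn -c gunicorn_conf.py sample_flask:app

# Address and port the server binds to
# (matches the backend port used in the Nginx upstream above)
bind = "0.0.0.0:8080"

# Number of worker processes; a common rule of thumb is (2 x CPU cores) + 1
workers = 4

# Default synchronous workers; async workers such as "gevent"
# require the corresponding package to be installed
worker_class = "sync"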

We can start multiple Gunicorn instances like this and keep them behind Nginx. This is how we scale our Python applications.

Hope this helps 🙂 

