Sometimes we have use cases where we must deal with a very large response from a REST endpoint. For example, I have a REST endpoint whose responses range up to several GBs in size. If we make the request the normal way, the program holds the entire body in memory and may even crash with out-of-memory errors.

The best approach to this kind of scenario is to stream the response. That way the program never holds the entire response in memory; instead it reads the data in chunks and writes each chunk to the target, which in our case is a file.

The sample program is given below. It demonstrates a POST request, but the same works with a GET request. The chunk size can be adjusted to suit your environment, which is very useful if you have to run the program on a machine with little memory while dealing with large responses.

import requests

session = requests.Session()
authentication = {"USER": "", "PASSWORD": ""}
payload = {"query": "some query"}
local_file = "data.json"
# This is a dummy URL. You can replace this with the actual URL
URL = "https://sampledatadowload.com/somedata"

# stream=True tells requests not to download the body up front.
# verify=False disables TLS certificate verification, so only use it
# against endpoints you trust.
with session.post(URL, stream=True, data=payload,
                  auth=(authentication["USER"], authentication["PASSWORD"]),
                  verify=False) as r:
    r.raise_for_status()
    with open(local_file, 'wb') as f:
        # iter_content yields the body piece by piece; a larger
        # chunk_size means fewer, bigger writes.
        for chunk in r.iter_content(chunk_size=128):
            f.write(chunk)
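For completeness, here is a sketch of the GET variant mentioned above, with the download loop factored into a small helper. The URL is the same dummy placeholder as in the POST example, and the helper name, chunk size, and timeout are my own choices, not anything prescribed by requests.

```python
import requests


def stream_to_file(response, local_file, chunk_size=1024 * 1024):
    # Write a streamed response to disk chunk by chunk. `response` only
    # needs an `iter_content` method, so this works with any requests
    # response opened with stream=True.
    with open(local_file, "wb") as f:
        for chunk in response.iter_content(chunk_size=chunk_size):
            f.write(chunk)


def download(url, local_file, chunk_size=1024 * 1024):
    # GET variant of the POST example above; stream=True keeps the
    # body out of memory until we iterate over it.
    with requests.get(url, stream=True, timeout=60) as r:
        r.raise_for_status()
        stream_to_file(r, local_file, chunk_size)
```

Usage would look like `download("https://sampledatadowload.com/somedata", "data.json")`; nothing is fetched until you iterate, so peak memory stays near one chunk regardless of the response size.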
