Efficient way to store images to the file system using DeepStream instead of OpenCV

Hello Team,

Is there an efficient way to write images to file (other than cv2.imwrite)? It is impacting our throughput FPS. We need this to save alert images when an object is detected in our video analytics application.

We have taken deepstream-imagedata-multistream.py as a reference, but unfortunately it also uses cv2.imwrite to save images.

DeepStream version: 6.4

Thanks and Regards,
Udaykiran Patnaik.

Hi all, any help is highly appreciated. Even using an L4 GPU is not helping us, and latency increases a lot when using cv2.imwrite(). Please support. Is there any alternative?

There is currently no good alternative in Python. But if you can use C/C++, we have a fast hardware encoding solution. You can refer to the source code in sources\apps\sample_apps\deepstream-image-meta-test.

We are not proficient in C/C++. Is there any sample source where I can modify the C/C++ code, create a Python binding for it, and use it inside my Python code?

Yes. You need to bind the APIs declared in sources\includes\nvds_obj_encode.h. You can refer to our deepstream-image-meta-test demo to learn how to use them.
For how to bind the APIs, you can refer to CUSTOMUSERMETAGUIDE.md.

I had a similar requirement a while back.
We had to create a video for every 500 images, save it on our Jetson, and then send it to another machine over FTP.

Due to our various requirements we ended up using ROS 2, so for creating and saving video, we first published the images and bounding boxes from DeepStream and then subscribed to them in another ROS 2 node.

There we had a Python multiprocessing function that drew the bounding boxes on the images, created the video, saved it, and also transferred it. There may have been some latency, but not enough to worry us, since our Detectron2 model was running at 25 FPS.
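The hand-off described above can be sketched as a simple producer/consumer pattern: the pipeline pushes (frame, boxes) items into a bounded queue, and a separate worker drains the queue, draws the boxes, and flushes a "video" every 500 frames. This is only a minimal, self-contained illustration, not the poster's actual code: the original used multiprocessing and OpenCV, while this sketch uses a background thread and plain NumPy so it runs without extra dependencies; the 64x64 frames, queue size, and box coordinates are placeholders (only the 500-image batch size comes from the post).

```python
import queue
import threading
import numpy as np

def draw_box(frame, box):
    """Draw a 1-pixel white rectangle (x1, y1, x2, y2) in place.
    Stand-in for cv2.rectangle, to keep the sketch dependency-free."""
    x1, y1, x2, y2 = box
    frame[y1, x1:x2 + 1] = 255
    frame[y2, x1:x2 + 1] = 255
    frame[y1:y2 + 1, x1] = 255
    frame[y1:y2 + 1, x2] = 255

def writer(q, batch, videos):
    """Consumer: drain (frame, boxes) items, draw, and batch into videos.
    In the real node this is where cv2.VideoWriter / FTP transfer would run."""
    while True:
        item = q.get()
        if item is None:               # sentinel: producer is done
            break
        frame, boxes = item
        for box in boxes:
            draw_box(frame, box)
        batch.append(frame)
        if len(batch) >= 500:          # "a video for every 500 images"
            videos.append(np.stack(batch))  # placeholder for writing a video file
            batch.clear()

# Producer side (in real life: the DeepStream probe / ROS 2 subscriber)
q = queue.Queue(maxsize=64)            # bounded, so the pipeline can't run away
batch, videos = [], []
t = threading.Thread(target=writer, args=(q, batch, videos))
t.start()
for _ in range(1000):
    frame = np.zeros((64, 64), dtype=np.uint8)
    q.put((frame, [(4, 4, 20, 20)]))
q.put(None)                            # signal end of stream
t.join()
```

The key point is that the slow work (drawing, encoding, disk or network I/O) happens off the pipeline thread; the bounded queue gives back-pressure instead of unbounded memory growth.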

I do not know if this is of any help to you, but I thought I would share my experience. Good luck.

@rajupadhyay59 Thanks for the suggestion. We are not currently using ROS, but that's a good idea for a future enhancement. What end-to-end throughput FPS are you getting for a single stream? How many streams are you using (live or recorded)? What is the total resource consumption? What computer vision tasks are involved in your pipeline? And how do you use ROS 2 with DeepStream? Is there any reference we can follow?

It's a live stream that runs the entire day.
Our pipeline with only nvinfer runs at 28 FPS, but since we have a lot of other processing, we have capped the FPS at 25.

We have OpenCV-related preprocessing (a custom GStreamer element that does the preprocessing in GPU memory), preprocessing of the image before it goes to the model, nvinfer, and then extracting the image, segmentation, and bounding-box data with tracker IDs and publishing them using ROS 2.

We have different ROS nodes, each with tasks of its own, like 3D position calculation; one node is for the GUI, one for video creation and saving, etc.

Here is a helpful link for ROS 2 with DeepStream. I have tested it with both Foxy and Humble.

Good Luck.


Have you tried libjpeg-turbo? Encode the image to a JPEG buffer and write the binary with a plain Python file write. I believe that is a sequential write rather than a random write. I have used this in the past with Python, but using C++ and the hardware encoder would be faster.

from turbojpeg import TurboJPEG

jpeg = TurboJPEG()
image_buffer = jpeg.encode(image, quality=97, pixel_format=0)  # 0 = TJPF_RGB
with open("test.jpg", "wb") as out_file:
    out_file.write(image_buffer)