How to save every frame streamed through deepstream-test1-usbcam, to check the image quality after saving?

• Hardware Platform: Xavier NX
• DeepStream Version: 5/5.1
• JetPack Version: 4.4/4.5.1
I have tested that the deepstream-test1-usbcam code works well and streams at almost 30 fps. I want to save each frame (or every 30th frame) as an image to check the quality, because we were using OpenCV to read and write frames and that gives only 10-15 fps. In the example code I checked, the image is saved only when there is a detection, but I want to save every frame. Is it possible? I need your ideas.
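A minimal sketch of the "every 30th frame" gating described above. The `should_save` helper and the interval constant are my own names for illustration, not from any DeepStream sample:

```python
SAVE_INTERVAL = 30  # keep one frame out of every 30 (~1 per second at 30 fps)

def should_save(frame_num: int, interval: int = SAVE_INTERVAL) -> bool:
    """Return True for frame numbers that should be written to disk."""
    return frame_num % interval == 0
```

Inside a DeepStream buffer probe, `frame_meta.frame_num` provides the running frame index that this check would be applied to.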

Are you referring to the sample deepstream_python_apps/apps/deepstream-imagedata-multistream at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub?

If so, every frame is available in the tiler_sink_pad_buffer_probe() function. If you don’t want the object detection check, just remove the “while l_obj is not None” loop.
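For reference, here is a hedged sketch of what the modified probe from deepstream-imagedata-multistream could look like with the per-object detection loop removed, so every batched frame gets written. `pyds`, `cv2`, and `gi` are only available on a DeepStream device, so the imports are guarded here; the `frame_filename` helper and the `frames` folder are my own invention, not part of the sample:

```python
import os

try:
    import cv2   # image encoding; only present on the target device
    import pyds  # DeepStream Python bindings
except ImportError:
    cv2 = pyds = None

def frame_filename(folder: str, stream_id: int, frame_num: int) -> str:
    """Hypothetical naming scheme for the saved frames."""
    return os.path.join(folder, f"stream_{stream_id}_frame_{frame_num}.jpg")

def tiler_sink_pad_buffer_probe(pad, info, u_data):
    # Sketch based on the deepstream-imagedata-multistream sample, with the
    # "while l_obj is not None" loop removed so every frame is saved.
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Map the frame into a NumPy array (RGBA in this sample's pipeline),
        # convert for OpenCV, and write it out.
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        frame = cv2.cvtColor(n_frame, cv2.COLOR_RGBA2BGRA)
        cv2.imwrite(frame_filename("frames", frame_meta.pad_index,
                                   frame_meta.frame_num), frame)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```

The probe is attached to the tiler's sink pad exactly as in the original sample; only the body of the frame loop changes.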

Yes @Fiona.Chen, I was testing the USB camera code here: deepstream_python_apps/apps/deepstream-test1-usbcam at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub, but there is no option there to save frames. I found the option here: deepstream_python_apps/apps/deepstream-imagedata-multistream at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub, but it only works when a detection happens. So I just have to remove the while loop to save all the frames! Let me try and come back.

@Fiona.Chen thanks, I am able to save every frame now, but the fps fell very low. One more question: how can I stop the streaming part so I can get better fps?

That is because you are using OpenCV to compress and save the images; OpenCV is very slow. I don’t understand what you mean by “stop the streaming part”.

I meant stopping the display part. I found the answer in a different thread.

If OpenCV is very slow, is there any other method to save the frames? I only found the OpenCV method in the DeepStream example.
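One way to sidestep OpenCV's JPEG compression cost (my own suggestion, not something from the thread) is to dump frames in an uncompressed format such as binary PPM, which is just a short ASCII header followed by raw RGB bytes, and convert them offline later. A minimal writer using only the standard library:

```python
def write_ppm(path: str, rgb_bytes: bytes, width: int, height: int) -> None:
    """Write tightly packed 8-bit RGB pixel data as a binary PPM (P6) file.

    No compression is performed, so the per-frame CPU cost is essentially
    just the disk write; at 30 fps this trades CPU time for disk space.
    """
    assert len(rgb_bytes) == width * height * 3, "expected packed RGB data"
    with open(path, "wb") as f:
        f.write(f"P6 {width} {height} 255\n".encode("ascii"))
        f.write(rgb_bytes)
```

In the probe, the RGBA array returned by pyds.get_nvds_buf_surface() would first need its alpha channel dropped before being passed in as bytes.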

For the Python apps, OpenCV is the only way. Python is not so efficient.

Yes, gotcha, thanks @Fiona.Chen.