Batch saving images in deepstream-image-meta-test is too slow

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.3
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.5.3.1
• NVIDIA GPU Driver Version (valid for GPU only) GPU
• Issue Type( questions, new requirements, bugs) questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hi. How can I save images from multiple streams without lagging? When I set up five 1080p RTSP streams as input and modify the code to save every frame as an image, the video playback stutters. How can I implement multi-threaded image saving to address this issue?
How can I save multiple batches of images using multi-threading while adding only one OSD plugin, for example, with a batch size of 30?

Do you want to save images with OSD (bboxes and texts in the images)? Do you want to save all the frames for every stream? In what format do you want to save them (JPG, PNG, …)?

Hi. When I save images in JPG format upon detecting targets, I noticed that the playback lags as I increase the number of input streams. Is there a way to resolve this issue?

I only need to save each frame when a target is detected.

Will the frame be saved several times when there are several targets in the frame?

When there are multiple targets in the same frame, the frame will only be saved once.
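For reference, here is a minimal sketch of a pgie src pad probe that only queues a JPEG encode for frames that contain at least one detection. It is not the sample's exact code: the probe name and the choice of parameters are assumptions, the field and function names follow nvds_obj_encode.h / gstnvdsmeta.h in DeepStream 6.x, and the "once per frame" behaviour is approximated by encoding just the first detected object, since full-frame encoding support depends on the DeepStream version.

```c
#include <gst/gst.h>
#include "gstnvdsmeta.h"
#include "nvbufsurface.h"
#include "nvds_obj_encode.h"

/* Probe attached to the pgie src pad. "ctx" is the NvDsObjEncCtxHandle
 * passed as user data when the probe is registered. */
static GstPadProbeReturn
pgie_src_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info, gpointer ctx)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  GstMapInfo inmap = GST_MAP_INFO_INIT;

  if (!gst_buffer_map (buf, &inmap, GST_MAP_READ))
    return GST_PAD_PROBE_OK;
  NvBufSurface *ip_surf = (NvBufSurface *) inmap.data;
  gst_buffer_unmap (buf, &inmap);

  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  for (NvDsMetaList * lf = batch_meta->frame_meta_list; lf; lf = lf->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) lf->data;
    if (!frame_meta->obj_meta_list)
      continue;                 /* no target in this frame: nothing to save */

    /* Encode only the first detected object so at most one image is
     * produced per frame. Looping over obj_meta_list instead would save
     * one crop per object. */
    NvDsObjectMeta *obj_meta =
        (NvDsObjectMeta *) frame_meta->obj_meta_list->data;

    NvDsObjEncUsrArgs args = { 0 };
    args.saveImg = TRUE;        /* let the library write the .jpg itself */
    args.attachUsrMeta = FALSE;
    args.quality = 80;
    /* Optionally set args.fileNameImg to control the output file name. */

    /* Queues an encode job; the actual JPEG encoding runs in the context's
     * own thread, not in this probe. */
    nvds_obj_enc_process ((NvDsObjEncCtxHandle) ctx, &args, ip_surf,
        obj_meta, frame_meta);
  }
  /* Flush / wait for all encode jobs queued for this batch. */
  nvds_obj_enc_finish ((NvDsObjEncCtxHandle) ctx);
  return GST_PAD_PROBE_OK;
}
```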

Hello, is there any way to solve this problem? For example, can we use multi-threading to save the images after the callback function returns?

The “nvds_obj_enc_process()” and “nvds_obj_enc_finish()” APIs are already asynchronous.
Which platform are you working on? What is the GPU?
If it is necessary to save all frames from the videos, can you try other formats? E.g. H264, H265, …

Hi. I have an RTX 3070 GPU and I’m using the DeepStream 6.3 development container. If I run two pipelines using “nvds_obj_enc_process()” and “nvds_obj_enc_finish()”, and the two “nvds_obj_enc_process()” calls are independent of each other, will the speed be doubled or remain the same as a single pipeline? I am using H.264 RTSP streams; when you say other formats, do you mean the format of the stream or something else?

You said “nvds_obj_enc_process()” and “nvds_obj_enc_finish()” are already asynchronous. Does this mean that using multiple threads will give the same speed as single-threaded processing? Do all “nvds_obj_enc_process()” calls use the same buffer queue, or something else?

The “nvds_obj_enc_create_context()”, “nvds_obj_enc_process()” and “nvds_obj_enc_finish()” interfaces will encode the frames in another thread. If there is only one encoding context, there is only one encoding thread.
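For illustration, a sketch of creating one context per pipeline so that each pipeline encodes in its own thread. The NUM_PIPELINES constant, the attach_encode_probes helper, and the probe name are assumptions; the sketch also assumes the no-argument form of nvds_obj_enc_create_context(), while some DeepStream releases take a GPU id, so check nvds_obj_encode.h in your install.

```c
#include <gst/gst.h>
#include "nvds_obj_encode.h"

#define NUM_PIPELINES 2         /* hypothetical: one encoding context each */

/* Probe defined elsewhere (see the earlier sketch). */
GstPadProbeReturn pgie_src_pad_buffer_probe (GstPad * pad,
    GstPadProbeInfo * info, gpointer ctx);

static NvDsObjEncCtxHandle enc_ctx[NUM_PIPELINES];

/* Give each pipeline its own encoding context so each pipeline gets its own
 * encoding thread. "pgie" holds the nvinfer element of each pipeline. */
static void
attach_encode_probes (GstElement * pgie[NUM_PIPELINES])
{
  for (guint i = 0; i < NUM_PIPELINES; i++) {
    /* Assumption: no-argument signature; some releases take a GPU id. */
    enc_ctx[i] = nvds_obj_enc_create_context ();
    GstPad *pad = gst_element_get_static_pad (pgie[i], "src");
    gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER,
        pgie_src_pad_buffer_probe, enc_ctx[i], NULL);
    gst_object_unref (pad);
  }
}

/* Call on shutdown to release the per-pipeline contexts. */
static void
release_encode_contexts (void)
{
  for (guint i = 0; i < NUM_PIPELINES; i++)
    nvds_obj_enc_destroy_context (enc_ctx[i]);
}
```

One context per pipeline is usually enough; as noted later in the thread, each extra context adds some memory overhead.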

The JPEG encoding itself is very fast (at the microsecond level) if the “saveImg” parameter of NvDsObjEncUsrArgs is set to FALSE. Saving the image file may take more time because of the I/O limitation. There is already a sample that separates the JPEG encoding and the image file saving into different threads: /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-image-meta-test
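Roughly, that two-stage pattern looks like the sketch below (abbreviated, not the sample's exact code; helper names and the output path are hypothetical): the upstream probe only queues the encode with saveImg = FALSE and attachUsrMeta = TRUE, and a downstream probe (or your own worker thread) pulls the NVDS_CROP_IMAGE_META user meta and writes the bytes to disk, so file I/O never blocks the encoding thread.

```c
#include <stdio.h>
#include "gstnvdsmeta.h"
#include "nvbufsurface.h"
#include "nvds_obj_encode.h"

/* Stage 1 (e.g. in the pgie src pad probe): queue the encode only.
 * saveImg = FALSE means the library does not touch the filesystem; the JPEG
 * bytes are attached to the object as NVDS_CROP_IMAGE_META user meta. */
static void
queue_encode_only (NvDsObjEncCtxHandle ctx, NvBufSurface * surf,
    NvDsObjectMeta * obj_meta, NvDsFrameMeta * frame_meta)
{
  NvDsObjEncUsrArgs args = { 0 };
  args.saveImg = FALSE;         /* encode only, no file I/O in the library */
  args.attachUsrMeta = TRUE;    /* attach the JPEG bytes as user meta */
  args.quality = 80;
  nvds_obj_enc_process (ctx, &args, surf, obj_meta, frame_meta);
  /* ...and call nvds_obj_enc_finish (ctx) once per batch, after the loop. */
}

/* Stage 2 (e.g. in the osd sink pad probe, or handed to a worker thread):
 * read the attached JPEG and write it to disk. "path" is a hypothetical
 * output file name chosen by the caller. */
static void
write_attached_jpegs (NvDsObjectMeta * obj_meta, const char *path)
{
  for (NvDsMetaList * l = obj_meta->obj_user_meta_list; l; l = l->next) {
    NvDsUserMeta *user_meta = (NvDsUserMeta *) l->data;
    if (user_meta->base_meta.meta_type != NVDS_CROP_IMAGE_META)
      continue;
    NvDsObjEncOutParams *enc =
        (NvDsObjEncOutParams *) user_meta->user_meta_data;
    FILE *fp = fopen (path, "wb");
    if (fp) {
      fwrite (enc->outBuffer, 1, enc->outLen, fp);
      fclose (fp);
    }
  }
}
```

With this split, the writer side can simply hand the path and buffer to a dedicated writer thread or thread pool, which keeps slow disk I/O from stalling the pipeline.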

Thank you for your response. I will use “nvds_obj_enc_create_context” to create multiple contexts to test the speed of saving images and see if it can accelerate the image saving process.

By the way, does this method of taking screenshots use GPU encoding? Is there a limit on the number of encoding contexts that can be created with “nvds_obj_enc_create_context”?

What do you mean? What is “this method”?

There is no software limitation, but there may be extra memory consumption with multiple contexts.

I was referring to using the nvds_obj_enc_process method to save screenshots. Thank you, I don’t have any more questions for now.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.