DeepStream pipeline: start and stop the pipeline without reloading model files for multi-session inferencing

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only) 4.6.1
• TensorRT Version 8.2.1.8
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) Question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hi,

We are working on a solution where we have to start inferencing on USB cameras upon a hardware interrupt. To achieve this, every time we receive a hardware interrupt we start the pipeline, and we stop it when another interrupt arrives from the hardware.

The problem with this approach is that every time we start and stop the pipeline, the elements responsible for inference reload the model weights. That reload takes a noticeable amount of time, which is not feasible for the edge solution we want to port to.

Our DeepStream pipeline contains one primary detector and one secondary classifier, so both models get reloaded on every start/stop of the pipeline.

Any ideas on how to hold the models in the pipeline's context and feed the pipeline with data as needed, based on hardware events?

You can configure the TensorRT engine file as the input instead of the model file. See Gst-nvinfer — DeepStream 6.3 Release documentation
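For reference, a minimal sketch of what that looks like in a Python app (the element name, config file name, and engine file name below are placeholders, not from your setup):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Placeholder names for illustration only.
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property("config-file-path", "pgie_config.txt")

# Inside pgie_config.txt, point nvinfer at the serialized TensorRT engine so
# it deserializes the engine directly instead of rebuilding it from the model:
#   [property]
#   model-engine-file=primary_detector_b1_gpu0_fp16.engine
```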

How did you stop the pipeline? Just setting the pipeline state to “NULL”? Anything else?

Hello Fiona,

We are already using the engine file; that is what I was referring to as the model file. Even loading the engine file takes time: on the Xavier board it is close to 8 seconds for both engine files to be loaded. So we are looking for a way to hold these loaded engine files in context and reuse the element repeatedly based on hardware events.


Yes, we set the state to NULL when the interrupt comes up.

So a possible way is to stop and restart the source element only. Please refer to deepstream_reference_apps/runtime_source_add_delete at master · NVIDIA-AI-IOT/deepstream_reference_apps (github.com)
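A rough sketch of the idea (the helper functions and the use of uridecodebin below are illustrative and simplified from the sample, not a drop-in implementation):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def remove_source(pipeline, source_bin):
    # Tear down only the source; nvinfer keeps its loaded engine because the
    # rest of the pipeline never leaves the PLAYING state.
    source_bin.set_state(Gst.State.NULL)
    pipeline.remove(source_bin)

def add_source(pipeline, streammux, uri, index):
    # Illustrative helper: create a new source and link it to nvstreammux.
    source_bin = Gst.ElementFactory.make("uridecodebin", f"source-{index}")
    source_bin.set_property("uri", uri)

    def on_pad_added(_bin, pad):
        # A real implementation should check the pad caps for video here.
        sinkpad = streammux.get_request_pad(f"sink_{index}")
        pad.link(sinkpad)

    source_bin.connect("pad-added", on_pad_added)
    pipeline.add(source_bin)
    source_bin.sync_state_with_parent()
    return source_bin
```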

Let me give this a try and get back if this solves it.

Thank you

Hello @nagabharath.vadla Do you still need support for this topic? Or should we close it? Thanks.

Hello Yingliu,

We have been working on replicating this with our pipeline. I planned to get back to you with any issues we see during the replication, which is why I was holding off. Please keep the ticket open for some more time.

Meanwhile, related to the same topic, can we do the same runtime deletion and addition on the filesink element to create new output videos based on an external signal?
Or does that need a different approach?

Thank you for your patience

It is similar. But if you just want to change the output file name (including the path), there is no need to delete the element; just changing the “location” property is enough.

Please use “gst-inspect-1.0 filesink” to check the usage of the properties. filesink (gstreamer.freedesktop.org)
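For example, a minimal sketch of switching the output file (the exact flushing needed for the upstream encoder/muxer depends on your pipeline; this only shows the property change itself):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def switch_output_file(filesink, new_path):
    # "location" can only be changed while the element is not streaming,
    # so drop the sink to NULL first, then bring it back up.
    filesink.set_state(Gst.State.NULL)
    filesink.set_property("location", new_path)
    filesink.sync_state_with_parent()
```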

Thank you for the response. While trying to hold the pipeline in context across different video files as different sources, we get an internal data stream error once end-of-stream is received from the initial source. The moment the bus gets the EOS event, even adding another source does not help. Can you please suggest how we can avoid getting the EOS on the bus and keep the pipeline running until a new message with a new video file as input is received?

No. If you want to change the source, EOS is a MUST to finish the previous session correctly.

In the runtime source add/delete code, I see that the per-source EOS is received as Gst.MessageType.ELEMENT, while the pipeline-wide EOS arrives on the bus as Gst.MessageType.EOS. Once the code reaches the point where Gst.MessageType.EOS is posted, adding a source afterwards gives us an internal data stream error.
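For reference, this is roughly how the two show up in our bus handler (a trimmed sketch modeled on deepstream_rt_src_add_del.py, not our exact code):

```python
def bus_call(bus, message, loop):
    t = message.type
    if t == Gst.MessageType.ELEMENT:
        # Per-stream EOS: nvstreammux posts a custom "stream-eos" element
        # message carrying the finished stream's id; the pipeline keeps
        # running and that source can be removed safely.
        struct = message.get_structure()
        if struct is not None and struct.has_name("stream-eos"):
            parsed, stream_id = struct.get_uint("stream-id")
            if parsed:
                print(f"Got EOS from stream {stream_id}")
    elif t == Gst.MessageType.EOS:
        # Pipeline-wide EOS: every source has finished. After this point,
        # adding a new source gives us the internal data stream error.
        loop.quit()
    return True
```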

Here are the steps we followed.

Step 1: Listen on a queue for jobs to process.
Step 2: When a message is received on the queue, it gives us the names of the video files that need to be processed; in our case we need to process 2 video streams at a time to generate the output.
Step 3: Once the processing is done, we aggregate the results from both streams and then wait for another job from the queue.
Step 4: When another job is received, we repeat from Step 2 (a rough sketch of this loop follows the list).
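Here, pipeline_ctrl and its methods are hypothetical placeholders wrapping the add/remove-source logic from the sample; this only shows the intended flow, not real APIs:

```python
import queue

def job_loop(job_queue: "queue.Queue", pipeline_ctrl):
    # pipeline_ctrl is a hypothetical wrapper around the already-running
    # pipeline, exposing add/remove-source helpers built on the runtime
    # add/delete sample. All names are placeholders.
    while True:
        job = job_queue.get()                  # Steps 1-2: block until a job arrives
        video_a, video_b = job["files"]        # two video files per job
        src_a = pipeline_ctrl.add_source(video_a)
        src_b = pipeline_ctrl.add_source(video_b)
        pipeline_ctrl.wait_for_stream_eos([src_a, src_b])  # per-stream EOS only
        results = pipeline_ctrl.aggregate_results()        # Step 3
        pipeline_ctrl.remove_source(src_a)
        pipeline_ctrl.remove_source(src_b)     # pipeline stays PLAYING for Step 4
        print(results)
```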

The problem is how to hold the pipeline after the first pair of videos has been processed. When we tried adding the sources using the provided sample code deepstream_rt_src_add_del.py, we see an internal data stream error, preceded by Gst.MessageType.EOS being detected.

When we looked at the logs of the provided sample Python file deepstream_rt_src_add_del.py, we did not see the Gst.MessageType.EOS signal until the very end of the execution, by which time sources had already been added at runtime.

Any suggestions around this ?

@Fiona.Chen Need your comment here, thanks.

Can you upgrade to the latest DeepStream 6.1.1? Does the deepstream_rt_src_add_del.py sample work on your platform?

Hello Fiona,

Unfortunately, upgrading to 6.1.1 is not possible because of the BSP we use on the VVDN Jetson Xavier platform. Yes, the original deepstream_rt_src_add_del.py did work on the Jetson platform with 6.0; the problem appears only after we modified it to process a job of 2 video files each time, driven by external queuing.

There has been no update from you for a while, so we assume this is not an issue anymore.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Can you provide a simple app to reproduce your failure?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.