DeepStream API: Infer directly from the primary gst infer element (pgie)

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) 1080Ti, RTX 3060
• DeepStream Version 6.0.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.2.0.6
• NVIDIA GPU Driver Version (valid for GPU only) 470.82.01
• Issue Type (questions, new requirements, bugs) questions
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file content, the command line used, and other details for reproducing)
• Requirement details (This is for new requirements. Include the module name - which plugin or which sample application - and the function description)

Hello everyone,

I have created a pgie as the primary model in my pipeline. Now I want to run some inferences on the primary model without reloading it. Is there any DeepStream API I can call that performs inference, taking an image and the pgie as input arguments? In other words, I want to access the primary model defined in the pipeline and run inference on it directly. Is that possible?

Not sure if you are looking for TensorRT, which can be used to do inference without needing DeepStream.

I'm not sure I explained my request clearly. I know how to use an engine file to generate outputs. But I have another question:

For example, I have a model Car.engine (it takes about 2 GB of GPU memory) for detecting cars in the input streams. Now, assume I have another thread that listens for incoming messages. These messages contain some images, and I should run car detection on them too. But if I load Car.engine a second time, I waste 2 GB of GPU memory, since it is already loaded. Consider this scenario: if I have primary, secondary, and tertiary models and I want to load each of them again, separately from the main pipeline, my GPU memory will be exhausted.

I want to generate the output directly myself, without using the pipeline. The engine file is read by the pgie via:


/* Create the primary inference element and point it at the config file.
 * "output-tensor-meta" attaches the raw tensor output to the buffer meta. */
pgie = gst_element_factory_make ("nvinfer", "primary-nvinference-engine");
g_object_get (G_OBJECT (pgie), "batch-size", &pgie_batch_size, NULL);
g_object_set (G_OBJECT (pgie), "config-file-path", config_retina.c_str (),
              "output-tensor-meta", TRUE, "batch-size", num_sources, NULL);

I don't think it's possible. Loading the engine into TensorRT is a precondition for inference, so if you want to switch engines, you have to reload. Alternatively, you would need to upgrade your Jetson to one with more memory.

I found the gst_nvinfer_start function in gstnvinfer.cpp. I think it's responsible for performing inference, but I don't know how to generate input for it, or whether I could call that function without conflicts between the main video stream and my images.

You have two kinds of sources: one is streams (videos) and the other is images. And the images are not continuous, right?

DeepStream currently does not support dynamically changing the batch during PLAYING state. You need to encapsulate the images from your messages into video (with timestamps) and feed it into the pipeline as another stream.
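One way to read "encapsulate your images into video (with timestamps)": each decoded image becomes a video frame with a monotonically increasing PTS, so the muxer treats the image source like any other stream. Below is a minimal, hypothetical sketch of just the timestamp bookkeeping; the 30/1 framerate is an assumption and should match whatever rate you declare in the source caps.

```cpp
#include <cstdint>

// Nanoseconds per second, mirroring GStreamer's GST_SECOND.
constexpr uint64_t NS_PER_SECOND = 1000000000ULL;

// Hypothetical helper: PTS and duration for the n-th image when the
// images are presented as a video stream at a fixed framerate.
struct FrameTiming {
    uint64_t pts;      // presentation timestamp, in nanoseconds
    uint64_t duration; // frame duration, in nanoseconds
};

FrameTiming timing_for_frame(uint64_t frame_index,
                             uint64_t fps_num, uint64_t fps_den) {
    FrameTiming t;
    // Duration of one frame at fps_num/fps_den frames per second.
    t.duration = NS_PER_SECOND * fps_den / fps_num;
    // Timestamps increase monotonically from zero.
    t.pts = frame_index * t.duration;
    return t;
}
```

These values would be assigned to each buffer (e.g. its PTS and duration fields) before pushing it into the pipeline, so downstream elements see a well-timed stream rather than a burst of untimed images.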

Or you need to implement all things directly with TensorRT without deepstream.

Bravo!

I have a continuous source, an RTSP camera: it is always available. But I also have some threads in which I receive images. These threads are checked every 60 seconds, and I poll the incoming messages via Kafka. The images in these messages are base64-encoded, so I have to decode them. And finally, I need to run inference on those images as well.
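For the decoding step mentioned above, a self-contained sketch of turning the base64 payload from a Kafka message back into raw image bytes might look like this (the function name is hypothetical; in practice you may prefer a library routine such as GLib's g_base64_decode):

```cpp
#include <cstdint>
#include <stdexcept>
#include <string>
#include <vector>

// Hypothetical helper: decode a base64 string (as received from a
// Kafka message) into raw bytes before handing the image onward.
std::vector<uint8_t> base64_decode(const std::string &in) {
    static const std::string tbl =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    std::vector<uint8_t> out;
    int val = 0;
    int bits = -8;
    for (char c : in) {
        if (c == '=' || c == '\n' || c == '\r')
            continue;  // padding and line breaks carry no data
        std::string::size_type pos = tbl.find(c);
        if (pos == std::string::npos)
            throw std::runtime_error("invalid base64 character");
        val = (val << 6) + static_cast<int>(pos);
        bits += 6;
        if (bits >= 0) {
            out.push_back(static_cast<uint8_t>((val >> bits) & 0xFF));
            bits -= 8;
        }
    }
    return out;
}
```

The decoded bytes would then still need to be parsed as JPEG/PNG (or whatever format the sender used) before they can be fed into the pipeline as frames.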

If I load the engine file again, separately from the primary engine (note that the primary engine and this engine are the same file; both do car detection), it consumes extra GPU memory for a single model that is now loaded twice.

I did inference with the engine file separately, and it worked well.

Regarding the additional stream: how can I feed my images into it? Is there any sample that could guide me?

I think I've replied in a previous post. You need to encapsulate the images from your messages into video (with timestamps) and feed it into the pipeline as another stream.