How to do inference from primary engine without reloading it?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) 1080Ti, RTX 3060
• DeepStream Version 6.0.0
• TensorRT Version 8.2.0.6
• NVIDIA GPU Driver Version (valid for GPU only) 470.82.01
• Issue Type( questions, new requirements, bugs) questions

Hello everyone!

I want to run inference in DeepStream with a specific engine, such as the primary engine, without loading it a second time, because each load consumes GPU memory again. Currently I reload it, and that costs roughly 1 GB of GPU memory. Is there a way to avoid this?
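For reference, one way to avoid the second copy within a single process is to deserialize the engine file at most once and hand the same object to every caller. Note that DeepStream's nvinfer plugin does not expose its internal engine, so this is only a sketch of a process-local cache for the case where you drive TensorRT yourself; the `deserialize` parameter is a placeholder for whatever actually builds the engine, e.g. `trt.Runtime(logger).deserialize_cuda_engine(...)`:

```python
# Process-local engine cache: each engine file is deserialized at most
# once, so a second request for the same path reuses the object already
# resident in GPU memory instead of allocating another ~1 GB copy.
_engine_cache = {}

def get_engine(path, deserialize):
    """Return the engine for `path`, invoking `deserialize(path)` only on
    the first request. `deserialize` stands in for your real loader,
    e.g. trt.Runtime(logger).deserialize_cuda_engine(plan_bytes)."""
    if path not in _engine_cache:
        _engine_cache[path] = deserialize(path)
    return _engine_cache[path]
```

With TensorRT you would then create an execution context from the cached engine once (`engine.create_execution_context()`) and reuse that context for every image you feed in yourself.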

Can you give more details about your use case?

I hope I can explain it; I know it is a somewhat special case. I want to run inference myself, not through the pipeline. When the primary engine is set up in the pipeline, I can only read the metadata produced from the input streams; I cannot feed an arbitrary image to that engine and then parse the metadata from it. I would like to be able to feed images from outside the pipeline as well.

Do you reload it in the current process or in another process?

I want to do inference myself not via pipeline.

Do you want to call the TensorRT interface directly to run inference, instead of using the DeepStream pipeline?

I could not feed desired image to this engine and then parse metadata from it

Why not? Do you mean you want to skip the preprocessing? And where did you do that, is it implemented in the model?