Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) 1080Ti, RTX 3060
• DeepStream Version 6.0.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 220.127.116.11
• NVIDIA GPU Driver Version (valid for GPU only) 470.82.01
• Issue Type( questions, new requirements, bugs) questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
I want to run inference in DeepStream using a specific engine, such as the primary engine, without loading it a second time, because reloading it consumes GPU memory again. Currently I reload it, and that costs roughly 1 GB of additional GPU memory. Is there a way to do that?
Can you specify more details about your case?
I hope I can explain it; I know it's somewhat of a special case. I want to run inference myself, not via the pipeline. When the primary engine is set up in the pipeline, I can only read metadata outputs from the input streams; I cannot feed an arbitrary image to this engine and then parse the metadata from it. I'd like to feed it from outside the pipeline as well.
Reload in current process or another process?
"I want to do inference myself not via pipeline."
Did you want to call TRT interface to do inference instead of DS pipeline?
"I could not feed desired image to this engine and then parse metadata from it"
Why? Do you mean skipping the preprocessing? But where would you do this, implemented in the model?
No. I want to do inference via the pipeline.
Assume you have a car detection model. The pgie generates metadata, and this metadata contains a mat (in dsexample). Now I want to run the pgie again on this mat. I want to avoid loading the TRT engine twice.
We do not support this feature.
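For reference, the load-once idea the question is driving at can be sketched outside DeepStream: deserialize the engine a single time and hand the same object to every caller. The `load_engine` helper below is hypothetical and its TensorRT call is stubbed out so the sketch is self-contained; DeepStream's nvinfer plugin does not expose its internal engine this way.

```python
# Sketch only: cache the deserialized engine so it is loaded at most
# once per path. In real TensorRT the body would be (assumption):
#     with open(engine_path, "rb") as f:
#         return runtime.deserialize_cuda_engine(f.read())
# and ICudaEngine.create_execution_context() may be called repeatedly,
# so several inference paths can share one copy of the weights in GPU
# memory instead of deserializing the plan file twice.
import functools

@functools.lru_cache(maxsize=None)
def load_engine(engine_path: str):
    """Deserialize an engine file at most once per path (stand-in)."""
    return object()  # placeholder for the deserialized ICudaEngine

engine_a = load_engine("primary.engine")
engine_b = load_engine("primary.engine")  # cache hit: no second load
assert engine_a is engine_b
```

Within a DeepStream pipeline itself this sharing is not available, which is what the answer above confirms.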