Please provide complete information as applicable to your setup.
• Hardware Platform (GPU): NVIDIA GeForce GTX 1650
• DeepStream Version: 6.4
• NVIDIA GPU Driver Version (valid for GPU only): 525.147.05 / CUDA Version: 12.0
• Issue Type (questions, new requirements, bugs): question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name — which plugin or which sample application — and the function description.)
Hello,
I know that if I have an ONNX model, DeepStream will create an engine file from it as soon as I start my pipeline.
However, how can I create one without starting the pipeline? Is there a way to do this beforehand?
If you have already built the engine, let's say using trtexec, then as long as the path is correct, DeepStream will not generate an engine from your ONNX file, since an engine already exists at that path:
model-engine-file=engine_fp16.engine
For now, try keeping your engine in the same directory from which you launch your pipeline.
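As an illustration, here is a minimal sketch of building the engine ahead of time with trtexec; the file names and the FP16 flag are assumptions, so adjust them for your model:

```shell
# Build an FP16 TensorRT engine from an ONNX model ahead of the pipeline,
# so DeepStream can load it directly via model-engine-file.
# (model.onnx and engine_fp16.engine are placeholder names.)
trtexec --onnx=model.onnx \
        --saveEngine=engine_fp16.engine \
        --fp16
```

Note that an engine built this way is tied to the GPU and TensorRT version it was built on, so build it on the same machine that runs the pipeline.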
The difference in time to generate the TensorRT engine should not be significant.
DeepStream also uses a TensorRT engine under the hood. DeepStream is an IVA framework, basically running a GStreamer pipeline in GPU memory.
You can use trtexec, DeepStream, Python code, or C++ code to generate a TensorRT engine from the ONNX model; the difference in time would be small, if not negligible.
At least this is my understanding — please feel free to correct me.
Glad that everything is working for you though. Good Luck.
If there is no TensorRT engine, DeepStream will create a new engine file. If there is already an engine file, DeepStream will load the engine directly when the model-engine-file property is set.
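Concretely, the two nvinfer config properties interact like this (the file names are illustrative placeholders):

```
[property]
# If the engine file exists at this path, DeepStream loads it directly
# and skips the ONNX-to-engine build step.
model-engine-file=engine_fp16.engine
# Otherwise, DeepStream builds a new engine from this ONNX model at startup.
onnx-file=model.onnx
```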
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.