Convert ONNX to engine outside of the Pipeline

Please provide complete information as applicable to your setup.

• Hardware Platform (GPU): NVIDIA GeForce GTX 1650
• DeepStream Version: 6.4
• NVIDIA GPU Driver Version: 525.147.05 / CUDA Version: 12.0
• Issue Type: question

Hello,

I know that if I have an ONNX model, DeepStream will create an engine file from it as soon as I start my pipeline.

However, how can I create one without starting the pipeline? Is there a way to do this beforehand?

Best regards

Please refer to this code.
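As a minimal sketch, assuming TensorRT's bundled trtexec tool is available and using hypothetical file names, an engine can also be built ahead of time, outside any pipeline:

trtexec --onnx=model.onnx --saveEngine=engine_fp16.engine --fp16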

Hello,

I tried to do it like this, but when starting the application, it still tries to convert, as it cannot open the model.

Where can I look up what exactly DeepStream does behind the scenes while converting the ONNX file to a TensorRT engine?

If you have already built the engine, let's say using trtexec, DeepStream will not generate an engine from your ONNX file, since an engine already exists, as long as the path to the engine is correct:

model-engine-file=engine_fp16.engine

For now, try keeping your engine in the same directory from which you launch your pipeline.
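Alternatively, as a sketch with a hypothetical absolute path, you can point nvinfer at the engine directly so the launch directory does not matter:

model-engine-file=/home/user/models/engine_fp16.engine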

Many thanks for your super quick reply.

However, it threw an error regarding the batch size…

I tried some things, but I could not really start the pipeline with the custom-built engine.

So what I wanted was to do some “reverse engineering”: see how DeepStream converts the model to the engine file, and find where I am making the mistake.

I am unsure whether the way the engine is being created is the issue here, but I get your point, so try it out.

I would really like to see the error the pipeline throws when you use DeepStream with the engine generated by trtexec.

Could you share the whole log? If you set model-engine-file, please make sure the batch-size setting is the same as the batch size of the engine file.
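For example, as a sketch assuming a dynamic-batch ONNX model whose input tensor is named input (the tensor name and dimensions here are hypothetical), you could build the engine for batch size 1 and match it in the nvinfer config:

trtexec --onnx=model.onnx --saveEngine=engine_fp16.engine --fp16 --minShapes=input:1x3x544x960 --optShapes=input:1x3x544x960 --maxShapes=input:1x3x544x960

batch-size=1
model-engine-file=engine_fp16.engine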

It works now, and I was able to set the right parameters.

However, if I use TensorRT directly, it takes more than one hour on my machine to convert the ONNX model to a TensorRT engine.

If DeepStream converts the ONNX model automatically, it takes only a couple of minutes…

So what's the difference between the two? What does DeepStream do under the hood?

The difference in time to generate the TensorRT engine should not be that large.
DeepStream also just uses a TensorRT engine. DeepStream is an IVA (intelligent video analytics) framework, basically running a GStreamer pipeline in GPU memory.

You can use trtexec, DeepStream, Python code, or C++ code to generate a TensorRT engine from the ONNX model; the difference in time should be small, if not negligible.
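As a rough sketch of what such Python code looks like (a generic TensorRT 8.x example with hypothetical file names, not DeepStream's internal code), generating an FP16 engine from an ONNX model goes roughly like this:

import tensorrt as trt

ONNX_PATH = "model.onnx"            # hypothetical path
ENGINE_PATH = "engine_fp16.engine"  # hypothetical path

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
# ONNX models require an explicit-batch network definition
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open(ONNX_PATH, "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("failed to parse the ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # comparable to network-mode=2 in nvinfer

# Build and serialize the engine, then write it to disk
engine_bytes = builder.build_serialized_network(network, config)
with open(ENGINE_PATH, "wb") as f:
    f.write(engine_bytes)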

At least this is my understanding; please feel free to correct me.

Glad that everything is working for you, though. Good luck.

If there is no TensorRT engine, DeepStream will create a new engine file. If there is already an engine file and the model-engine-file property is set, DeepStream will load that engine directly.
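As a sketch of the relevant part of an nvinfer configuration (file names are hypothetical), both properties can be set; the engine is loaded if the file exists and is rebuilt from the ONNX file otherwise:

onnx-file=model.onnx
model-engine-file=engine_fp16.engine
network-mode=2
batch-size=1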

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
