Parameters for Deepstream to convert onnx to engine

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson TX2
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only)
• TensorRT Version 7.1.3
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) question

Hello,

I know that DeepStream can automatically convert my model from .onnx to .engine format if I don't set the model-engine-file parameter in the config.

While using trtexec to convert my model manually, I can choose build options such as --fp16, --inputIOFormats, and so on.

What parameters does DeepStream use to convert the model from .onnx to .engine? Does the resulting .engine enable FP16 precision?

You can use trtexec --help to check the parameters you need.

I don't want to use trtexec to convert my model. I just want to know which build options DeepStream uses when it converts onnx to engine.

Please refer to our Guide. These parameters are set in the configuration file of the nvinfer plugin.
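As a sketch, the relevant nvinfer properties that control the ONNX-to-engine build look like the following (property names are from the Gst-nvinfer plugin; the file paths and batch size here are placeholders for illustration):

```ini
[property]
# Path to the ONNX model that nvinfer builds into a TensorRT engine
onnx-file=model.onnx
# If this file already exists, nvinfer loads it and skips the build;
# otherwise it builds the engine and tries to serialize it to this path
model-engine-file=model.onnx_b1_gpu0_fp16.engine
# Precision of the generated engine: 0=FP32, 1=INT8, 2=FP16
network-mode=2
# Batch size the engine is built for
batch-size=1
```

So whether the generated engine is FP16 depends on network-mode in this file, not on a fixed DeepStream default; with network-mode=2 the build is roughly equivalent to passing --fp16 to trtexec.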
