Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) Jetson TX2
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only)
• TensorRT Version 7.1.3
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs) question
Hello,
I know that DeepStream can automatically convert my model from .onnx to .engine format if I don't set the model-engine-file parameter in the config.
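For context, this is a minimal sketch of the kind of Gst-nvinfer config I mean (paths and values are placeholders; model-engine-file is left unset on purpose so DeepStream builds the engine itself):

```
[property]
gpu-id=0
# ONNX model that DeepStream/TensorRT converts to an engine at startup
onnx-file=model.onnx
# model-engine-file is intentionally not set, so the engine is built automatically
# model-engine-file=model.onnx_b1_gpu0_fp16.engine
batch-size=1
# network-mode: 0=FP32, 1=INT8, 2=FP16 -- is this the switch that controls engine precision?
network-mode=2
gie-unique-id=1
```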
When converting the model manually with trtexec, I can choose build options such as --fp16, --inputIOFormats, and so on.
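For comparison, this is roughly the manual conversion I mean (file names are placeholders):

```
trtexec --onnx=model.onnx \
        --saveEngine=model.engine \
        --fp16 \
        --inputIOFormats=fp16:chw \
        --workspace=2048
```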
What parameters does DeepStream use when it converts the model from .onnx to .engine? Does the resulting .engine enable FP16 precision?