How to run nvinfer with mixed precision

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the contents of the configuration files, the command line used, and other details needed to reproduce.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and a description of the function.)

I have a custom model that I converted to a TensorRT engine. I converted it with mixed precision, meaning TensorRT decides whether each layer should run with FP32, FP16, or INT8 optimization.
In [property] I can choose only three network-mode values (0=FP32, 1=INT8, 2=FP16), but there is no mixed type.
So the question is: can I run my network in mixed mode in DeepStream?
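
For reference, a minimal sketch of the [property] group I mean (the file name is a placeholder):

```
[property]
# Pre-built engine, converted outside DeepStream with mixed precision
model-engine-file=my_model_mixed.engine
# network-mode offers only 0=FP32, 1=INT8, 2=FP16 -- no mixed option
network-mode=2
```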

The “layer-device-precision” parameter is for setting the precision of specific layers. See Gst-nvinfer — DeepStream 6.1.1 Release documentation.
There is a sample in the TAO apps: deepstream_tao_apps/pgie_yolov3_tao_config.txt at master · NVIDIA-AI-IOT/deepstream_tao_apps (github.com)
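
For example, something along these lines in the nvinfer config (the layer names below are placeholders; use the actual layer names from your model, and per the documentation each entry is layer-name:precision:device, separated by semicolons):

```
[property]
# Build mostly in INT8 ...
network-mode=1
# ... but pin selected layers to higher precision.
layer-device-precision=output_cov/Sigmoid:fp32:gpu;output_bbox/BiasAdd:fp16:gpu
```

Note that this takes effect when nvinfer builds the engine itself; an engine you pre-built elsewhere is loaded as-is, with whatever per-layer precisions were baked in at build time.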

Thanks, I’ll check.
