YOLO-NAS tracking is very bad!

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GTX 970 4GB
• DeepStream Version: 6.2
• TensorRT Version: 8.5.2
• CUDA Version: 11.8
• NVIDIA GPU Driver Version (valid for GPU only): 520.61.05
• Issue Type (questions, new requirements, bugs): Detection failure
I tried to deploy YOLO-NAS to DeepStream, but the results are very poor: many objects are not detected. When I run inference with PyTorch, however, the results are very good. Please help me!
• Notes: I want to deploy a YOLO-NAS model (super-gradients) to DeepStream, but I can't. I can't figure out nvdsinfer_build_engine for an ONNX model exported by the YOLO-NAS notebook (not following GitHub - marcoslucianops/DeepStream-Yolo: NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models). Please help me build the engine!

Can you show the log from when DeepStream builds the engine from the ONNX model? Please follow this guide to generate the ONNX model: DeepStream-Yolo/docs/YOLONAS.md at master · marcoslucianops/DeepStream-Yolo (github.com)
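For reference, when you follow that guide, DeepStream builds the engine itself from the ONNX file declared in the nvinfer config. A minimal sketch of such a config is below; the file names, batch size, and class count are assumptions and must match your own export and label file:

```ini
[property]
gpu-id=0
# 1/255 normalization, matching the repo's sample YOLO-NAS config
net-scale-factor=0.0039215697906911373
model-color-format=0
# ONNX model exported per DeepStream-Yolo/docs/YOLONAS.md (placeholder name)
onnx-file=yolo_nas_s.onnx
model-engine-file=model_b1_gpu0_fp32.engine
labelfile-path=labels.txt
batch-size=1
# 0 = FP32, 1 = INT8, 2 = FP16
network-mode=0
num-detected-classes=80
gie-unique-id=1
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
# Custom YOLO output parser from the DeepStream-Yolo repo
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
```

If the preprocessing or parser settings here do not match how the model was exported, detections will degrade even though the engine builds without errors.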

loadEngine.txt (6.8 KB)
converONNX2TRT.txt (1.7 KB)
This is the log from when I converted ONNX to TRT. After converting to TRT, I used the TRT model with the ByteTrack GitHub project. I don't know why "tracking is so bad"!
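For context, a standalone ONNX-to-TensorRT conversion is typically done with trtexec, roughly as sketched below (the paths are placeholders, and this manual step is unnecessary when DeepStream builds the engine itself from the onnx-file entry in the nvinfer config):

```shell
# Build a TensorRT engine from the exported ONNX model (paths are placeholders).
# Drop --fp16 to build an FP32 engine instead.
trtexec --onnx=yolo_nas_s.onnx \
        --saveEngine=yolo_nas_s.engine \
        --fp16
```

Note that an engine built this way is tied to the GPU and TensorRT version it was built on, and using FP16 here while comparing against an FP32 PyTorch run can itself account for small accuracy differences.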

Which application reports "tracking is so bad"? Do you see any issue with DeepStream?

When I converted the model I didn't see any issues, but when I ran inference with DeepStream the results were very poor. They are different from the PyTorch model's results.
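A common cause of good PyTorch results but poor DeepStream results is a preprocessing mismatch: nvinfer normalizes pixels as y = net-scale-factor * (x - offsets) per channel, and this must reproduce whatever normalization the PyTorch pipeline uses. The sketch below compares the two, assuming the PyTorch side simply divides by 255 (an assumption about your pipeline; check your actual transforms):

```python
# Compare DeepStream nvinfer preprocessing against an assumed PyTorch pipeline.
# nvinfer computes: y = net-scale-factor * (x - offsets) per channel.

net_scale_factor = 0.0039215697906911373  # value used in DeepStream-Yolo sample configs
offsets = [0.0, 0.0, 0.0]                 # assumed: no mean subtraction

def deepstream_preprocess(pixel, channel=0):
    """Pixel normalization as nvinfer applies it."""
    return net_scale_factor * (pixel - offsets[channel])

def torch_preprocess(pixel):
    """Assumed PyTorch-side normalization: plain division by 255."""
    return pixel / 255.0

# Compare across the full 8-bit pixel range.
max_diff = max(abs(deepstream_preprocess(p) - torch_preprocess(p)) for p in range(256))
print(f"max per-pixel difference: {max_diff:.2e}")
```

If your PyTorch transforms also subtract a mean or divide by a standard deviation, the offsets and net-scale-factor in the nvinfer config must be adjusted accordingly, or every detection score will shift. Input resolution and letterboxing (maintain-aspect-ratio, symmetric-padding) must match the export as well.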

Can you share some pictures showing the results? Can you share the steps for how you run DeepStream and PyTorch?