Accuracy with DLA

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Orin AGX 64GB
• DeepStream Version 6.3
• JetPack Version (valid for Jetson only) JP5.1.2
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I am not seeing any detections when I run customized YOLOv8-small and YOLOv5-small models on the DLA. I generate the TensorRT engine that runs on the DLA by setting the parameters in the DeepStream config file. The same model, when run on the GPU through DeepStream, shows several detections on the same video file. I have set the precision to FP16 for both DLA and GPU. I am sure I am missing something. Can you please point me in the right direction?
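For reference, these are the Gst-nvinfer properties I am referring to; a minimal sketch of the `[property]` section, with the model and engine file names as placeholders:

```ini
[property]
# Route inference to DLA core 0 instead of the GPU
enable-dla=1
use-dla-core=0
# Precision: 0=FP32, 1=INT8, 2=FP16
network-mode=2
# Placeholder paths for the custom model
onnx-file=yolov8s.onnx
model-engine-file=yolov8s_dla_fp16.engine
```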

Thank you in advance!

What do you mean? Can the engine with DLA output correct bboxes? Can the engine with GPU output correct bboxes?

I see that the engine with GPU is giving the correct bboxes because I see bounding boxes drawn on the output video. I am using deepstream-app to run the model, both on DLA and GPU. However, I do not see the bounding boxes on the output video when I run the engine with DLA.

Can you tell us how you generated the yolov8s onnx model and how you generated the DLA engine of the model?

The YOLOv8s ONNX file was generated by exporting the PyTorch model to ONNX. The DLA engine was generated by setting the appropriate flags in the DeepStream config file.
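A sketch of the export step, assuming the Ultralytics package and a custom-trained checkpoint named `yolov8s.pt` (the path and opset are assumptions):

```python
# Hypothetical export of a custom YOLOv8-small checkpoint to ONNX.
# Requires the ultralytics package; the checkpoint path is a placeholder.
from ultralytics import YOLO

model = YOLO("yolov8s.pt")                  # custom-trained checkpoint
model.export(format="onnx", opset=12)       # writes yolov8s.onnx next to the .pt
```

DLA is sensitive to unsupported layers; exporting with a lower opset tends to keep the graph closer to what the DLA backend can run natively.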

Have you tried “trtexec” to generate the DLA engine from the onnx model?

Yes, I tried generating the engine using trtexec as well. I do not see any difference in behavior.
On a side note, I upgraded to JP 6.0 and I do not see this issue with the same model on JP 6.0. Is there an issue with JP 5.1.2 w.r.t. DLA?
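For completeness, this is the kind of trtexec invocation I mean; a sketch with placeholder file names (the trtexec path is the usual JetPack install location):

```shell
# Build an FP16 DLA engine from the ONNX model, falling back to the GPU
# for layers the DLA cannot run. File names are placeholders.
/usr/src/tensorrt/bin/trtexec \
    --onnx=yolov8s.onnx \
    --fp16 \
    --useDLACore=0 \
    --allowGPUFallback \
    --saveEngine=yolov8s_dla_fp16.engine
```

The build log from trtexec is useful here: it reports which layers were actually placed on the DLA and which fell back to the GPU.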

There will be differences between TensorRT versions, since each version ships a different DLA backend. The latest TensorRT version will provide better performance and accuracy, as NVIDIA continues to improve the DLA backend. It is not an issue.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.