Orin NX has weaker inference performance than Xavier NX?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson
• DeepStream Version 6.2
• JetPack Version (valid for Jetson only) 5.1.1
• TensorRT Version 8.5.2
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing)
• Requirement details (This is for new requirements. Include the module name — which plugin or which sample application — and the function description)

Hi all, an odd question please.
I’m using deepstream-app to run inference on 4 RTSP streams, and I found that Orin NX inference performance is not as good as Xavier NX.
The former reaches only 18×4 = 72 fps, while the latter reaches 20×4 = 80 fps, which looks very strange!

By the way, both run the same JetPack version and the same model (YOLOv8s, 640 input), and both have maximum performance enabled.

Sorry, I just found that the Orin NX’s highest-performance power mode ID is 0. When I set it to 0, everything looks normal!
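For anyone hitting the same thing: the power mode can be checked and set with `nvpmodel`. A minimal sketch of the commands, assuming a JetPack 5.x Jetson where mode 0 is the maximum-performance (MAXN) mode — mode IDs and names vary between modules, so check the query output first:

```shell
# Query the currently active power mode (name and ID)
sudo nvpmodel -q

# On Orin NX, mode 0 is the maximum-performance mode, unlike some
# earlier modules where a higher ID is the fastest mode.
sudo nvpmodel -m 0

# Optionally lock clocks to their maximum for benchmarking
sudo jetson_clocks
```

The easy mistake is assuming the largest mode ID is the fastest, which happens to be true on some modules but not on Orin NX.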

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.