Jetson performance assessment for my object detection app (DS6 Python) with multiple RTSP streams

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Jetson NX
• DeepStream Version
• JetPack Version (valid for Jetson only)
4.6 or 4.6.1
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

There are multiple cameras in my production scenario. I’m planning to use a Jetson NX (or higher-spec hardware) to deploy a custom DS6 Python app that runs object detection on all of the cameras:

  • All cameras output 720p or 1080p H.264 streams at 30 FPS over RTSP,
    so my DS6 Python app needs to read the streams via RTSP.
  • My DS6 Python app uses a single TAO-retrained detectnet_v2 resnet18 model for inference,
    so inference is based on a single model.
  • Inference must sustain 30 FPS per stream.

I read the official benchmarks, but the cases listed there are all for a single stream. I’m not sure how to extrapolate them to my case of multiple RTSP streams. Could you help?
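As a rough back-of-the-envelope check (not a substitute for measuring the real pipeline), you can scale an aggregate inference throughput number to a stream count. The function and the 240 FPS figure below are illustrative assumptions, not values from the official benchmark tables:

```python
def max_streams(model_fps: float, stream_fps: float = 30.0, headroom: float = 0.85) -> int:
    """Estimate how many camera streams a model can serve in real time.

    model_fps:  aggregate inference throughput of the engine on the device
                (frames/second across all batched streams, e.g. measured
                with trtexec at your intended batch size)
    stream_fps: frame rate each camera must sustain (30 FPS here)
    headroom:   fraction of throughput to actually rely on, leaving margin
                for decode, OSD, and other pipeline overhead
    """
    return int(model_fps * headroom // stream_fps)

# Hypothetical example: an engine measuring 240 FPS aggregate throughput
# would serve roughly 6 concurrent 30 FPS streams with ~15% headroom.
print(max_streams(240.0))  # -> 6
```

Decode capacity on the NX is a separate budget from inference, so a number that passes this check can still fall short once multiple 1080p H.264 decodes run concurrently.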

Performance depends on your actual pipeline and configuration, so the best way to assess it is to run the real pipeline on the target device and measure.
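For a quick real-pipeline test before writing custom Python, the stock deepstream-app can be pointed at multiple RTSP sources. A sketch of the relevant config groups follows; the URIs, resolutions, and the nvinfer config filename are placeholders for your setup, and the nvinfer config's batch-size should match the streammux batch-size:

```
[source0]
enable=1
# type 4 = RTSP source
type=4
uri=rtsp://192.168.1.10/stream1
num-sources=1

[source1]
enable=1
type=4
uri=rtsp://192.168.1.11/stream1
num-sources=1

[streammux]
# live-source=1 for RTSP cameras; batch-size should match the source count
live-source=1
batch-size=2
width=1280
height=720
batched-push-timeout=40000

[primary-gie]
enable=1
# nvinfer config for the TAO-retrained detectnet_v2 resnet18 model
config-file=config_infer_primary_detectnet_v2.txt
```

Enabling perf measurement in the [tests]/application settings (or watching the per-stream FPS that deepstream-app prints) then tells you directly whether all streams hold 30 FPS.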

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.