LOW FPS deepstream and triton

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) : NVIDIA GeForce RTX 3090
• DeepStream Version : 6.3
• JetPack Version (valid for Jetson only)
• TensorRT Version : 8.5.3-1+cuda11.8
• NVIDIA GPU Driver Version (valid for GPU only) : 535.104.05
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)

I am using deepstream-test3.py. I added an additional model and also Kafka to send the results, and I am getting low FPS. How can I improve the FPS?

I think you have entered your CUDA version in the “TensorRT Version” field.
Unless I am mistaken, the latest TensorRT version on the x86 platform is 8.6.

Please refer to this topic for performance analysis.
Please refer to this topic for FPS checking.
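For FPS checking in a Python app like deepstream-test3.py, one common approach is a small frame counter sampled from a pad probe. The sketch below is illustrative only (the class name, interval, and window logic are my own, not taken from the linked topic):

```python
import time

class FPSMonitor:
    """Counts buffers and reports average FPS over a reporting window.

    Call tick() once per buffer from a GStreamer pad probe; it returns
    the measured FPS each time the window elapses, otherwise None.
    """

    def __init__(self, report_interval=5.0):
        self.report_interval = report_interval
        self.frame_count = 0
        self.window_start = time.monotonic()

    def tick(self):
        """Register one frame; return FPS when the window elapses, else None."""
        self.frame_count += 1
        now = time.monotonic()
        elapsed = now - self.window_start
        if elapsed >= self.report_interval:
            fps = self.frame_count / elapsed
            # Reset the window for the next measurement period.
            self.frame_count = 0
            self.window_start = now
            return fps
        return None
```

In a DeepStream Python app you would call `monitor.tick()` inside the buffer probe you already attach to the tiler or OSD sink pad (as deepstream-test3.py does) and print the returned value when it is not None.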

Yes, you are right; I fixed the version above.

I checked the first article.

Regarding solution 1: after enabling export NVDS_ENABLE_LATENCY_MEASUREMENT=1, where can I find the latency logs?

If you are not using deepstream-app, please refer to this FAQ and topic.
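For reference, the latency-measurement switches are environment variables that must be set in the shell before launching the app. A minimal sketch (variable names are the documented DeepStream ones; the run command is your own script):

```shell
# Enable frame-level latency measurement in DeepStream
export NVDS_ENABLE_LATENCY_MEASUREMENT=1
# Optionally also enable per-component (element-level) latency
export NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT=1
```

With deepstream-app the latency figures are printed to the console. A custom Python app does not print them automatically: it needs a buffer probe that calls the latency-measurement binding, as described in the FAQ linked above.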

There is no update from you for a period, assuming this is not an issue any more. Hence we are closing this topic. If need further support, please open a new one. Thanks.
Sorry for the late reply. Is this still a DeepStream issue to support? Thanks!

  1. Is the source RTSP or a local file? How many source streams? Did you use your own model? Did you modify the code?
  2. Can you simplify the pipeline to narrow down the issue? For example, use a fakesink or remove the inference step.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.