Hi,
Env.
GPU:NVIDIA T4, Ubuntu 18.04, GStreamer 1.14.1, NVIDIA driver 440+, CUDA 10.2, TensorRT 7.0, Deepstream 5.0
Running the same deepstream-app on the same hardware (T4), the inference performance of DS 5.0 is lower than that of DS 4.0.
DS 5.0 hits a bottleneck even though GPU utilization and memory never reach 100%.
Is there any way to improve inference performance?
Part of the config file:
[source0]
enable=1
type=4
uri=rtsp://192.168.170.65:554/xxx
num-sources=1
gpu-id=0
Hi @Mr.Z
Since it’s hard for us to set up 10 RTSP streams, could you refer to the section “The DeepStream application is running slowly.” in the FAQ to measure the latency of the plugins and narrow down which plugin causes the latency?
The DeepStream application is running slowly.
• Solution 1: One of the plugins in the pipeline may be running slowly.
You can measure the latency of each plugin in the pipeline to determine whether one of them is slow.
• To enable frame latency measurement, run this command on the console:
$ export NVDS_ENABLE_LATENCY_MEASUREMENT=1
• To enable latency measurement for all plugins, run this command on the console:
$ export NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT=1
...
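Once the application runs with both environment variables set, it prints per-component latency lines to the console. A small helper like the sketch below can rank the components by latency so the slowest plugin stands out. The sample log lines and the exact field layout here are illustrative assumptions, not the guaranteed DeepStream output format, so adjust the regex to match what your build actually prints.

```python
import re

# Illustrative log lines in the style DeepStream prints when
# NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT=1 is set; the exact
# format (and component names) on your system may differ.
SAMPLE_LOG = """
Comp name = nvv4l2decoder0 in_system_timestamp = 1598.2 out_system_timestamp = 1603.1 component latency = 4.9
Comp name = nvstreammux-src_bin_muxer in_system_timestamp = 1603.2 out_system_timestamp = 1603.5 component latency = 0.3
Comp name = primary_gie in_system_timestamp = 1603.6 out_system_timestamp = 1641.0 component latency = 37.4
Comp name = nvosd0 in_system_timestamp = 1641.1 out_system_timestamp = 1642.0 component latency = 0.9
"""

# Capture the component name and its latency in milliseconds.
PATTERN = re.compile(r"Comp name = (\S+).*component latency\s*=\s*([\d.]+)")

def slowest_components(log_text, top_n=3):
    """Return up to top_n (component, latency_ms) pairs, slowest first."""
    hits = [(name, float(ms)) for name, ms in PATTERN.findall(log_text)]
    return sorted(hits, key=lambda pair: pair[1], reverse=True)[:top_n]

if __name__ == "__main__":
    for name, ms in slowest_components(SAMPLE_LOG):
        print(f"{name}: {ms} ms")
```

In practice you would pipe the deepstream-app console output to a file and feed that file to `slowest_components` to see which plugin dominates the frame latency.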
Hi mchi,
We are also seeing the same problem and have tested our config following your suggestion above, but there has been no progress.
Can you give us more advice? Thanks!
Hi mchi,
The suggestions above did not help.
I want to measure the component latency but cannot get any results.
I have posted this problem in Cannot get latency measurement result, but it has not been solved yet.