Please provide complete information as applicable to your setup.
• Hardware Platform ---------------> GPU
• DeepStream Version ------------> 7.0
• TensorRT Version ----------------> 8.6
• NVIDIA GPU Driver Version ------------> 545
I am testing with the deepstream-python-apps deepstream-preprocess-test sample, and I added a tracker after nvinfer, linking it the same way the queues are added. When I add two cameras for testing, the FPS drops from 25 to 12-13 for both cameras.
Could you tell me why this is happening and how I can resolve it?
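Roughly, the link order I mean looks like the sketch below (a minimal sketch, not my exact code; element names, the tracker library path, and the tracker config file are illustrative assumptions):

```python
# Minimal sketch of inserting nvtracker after nvinfer in a
# deepstream-preprocess-test style pipeline (names/paths are assumptions).
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline.new("preprocess-pipeline")

pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
tracker = Gst.ElementFactory.make("nvtracker", "tracker")
queue_tracker = Gst.ElementFactory.make("queue", "queue-after-tracker")
nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")

# Hypothetical tracker settings; library path and config file are assumptions.
tracker.set_property(
    "ll-lib-file",
    "/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so")
tracker.set_property("ll-config-file", "config_tracker_NvDCF_perf.yml")

for elem in (pgie, tracker, queue_tracker, nvvidconv):
    pipeline.add(elem)

# Link order: nvinfer -> nvtracker -> queue -> rest of the pipeline
pgie.link(tracker)
tracker.link(queue_tracker)
queue_tracker.link(nvvidconv)
```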
If testing with 2 RTSP sources, is the FPS still 12~13?
Noticing you are testing 5 RTSP sources: did you set MUXER_BATCH_TIMEOUT_USEC correctly? The new value should be 1000000/max_fps.
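For example, with 25 FPS sources the value would be 1000000/25 = 40000 µs. A minimal sketch of applying it to nvstreammux in Python (the source count and frame rate are assumptions):

```python
# Sketch: MUXER_BATCH_TIMEOUT_USEC = 1000000 / max_fps applied to nvstreammux.
# Assumes 5 RTSP sources at 25 FPS.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

number_of_sources = 5
max_fps = 25

streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
streammux.set_property("batch-size", number_of_sources)
streammux.set_property("width", 1920)
streammux.set_property("height", 1080)
# 1000000 / 25 = 40000 microseconds
streammux.set_property("batched-push-timeout", 1000000 // max_fps)
```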
Please add this code and set NVDS_ENABLE_LATENCY_MEASUREMENT=1 and NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT=1 to enable latency measurement. Then you can check which plugin consumes too much time.
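A minimal sketch of how the environment variables and probe could look in a Python app (setting the env vars from Python instead of exporting them in the shell, and the pyds binding name, are assumptions based on the deepstream_python_apps samples):

```python
# Sketch: enabling DeepStream latency measurement from a Python app.
import os
os.environ["NVDS_ENABLE_LATENCY_MEASUREMENT"] = "1"
os.environ["NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT"] = "1"

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def latency_probe(pad, info, u_data):
    """Buffer probe (attach to a downstream sink pad, e.g. the OSD sink pad)
    that prints per-plugin latency for every batched buffer."""
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    # Binding name taken from the deepstream_python_apps samples (assumption).
    num_sources = pyds.nvds_measure_buffer_latency(hash(gst_buffer))
    if num_sources == 0:
        print("Unable to get latency info for this buffer")
    return Gst.PadProbeReturn.OK

# Usage (nvosd assumed to be an element already in your pipeline):
# osd_sink_pad = nvosd.get_static_pad("sink")
# osd_sink_pad.add_probe(Gst.PadProbeType.BUFFER, latency_probe, 0)
```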
Do you mean the FPS drop issue no longer occurs? Are both cases fine, testing with 2 RTSP sources and with 5 RTSP sources?
About the “10-15 sec delay” issue: did you make any other code modifications? Do you mean that playing the output RTSP stream in a player has a 10-15 second delay? If you replace udpsink with nveglglessink, is there still a delay? I'm wondering whether it is related to the RTSP server.
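For a quick check, a minimal sketch of swapping the sink (element names are illustrative):

```python
# Sketch: replace the RTSP output with a local display sink to see whether
# the 10-15 s delay comes from the RTSP server / player side.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

sink = Gst.ElementFactory.make("nveglglessink", "local-display-sink")
sink.set_property("sync", 0)  # do not throttle rendering to the pipeline clock

# Add `sink` to the pipeline and link it where the encoder -> rtppay -> udpsink
# chain was linked before; if the delay disappears, the issue is on the
# RTSP-server/player side rather than in the inference pipeline.
```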
When I run the NGC pre-trained model “models/Primary_Detector/resnet18_trafficcamnet.etlt”, it works fine, and with batched-push-timeout set to 40000 the FPS is almost 25.
But when I use a custom YOLOv4-tiny (Darknet) model, converted with the help of the DeepStream-Yolo repo, together with nvdspreprocess, I get “CUDA failure: an illegal memory access was encountered in file yoloPlugins.cpp at line 261”.
About the “illegal memory access” error: this issue seems unrelated to the original one. Could you open a new topic to focus on the new issue? Thanks!