Hi,
When I run the Obj_Yolo DeepStream app with a USB cam, more than half of the frames come back with no detection results. This does not happen when I run the same app on recorded .h264-encoded videos.
Any advice on why this is happening and how I can fix it?
My settings are the following:
Jetson Xavier
DeepStream 5.0
JetPack 4.4
TensorRT 7.1.3
CUDA Version 10.2
[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=1
batched-push-timeout=40000
width=1092
height=614
enable-padding=0
nvbuf-memory-type=0
[primary-gie]
enable=1
gpu-id=0
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=2
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV3.txt
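(A note on my config above: per the comment in the [streammux] section, live-source tells the muxer that the input is live. Since my source is a USB camera rather than a file, I am not sure whether live-source=0 is right here; the one-line change, keeping everything else the same, would be:

[streammux]
## set to 1 for live inputs such as a USB camera
live-source=1
)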
Hi,
I realize this issue may be hard for others to reproduce, even though it happens consistently for me. Could you or your colleagues give me some advice on how to locate the problem along the pipeline?
Right now I suspect the issue arises during image parsing, before the actual inference even begins, but I don't know how to verify this. Is there any tool or procedure that can help me with that? Or could you point me to the right section of the source code to take a closer look?
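For anyone who wants to check the same thing, here is a minimal sketch of the kind of probe I plan to attach, modeled on osd_sink_pad_buffer_probe from the deepstream-test1 sample (the function name pgie_src_pad_probe and the counters are my own placeholders). Attached to the src pad of the nvinfer element, it counts frames that carry zero object metadata, which should show whether detections are already missing at the inference stage rather than later in the pipeline:

/* Sketch of a buffer probe that counts frames with zero detections,
 * based on the metadata-iteration pattern in deepstream-test1. */
#include <gst/gst.h>
#include "gstnvdsmeta.h"

static guint frames_total = 0;
static guint frames_empty = 0;

static GstPadProbeReturn
pgie_src_pad_probe (GstPad *pad, GstPadProbeInfo *info, gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  NvDsMetaList *l_frame;

  if (batch_meta == NULL)
    return GST_PAD_PROBE_OK;

  /* Iterate over every frame in the batch and count its objects. */
  for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    guint num_objs = g_list_length (frame_meta->obj_meta_list);

    frames_total++;
    if (num_objs == 0)
      frames_empty++;

    g_print ("frame %d: %u objects (%u of %u frames empty so far)\n",
        frame_meta->frame_num, num_objs, frames_empty, frames_total);
  }
  return GST_PAD_PROBE_OK;
}

/* Attach it to the nvinfer element's src pad after the pipeline is built:
 *   GstPad *pad = gst_element_get_static_pad (pgie, "src");
 *   gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER,
 *       pgie_src_pad_probe, NULL, NULL);
 *   gst_object_unref (pad);
 */

Attaching the probe before the tracker matters: the tracker can add objects for frames that nvinfer skipped, which would hide the gap I am trying to measure.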
Thanks for your advice.
I found that this DeepStream example only performs inference once every three frames (interval=2 in [primary-gie]), and the missing results are filled in by the [tracker] module.
Can you explain why the detection pipeline is designed this way?
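(To make the behavior concrete: interval=2 makes nvinfer skip two frames between inferences, so setting it to 0 should force inference on every frame. The minimal change against my [primary-gie] section above would be:

[primary-gie]
## 0 = infer on every frame; 2 = infer on every third frame
interval=0

With interval=0 the tracker should no longer need to fill in results for skipped frames, at the cost of roughly three times the inference load.)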