I am facing an issue with a DeepStream 6.2 pipeline. The pipeline receives an H.264-encoded RTSP stream, passes it through the `nvv4l2decoder → nvinfer → nvtracker → nvdsosd → nvv4l2h264enc` elements, and streams the output using SRT. Since element latency is crucial for us, we have optimised all the parameters for minimum latency. During this optimisation I decided to set `nvdsosd` to `process-mode=MODE_GPU` for faster drawing, and to remove the `nvvideoconvert` elements that have to convert the pixel formats `NV12 → RGBA → NV12`. This is the graph of the pipeline that I am running; you can also see all of the element properties here.
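In gst-launch form the pipeline is roughly the following (a sketch only: the camera URL, streammux resolution, config paths and SRT URI are placeholders, not my exact values):

```shell
# Rough sketch of the pipeline; all locations/paths below are placeholders.
gst-launch-1.0 \
  rtspsrc location="rtsp://<camera-url>" latency=0 ! rtph264depay ! h264parse ! \
  nvv4l2decoder ! mux.sink_0 \
  nvstreammux name=mux batch-size=1 width=1920 height=1080 live-source=true ! \
  nvinfer config-file-path=config_infer.txt ! \
  nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so \
            ll-config-file=config_tracker_NvDCF_accuracy.yml ! \
  nvdsosd process-mode=1 ! \
  nvv4l2h264enc ! h264parse ! mpegtsmux ! \
  srtsink uri="srt://0.0.0.0:8888?mode=listener"
```

Note that with `nvdsosd` in GPU mode there is no `nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA'` conversion before the OSD, which is exactly the change described above.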
However with this setup I get this weird video in the output where only the bounding boxes are “ghosting”. The rest of the frame doesn’t have these corruption errors. It also seems like the bounding boxes are back-propagating, because they sometimes draw a bounding box around a bounding box:
This behaviour only happens when `nvdsosd` is in GPU mode; in CPU mode it looks okay.
I looked around the deepstream development forums, but I can’t find anybody with a similar error. Have you seen this before and how can I fix this?
I forgot to add that I am running DeepStream in a Docker container with the base image `nvcr.io/nvidia/deepstream-l4t:6.2-base`. In the container I built GStreamer 1.20 from source for the `srtsink` element.
OK. The pipeline doesn’t seem to have any problems. Let’s narrow it down first.
Could you dump the rtsp source as a h264 file and try to use the filesource and filesink in your pipeline?
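For example, something like this should produce an H.264 elementary stream from the RTSP source (the camera URL is a placeholder):

```shell
# Dump the RTSP stream to a raw .h264 elementary stream.
# -e sends EOS on Ctrl-C so the file is finalized cleanly.
gst-launch-1.0 -e rtspsrc location="rtsp://<camera-url>" ! \
  rtph264depay ! h264parse ! filesink location=dump.h264
```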
Is there a problem if you don't upgrade GStreamer?
Also, we have released DeepStream 6.4, which is based on GStreamer 1.20. You can also try the latest version.
I tested according to your suggestions and ran 4 tests:
1. `nvdsosd` in GPU mode with just the base image: `nvcr.io/nvidia/deepstream-l4t:6.2-base`
2. `nvdsosd` in GPU mode with my own image (GStreamer 1.20 built from source) — please see the bboxes in the bottom right
3. `nvdsosd` in CPU mode with just the base image: `nvcr.io/nvidia/deepstream-l4t:6.2-base`
4. `nvdsosd` in CPU mode with my own image (GStreamer 1.20 built from source)
In both images the GPU-mode bboxes were ghosting and tracing (tests 1 and 2).
In both images the CPU-mode bboxes were behaving as expected (tests 3 and 4).
As you can see from the gifs, the ghosting effect is much smaller if h264 elementary streams are used with filesrc and filesink.
The pipeline for tests 1 and 2 (GPU) is the following:
The difference between the GPU and CPU variants is that in the CPU version `nvdsosd` is wrapped in `nvvideoconvert` elements and the `process-mode` is different.
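Roughly, in gst-launch form (a simplified sketch; paths and streammux dimensions are placeholders):

```shell
# File-based GPU-mode test pipeline (tests 1 and 2), network elements removed.
gst-launch-1.0 -e filesrc location=dump.h264 ! h264parse ! \
  nvv4l2decoder ! mux.sink_0 \
  nvstreammux name=mux batch-size=1 width=1920 height=1080 ! \
  nvinfer config-file-path=config_infer.txt ! \
  nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so \
            ll-config-file=config_tracker_NvDCF_accuracy.yml ! \
  nvdsosd process-mode=1 ! \
  nvv4l2h264enc ! h264parse ! filesink location=out.h264
```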
Unfortunately our carrier-board manufacturer does not provide support for JetPack 6.0 yet, so I cannot test with DS 6.4.
Do you have any ideas what else to try?
Unfortunately this forum doesn’t allow me to share video files (I had to convert the previous videos to .gif to display them). The rtsp stream that I used is only available in my local network. Can you perhaps try this with the Nvidia sample traffic video that is in the deepstream SDK?
I tested with the same video and got the same result. In the pipeline I had to remove `disable-dpb=true` from `nvv4l2decoder`, because the NVIDIA sample stream contains B-frames. This change didn't affect the corruption, as you can see: `nvdsosd process-mode=1` (GPU):
`nvvideoconvert ! nvdsosd ! nvvideoconvert` (CPU):
I tend to believe that it might be a bug somewhere in `nvdsosd`, because the feature to draw boxes on the GPU only came out in the recent DS 6.2 release. Could this be the case?
What is more, we would really need a way to fix this in DS 6.2, because updating our Orin’s firmware simply isn’t possible for various reasons. I hope you understand.
The stream is already in a file (a .h264 file). In my previous reply I already used `filesrc` and `filesink` to cut out the network elements.
The gifs that I have shared in my replies were all recorded with the filesrc and filesink elements.
However I didn’t realize that I can upload zip files to this forum. So here you go, here are both the .h264 elementary streams: bbox_ghosting_videos.zip (44.6 MB)
You can just click my icon and send me a message.
I ran the pipeline on my Orin with DeepStream 6.4. It works normally in both GPU and CPU mode. Could you modify your pipeline and try the following?
1. Change `live-source=false`.
2. Remove the tracker plugin to confirm whether the problem is caused by the tracker plugin.
I can confirm that removing the tracker did fix the problem; however, it introduced bbox flickering (as expected). After I reduced the `nvinfer` interval to 0, the boxes were smooth and no smearing of the boxes appeared.
Switching `live-source` between true and false had no effect.
Here is the tracker config I used: config_tracker_NvDCF_accuracy.zip (2.7 KB)
In the initial pipeline you posted, we didn't see the `interval` parameter of `nvinfer` configured. You mean that when you set the interval, the video gets the smearing problem, and after you reduced the `nvinfer` interval to 0, the boxes were smooth and no smearing appeared?
Could you attach the config file of `nvinfer` too?
Yes, I tested it again just now and the smearing stops with `interval=0` in `nvinfer`.
The smearing and bounding-box residue get worse as the interval increases (I tested with intervals 1–30).
And yes, with the tracker removed and `interval=0` the boxes were not smearing.
I also tested with all the different tracker configurations:
- `compute-hw=1`
- `enable-batch-process=false`
- `enable-past-frame=true`
- `tracker-height=1280`
- `tracker-width=704`
But none of these fixed the smearing and residue issue.
It seems to me that there is a bug somewhere in the tracker's bounding-box shifting. From the videos it looks like the batch meta of the previous frames doesn't get cleared properly in GPU mode. What do you think about this?
Here is the config file with the old interval value (4):
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
custom-network-config=config/day/yolov8m/arm64/yolov8m.cfg
model-file=yolov8m.wts
model-engine-file=model_b1_gpu0_fp32_yolo8m_arm.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
network-mode=0
num-detected-classes=80
# Filter out classes we don't want to detect:
# keep only IDs 0;1;2;3;5;7;8 out of the 80 classes, which correspond to
# person, bicycle, car, motorcycle, bus, truck, boat (according to labels.txt).
# Very long line, but the easiest to implement (https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html)
filter-out-class-ids=4;6;9;10;11;12;13;14;15;16;17;18;19;20;21;22;23;24;25;26;27;28;29;30;31;32;33;34;35;36;37;38;39;40;41;42;43;44;45;46;47;48;49;50;51;52;53;54;55;56;57;58;59;60;61;62;63;64;65;66;67;68;69;70;71;72;73;74;75;76;77;78;79
interval=4
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=libnvdsinfer_custom_impl_Yolov8m_arm.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
This has an `nv3dsink`, which should display to a monitor. However, I am running on a headless Orin, so I would have to replace it with a `filesink` to save to a .h264 file like previously.
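Something like this tail should work in place of the display sink (a sketch; the output path is a placeholder):

```shell
# Original tail (needs a monitor):
#   ... ! nvdsosd ! nv3dsink
# Headless replacement: re-encode and write an elementary stream instead.
gst-launch-1.0 -e <upstream-elements> ! nvdsosd ! \
  nvv4l2h264enc ! h264parse ! filesink location=out.h264
```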
Edit
I found the Docker container with the sample files; however, I ran into an issue when compiling the examples:
deepstream_test2_app.c:30:10: fatal error: cuda_runtime_api.h: No such file or directory
Since I have not done a lot of C programming, debugging this will be a bit slow. I will try the Python examples tomorrow.
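From what I can tell, the sample Makefiles need the `CUDA_VER` environment variable set so the CUDA include path resolves; something like this might fix the build (the CUDA version and sample path are assumptions for my JetPack 5.x / DS 6.2 setup):

```shell
# The sample Makefiles build the include path from CUDA_VER
# (-I/usr/local/cuda-$(CUDA_VER)/include), so an unset CUDA_VER
# produces exactly the "cuda_runtime_api.h: No such file" error.
cd /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test2
export CUDA_VER=11.4
make
```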
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
Yes, you can use the Python demo. You can also use the methods below to narrow down the issue:
As `nvtracker` has many modes, you can try the perf config file instead of your accuracy file.
When you run your demo, you can check the loading information.
If you have a monitor, you can use it to display the video directly.