DeepStream detection boxes continuously jitter around the detected objects.

Hi,
Using the official deepstream-app from DeepStream 4.0.1, we set up a demo with the following high-level pipeline components created by deepstream-app:

VideoCamera (RTSP) → | GstRtspBin → GstDecodeBin → Primary_Gie_Bin → Tracking_Bin → OSD_Bin → SinkBin (RTSP) | → VlcDisplay

In this way we get video from the RTSP camera, analyze it with NvInfer and NvTracker, and obtain person detection and tracking.
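For reference, the chain above corresponds roughly to the following gst-launch sketch (element names assumed from a default DeepStream 4.0 install; the RTSP output is simplified to a local display sink, and values in angle brackets are placeholders):

gst-launch-1.0 uridecodebin uri=<rtsp-uri-of-camera> ! mux.sink_0 \
    nvstreammux name=mux batch-size=1 width=1920 height=1080 \
    ! nvinfer config-file-path=<pgie-config.txt> \
    ! nvtracker ll-lib-file=<tracker-lib.so> ll-config-file=<tracker_config.yml> \
    ! nvvideoconvert ! nvdsosd ! nveglglessink sync=0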
But the boxes drawn around persons continuously move by about 5 to 20 pixels, even when the persons are completely still.
This produces a very disturbing graphical effect and a lot of pixel distortion around persons; the distortion is repaired only when the next key frame arrives.
So if the key frame arrives only after many seconds, the video output stays heavily corrupted.

This happens on all boards: Jetson Xavier, Jetson TX2, Jetson Nano, …

How can we avoid this continuous movement of the boxes around the detected objects?
Is there a setting in the NvInfer plugin, or in the NvTracker plugin,
or any other setting that stabilizes the boxes around the objects?

Thank you,
Maurizio

May I know which low-level library you are using for nvtracker? Currently you can use the KLT, IOU, or NvDCF tracker by specifying the ll-lib-file property; you can refer to https://docs.nvidia.com/metropolis/deepstream/plugin-manual/index.html#page/DeepStream_Plugin_Manual%2Fdeepstream_plugin_details.02.02.html%23wwpID0E0YV0HA
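For example, switching the tracker only requires changing the ll-lib-file line in the [tracker] group of the deepstream-app config. A minimal sketch (library file names assume a default DeepStream 4.0 installation; adjust the paths to your setup):

[tracker]
enable=1
# KLT (no ll-config-file required):
ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
# IOU:
# ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_iou.so
# NvDCF (also set ll-config-file=tracker_config.yml):
# ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_nvdcf.so
gpu-id=0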

May I know which low-level library you are using for nvtracker?

We are using NvDCF as the low-level library; it is present in the DeepStream installation at the following path:

ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_nvdcf.so

Besides, we are using “objectDetector_Yolo” with the DeepStream “libnvdsinfer_custom_impl_Yolo.so”.

The tracker settings in deepstream_app_config_yoloV3.txt are the following:

[tracker]
enable=1
tracker-width=1920
tracker-height=1080
ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_nvdcf.so
ll-config-file=./tracker_config.yml
gpu-id=0
    # -------------------------------------------------

The content of “./tracker_config.yml” is the following:

/opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_Yolo$ more  ./tracker_config.yml
%YAML:1.0

NvDCF:
  useBufferedOutput: 0
  maxTargetsPerStream: 30 
  filterLr: 0.11 
  gaussianSigma: 0.75 
  minDetectorConfidence: 0.0 
  minTrackerConfidence: 0.4 
  featureImgSizeLevel: 1 
  SearchRegionPaddingScale: 1 
  maxShadowTrackingAge: 200 
  probationAge: 10 
  earlyTerminationAge: 8 
  minVisibiilty4Tracking: 0.01 
  minTrackingConfidenceDuringInactive: 2.0 

The OSD settings are the following:
[osd]
enable=1
gpu-id=0
border-width=1
text-size=18
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=9
clock-color=1;0;0;0
nvbuf-memory-type=0
process-mode=0
# -------------------------------------------------

Could it be that one of the values above is what causes the boxes to move around the detected objects?

Thanks
M.

For the corrupted video output:

Use the pipeline below to check whether it is better (fill in the uri with your stream address):
gst-launch-1.0 uridecodebin uri=<your-rtsp-uri> ! nveglglessink sync=0

For Tegra you will need to add nvegltransform before nveglglessink.
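For example, on a Jetson the same check would look roughly like this (the uri value is a placeholder for your camera address):

gst-launch-1.0 uridecodebin uri=<your-rtsp-uri> ! nvegltransform ! nveglglessink sync=0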

For the tracker issue you mentioned, we are checking internally.

Hello Customer,

Could you try using the KLT tracker instead and see if the bboxes are still moving?

The video pixel corruption is not a tracker issue, but could be the root cause of the bbox jitter issue.

bcao,

Please reproduce this issue and investigate the video pixel corruption first. The bbox jitter issue should be revisited after the video corruption issue is fixed.

Hey customer, any results from using KLT? And would you mind sharing your test stream with me if the issue still exists?

Hi,
We ran a lot of tests; the most significant ones are the following.


Test 1)

We use only one RTSP source camera at 15 FPS.
We applied the following setting to reduce the GPU load a little:
-) “interval=1” for Primary-Gie
So in the end the primary GIE processes about 7-8 real FPS.
We are using the YOLOv3 network architecture because it has good detection accuracy.
GPU load was about 75%.

We set up the following chain, where instead of sending the stream to an external RTSP client we write it to a video file:

 1-VideoCamera(Rtsp) -> | GstRtspBin -> GstDecodeBin -> Primary_Gie_Bin -> Tracking_Bin -> OSD_Bin -> Sink_to_file(recording) 

and the results seem good; in attachment you find the file where we see only the camera:
“Car_cam__AI_track__correct.PNG”

which shows a video frame obtained after some seconds of persons moving in our scenario.
In this frame persons are detected and the frame is not corrupted.
So we could say that, when dumping to a local file, inference/detection and tracking do not create frame corruption; now we are checking the same when sending over the local network.


Test 2)

Using another TX2 board, at the same time as Test 1, we made the same recording from an RTSP client on the local network, but using 2 source cameras at 15 FPS each, so 30 FPS in total.
But we had to use the following interval setting, because with 2 cameras it is the only way to keep up:

-) “interval=3” for Primary-Gie

So in the end the primary GIE processes about 7-8 real FPS, because we infer on 1 frame out of every 4.

2-VideoCamera(Rtsp) -> | GstRtspBin -> GstDecodeBin -> Primary_Gie_Bin -> Tracking_Bin -> OSD_Bin -> SinkBin(RtspServer) | -> RtspClient(recording)

And in this case we got the same corruption as before, shown in the following video frame:
“Car_cam__AI_track__corrupted.PNG”

So the difference versus the previous local-file chain is that here we have 2 cameras with more FPS, producing more GPU load (about 99%), plus GstRtspServer, the RTSP client, and the local network; so now we are checking the following variables to isolate the problem:

-) substituting the RTSP client with VLC
-) reducing the source camera frame rate, to reduce the TX2’s high GPU load and frame latency
-) verifying there is no latency on the network.

QUESTION-1: the TX2 CPU is at 30%, but the GPU is very high, up to 99%. Could it be that, when the TX2 has a heavy GPU load, frames are heavily delayed by inference/tracking and are therefore lost by the client?

In this case, is there some GstRtspServer setting (a buffer or otherwise) that could help manage frame latency?

Consider that we already have (see the sketch after this list):
-) “batch-size=4” for streammuxer and Primary-Gie
-) “interval=3” for Primary-Gie
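
For reference, a sketch of where these settings sit in our deepstream-app config (all other properties omitted):

[streammux]
batch-size=4

[primary-gie]
enable=1
batch-size=4
interval=3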

QUESTION-2: could it be that, now having interval=3 instead of interval=1, the primary GIE drops some frames?

Thank you,
M.


Ok, we will check this internally.

The fact that the corrupted regions in the image are rectangular implies the video frame corruption was caused by some RTSP packet loss, resulting in loss of some blocks in the image. As the customer mentioned, those lost blocks are recovered when the next key frame comes. Such observation also supports my argument.

Given that we now know the inference and tracking are carried out on non-corrupted video frames, does the output video received at the RTSP client still need to be corruption-free? If so, maybe you can consider making sure RTSP packets are not lost in the network.

If so, maybe you can consider making sure RTSP packets are not lost in the network.

OK, from a basic evaluation it seems the network is not losing packets.

On the other hand, we keep seeing the GPU at a very high level, about 99%, so we tried to reduce it by changing, in the [primary-gie] section, “interval” from 3 to “interval=4”. With this the GPU load is a little lower and the frame corruption is a little less, but at the same time some persons and cars are detected/tracked with less continuity.

So we think that when the GPU is at 100%, either frames are so delayed that our client’s buffering window is too small, or frames are dropped directly in infer + tracker.

In any case, since we would like to reduce GPU load without relying on the “interval” setting, is there a smart way to do it? For example, are there other useful parameters in [primary-gie] that we can set to reduce GPU load?

M.

If the network is not losing packets, then we need to find out where the packets are dropped. It could be one of the RTSP-related plugins, but we need to test and confirm.

bcao,

Could you investigate whether any packet drops are happening inside DeepStream, especially when GPU usage is at 99%?

Could you investigate whether any packet drops are happening inside DeepStream,
especially when GPU usage is at 99%?

It is not simple to do, but I tried this way:
I set up a deepstream-app performing both RTSP output and a dump to a local file on the board, under 99% GPU load.

From the first results it seems that some corruption is present in the local video file as well.
Besides, after the corruption takes place, it seems that tracking changes the track ID.
In the attached video file please follow car ID 79 and note that:

 at 2019-12-11 09:19:48 it has id "car-79"
 at 2019-12-11 09:19:50 corruption takes place
 at 2019-12-11 09:19:52 it has id changed to "car-81"

So my opinion is that frames are “highly delayed or lost” in NvInfer and this causes both final video corruption and changes in tracking.

Can you confirm whether this analysis makes sense?

Besides, on the RTSP stream some amplification effect could take place: a heavily delayed frame is in any case saved to the local file, but it is not sent over RTSP because some internal timeout could drop it.

In attachment you find the video file “Car_cam__09_18.mp4”, saved locally on the TX2 board, with some corruption.

Conclusion: when the GPU is at 99%, I think NvInfer loses some frames; the main problem is that the metadata and track IDs end up incorrect.
One solution could be, at least for the tracking ID, to have a larger buffer so as to get greater continuity in the analysis. I know there are some parameters inside the .yml file, like the following:

minDetectorConfidence:
minTrackerConfidence:
featureImgSizeLevel:
SearchRegionPaddingScale:
maxShadowTrackingAge:
probationAge:
earlyTerminationAge:
minVisibiilty4Tracking:

I have already increased some of them, but the results are not yet good.
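
Just to show the direction we have been experimenting in, something along these lines (values are only illustrative, not our exact settings; the idea is to keep unmatched targets alive longer before their IDs are dropped):

NvDCF:
  maxShadowTrackingAge: 300
  probationAge: 5
  earlyTerminationAge: 20
  minTrackerConfidence: 0.3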

Do you know the exact values we can set for the previous .yml parameters to get greater time continuity in tracking?

Let us know.
Thank you,
Maurizio
Car_cam__09_18.zip (23.2 MB)

Hello Maurizio,

Could you share with us your source video and the exact settings you used to generate the video, so that we can reproduce your issue? You can DM bcao if you don’t want to make it public.

bcao,

Please reproduce this issue and file internal bug if needed.

Hey Customer,
would you mind sharing with us the info requested in comment #13?

bcao,

Please reproduce this issue and file internal bug if needed.

Sure, will try to repro it.

OK, bug opened: “Bug ID: 2790856”. We will continue in the bug.

I added all the necessary information and the .mp4 video files to reproduce the problem.

Some info below:
bye,
Maurizio

Please do not reply to this message
---------------------------------------------------
Requester: Maurizio Galimberti
Bug ID: 2790856
Date: 12/18/2019 7:30:22 AM
Synopsis: Deepstream detection and tracking changes track-Id, rtsp corrupted frames, and detected boxes continuously highly oscillating

Ok, let’s handle it on the bug.

Hello, any update on this topic?

All the changes will be included in the next release. Please create a new topic if you have any other issue or concern.