[DS4.0.1] DCF tracker not working with pgie output tensor data processed result !!!

Hi,

There is a known issue with NvDCF which might cause a jitter effect.
Are you facing a similar issue?
If yes, please check this comment for the workaround:
https://devtalk.nvidia.com/default/topic/1065481/deepstream-sdk/nvdcf-jitter/post/5400989/#5400989

Thanks.

Hi AastaLLL,

My issue is entirely different. I also tried the workaround you mentioned, but it doesn’t solve the problem.

I have already shared a modified “deepstream-test2” that reproduces my issue. Can you please try it in your setup?

Regards
Pallab Sarkar

Hi,

Thanks for the check.
We are going to reproduce this issue and will share more information with you later.

Hi,

It looks like your sample uses some extra libraries, so it cannot be compiled with the original Makefile.
Would you mind sharing your customized Makefile with us directly?

Thanks.

Hi

I have DMed you the Makefile.

Regards

Hi,

Could you share your environment and DeepStream version with us?
We tried to reproduce this issue in our environment, but the pipeline is broken due to an incompatibility.

nvidia@xavier:~/topic_1067129/deepstream-test2$ ./deepstream-test2-app /opt/nvidia/deepstream/deepstream-4.0/samples/streams/sample_720p.h264 
With tracker
Unknown key 'gie-mode' for group [property]
Unknown key 'gie-mode' for group [property]
Unknown key 'gie-mode' for group [property]

(deepstream-test2-app:9458): GStreamer-WARNING **: 17:47:15.480: Trying to link elements primary-nvinference-engine and queue0 that don't share a common ancestor: queue0 hasn't been added to a bin or pipeline, and primary-nvinference-engine is in dstest2-pipeline

(deepstream-test2-app:9458): GStreamer-WARNING **: 17:47:15.481: Trying to link elements primary-nvinference-engine and queue0 that don't share a common ancestor: queue0 hasn't been added to a bin or pipeline, and primary-nvinference-engine is in dstest2-pipeline
Elements could not be linked. Exiting.

Our environment is JetPack 4.2.3 with DeepStream 4.0.1.

Thanks.

Hi

The code and Makefile I shared were from my Tesla environment.
I have now DMed you the latest code, Makefile, and pgie config file from my Jetson Nano.

Regards
Pallab Sarkar

Hi,

Thanks for sharing. We can compile and execute your application in our environment.

However, we could not see much difference between the DCF and KLT trackers.
Would you mind pointing out your issue more specifically?

Attached are the videos of DCF and KLT.
DCF: https://drive.google.com/open?id=1J7IhMIckU6f7mBa59K7aluIChyVBGjQY
KLT: https://drive.google.com/open?id=1zpr21PmEwggBx3YUqnO-s-yL0OVGmst1
Thanks.

Hi,

Thanks for your update.
We are at a critical milestone. If you need any more support from our side to speed up the solution, please let me know!

Regards
Pallab Sarkar

Hi,

Sorry, I have updated my earlier comment.
Would you mind answering the question in comment #9?

Thanks.

Hi,

Did you record these videos using the code I shared?
I ran my code again and am still able to reproduce the issue.

Please see the attached videos.

KLT: https://drive.google.com/open?id=1v7gPEGp5TapVunaq1TDZUweXPG9dV8oF

DCF: https://drive.google.com/open?id=1FD06mYF5aoR4Y6g7n0G8o0npAf1ttPEI

I will also share the entire code environment as a zip in a DM!

Specifically, the NvDCF tracker does not work (no tracking results) if we directly process the tensor output as demonstrated in deepstream-infer-tensor-meta-test.

Regards
Pallab Sarkar

Hi,

We can output the bounding boxes correctly with your source.
The only difference we made is the TensorRT engine path.
It looks like the shared config points to an FP16 engine but uses the INT8 network mode (network-mode=1).
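For reference, nvinfer’s network-mode selects the inference precision (0=FP32, 1=INT8, 2=FP16), and the engine file must be built for the same precision. A minimal sketch of a consistent FP16 pairing (the engine filename here is illustrative, not from the shared config):

```
[property]
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=2
# engine precision must match network-mode; filename is illustrative
model-engine-file=model_b1_fp16.engine
```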

Could you update the engine file path to see if it helps?
I have also shared our source via PM.

Thanks.

Hi,

Thanks for your reply
I have corrected the network_mode to fp16 as needed by Nano, But it doesn’t solve the problem.
I have compared your code with mine and below is the major difference why you are not able to reproduce the issue is that in your primary gie config you have not added below

## 0=Detector, 1=Classifier, 2=Segmentation, 100=Other
network-type=100
# Enable tensor metadata output
output-tensor-meta=1

These keys enable tensor meta processing, and only then will the issue reproduce.

Enable this in dstest2_pgie_config.txt (as shared in my code) and then test KLT and DCF: KLT will work but DCF will not.

Regards
Pallab Sarkar

Hi,

Thanks for checking this with us.
We can now reproduce this issue in our environment.

It looks like the bounding boxes are output correctly with the following update:

## 0=Detector, 1=Classifier, 2=Segmentation, 100=Other
network-type=0
# Enable tensor metadata output
output-tensor-meta=1

May I know why you need to set network-type=100, since it is expected to be 0 for a pgie config file?
We have reported this issue to our internal team and will share more information with you once we hear any news.

Thanks.

Hi,

Keeping network-type=100 and output-tensor-meta=1 is the configuration suggested for getting the output as a tensor in the DeepStream SDK example “deepstream-infer-tensor-meta-test”.

From the README of deepstream-infer-tensor-meta-test, I can see that network-type=100 is not mandatory:

To enable output layers' tensor data, we need set property or attribute
    "output-tensor-meta=true".
In the sample code, We also set attribute "network-type=100" in config file but
this is not mandatory. "output-tensor-meta" will work with nvinfer configured
as detector/classifier as well.

But my proprietary code (not the one I shared with you) only works with network-type=100 and output-tensor-meta=1.

If I keep network-type=0 and output-tensor-meta=1, I get the error below in my code:

Creating LL OSD context new
0:00:04.033092473 21347   0x5589bd1400 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:parseBoundingBox(): Could not find output coverage layer for parsing objects
0:00:04.033239352 21347   0x5589bd1400 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:fillDetectionOutput(): Failed to parse bboxes
./runarm.sh: line 1: 21347 Segment

Why do the KLT and DCF trackers behave differently with network-type=100 and output-tensor-meta=1?

Regards

Hi,

Thanks for the update.

This issue is being checked by our internal team.
We will update here once we get any feedback. Stay tuned.

Hi,

Is there any update on this issue ?

Regards

Hi,

Sorry for keeping you waiting.

We just got a response from the internal team this morning.
You can fix this issue with the following workaround:

diff --git a/deepstream_test2_app.cpp b/deepstream_test2_app.cpp
index 50db65c..e6c2000 100644
--- a/deepstream_test2_app.cpp
+++ b/deepstream_test2_app.cpp
@@ -132,6 +132,10 @@
           l_frame = l_frame->next) {
         NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
 
+       nvds_acquire_meta_lock (batch_meta);
+        frame_meta->bInferDone = TRUE;
+        nvds_release_meta_lock (batch_meta);
+
         /* Iterate user metadata in frames to search PGIE's tensor metadata */
         for (NvDsMetaList * l_user = frame_meta->frame_user_meta_list;
             l_user != NULL; l_user = l_user->next) {

The root cause of this issue is that gstnvinfer does not parse bounding boxes when network-type=100.
It is the user’s responsibility to attach bounding boxes to NvDsFrameMeta for the generic model type.

Without the attachment, NvDCF believes that inference has not been done for any of the input frames.
This does not happen with the KLT tracker only because it does not check this field yet.

We are discussing implementing some simple APIs in gst-utils to improve the attach_metadata_xxx helpers.
For now, please use the above workaround to avoid this issue.

Thanks.

Hi,

Thanks for the great support on this issue. I verified it in my code and it’s working.
We will be looking forward to the new attach_metadata_xxx APIs in future upgrades!

Regards
Pallab Sarkar