Multiple ROIs bound to one class_id in deepstream-app action recognition?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Xavier NX
• DeepStream Version: 6.1
• JetPack Version (valid for Jetson only):
• TensorRT Version:
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs):
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)

I’ve set up two non-overlapping ROIs in one video source. When an action was detected in only one ROI, I could get the correct class_id. But when I tried to get the corresponding ROI’s coordinates by iterating “preprocess_batchmeta->roi_vector”, I found that both ROIs were bound to the same label even though there was no target in the other ROI.
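For reference, the probe logic I use to read the per-ROI results is roughly the sketch below. It follows my reading of the deepstream-3d-action-recognition sample probe and the nvdsmeta.h / nvdspreprocess_meta.h headers in DeepStream 6.1, so field names may differ slightly in other versions:

/* Minimal pad-probe sketch for dumping per-ROI results.
 * Based on my reading of the deepstream-3d-action-recognition probe;
 * struct fields come from nvdsmeta.h / nvdspreprocess_meta.h (DS 6.1). */
#include <gst/gst.h>
#include "gstnvdsmeta.h"
#include "nvdspreprocess_meta.h"

static GstPadProbeReturn
roi_label_probe (GstPad *pad, GstPadProbeInfo *info, gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  /* The preprocess plugin attaches its ROI list as batch-level user meta. */
  for (NvDsMetaList *l_user = batch_meta->batch_user_meta_list; l_user;
       l_user = l_user->next) {
    NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
    if (user_meta->base_meta.meta_type != NVDS_PREPROCESS_BATCH_META)
      continue;

    GstNvDsPreProcessBatchMeta *preprocess_batchmeta =
        (GstNvDsPreProcessBatchMeta *) user_meta->user_meta_data;

    /* One NvDsRoiMeta entry per configured ROI in the batch. */
    for (auto &roi_meta : preprocess_batchmeta->roi_vector) {
      g_print ("source %u roi: left=%.0f top=%.0f w=%.0f h=%.0f\n",
          roi_meta.frame_meta->source_id,
          roi_meta.roi.left, roi_meta.roi.top,
          roi_meta.roi.width, roi_meta.roi.height);

      /* Classification results that nvinfer attached to this ROI. */
      for (NvDsMetaList *l_cls = roi_meta.classifier_meta_list; l_cls;
           l_cls = l_cls->next) {
        NvDsClassifierMeta *cls_meta = (NvDsClassifierMeta *) l_cls->data;
        for (NvDsMetaList *l_lbl = cls_meta->label_info_list; l_lbl;
             l_lbl = l_lbl->next) {
          NvDsLabelInfo *label = (NvDsLabelInfo *) l_lbl->data;
          g_print ("  class_id=%u label=%s prob=%.3f\n",
              label->result_class_id, label->result_label,
              label->result_prob);
        }
      }
    }
  }
  return GST_PAD_PROBE_OK;
}

With this probe attached downstream of nvinfer, each ROI prints its own rectangle together with whatever labels were attached to it, and that is where I see both ROIs reporting the same label.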

I think the class_id is supposed to be bound to the ROI in which it was recognized. I tried setting up another [group-x] and got the same result (a sketch of both configurations I tried follows below). Did I miss something in the configuration, or is this how the plugin is implemented? Please give me some advice. Thanks!
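For reference, the two configurations I tried look roughly like this (coordinates are placeholders; the single-group form with two left;top;width;height sets on one roi-params line follows the deepstream-preprocess-test sample config):

# Variant 1: both ROIs in one group, two left;top;width;height sets on one line
[group-0]
src-id=0
process-on-roi=1
roi-params-src-0=0;0;500;720;750;0;500;720

# Variant 2: a second [group-x] for the same source, one ROI per group
[group-0]
src-id=0
process-on-roi=1
roi-params-src-0=0;0;500;720

[group-1]
src-id=0
process-on-roi=1
roi-params-src-0=750;0;500;720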

Hi, what’s your pipeline like? Did you run our demo code?

Hi, my pipeline includes cascaded object and action detectors, and everything works fine with the object detector.

I just ran a simple test with your demo code and observed almost the same behavior. The relevant part of “config_preprocess_2d_custom.txt” is:

[group-0]
src-id=0
process-on-roi=1
roi-params-src-0=750;0;500;720

Please find attached the clipped images showing that “Walk” is detected even though the person is outside the ROI. You can also see on the right side of the images that the class-id, score, and label are output at the same time.

OK, so more than one ROI is bound to the same class_id. When one of them gets the right label, the others get it too. Is that right?

Yes! This is what I got.

By the way, with your demo code I got action labels before the object even entered the ROI; you can see in the image above that the “walk” label is already there. Could you explain that?

We’ll look into this problem and get back to you. Thanks.

Hi, any update on this issue? Thanks!

I’m analyzing this problem. It may require some debugging in the source code, especially in the preprocess plugin and the custom preprocess code, custom_sequence_preprocess.

Hope to hear from you soon. Thanks!

Hi @AndySimcoe, this problem has nothing to do with the ROI-handling logic of the preprocess plugin. The root cause is that the accuracy of the model is not very good: the training dataset for this model is very small, so even if you send it a picture without a person, it may still output wrong results. The model is just meant to demonstrate the action recognition function. If you want to use it in your own project, we suggest you train your own model. Thanks.

Hi @yuweiw, thank you for your reply.

I’m a little confused by this conclusion, because in both my deepstream-app pipeline and your demo code I have never seen random or wrong labels. It seems that either “process-on-roi” does not take effect, or multiple ROIs in the same channel are bound to just one label. However, I cannot prove it.

You can try the demo below to test the preprocess plugin and verify the process-on-roi parameter:

deepstream-preprocess-test

Hi @yuweiw, I’ve checked “deepstream-preprocess-test” and found that “process-on-roi” works! I’ve also noticed that libcustom2d_preprocess.so does handle the “process-on-roi” key.

Now I suspect that “libnvds_custom_sequence_preprocess.so” in the “Deepstream-3D-Recognition” sample does not take ROIs into account. I’ll dig into the relevant code; I hope you can give me some advice. Thanks!

No, you can refer to our open-source code, sequence_image_process.cpp; the ROI is handled there. As I said before, the training dataset for this model is small, so the accuracy is not very good. It is just provided for the demo.

Hi @yuweiw, yes, you are right. After adding some debug output to “deepstream-preprocess-test”, I can get the correct ROI parameters according to the “process-on-roi” setting.

I also replaced the “walk” video with the “ride_bike” one, and some wrong “walk” labels popped up. It seems the model was trained on a biased dataset, which misled me to a wrong conclusion when I used the “walk” video clip in the first place.

As a next step, I’ll focus on training my own model, as you suggested. Thanks!
