Limitation of TAO Action Recognition Model and DeepStream Integration

This is a feature request to the engineering team based on a discussion in this thread on the NVIDIA TAO Toolkit recipes repo:

I want to integrate the TAO action recognition classifier into DeepStream 6 as an SGIE after a PGIE object detector, so that:

PGIE (Faster R-CNN) → pass the bounding-box information for each cow in the frame → call the SGIE action recognition model only on a particular PGIE class (e.g., class cow), using only the bounding-box regions as input.

Unfortunately, I was told that the current action recognition sample in DS6 does not work with a detection model; it only supports manually set regions of interest. This is very limiting: the bounding boxes detected by the PGIE vary in size and location from frame to frame, so it is not possible to hard-code regions of interest. There are also typically many objects detected per frame, so it seems natural for the PGIE to handle object localization and let the SGIE action model make the subsequent classification decisions.
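For context, the DS6 action recognition sample feeds the SGIE through the Gst-nvdspreprocess plugin, whose config file pins ROIs to fixed per-source coordinates. A minimal sketch of the relevant section (key names as documented for nvdspreprocess in DeepStream 6; the coordinates are placeholders) shows why this cannot follow a detector's output:

```ini
# nvdspreprocess config fragment (sketch, not a complete file)
[group-0]
src-ids=0
process-on-roi=1
# One fixed ROI per source, as left;top;width;height --
# decided at config time, so it cannot track detected boxes.
roi-params-src-0=0;0;640;480
```

Because `roi-params-src-0` is read once from the config, every frame is cropped at the same place regardless of where the cows actually are.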

The classifier must be usable as an SGIE to be really useful in production; the current setup is more like a whole-frame video classifier, which is quite far from being production-ready. Is there any plan to improve this model?

Sorry for the late response. Is this still an issue that needs support?


The above was a question regarding a product feature. Have you had a chance to read through it?

That was due to a batch update, sorry for my carelessness. I will forward this request to the internal team to triage.


Thank you. Please keep me posted on this.

Sorry for the delay! After internal discussion: 6.0 does not have SGIE pre-processing support, but the pre-processor plugin, the action recognition library, and nvinfer are open source, so you could modify them for your use case.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.