Could ActionRecognitionNet inference be segmented by person?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: DeepStream 6.0

I’m currently experimenting with the ActionRecognitionNet sample apps in the DeepStream 6.0 sample Docker container. I was wondering whether it’s possible to run the action inference on individual people in the frame rather than on the whole frame, similar to how it’s done in some of the other demos (such as deepstream-test2). Would the Gst-nvtracker plugin that other samples use still support the sequence batching needed for the action recognition model?

  1. You can use deepstream-test1’s model to detect people, then use the ActionRecognitionNet model as the secondary inference engine; please refer to deepstream-test2.
  2. Yes, you can add nvtracker after the primary inference engine; please refer to deepstream-test2 (a minimal pipeline sketch follows this list).
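
For reference, here is a minimal Python sketch of that element ordering (uridecodebin → nvstreammux → person-detector PGIE → nvtracker → secondary nvinfer), assuming a DeepStream 6.0 environment with the Python GStreamer bindings available. The three config-file names are placeholders you would point at your own PGIE, tracker, and ActionRecognitionNet configs; they are not files shipped with the SDK, and the sketch omits the Gst-nvdspreprocess sequence batching that the action-recognition sample normally relies on.

```python
#!/usr/bin/env python3
# Hedged sketch of a deepstream-test2-style pipeline: person detector (PGIE),
# then nvtracker, then a secondary nvinfer stage. Config file paths below are
# placeholders, not files shipped with DeepStream.
import sys
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

PGIE_CONFIG = "pgie_people_detector_config.txt"      # placeholder: person-detector PGIE config
TRACKER_CONFIG = "tracker_config.txt"                # placeholder: nvtracker ll-config-file
SGIE_CONFIG = "sgie_action_recognition_config.txt"   # placeholder: secondary nvinfer config

def main(uri):
    Gst.init(None)
    pipeline = Gst.Pipeline.new("per-person-action-sketch")

    # uridecodebin -> nvstreammux -> pgie -> nvtracker -> sgie -> nvvideoconvert -> nvdsosd -> sink
    source = Gst.ElementFactory.make("uridecodebin", "source")
    streammux = Gst.ElementFactory.make("nvstreammux", "streammux")
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    tracker = Gst.ElementFactory.make("nvtracker", "tracker")
    sgie = Gst.ElementFactory.make("nvinfer", "secondary-inference")
    convert = Gst.ElementFactory.make("nvvideoconvert", "convert")
    osd = Gst.ElementFactory.make("nvdsosd", "osd")
    sink = Gst.ElementFactory.make("fakesink", "sink")   # headless; swap for a display sink if needed

    elements = [source, streammux, pgie, tracker, sgie, convert, osd, sink]
    if any(e is None for e in elements):
        sys.exit("Failed to create one or more GStreamer elements")

    source.set_property("uri", uri)
    streammux.set_property("batch-size", 1)
    streammux.set_property("width", 1280)
    streammux.set_property("height", 720)
    pgie.set_property("config-file-path", PGIE_CONFIG)
    tracker.set_property("ll-config-file", TRACKER_CONFIG)
    tracker.set_property("ll-lib-file",
                         "/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so")
    sgie.set_property("config-file-path", SGIE_CONFIG)

    for e in elements:
        pipeline.add(e)

    # Link decodebin's dynamic video pad to the muxer's request sink pad.
    def on_pad_added(_decodebin, pad):
        caps = pad.get_current_caps() or pad.query_caps(None)
        if caps.get_structure(0).get_name().startswith("video"):
            pad.link(streammux.get_request_pad("sink_0"))
    source.connect("pad-added", on_pad_added)

    streammux.link(pgie)
    pgie.link(tracker)
    tracker.link(sgie)
    sgie.link(convert)
    convert.link(osd)
    osd.link(sink)

    loop = GLib.MainLoop()
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except KeyboardInterrupt:
        pass
    pipeline.set_state(Gst.State.NULL)

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1
         else "file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4")
```

Note that the shipped deepstream-3d-action-recognition app feeds nvinfer with tensors assembled by Gst-nvdspreprocess, so running the model per tracked object would also require an ROI-based preprocess configuration; the sketch above only illustrates the deepstream-test2-style PGIE → nvtracker → SGIE ordering suggested in the answer.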

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks
