Multi-person action recognition pipeline on DeepStream with PeopleNet as PGIE

• Hardware Platform (Jetson / GPU) Jetson AGX ORIN
• DeepStream Version 6.1
• JetPack Version (valid for Jetson only)
Hello everyone,
I’m trying to create a multi-person action recognition DeepStream application. The C++ application suggested in the SDK supports only single-person action recognition. My idea is to chain two models: the first would be a PeopleNet model for detecting persons in the scene, which feeds the bounding boxes to the second model, which would use those crops as input to the action recognition algorithm. Does anyone know how to do this? Are there any examples showing how to feed the outputs of a pgie to a sgie?
Thank you for your help,

Are you talking about the sample deepstream-3d-action-recognition?

Please upgrade to DeepStream 6.2. There is sample for nvdspreprocess work with SGIE - deepstream-preprocess-test

Hello Fiona,
Yes, I was talking about deepstream-3d-action-recognition. I have seen that nvdspreprocess support for SGIE starts from DeepStream 6.2, so I will move to that release as you suggested.

I have also found in the documentation that the way to do it is with a pipeline like this:

gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream-6.2/samples/streams/sample_1080p_h264.mp4 ! qtdemux ! \
h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvvideoconvert ! \
nvdspreprocess config-file=/opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvdspreprocess/config_preprocess.txt ! \
nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt unique-id=1 \
batch-size=2 input-tensor-meta=1 ! nvdspreprocess config-file=/opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvdspreprocess/config_preprocess_sgie.txt ! \
nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_secondary_carcolor.txt input-tensor-meta=1 unique-id=3 ! \
nvmultistreamtiler width=1920 height=1080 ! nvvideoconvert ! nvdsosd ! nv3dsink

I have been wondering how to set up the config_preprocess_sgie.txt configuration so that only the bounding boxes from the pgie are used as ROIs. Is there a way to keep the NvDsMetadata through both nvinfer stages? Can I expect to see the action recognition prediction printed on top of each box?

Thank you for your help,

There is no update from you for a period, so we are assuming this is not an issue anymore and are closing this topic. If you need further support, please open a new one. Thanks.

There is an “operate-on-gie-id” parameter of nvdspreprocess. See Gst-nvdspreprocess (Alpha) — DeepStream 6.2 Release documentation.
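As a rough illustration of how that parameter fits in, here is a minimal sketch of what a config_preprocess_sgie.txt group could look like. It is modeled on the deepstream-preprocess-test sample config: the exact key names and values below (unique IDs, network shape, the assumption that the PeopleNet pgie runs with unique-id=1 and detects persons as class 0) are assumptions to adapt, so compare against the sample and the Gst-nvdspreprocess documentation for your DeepStream version:

```ini
[property]
enable=1
# Operate on detected objects, not on the full frame (SGIE mode)
process-on-frame=0
# unique-id of the downstream nvinfer (the SGIE) that should consume these tensors
target-unique-ids=3

[group-0]
# -1 = all sources
src-ids=-1
# Crop the objects attached by the upstream detector instead of fixed ROIs
process-on-roi=0
# Only process objects produced by the pgie with unique-id=1 (assumed PeopleNet id)
operate-on-gie-id=1
# Only process the "person" class (assumed class id 0 for PeopleNet)
operate-on-class-ids=0
```

With a setup along these lines, each PeopleNet detection becomes an input tensor for the SGIE, and since the SGIE attaches its classification to the existing object metadata, nvdsosd can render the action label on top of each person's box.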

Please read the documentation and the deepstream-preprocess-test sample carefully for the details. Furthermore, nvdspreprocess is open source, so you can refer to the code directly.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.