How to output PeopleNet inference results to a text file

Hi

I can run the peoplenet sample software as below.

$ cd /opt/nvidia/deepstream/deepstream-6.1/samples/configs/tao_pretrained_models/
$ sudo deepstream-app -c deepstream_app_source1_peoplenet.txt

As a next step, I want to output the bounding box coordinates inferred by PeopleNet to a text file.
Please tell me how to do it. I searched, but I didn’t understand at all.

  • Hardware Platform (Jetson / GPU)
    JETSON-AGX-ORIN-DEV-KIT
  • DeepStream Version
    6.1.1
  • JetPack Version (valid for Jetson only)
    5.0.2 (L4T 35.1.0)
  • TensorRT Version
    8.4.1.5

The nvinfer plugin is open source. You can output the bboxes to a file in attach_metadata_detector() in /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvinfer/gstnvinfer_meta_utils.cpp. Please rebuild it and replace /opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_infer.so.

@fanzh

Thank you for your advice.

Do you mean that the attach_metadata_detector function can obtain the coordinates of the box surrounding each person detected by PeopleNet?

And since the bounding box coordinates are available in the attach_metadata_detector function, do you mean that I should modify the source code myself so that they are saved to a text file?

yes, please refer to
rect_params.left = obj.left;
rect_params.top = obj.top;
rect_params.width = obj.width;
rect_params.height = obj.height;

Yes, the nvinfer plugin is open source; please refer to the last comment.

@fanzh

I’m happy to get advice so quickly.

step1. I modified gstnvinfer_meta_utils.cpp and ran make.
step2. libnvds_infer.so in the lib directory was updated.

step3. I run peoplenet.

$ sudo deepstream-app -c deepstream_app_source1_peoplenet.txt

There is something I do not understand: how do you configure deepstream-app to use the new libnvds_infer.so?

Please refer to my first reply: replace /opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_infer.so with the newly built one.
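Assuming the stock DeepStream layout on JetPack 5.0.2, the rebuild-and-replace step might look like the sketch below. The CUDA_VER value and paths are assumptions; check them against your install before running.

```shell
# Rebuild the gst-nvinfer plugin from source (CUDA_VER=11.4 assumed for JetPack 5.0.2).
cd /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvinfer
sudo CUDA_VER=11.4 make

# Back up the stock plugin, then install the rebuilt one.
sudo cp /opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_infer.so \
        /opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_infer.so.bak
sudo cp libnvdsgst_infer.so /opt/nvidia/deepstream/deepstream/lib/gst-plugins/

# If the old plugin still seems to be used, clearing the GStreamer
# registry cache forces a rescan on the next run.
rm -f ~/.cache/gstreamer-1.0/registry.aarch64.bin
```

No change to deepstream_app_source1_peoplenet.txt is needed: deepstream-app loads the nvinfer element through GStreamer, which picks up the replaced .so from the plugin directory.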

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.