There are currently three variants of PeopleNet. The first is trained with the DetectNet_v2 architecture, the second with the Deformable DETR (D-DETR) architecture, and the third with the DINO architecture.
A simple way to run PeopleNet inference is to use the DeepStream docker container. The steps below walk through all three variants.
$ docker run --runtime=nvidia -it --rm nvcr.io/nvidia/deepstream:6.3-triton-multiarch /bin/bash
# apt install libeigen3-dev && cd /usr/include && ln -sf eigen3/Eigen Eigen
# cd -
# git clone https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git
# cd deepstream_tao_apps
# export CUDA_VER=12.1
# make
# mkdir -p ./models/peoplenet && cd ./models/peoplenet
# wget https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplenet/versions/deployable_quantized_v2.6.1/files/resnet34_peoplenet_int8.etlt
# wget https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplenet/versions/deployable_quantized_v2.6.1/files/resnet34_peoplenet_int8.txt
# wget https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplenet/versions/deployable_quantized_v2.6.1/files/labels.txt
# cd - && mkdir -p ./models/peoplenet_transformer && cd ./models/peoplenet_transformer
# wget --content-disposition 'https://api.ngc.nvidia.com/v2/models/org/nvidia/team/tao/peoplenet_transformer/deployable_v1.1/files?redirect=true&path=resnet50_peoplenet_transformer_op12.onnx' -O resnet50_peoplenet_transformer_op12.onnx
# wget --content-disposition 'https://api.ngc.nvidia.com/v2/models/org/nvidia/team/tao/peoplenet_transformer/deployable_v1.0/files?redirect=true&path=labels.txt' -O labels.txt
# wget --content-disposition 'https://api.ngc.nvidia.com/v2/models/org/nvidia/team/tao/peoplenet_transformer_v2/deployable_v1.0/files?redirect=true&path=dino_fan_small_astro_delta.onnx' -O dino_fan_small_astro_delta.onnx
# cd -
# ./apps/tao_detection/ds-tao-detection -c configs/nvinfer/peoplenet_tao/config_infer_primary_peoplenet.txt -i file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4
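If the app fails to start, a common cause is a missed or truncated download. A quick sanity check can be sketched as below; `check_model_files` is a hypothetical helper, with paths taken from the steps above.

```shell
# Sketch: verify the model files downloaded above exist and are non-empty
# before launching ds-tao-detection.
check_model_files() {
  local dir=$1; shift
  local missing=0 f
  for f in "$@"; do
    # -s is true only for files that exist and have size > 0
    [ -s "$dir/$f" ] || { echo "missing or empty: $dir/$f" >&2; missing=1; }
  done
  return $missing
}

# Usage inside the container:
# check_model_files ./models/peoplenet \
#   resnet34_peoplenet_int8.etlt resnet34_peoplenet_int8.txt labels.txt
```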
For the second PeopleNet (D-DETR), update the nvinfer config to point at the ONNX model:
$ vim configs/nvinfer/peoplenet_transformer_tao/pgie_peoplenet_transformer_tao_config.txt
onnx-file=../../../models/peoplenet_transformer/resnet50_peoplenet_transformer_op12.onnx
model-engine-file=../../../models/peoplenet_transformer/resnet50_peoplenet_transformer_op12.onnx_b1_gpu0_fp16.engine
#model-engine-file=../../../models/peoplenet_transformer/resnet50_peoplenet_transformer.etlt_b1_gpu0_fp16.engine
#tlt-encoded-model=../../../models/peoplenet_transformer/resnet50_peoplenet_transformer.etlt
#tlt-model-key=nvidia_tao
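The same config change can be applied non-interactively with sed instead of vim. The sketch below assumes the stock config already contains `onnx-file` and `model-engine-file` keys; if it only has the `tlt-*` keys, append the new keys instead.

```shell
# Sketch: comment out the old .etlt entries and point the config at an
# ONNX model plus its serialized engine.
use_onnx_model() {
  local config=$1 onnx=$2 engine=$3
  sed -i \
    -e "s|^tlt-encoded-model=|#tlt-encoded-model=|" \
    -e "s|^tlt-model-key=|#tlt-model-key=|" \
    -e "s|^onnx-file=.*|onnx-file=$onnx|" \
    -e "s|^model-engine-file=.*|model-engine-file=$engine|" \
    "$config"
}

# Usage (paths from the steps above):
# use_onnx_model configs/nvinfer/peoplenet_transformer_tao/pgie_peoplenet_transformer_tao_config.txt \
#   ../../../models/peoplenet_transformer/resnet50_peoplenet_transformer_op12.onnx \
#   ../../../models/peoplenet_transformer/resnet50_peoplenet_transformer_op12.onnx_b1_gpu0_fp16.engine
```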
$ ./apps/tao_detection/ds-tao-detection -c configs/nvinfer/peoplenet_transformer_tao/pgie_peoplenet_transformer_tao_config.txt -i file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4
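The engine filename in `model-engine-file` follows the pattern nvinfer uses when it serializes an engine: the model filename followed by `_b<batch-size>_gpu<gpu-id>_<precision>.engine`. A small helper (hypothetical, for illustration) can build that name instead of typing it by hand:

```shell
# Sketch: predict the auto-generated engine filename for a given model,
# batch size, GPU id, and precision.
engine_name() {
  local model=$1 batch=$2 gpu=$3 precision=$4
  printf '%s_b%s_gpu%s_%s.engine\n' "$model" "$batch" "$gpu" "$precision"
}

engine_name resnet50_peoplenet_transformer_op12.onnx 1 0 fp16
# -> resnet50_peoplenet_transformer_op12.onnx_b1_gpu0_fp16.engine
```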
For the third PeopleNet (DINO), first upgrade the container's TensorRT 8.5 to TensorRT 8.6:
$ apt-get install sudo
$ wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/secure/8.6.1/local_repos/nv-tensorrt-local-repo-ubuntu2004-8.6.1-cuda-11.8_1.0-1_amd64.deb
$ sudo dpkg -i nv-tensorrt-local-repo-ubuntu2004-8.6.1-cuda-11.8_1.0-1_amd64.deb
$ sudo cp /var/nv-tensorrt-local-repo-ubuntu2004-8.6.1-cuda-11.8/nv-tensorrt-local-D7BB1B18-keyring.gpg /usr/share/keyrings/
$ sudo apt-get update
$ sudo apt-get install tensorrt
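After the install, it is worth confirming the upgrade actually took effect before rebuilding or running anything. The sketch below compares a major.minor version string against the 8.6 requirement; `libnvinfer8` is the TensorRT 8.x runtime package name.

```shell
# Sketch: return success if the given TensorRT version is >= 8.6.
trt_version_ok() {
  local ver=$1 major minor
  major=${ver%%.*}            # text before the first dot
  minor=${ver#*.}             # text after the first dot...
  minor=${minor%%.*}          # ...trimmed to the minor component
  [ "$major" -gt 8 ] || { [ "$major" -eq 8 ] && [ "$minor" -ge 6 ]; }
}

# Usage inside the container:
# installed=$(dpkg-query -W -f='${Version}' libnvinfer8 | cut -d- -f1)
# trt_version_ok "$installed" && echo "TensorRT $installed OK"
```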
$ vim configs/nvinfer/peoplenet_transformer_tao/pgie_peoplenet_transformer_tao_config.txt
onnx-file=../../../models/peoplenet_transformer/dino_fan_small_astro_delta.onnx
model-engine-file=../../../models/peoplenet_transformer/dino_fan_small_astro_delta.onnx_b1_gpu0_fp16.engine
#model-engine-file=../../../models/peoplenet_transformer/resnet50_peoplenet_transformer.etlt_b1_gpu0_fp16.engine
#tlt-encoded-model=../../../models/peoplenet_transformer/resnet50_peoplenet_transformer.etlt
#tlt-model-key=nvidia_tao
$ ./apps/tao_detection/ds-tao-detection -c configs/nvinfer/peoplenet_transformer_tao/pgie_peoplenet_transformer_tao_config.txt -i file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.mp4