How to run PeopleNet on a folder of images?

• Hardware (T4/V100/Xavier/Nano/etc)
AGX Orin
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc)
PeopleNet

I'm not sure if this is the right place to ask, but I'm using a quantized PeopleNet engine in my custom DeepStream app. Now, to compare it with other models like YOLOv5, I want to run it on my custom dataset, which is a folder of images, and get the inference results as txt files. However, I couldn't find a reliable way of doing that. Can you point me to any guides, scripts, etc.? Thank you.

For running TAO inference on PeopleNet with a folder of images, the default inference tool supports this. Please refer to the "Visualize inferences" section in tao_tutorials/notebooks/tao_launcher_starter_kit/detectnet_v2/detectnet_v2.ipynb at main · NVIDIA/tao_tutorials · GitHub.
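As a sketch of what to expect (the folder names follow the notebook; verify against your TAO version), the inference tool writes two outputs under the results directory:

results/
├── images_annotated/   # the input images with detected boxes overlaid
└── labels/             # one KITTI-format .txt label file per image

The labels/ folder contains the per-image txt results you are after.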

Hi Morganh, I tried to run that on my Jetson, but TAO seems to use an amd64 container for this, which throws an exec format error.

On Jetson, please follow tao_deploy/README.md at main · NVIDIA/tao_deploy · GitHub to install tao-deploy.
The setups below are confirmed to work.
Case 1:
JetPack 5.0 + nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel should work out of the box.
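For example, the container can be started like this (a sketch; the workspace path is a placeholder for wherever your model and images live):

$ docker run -it --rm --runtime nvidia \
    -v /path/to/workspace:/workspace \
    nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel /bin/bash

Then install tao-deploy inside the container as described in the README above.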

Case 2:
JetPack 6.0 + nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel works with the extra steps below.

$ apt-get install vim
$ vim /etc/apt/sources.list
and add the jammy (Ubuntu 22.04) package sources below, so the libc6 upgrade in the next steps can be pulled from them (the Tsinghua mirror is shown here; any ubuntu-ports mirror should work):
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ jammy main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ jammy-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ jammy-security main restricted universe multiverse

$ apt update
$ apt install libc6
$ ldd --version
$ apt install libopenmpi-dev
$ pip install nvidia_tao_deploy==5.0.0.423.dev0
$ pip install https://files.pythonhosted.org/packages/f7/7a/ac2e37588fe552b49d8807215b7de224eef60a495391fdacc5fa13732d11/nvidia_eff_tao_encryption-0.1.7-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
$ pip install https://files.pythonhosted.org/packages/0d/05/6caf40aefc7ac44708b2dcd5403870181acc1ecdd93fa822370d10cc49f3/nvidia_eff-0.6.2-py38-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
$ detectnet_v2 --help
root@daef05df6f3a:/# detectnet_v2 inference   -e inference_spec.txt   -m peoplenet_deployable_quantized_v2.6.1/resnet34_peoplenet_int8.etlt   -r results/   -i inputset/val2017   --batch_size 1
Loading uff directly from the package source code
2025-06-27 11:21:50,623 [INFO] nvidia_tao_deploy.cv.common.logging.status_logging: Log file already exists at /results/status.json
2025-06-27 11:21:50,624 [INFO] root: Starting detectnet_v2 inference.
2025-06-27 11:21:50,628 [INFO] root: 21:3 : Message type "BboxHandlerConfig" has no field named "bbox_color".
Traceback (most recent call last):
  File "</usr/local/lib/python3.8/dist-packages/nvidia_tao_deploy/cv/detectnet_v2/scripts/inference.py>", line 3, in <module>
  File "<frozen cv.detectnet_v2.scripts.inference>", line 190, in <module>
  File "<frozen cv.common.decorators>", line 63, in _func
  File "<frozen cv.common.decorators>", line 48, in _func
  File "<frozen cv.detectnet_v2.scripts.inference>", line 47, in main
  File "<frozen cv.detectnet_v2.proto.utils>", line 42, in load_proto
  File "<frozen cv.detectnet_v2.proto.utils>", line 39, in _load_from_file
  File "/usr/local/lib/python3.8/dist-packages/google/protobuf/text_format.py", line 719, in Merge
    return MergeLines(
  File "/usr/local/lib/python3.8/dist-packages/google/protobuf/text_format.py", line 793, in MergeLines
    return parser.MergeLines(lines, message)
  File "/usr/local/lib/python3.8/dist-packages/google/protobuf/text_format.py", line 818, in MergeLines
    self._ParseOrMerge(lines, message)
  File "/usr/local/lib/python3.8/dist-packages/google/protobuf/text_format.py", line 837, in _ParseOrMerge
    self._MergeField(tokenizer, message)
  File "/usr/local/lib/python3.8/dist-packages/google/protobuf/text_format.py", line 967, in _MergeField
    merger(tokenizer, message, field)
  File "/usr/local/lib/python3.8/dist-packages/google/protobuf/text_format.py", line 1042, in _MergeMessageField
    self._MergeField(tokenizer, sub_message)
  File "/usr/local/lib/python3.8/dist-packages/google/protobuf/text_format.py", line 932, in _MergeField
    raise tokenizer.ParseErrorPreviousToken(
google.protobuf.text_format.ParseError: 21:3 : Message type "BboxHandlerConfig" has no field named "bbox_color".
2025-06-27 11:21:50,964 [WARNING] nvidia_tao_deploy.cv.common.entrypoint.entrypoint_proto: Telemetry data couldn't be sent, but the command ran successfully.
2025-06-27 11:21:50,965 [WARNING] nvidia_tao_deploy.cv.common.entrypoint.entrypoint_proto: [Error]: Not Supported
2025-06-27 11:21:50,965 [WARNING] nvidia_tao_deploy.cv.common.entrypoint.entrypoint_proto: Execution status: FAIL
root@daef05df6f3a:/# 

I'm not sure what config to use here; I found one on the model page. I also tried with the onnx and etlt files, but it didn't work.
The config I used:

root@daef05df6f3a:/# cat inference_spec.txt 
inferencer_config {
  target_classes: "person"
  image_width: 960
  image_height: 544
  image_channels: 3
  batch_size: 1
  gpu_index: 0

  tensorrt_config {
    trt_engine: ""
    backend_data_type: INT8
    parser: ETLT
    etlt_model: "/workspace/peoplenet_deployable_quantized_v2.6.1/resnet34_peoplenet_int8.etlt"
  }
}

bbox_handler_config {
  kitti_dump: false
  disable_overlay: false
  overlay_linewidth: 1
  bbox_color { r: 255 g: 0 b: 0 }
  clustering_config {
    pre_cluster_threshold: 0.2
    nms_iou_threshold: 0.5
    topk: 20
  }
}

The bbox_color field is not placed correctly. Please double-check against the spec files in tao_tutorials/notebooks/tao_launcher_starter_kit/detectnet_v2/specs at main · NVIDIA/tao_tutorials · GitHub.
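For reference, in the tutorial specs bbox_color does not sit directly under bbox_handler_config; it lives inside a classwise_bbox_handler_config entry. Below is a minimal sketch for the "person" class, modeled on detectnet_v2_inference_kitti_tlt.txt (the threshold values are illustrative; double-check the field names against the spec in the repo):

bbox_handler_config {
  kitti_dump: true
  disable_overlay: false
  overlay_linewidth: 2
  classwise_bbox_handler_config {
    key: "person"
    value: {
      confidence_model: "aggregate_cov"
      output_map: "person"
      confidence_threshold: 0.9
      bbox_color {
        R: 255
        G: 0
        B: 0
      }
      clustering_config {
        coverage_threshold: 0.005
        dbscan_eps: 0.3
        dbscan_min_samples: 0.05
        minimum_bounding_box_height: 4
      }
    }
  }
}

Setting kitti_dump: true is what writes the per-image KITTI .txt files you want.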

I still can't find the correct file. I have switched to the cloud to use an x86 instance, because I was getting so many errors on Jetson.

b_karlik@research-berkay-tao-vm:~$ tao deploy detectnet_v2 inference   -e /workspace/peoplenet_infer_etlt.txt   -m /workspace/peoplenet_vdeployable_quantized_v2.6.1/resnet34_peoplenet_int8.etlt   -i /workspace/dataset/val2017   -r /workspace/peoplenet_output   -b 4
2025-06-30 14:27:59,556 [TAO Toolkit] [INFO] root 160: Registry: ['nvcr.io']
2025-06-30 14:27:59,632 [TAO Toolkit] [INFO] nvidia_tao_cli.components.instance_handler.local_instance 360: Running command in container: nvcr.io/nvidia/tao/tao-toolkit:5.5.0-deploy
2025-06-30 14:27:59,661 [TAO Toolkit] [WARNING] nvidia_tao_cli.components.docker_handler.docker_handler 288: 
Docker will run the commands as root. If you would like to retain your
local host permissions, please add the "user":"UID:GID" in the
DockerOptions portion of the "/home/b_karlik/.tao_mounts.json" file. You can obtain your
users UID and GID by using the "id -u" and "id -g" commands on the
terminal.
2025-06-30 14:27:59,661 [TAO Toolkit] [INFO] nvidia_tao_cli.components.docker_handler.docker_handler 301: Printing tty value True
[2025-06-30 14:28:01,769 - TAO Toolkit - matplotlib.font_manager - INFO] generated new fontManager
Loading uff directly from the package source code
2025-06-30 14:28:03,764 [INFO] nvidia_tao_deploy.cv.common.logging.status_logging: Log file already exists at /workspace/peoplenet_output/status.json
2025-06-30 14:28:03,765 [INFO] root: Starting detectnet_v2 inference.
[06/30/2025-14:28:03] [TRT] [E] 1: [runtime.cpp::parsePlan::314] Error Code 1: Serialization (Serialization assertion plan->header.magicTag == rt::kPLAN_MAGIC_TAG failed.)
2025-06-30 14:28:03,848 [INFO] root: 'NoneType' object has no attribute 'create_execution_context'
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/nvidia_tao_deploy/cv/detectnet_v2/scripts/inference.py", line 190, in <module>
    main(args)
  File "/usr/local/lib/python3.10/dist-packages/nvidia_tao_deploy/cv/common/decorators.py", line 63, in _func
    raise e
  File "/usr/local/lib/python3.10/dist-packages/nvidia_tao_deploy/cv/common/decorators.py", line 47, in _func
    runner(cfg, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/nvidia_tao_deploy/cv/detectnet_v2/scripts/inference.py", line 49, in main
    trt_infer = DetectNetInferencer(args.model_path,
  File "/usr/local/lib/python3.10/dist-packages/nvidia_tao_deploy/cv/detectnet_v2/inferencer.py", line 77, in __init__
    super().__init__(engine_path)
  File "/usr/local/lib/python3.10/dist-packages/nvidia_tao_deploy/inferencer/trt_inferencer.py", line 50, in __init__
    self.context = self.engine.create_execution_context()
AttributeError: 'NoneType' object has no attribute 'create_execution_context'
Exception ignored in: <function DetectNetInferencer.__del__ at 0x7ef11ff72e60>
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/nvidia_tao_deploy/cv/detectnet_v2/inferencer.py", line 161, in __del__
    if self.context:
AttributeError: 'DetectNetInferencer' object has no attribute 'context'
[2025-06-30 14:28:04,131 - TAO Toolkit - nvidia_tao_deploy.cv.common.entrypoint.entrypoint_proto - INFO] Sending telemetry data.
[2025-06-30 14:28:04,131 - TAO Toolkit - root - INFO] ================> Start Reporting Telemetry <================
[2025-06-30 14:28:04,131 - TAO Toolkit - root - INFO] Sending {'version': '5.5.0', 'action': 'inference', 'network': 'detectnet_v2', 'gpu': ['Tesla-T4'], 'success': False, 'time_lapsed': 1.7269489765167236} to https://api.tao.ngc.nvidia.com.
[2025-06-30 14:28:05,572 - TAO Toolkit - root - INFO] Telemetry sent successfully.
[2025-06-30 14:28:05,572 - TAO Toolkit - root - INFO] ================> End Reporting Telemetry <================
[2025-06-30 14:28:05,572 - TAO Toolkit - nvidia_tao_deploy.cv.common.entrypoint.entrypoint_proto - INFO] Execution status: FAIL
2025-06-30 14:28:05,927 [TAO Toolkit] [INFO] nvidia_tao_cli.components.docker_handler.docker_handler 363: Stopping container.

I used the config from https://raw.githubusercontent.com/NVIDIA/tao_tutorials/refs/heads/main/notebooks/tao_launcher_starter_kit/detectnet_v2/specs/detectnet_v2_inference_kitti_tlt.txt but changed the model name. I'm not sure if anything else has to change.
I'm using the model out of the box; can you provide me with a model, config, and working command, please?

If you use this config file, the inference runs against the xxx.hdf5 file (it is generated during the training process).

You cannot replace it with an xxx.etlt file (the .etlt file is actually an encrypted uff file for PeopleNet). That is also why the tao deploy run above failed with the magicTag error: its -m argument expects a serialized TensorRT engine, and the deserialization assertion fails on anything else.
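If you do want to keep using the quantized .etlt on the tao-deploy side, the usual flow is to first build a TensorRT engine from it with gen_trt_engine and then pass that engine to inference via -m. This is only a rough sketch: the key for the public PeopleNet .etlt is tlt_encode, the INT8 calibration cache ships next to the .etlt on NGC, and the exact flag names should be confirmed with detectnet_v2 gen_trt_engine --help for your version.

detectnet_v2 gen_trt_engine -m peoplenet_deployable_quantized_v2.6.1/resnet34_peoplenet_int8.etlt \
                            -k tlt_encode \
                            -r results/ \
                            --data_type int8 \
                            --cal_cache_file peoplenet_deployable_quantized_v2.6.1/resnet34_peoplenet_int8.txt \
                            --engine_file peoplenet_int8.engine
detectnet_v2 inference -e inference_spec.txt \
                       -m peoplenet_int8.engine \
                       -i inputset/val2017 \
                       -r results/ \
                       --batch_size 1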

So the simplest path is to download PeopleNet from https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/models/peoplenet/files?version=trainable_unencrypted_v2.6 and run it with the spec from https://raw.githubusercontent.com/NVIDIA/tao_tutorials/refs/heads/main/notebooks/tao_launcher_starter_kit/detectnet_v2/specs/detectnet_v2_inference_kitti_tlt.txt.
The command is shown in tao_tutorials/notebooks/tao_launcher_starter_kit/detectnet_v2/detectnet_v2.ipynb at main · NVIDIA/tao_tutorials · GitHub:

# Running inference for detection on n images
!tao model detectnet_v2 inference -e $SPECS_DIR/detectnet_v2_inference_kitti_tlt.txt \
                            -r $USER_EXPERIMENT_DIR/tlt_infer_testing \
                            -i $DATA_DOWNLOAD_DIR/test_samples
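Note that $SPECS_DIR, $USER_EXPERIMENT_DIR, and $DATA_DOWNLOAD_DIR are paths inside the launcher container, so the host folders holding your spec file and images must be mapped in ~/.tao_mounts.json. A minimal sketch (the host paths are placeholders; replace 1000:1000 with the output of id -u and id -g, as the permissions warning in your log suggests):

{
    "Mounts": [
        {
            "source": "/home/<user>/tao/specs",
            "destination": "/workspace/tao-experiments/specs"
        },
        {
            "source": "/home/<user>/tao/data",
            "destination": "/workspace/tao-experiments/data"
        }
    ],
    "DockerOptions": {
        "user": "1000:1000"
    }
}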


This works, thank you for all the help. I have a question about the model file formats: does the hdf5 format perform the same as the other formats of the same model version? I ask because I use the engine file with DeepStream for performance.

Another question, just out of curiosity: what is the encryption for?

Yes, it is expected to be similar. The different formats carry the same trained weights, so any difference comes from how the TensorRT engine is built (for example, INT8 quantization).

Before TAO 5.0, the TAO source code was not open-sourced, so the trained result was usually an encrypted .hdf5 file, i.e., a .tlt file, and the exported file (a uff/onnx file) used the encrypted .etlt format; the encryption goes along with the closed-source tooling.
From TAO 5.0 onward, TAO is open-sourced, and the .tlt and .etlt formats are deprecated.

Okay, thank you for everything. Marked as solution.
