DeepStream 6.3 tao_pretrained_models | PeopleNet .onnx to .engine

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson (Jetson Orin Nano - Dev Kit)
• DeepStream Version 6.3
• JetPack Version (valid for Jetson only) 5.1.2
• TensorRT Version 8.5.2.2
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs) questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Question about tao_pretrained_models in DeepStream 6.3

/opt/nvidia/deepstream/deepstream-6.3/samples/models/tao_pretrained_models/peopleNet
resnet34_peoplenet_int8.onnx

I tried running the command below to create a TensorRT engine file from the ONNX model, but it is not working:

/usr/src/tensorrt/bin/trtexec --onnx=resnet34_peoplenet_int8.onnx --explicitBatch --saveEngine=resnet34_peoplenet_int8.engine --int8 --allowGPUFallback --useSpinWait

[01/14/2024-01:18:18] [I] [TRT] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 10 MiB, GPU 498 MiB
[01/14/2024-01:18:18] [I] [TRT] [BlockAssignment] Started assigning block shifts. This will take 51 steps to complete.
[01/14/2024-01:18:18] [I] [TRT] [BlockAssignment] Algorithm ShiftNTopDown took 1.01485ms to assign 4 blocks to 51 nodes requiring 8551936 bytes.
[01/14/2024-01:18:18] [I] [TRT] Total Activation Memory: 8551936
[01/14/2024-01:18:18] [I] [TRT] [MemUsageChange] TensorRT-managed allocation in building engine: CPU +2, GPU +4, now: CPU 2, GPU 4 (MiB)
[01/14/2024-01:18:18] [E] Saving engine to file failed.
[01/14/2024-01:18:18] [E] Engine set up failed

  1. Any reasons why it fails at the engine-saving step?
  2. Also, do I need to specify the --shapes flag when running /usr/src/tensorrt/bin/trtexec?

Does the current account have write permission in the current folder?
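
As a minimal check (the test filename below is just illustrative), you can verify write access from the shell, or point --saveEngine at a path that is known to be writable; the /opt/nvidia/deepstream/deepstream-6.3/samples directory is typically owned by root, so running trtexec there without sudo can fail at exactly this save step:

ls -ld .
touch trt_write_test && rm trt_write_test && echo "write OK"
/usr/src/tensorrt/bin/trtexec --onnx=resnet34_peoplenet_int8.onnx --int8 --saveEngine=/tmp/resnet34_peoplenet_int8.engine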

Can you try the trtexec command used here: deepstream_tao_apps/build_triton_engine.sh at master · NVIDIA-AI-IOT/deepstream_tao_apps (github.com)?

The pretrained PeopleNet ONNX model (PeopleNet | NVIDIA NGC) has a dynamic batch input.
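
Because the batch dimension is dynamic, trtexec needs explicit shape ranges rather than --explicitBatch alone. A minimal sketch, assuming the input tensor is named input_1:0 with a 3x544x960 resolution and that the INT8 calibration cache shipped alongside the model is resnet34_peoplenet_int8.txt (check your ONNX file, e.g. with Netron, and adjust the tensor name, shapes, and file names accordingly):

/usr/src/tensorrt/bin/trtexec --onnx=resnet34_peoplenet_int8.onnx \
    --minShapes="input_1:0":1x3x544x960 \
    --optShapes="input_1:0":4x3x544x960 \
    --maxShapes="input_1:0":8x3x544x960 \
    --int8 --calib=resnet34_peoplenet_int8.txt \
    --saveEngine=resnet34_peoplenet_int8.engine

If no calibration cache is available, dropping --int8 (or using --fp16 instead) avoids building with placeholder INT8 dynamic ranges.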

You can also consult Latest Deep Learning (Training & Inference)/TensorRT topics - NVIDIA Developer Forums for the usage of the trtexec tool.

It works now, @Fiona.Chen.

Thank you very much, folks.
