Request: Pre-built TensorRT Engine for PeopleNet ResNet34 (A100 + TRT 10.13)

Hi NVIDIA team,

I’m working with PeopleNet ResNet34 INT8 and need help building a TensorRT
engine from the encrypted .etlt file.

Environment:

  • GPU: NVIDIA A100 (Compute 8.0)
  • TensorRT: 10.13.0
  • CUDA: 12.2
  • Model: resnet34_peoplenet_int8.etlt

Issue: Cannot build engine locally due to tao-converter version compatibility.

Request: Could someone from NVIDIA provide a pre-built .engine file, or
guidance on the correct tao-converter version for TensorRT 10.13?

Build parameters needed:

  • Input dims: 3,544,960
  • INT8 precision
  • Outputs: output_cov/Sigmoid, output_bbox/BiasAdd
  • Calibration file available
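For reference, the tao-converter invocation I'm attempting looks roughly like this — a sketch only, with the model key, calibration cache name, and output paths as placeholders for my actual values:

```shell
# Sketch of the tao-converter build I'm trying to run.
# $MODEL_KEY, cal.bin, and the file paths are placeholders.
tao-converter resnet34_peoplenet_int8.etlt \
  -k "$MODEL_KEY" \
  -d 3,544,960 \
  -o output_cov/Sigmoid,output_bbox/BiasAdd \
  -c cal.bin \
  -t int8 \
  -e resnet34_peoplenet_int8.engine
```

This is where the version mismatch bites: the tao-converter builds I can find were linked against older TensorRT releases, so the resulting engine is not usable under TensorRT 10.13.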

Thanks in advance!

Are you testing in a DeepStream docker container? What is the DeepStream version? The component versions need to meet the requirements in this table. PeopleNet supports the ONNX model format. Please refer to the script build_triton_engine.sh for converting with the native tool trtexec.
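The trtexec route the reply suggests can be sketched as follows — assuming you have downloaded the ONNX variant of PeopleNet and have an INT8 calibration cache; the file names here are placeholders, not the exact names from NGC:

```shell
# Sketch: build an INT8 TensorRT engine from the PeopleNet ONNX model
# with trtexec (ships with TensorRT, no tao-converter needed).
# resnet34_peoplenet.onnx and cal.bin are placeholder file names.
trtexec \
  --onnx=resnet34_peoplenet.onnx \
  --int8 \
  --calib=cal.bin \
  --saveEngine=resnet34_peoplenet_int8.engine
```

Because trtexec is bundled with the installed TensorRT version (10.13 here), this avoids the tao-converter/TensorRT version mismatch entirely; the engine is built natively for the A100 it runs on.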

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.