Run PeopleNet Transformer on TensorRT

I am trying to run PeopleNet Transformer using TensorRT. I've gotten PeopleNet to work fine with TensorRT, but I can't generate a .engine file for PeopleNet Transformer.

I've downloaded resnet50_peoplenet_transformer.etlt from here, but when I try to convert the model with the following command:

./tlt-converter resnet50_peoplenet_transformer.etlt -k nvidia_tao -d 3,544,960
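
(For reference, the fuller form I'd expect to need, going by the converter documentation for ONNX-based models, would look something like the line below, assuming the build supports the -p optimization-profile flag; newer releases ship the same tool as tao-converter. The input name "inputs", the profile shapes, and the engine filename are my guesses from the model card, not something I've verified.)

./tlt-converter resnet50_peoplenet_transformer.etlt -k nvidia_tao -t fp16 -p inputs,1x3x544x960,1x3x544x960,1x3x544x960 -e peoplenet_transformer.engine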

With the minimal command, I get the following error:

[INFO] ----------------------------------------------------------------
[INFO] Input filename:   /tmp/filew2FMih
[INFO] ONNX IR version:  0.0.8
[INFO] Opset version:    12
[INFO] Producer name:    pytorch
[INFO] Producer version: 1.13.0
[INFO] Domain:           
[INFO] Model version:    0
[INFO] Doc string:       
[INFO] ----------------------------------------------------------------
[WARNING] /home/jenkins/workspace/OSS/L0_MergeRequest/oss/parsers/onnx/onnx2trt_utils.cpp:226: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[ERROR] /home/jenkins/workspace/OSS/L0_MergeRequest/oss/parsers/onnx/onnx2trt_utils.cpp:475: Found unsupported datatype (11) when importing initializer: 
terminate called after throwing an instance of 'std::runtime_error'
  what():  Unable to convert ONNX weights
Aborted (core dumped)

The warning says TensorRT doesn't natively support INT64 and casts those weights down, but the hard failure is the "unsupported datatype (11)" on an initializer. Either way, the documentation claims PeopleNet Transformer can be run on TensorRT, so how do I work around this? How do I convert the model to a .engine file so that it can be run with TensorRT?
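
In case it helps with debugging: datatype 11 in the ONNX TensorProto enum is DOUBLE, so the parser appears to be rejecting a double-precision initializer rather than the INT64 weights. The converter log shows the decoded ONNX at /tmp/filew2FMih; assuming that temp file survives the crash (and that the onnx Python package is installed), a short check along these lines should list the offending initializers. This is a sketch, and the temp file path and its availability are assumptions on my part:

import onnx
from onnx import TensorProto

# Path taken from the converter log; the decoded ONNX is written to a
# temp file and may or may not still exist after the crash.
model = onnx.load("/tmp/filew2FMih")

# Print every initializer whose element type is not plain FLOAT, so the
# INT64 and DOUBLE (datatype 11) tensors stand out.
for init in model.graph.initializer:
    dtype_name = TensorProto.DataType.Name(init.data_type)
    if dtype_name != "FLOAT":
        print(init.name or "<unnamed>", dtype_name)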

Help appreciated.

Hi @jedda,
I believe you can get better assistance with this topic on the DeepStream forum.
Thanks
