Using jetson-inference (TensorRT) to run inference with custom models like PeopleNet

Description

I followed this guide (https://github.com/dusty-nv/jetson-inference) to do some inference on the Jetson Nano device. The pretrained object detection models like MobileNet, Inception, etc. work fine. Now I want to run inference with the PeopleNet model available on NGC. I downloaded the model, which comes in .etlt format, and converted it into a .engine file using tlt-converter. I would like to know how I can use this PeopleNet model with the jetson-inference library, which uses TensorRT. Can anyone point me to documentation or a procedure for doing so?
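
For reference, here is a minimal sketch of how I am checking that the converted engine at least loads, using the TensorRT 7.1 Python bindings (the engine file name is just a placeholder for my converted model):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine produced by tlt-converter and list its bindings,
# which shows the input/output tensor names and shapes the network expects.
with open("peoplenet.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

for i in range(engine.num_bindings):
    kind = "input" if engine.binding_is_input(i) else "output"
    print(kind, engine.get_binding_name(i),
          engine.get_binding_shape(i), engine.get_binding_dtype(i))
```

It is not clear to me how to plug such an engine into jetson-inference's detectNet pipeline.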

Environment

TensorRT Version: 7.1
CUDA Version: 10.2
CUDNN Version: 8.0
Operating System + Version: Ubuntu 18.04

Hi @meghpatel,
The Jetson Nano team will help you further with this issue.
Thanks!

Hi @meghpatel, I have not tried using this model with the jetson-inference library before. It may require some customization of the pre/post-processing code, depending on the inputs/outputs the model expects.
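
If you end up running the converted engine directly with TensorRT instead, the overall flow would look roughly like the sketch below. This is only an illustration, assuming the TensorRT 7.1 Python bindings, pycuda, and OpenCV are installed; the 960x544 input resolution and the divide-by-255 scaling come from the PeopleNet model card and should be verified against the model you downloaded, and decoding the DetectNet_v2 grid outputs into boxes is exactly the custom post-processing I mentioned, so it is left out here.

```python
import numpy as np
import cv2
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit  # creates a CUDA context

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine produced by tlt-converter (file name is a placeholder).
with open("peoplenet.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate one pinned host buffer and one device buffer per binding.
host_bufs, dev_bufs, bindings = [], [], []
stream = cuda.Stream()
for i in range(engine.num_bindings):
    size = trt.volume(engine.get_binding_shape(i)) * engine.max_batch_size
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = cuda.pagelocked_empty(size, dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

# Pre-processing: resize to the network resolution, BGR->RGB, HWC->CHW,
# scale to [0,1]. Binding 0 is assumed to be the single image input.
img = cv2.imread("test.jpg")
img = cv2.resize(img, (960, 544))
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
chw = img.transpose(2, 0, 1).astype(np.float32) / 255.0
host_bufs[0][:chw.size] = chw.ravel()

# Copy the input up, run inference, copy the outputs back.
cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
context.execute_async(batch_size=1, bindings=bindings, stream_handle=stream.handle)
for i in range(engine.num_bindings):
    if not engine.binding_is_input(i):
        cuda.memcpy_dtoh_async(host_bufs[i], dev_bufs[i], stream)
stream.synchronize()

# Post-processing: PeopleNet (DetectNet_v2) produces a class-coverage grid and
# a bounding-box grid; thresholding and clustering them into final detections
# is the model-specific post-processing that jetson-inference does not do for
# you out of the box.
outputs = [host_bufs[i] for i in range(engine.num_bindings)
           if not engine.binding_is_input(i)]
print([o.shape for o in outputs])
```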

I do know that the pre-trained TLT models work with DeepStream, so you may want to try that way first.
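
With DeepStream, PeopleNet is configured through an nvinfer config file rather than code. Below is a rough sketch of such a config; the blob names, the tlt_encode model key, the 3;544;960 input dims, and the file names are taken from the PeopleNet model card and the DeepStream TLT sample configs, so please verify them against the exact model version you downloaded.

```
[property]
gpu-id=0
# PeopleNet expects pixel values scaled to [0,1]
net-scale-factor=0.0039215697906911373
tlt-model-key=tlt_encode
tlt-encoded-model=resnet34_peoplenet.etlt
model-engine-file=peoplenet.engine
labelfile-path=labels_peoplenet.txt
input-dims=3;544;960;0
uff-input-blob-name=input_1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
batch-size=1
# 2 = FP16 precision
network-mode=2
num-detected-classes=3
gie-unique-id=1

[class-attrs-all]
pre-cluster-threshold=0.4
```

The labels file would contain the three PeopleNet classes (person, bag, face), one per line.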
