Use jetson-inference TensorRT to infer from custom models like PeopleNet

Description

I followed this guide (https://github.com/dusty-nv/jetson-inference) to do some inference on the Jetson Nano device. The pretrained object detection models like MobileNet, Inception, etc. work fine. Now I want to do inference with the PeopleNet model available on NGC. I downloaded the model, which was in .etlt format, so I converted it into a .engine file using tlt-converter. I want to know how I can use the PeopleNet model with the jetson-inference library, which uses TensorRT. Can anyone point me to some documentation or a procedure for doing so?

Environment

TensorRT Version: 7.1
CUDA Version: 10.2
CUDNN Version: 8.0
Operating System + Version: Ubuntu 18.04

Hi @meghpatel,
The Jetson Nano team will help you further with this issue.
Thanks!

Hi @meghpatel, I have not tried using this model with the jetson-inference library before. It may require some customization of the pre/post-processing code, depending on the inputs/outputs the model expects.
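To give a rough idea of what the input-side customization could look like: PeopleNet takes a 3x544x960 planar RGB tensor, and here I assume a simple 1/255 pixel scaling with no mean offset (please check the model card on NGC; the scaling factor is an assumption). A minimal numpy sketch:

```python
import numpy as np

def preprocess(frame_hwc_uint8):
    """Convert an HWC uint8 RGB frame into the planar CHW float32 tensor
    assumed by PeopleNet (3x544x960, pixels scaled to [0, 1])."""
    assert frame_hwc_uint8.shape == (544, 960, 3)
    chw = frame_hwc_uint8.transpose(2, 0, 1).astype(np.float32)
    return chw / 255.0  # assumed scale factor of 1/255, no mean subtraction

# dummy frame standing in for a real camera image
frame = np.random.randint(0, 256, (544, 960, 3), dtype=np.uint8)
tensor = preprocess(frame)
print(tensor.shape, tensor.dtype)
```

In jetson-inference this logic would live in the library's C++/CUDA preprocessing path rather than numpy, but the tensor layout and scaling would be the same.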

I do know that the pre-trained TLT models work with DeepStream, so you may want to try that way first.

Hello @meghpatel, I’m trying to do the same:
use PeopleNet with jetson-inference.

I read the documentation on converting the model from .etlt format to .engine.

I followed all the steps, but I got multiple errors:

[ERROR] UffParser: Could not parse MetaGraph from /tmp/file4SOSqv
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
Segmentation fault (core dumped)

I downloaded PeopleNet_pruned_v2.0.zip, unzipped it, and used this command:

export API_KEY=dm5iNGU4YWVwa2gzODBhYXFqa2NsYnNpdWY6NzZmMzQ5NzAtNGQwNy00OTQ4LTg2NTUtNTBmNDg0NjU0OWU2
export OUTPUT_NODES=output_bbox/BiasAdd,output_cov/Sigmoid
export INPUT_DIMS=3,544,960
export D_TYPE=fp16
export ENGINE_PATH=/home/charly/resnet34_peoplenet_pruned.engine
export MODEL_PATH=/home/charly/resnet34_peoplenet_pruned.etlt

tlt-converter -k $API_KEY \
    -o $OUTPUT_NODES \
    -d $INPUT_DIMS \
    -e $ENGINE_PATH \
    $MODEL_PATH

I don’t understand what is wrong.

If you prefer, I can create a new topic on the forum, but you have experience in this area.

Thank you, and have a good day!

Hi @Charly, there isn’t support for the TLT models (such as PeopleNet) in jetson-inference yet - additional pre/post-processing code would need to be added to interpret the tensors that PeopleNet consumes and produces. It’s recommended to use the TLT models through DeepStream.
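To sketch the post-processing side mentioned above: DetectNet_v2-style models such as PeopleNet emit a per-class coverage (confidence) grid plus per-cell box offsets, at an assumed stride of 16, which gives a 34x60 grid for a 544x960 input (matching the output nodes output_cov/Sigmoid and output_bbox/BiasAdd in the command earlier in this thread). The sketch below only selects confident grid cells and maps them back to pixel coordinates; full box decoding is omitted and follows the grid-offset scheme in DeepStream's custom parser sources:

```python
import numpy as np

# Assumed DetectNet_v2-style output for a 3x544x960 input at stride 16:
# output_cov/Sigmoid -> (num_classes, 34, 60) coverage (confidence) grid
STRIDE = 16

def candidate_cells(cov, threshold=0.3):
    """Return (class, center_x, center_y, score) for grid cells whose
    coverage exceeds the threshold. Box decoding itself is omitted;
    it would use the matching output_bbox/BiasAdd offsets per cell."""
    dets = []
    for cls, cy, cx in zip(*np.where(cov > threshold)):
        dets.append((int(cls),
                     cx * STRIDE + STRIDE / 2,   # cell center x in pixels
                     cy * STRIDE + STRIDE / 2,   # cell center y in pixels
                     float(cov[cls, cy, cx])))
    return dets

# dummy coverage tensor standing in for a real inference result
cov = np.zeros((3, 34, 60), dtype=np.float32)
cov[0, 10, 20] = 0.9  # one confident cell for class 0
print(candidate_cells(cov))
```

In jetson-inference this would replace the detectNet clustering logic, which is one reason DeepStream (whose parser already understands this layout) is the easier path today.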

Hello @dusty_nv.
Thank you for your quick response.
I tried DeepStream and it works perfectly during the day,
but at night my cameras are in grayscale.
I read a solution in another post: add a sepia filter.
But I’m unable to add the sepia filter to the DeepStream examples.
I’ve looked everywhere for how to add a color filter.
I think it is done with Gst.ElementFactory but…
I’m using Python, and I don’t have enough skills to find a solution.
I’m working on a personal project.
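On the sepia-filter question: GStreamer's gst-plugins-good package ships a coloreffects element with a sepia preset, and in Python it can be created with Gst.ElementFactory.make("coloreffects", ...) and linked into the pipeline. The sketch below only builds a pipeline description string so it can run anywhere; the source and sink are placeholders, not the DeepStream sample's actual elements:

```python
# Sketch: a GStreamer pipeline description that inserts a sepia color
# filter (coloreffects, from gst-plugins-good) after decoding.
# v4l2src/autovideosink are placeholders; in the DeepStream Python
# samples you would instead create the element with
# Gst.ElementFactory.make("coloreffects", "sepia") and link it in.
elements = [
    "v4l2src device=/dev/video0",   # placeholder camera source
    "videoconvert",
    "coloreffects preset=sepia",    # the color filter discussed above
    "videoconvert",
    "autovideosink",                # placeholder display sink
]
pipeline_desc = " ! ".join(elements)
print(pipeline_desc)
# Once gi/GStreamer are initialized, the string can be handed to
# Gst.parse_launch(pipeline_desc) to test it quickly.
```

Testing the filter with gst-launch-1.0 first, before editing the DeepStream sample, is a low-risk way to confirm the element behaves as expected on your camera feed.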

Thank you again.
Have a good day

Hello @dusty_nv
For the pre/post-processing code, I think it will not be a problem. Some topics have discussed it.

But I encounter another problem.

tlt-converter -k dm5iNGU4YWVwa2gzODBhYXFqa2NsYnNpdWY6NzZmMzQ5NzAtNGQwNy00OTQ4LTg2NTUtNTBmNDg0NjU0OWU2 \
    -o output_bbox/BiasAdd,output_cov/Sigmoid \
    -d 3,544,960 \
    -e /home/charly/resnet34_peoplenet_pruned.engine \
    /home/charly/resnet34_peoplenet_pruned.etlt

And I got:

[ERROR] UffParser: Unsupported number of graph 0
[ERROR] Failed to parse the model, please check the encoding key to make sure it’s correct
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
Segmentation fault (core dumped)

I checked the API key given to me on my NGC account.
I must be missing something.
Thank you

Hi @Charly, I’m not very familiar with the tlt-converter tool myself, sorry about that - so I recommend that you post your issue to the Transfer Learning Toolkit forum here:

https://forums.developer.nvidia.com/c/accelerated-computing/intelligent-video-analytics/transfer-learning-toolkit/17