I followed this guide (https://github.com/dusty-nv/jetson-inference) to do some inference on the Jetson Nano. The pretrained object detection models like MobileNet, Inception, etc. work fine. Now I want to run inference with the PeopleNet model available on the NGC cloud. I downloaded the model, which was in .etlt format, and converted it into a .engine file using tlt-converter. I want to know how I can use the PeopleNet model with the jetson-inference library, which uses TensorRT. Can anyone point me to some documentation or a procedure for doing so?
Environment
TensorRT Version: 7.1
CUDA Version: 10.2
CUDNN Version: 8.0
Operating System + Version: Ubuntu 18.04
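For reference, the .etlt-to-.engine conversion step the question describes typically looks something like the sketch below. The file names and flag values are assumptions based on the PeopleNet pruned v2.0 download and the TLT docs; check your model's NGC page for the actual encoding key, input dimensions, and output node names before running it.

```shell
# Sketch of converting the PeopleNet .etlt to a TensorRT engine with
# tlt-converter. Paths and values are assumptions -- verify against
# the model card on NGC.
#
# -k : the model's encoding key as published on its NGC page
#      (for the purpose-built models this is NOT your personal NGC API key)
# -o : DetectNet_v2 output tensor names
# -d : input dimensions C,H,W
./tlt-converter resnet34_peoplenet_pruned.etlt \
  -k tlt_encode \
  -o output_cov/Sigmoid,output_bbox/BiasAdd \
  -d 3,544,960 \
  -t fp16 \
  -e peoplenet_fp16.engine
```

A mismatched `-k` value is a common cause of "Failed to parse the model, please check the encoding key" errors from the converter.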
Hi @meghpatel, I haven't tried using this model with the jetson-inference library before. It may require some customization of the pre/post-processing code, depending on the inputs/outputs the model expects.
I do know that the pre-trained TLT models work with DeepStream, so you may want to try that route first.
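For the DeepStream route, the nvinfer configuration for PeopleNet can point at the .etlt file directly and let DeepStream build the engine itself. The fragment below is a sketch; the key names come from the TLT/DeepStream sample configs, but the file names and values are assumptions that should be checked against the model card.

```
# Sketch of a Gst-nvinfer [property] section for PeopleNet pruned
# (file names and values are assumptions -- verify on NGC)
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
tlt-model-key=tlt_encode
tlt-encoded-model=resnet34_peoplenet_pruned.etlt
labelfile-path=labels.txt
infer-dims=3;544;960
uff-input-blob-name=input_1
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
num-detected-classes=3
network-type=0
```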
Hello @meghpatel, I’m trying to do the same!
Use PeopleNet with jetson-inference.
I read the documentation for converting the model from .etlt format to .engine.
I followed all the steps, but I got multiple errors:
[ERROR] UffParser: Could not parse MetaGraph from /tmp/file4SOSqv
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
Segmentation fault (core dumped)
I downloaded PeopleNet_pruned_v2.0.zip, unzipped it, and used this command:
Hi @Charly, there isn’t support for the TLT models (like PeopleNet) included in jetson-inference yet - additional pre/post-processing code would need to be added to interpret the tensors that PeopleNet takes as input and produces as output. It’s recommended to use the TLT models through DeepStream.
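To give an idea of what that post-processing involves: PeopleNet is a DetectNet_v2 model, which outputs a per-class coverage heatmap (`output_cov/Sigmoid`, shape classes×H×W over a coarse grid) and per-cell box offsets (`output_bbox/BiasAdd`, shape 4·classes×H×W). A minimal NumPy sketch of the decode step is below; the stride, offset, and `bbox_norm` constants are assumptions taken from the TLT sample post-processing code and should be verified against your model card.

```python
import numpy as np

def decode_detectnet_v2(cov, bbox, stride=16.0, bbox_norm=35.0,
                        offset=0.5, threshold=0.4):
    """Decode DetectNet_v2 grid outputs into (class, score, x1, y1, x2, y2).

    cov  : (num_classes, H, W) coverage/confidence heatmap
    bbox : (num_classes * 4, H, W) box offsets, 4 values per class
    The stride/offset/bbox_norm constants are assumptions from the TLT
    sample code; real deployments also apply clustering/NMS afterwards.
    """
    num_classes, grid_h, grid_w = cov.shape
    # Grid-cell centres in input-image pixels, normalised by bbox_norm.
    cx = (np.arange(grid_w) * stride + offset) / bbox_norm  # (W,)
    cy = (np.arange(grid_h) * stride + offset) / bbox_norm  # (H,)
    detections = []
    for c in range(num_classes):
        ys, xs = np.where(cov[c] > threshold)
        for y, x in zip(ys, xs):
            o = bbox[c * 4:(c + 1) * 4, y, x]  # left, top, right, bottom
            x1 = (cx[x] - o[0]) * bbox_norm
            y1 = (cy[y] - o[1]) * bbox_norm
            x2 = (cx[x] + o[2]) * bbox_norm
            y2 = (cy[y] + o[3]) * bbox_norm
            detections.append((c, float(cov[c, y, x]), x1, y1, x2, y2))
    return detections
```

In jetson-inference this logic would have to live alongside the existing detectNet post-processing, which is written for a different output layout (SSD-style models), which is why the models don't work out of the box.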
Hello @dusty_nv.
Thank you for your quick response.
I tried DeepStream and it works perfectly during the day,
but at night my cams switch to grayscale.
I read a solution in another post: add a sepia filter.
But I’m unable to add a sepia filter to the DeepStream examples.
I’ve looked everywhere for how to add a color filter.
I think it’s done with Gst.ElementFactory, but…
I’m using Python, and I don’t have enough skill to find a solution.
This is for a personal project.
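One way to experiment with a colour filter is to splice an extra element into the pipeline description before handing it to `Gst.parse_launch()`. The sketch below does this with plain string manipulation; the `coloreffects preset=sepia` element (from gst-plugins-bad) and the example pipeline are assumptions - check `gst-inspect-1.0 coloreffects` on your device, and note that DeepStream buffers in NVMM memory may need a conversion element on either side of a software filter.

```python
# Sketch: insert a sepia colour filter into a GStreamer pipeline
# description. Element names are assumptions -- verify with
# gst-inspect-1.0 on your Jetson.

def add_filter(pipeline_desc, sink_name,
               filter_desc="coloreffects preset=sepia"):
    """Splice `filter_desc` in front of the element named `sink_name`."""
    head, sep, tail = pipeline_desc.partition(sink_name)
    if not sep:
        raise ValueError(f"{sink_name!r} not found in pipeline")
    return f"{head}{filter_desc} ! {sep}{tail}"

desc = add_filter("v4l2src ! videoconvert ! nvoverlaysink",
                  "nvoverlaysink")
# desc == "v4l2src ! videoconvert ! coloreffects preset=sepia ! nvoverlaysink"

# In a Python GStreamer app you would then do something like:
#   from gi.repository import Gst
#   Gst.init(None)
#   pipeline = Gst.parse_launch(desc)
# or build the element programmatically with
# Gst.ElementFactory.make("coloreffects", None) and pipeline.add()/link().
```

Whether a colour filter actually helps here is a separate question: cameras usually go grayscale at night because they switch to IR mode, so a tint changes the look of the video but doesn't restore colour information.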
[ERROR] UffParser: Unsupported number of graph 0
[ERROR] Failed to parse the model, please check the encoding key to make sure it’s correct
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
Segmentation fault (core dumped)
I checked the API key given to me on my NGC account.
I must be missing something.
Thank you
Hi @Charly, I’m not very familiar with the tlt-converter tool myself, sorry about that - so I recommend posting your issue to the Transfer Learning Toolkit forum here: