Frozen Object Detection Model to TensorRT for Faster Jetson Inference: (.pb) → (.uff) → (engine file)

Hello,

I am trying to convert a custom-trained frozen object detection model (.pb) to a TensorRT model for faster inference on the NVIDIA Jetson Xavier NX (or the Nano). Thus far, I have trained a 2-class object detector using TensorFlow's Object Detection API with Mobilenet_v1, assuming this would give the fastest inference possible on the Xavier NX/Nano. Right now, when I run the output frozen file (.pb) on the Jetson Xavier NX, I get about 1 FPS. I followed this tutorial:

To increase the Jetson Xavier NX FPS, I read that if the frozen model (.pb) is converted to TensorRT, you can get much faster inference times. From what I understand, this requires converting the frozen model (.pb) to a UFF model (.uff). The UFF model is then used to create an 'engine' with which you can do inference. I was successful in converting my frozen model (.pb) to UFF (.uff) using the following command on the Xavier NX:

python3 convert-to-uff.py frozen_inference_graph.pb -O NMS -p config.py

That said, I am stuck on what the next step is. I know I have to somehow convert my UFF file to an 'engine file', after which I can perform inference from that engine file. This is where I am looking for guidance: how do you convert the generated UFF file to an engine file for inference on the Xavier NX?

I am a beginner with machine learning, so if you are offering suggestions, please do not skip steps; that will make them much easier to follow.

Thank you.

Hi @Alma11,
The UFF parser has been deprecated since TensorRT 7,
so we recommend using the ONNX parser instead:
.pb → ONNX → TRT engine
Alternatively, you can also use TF-TRT, as sketched below.
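If you go the TF-TRT route, a minimal sketch could look like the one below. It assumes TensorFlow 1.x installed on the Jetson, a frozen graph named frozen_inference_graph.pb, and the standard Object Detection API output node names; verify those names against your own custom graph.

# TF-TRT sketch (TensorFlow 1.x): optimize the frozen graph directly.
# Output node names below are the usual Object Detection API names and are
# an assumption -- check them for your own model.
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

with tf.io.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    frozen_graph = tf.compat.v1.GraphDef()
    frozen_graph.ParseFromString(f.read())

output_nodes = ["detection_boxes", "detection_scores",
                "detection_classes", "num_detections"]

converter = trt.TrtGraphConverter(
    input_graph_def=frozen_graph,
    nodes_blacklist=output_nodes,   # keep the output ops in TensorFlow
    precision_mode="FP16",          # FP16 is well supported on Xavier NX/Nano
    is_dynamic_op=True)
trt_graph = converter.convert()

with tf.io.gfile.GFile("trt_graph.pb", "wb") as f:
    f.write(trt_graph.SerializeToString())

TF-TRT keeps the model inside TensorFlow and only replaces the supported subgraphs with TensorRT engines, so it is usually the easier path; a standalone TensorRT engine built through ONNX typically gives the bigger speed-up.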


Thanks!

Thank you for the reply.

So UFF is no longer the preferred way to build an engine for faster Jetson inference? Is that what you mean by deprecated?

I see your link, but more detail than just the hyperlink would be appreciated. Can you provide a general outline of this process? How exactly do you convert the .pb file to ONNX? Then to a TRT engine? And how do you use that engine to perform inference? I have seen that link before and, quite honestly, it is overwhelming for someone just starting to learn this stuff. Additional step-by-step guidance would be appreciated.

Hi @Alma11,
To convert your TF model to an ONNX model, you may check this.
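As a rough sketch of that step with tf2onnx (pip install tf2onnx): the tensor names below are the usual Object Detection API input/output names and the opset value is just a common choice, so treat both as assumptions and adjust them for your graph. The same conversion can also be run from the command line with python3 -m tf2onnx.convert.

# Sketch: frozen .pb -> ONNX with the tf2onnx Python API.
# Roughly equivalent CLI:
#   python3 -m tf2onnx.convert --graphdef frozen_inference_graph.pb \
#       --output model.onnx --inputs image_tensor:0 \
#       --outputs detection_boxes:0,detection_scores:0,detection_classes:0,num_detections:0
import tensorflow as tf
import tf2onnx

with tf.io.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

# Input/output tensor names are assumptions (standard Object Detection API names)
model_proto, _ = tf2onnx.convert.from_graph_def(
    graph_def,
    input_names=["image_tensor:0"],
    output_names=["detection_boxes:0", "detection_scores:0",
                  "detection_classes:0", "num_detections:0"],
    opset=11,
    output_path="model.onnx")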
You can then convert the ONNX model to a TRT engine by referring to the shared link.
Alternatively, you can refer to the detailed doc below to generate the engine file.
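A rough sketch of the engine-building step with the TensorRT Python API follows. The file names are placeholders, and the exact builder calls vary a little between TensorRT versions; trtexec on the Jetson can do the same job from the command line.

# Sketch: build and serialize a TensorRT engine from model.onnx.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parsing failed")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 28        # 256 MB scratch space; tune as needed
if builder.platform_has_fast_fp16:
    config.set_flag(trt.BuilderFlag.FP16)  # FP16 gives a large speed-up on Jetson

engine = builder.build_engine(network, config)
with open("model.engine", "wb") as f:
    f.write(engine.serialize())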


For inference, please check the given doc.
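For reference, a simplified inference sketch with the TensorRT Python API and PyCUDA is below. It assumes binding 0 is the image input and uses a dummy array in place of a real preprocessed frame, so adapt the buffer handling and the pre/post-processing to your model.

# Sketch: deserialize the engine and run one inference.
import numpy as np
import pycuda.autoinit            # creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with open("model.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate pinned host memory and device memory for every binding
host_bufs, dev_bufs, bindings = [], [], []
for i in range(engine.num_bindings):
    shape = engine.get_binding_shape(i)
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = cuda.pagelocked_empty(trt.volume(shape), dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

# Dummy input stands in for a real preprocessed frame (assumes binding 0 is the input)
np.copyto(host_bufs[0], np.zeros_like(host_bufs[0]))
cuda.memcpy_htod(dev_bufs[0], host_bufs[0])
context.execute_v2(bindings)
for i in range(1, engine.num_bindings):
    cuda.memcpy_dtoh(host_bufs[i], dev_bufs[i])
# host_bufs[1:] now hold the detection outputs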

For the entire workflow, you can check the C++/Python samples, which may help you get started.

Thanks!

AakankshaS,

Really appreciate your added steps. I will try this out and get back with my results!

Hi @AakankshaS!

I've tried the suggested procedure from GitHub - onnx/tensorflow-onnx: Convert TensorFlow models to ONNX. However, it fails when trying to pip install tf2onnx:

tf2onnx.txt (6.3 KB)

Any suggestions, please?

Thanks,
Michael

PS: I also tried doing this first: pip install "onnx<1.8.0". The ONNX build of that version fails in the same way.