TensorFlow Object Detection API, TensorRT and Jetson Nano

Hi,

I’ve been trying for a while to convert TensorFlow Object Detection API models from the TF2 model zoo to TensorRT and deploy them on my Jetson Nano.

I’ve been using the official conversion repo. I’ve been able to install and run it on my PC, but to run either the resulting .onnx or .trt files on the Jetson I need to have the TensorFlow Object Detection API installed.

When trying to install it, both tensorflow-addons and tensorflow-text have been impossible to install, even from source. I’ve tried to follow both this issue and this issue, but I guess I’m missing some step.

Could you provide a full guide on how to convert TFOD API models to TensorRT on Jetson?
Thanks.

Hi,

Have you tried converting the model to ONNX in a desktop environment, and then creating the engine file from the ONNX model on the Nano?
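In case it helps, that split workflow can be sketched roughly as below. The `create_onnx.py` script and its flags follow the TensorRT OSS sample for the TF Object Detection API; the paths, model names, and the `user@nano` host are placeholders, so adjust them to your setup.

```shell
# On the desktop: export the TF2 SavedModel to ONNX
# (create_onnx.py comes from the TensorRT OSS samples for the TFOD API)
python create_onnx.py \
    --pipeline_config pipeline.config \
    --saved_model saved_model \
    --onnx model.onnx

# Copy only the ONNX file to the Nano; build the engine there,
# so it matches the Nano's TensorRT version and GPU
scp model.onnx user@nano:/home/user/
```

The engine file itself is not portable between TensorRT versions or GPUs, which is why only the ONNX model should cross machines.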
Thanks.

Yes, I get this error when I try:

Should I retrain the model somehow?

Hi,

How do you create the engine file?
Could you try it with the trtexec binary?

$ /usr/src/tensorrt/bin/trtexec --onnx=[model]
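If trtexec parses the model successfully, it can also serialize the engine in the same run; `--saveEngine` (and optionally `--fp16`, which usually helps on the Nano) are standard trtexec flags. `model.onnx`/`model.trt` are placeholder names:

```shell
$ /usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.trt --fp16
```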

Thanks.

Hi,

I was using this script (as far as I know, the official one) to build the engine file.
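For reference, the ONNX-to-engine step can also be done directly with the TensorRT Python API. This is a minimal sketch, not the script above; it assumes a TensorRT 7/8-era JetPack (where `max_workspace_size` is still the config field) and uses `model.onnx`/`model.trt` as placeholder paths:

```python
# Minimal sketch: build a TensorRT engine from an ONNX file (TRT 7/8-era API)
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# Explicit-batch networks are required for ONNX models
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        # Print parser errors instead of failing silently
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 28  # 256 MiB; keep this small on the Nano

engine = builder.build_engine(network, config)
with open("model.trt", "wb") as f:
    f.write(engine.serialize())
```

If this fails on the same node as trtexec does, the problem is in the ONNX export rather than the build step.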

When I use trtexec I get:

Hi,

By converting the model to ONNX again and using trtexec, I managed to build a .trt file. Now my issue is that I can’t run the model, either with trtexec or with the inference script. The error is the following:

Hi @AastaLLL , could you please give me a hand on this? Thanks.