I’ve been trying for a while to compress TensorFlow Object Detection API models from the TF2 model zoo to TensorRT and deploy them on my Jetson Nano.
I’ve been using the official conversion repo. I’ve been able to install and run this repo on my PC, but to run either the resulting .onnx or .trt files on the Jetson I need the TensorFlow Object Detection API installed.
When trying to install it, both tensorflow-addons and tensorflow-text have been impossible to install, even from source. I’ve tried to follow both this issue and this issue, but I guess I’m missing some step.
Could you provide a full guide on how to compress TFOD API models to TensorRT on Jetson?
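For context, the ONNX-to-TensorRT step can be sketched roughly as below. This is a minimal illustration, not my exact command: the model filenames, the install path of trtexec, and the FP16 flag are assumptions.

```shell
# Build a serialized TensorRT engine from an ONNX model with trtexec,
# which ships with TensorRT (on Jetson it lives under /usr/src/tensorrt/bin).
# Filenames and the precision flag below are illustrative assumptions.
/usr/src/tensorrt/bin/trtexec \
  --onnx=model.onnx \
  --saveEngine=model.trt \
  --fp16

# The same binary can smoke-test the resulting engine:
/usr/src/tensorrt/bin/trtexec --loadEngine=model.trt
```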
By converting the model to ONNX again and using trtexec, I managed to build a .trt file. Now my issue is that I can’t run the model, either with trtexec or with the inference script. The error is the following: