Questions about implementing RetinaNet with TensorRT on Jetson TX2

Hello,

My goal is to run the PyTorch implementation of RetinaNet with TensorRT on the Jetson TX2 (currently limited to JetPack 4.2) for object detection.

I noticed NVIDIA has a RetinaNet example on GitHub (GitHub - NVIDIA/retinanet-examples: Fast and accurate object detection with end-to-end GPU optimization). Because NVIDIA Docker is not supported on the Jetson, I plan to train the model on another desktop machine (Ubuntu 16.04, CUDA 10.0, TensorRT 5.0.2). I was able to run that example, train the model, and do inference on the desktop. But when I move the TensorRT engine plan exported from the desktop machine onto the Jetson TX2, I get a TensorRT version mismatch error, and there is no other TensorRT version available for the TX2.
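For context, the loading step on the TX2 where the error shows up is roughly the following (a minimal sketch assuming the TensorRT Python API; the plan filename is a placeholder):

[code]
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)
runtime = trt.Runtime(TRT_LOGGER)

# "retinanet.plan" is a placeholder for the engine plan exported on the desktop.
with open("retinanet.plan", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

# deserialize_cuda_engine() returns None and logs an error when the plan was
# serialized with a different TensorRT version or built for a different GPU.
if engine is None:
    raise RuntimeError("Failed to deserialize the engine plan")
[/code]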

I also cannot successfully parse the ONNX model (exported on the other machine) on the TX2 to generate the engine plan, mostly because the parser reports an incorrect input size for the Upsample layer. Is it possible to use a TensorRT engine plan generated on another machine on the Jetson TX2?
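The build step I am attempting on the TX2 looks roughly like this (a minimal sketch with the TensorRT 5 Python API and its ONNX parser; file names are placeholders), and it is the parser errors from this step that point at the Upsample layer:

[code]
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network()
parser = trt.OnnxParser(network, TRT_LOGGER)

# "retinanet.onnx" is a placeholder for the model exported on the desktop.
with open("retinanet.onnx", "rb") as f:
    if not parser.parse(f.read()):
        # Print the parser errors, e.g. the Upsample input size complaint.
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse the ONNX model")

builder.max_batch_size = 1
builder.max_workspace_size = 1 << 28  # 256 MiB; adjust for the TX2

# Build and serialize an engine plan natively on the TX2.
engine = builder.build_cuda_engine(network)
with open("retinanet_tx2.plan", "wb") as f:
    f.write(engine.serialize())
[/code]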

Generally, what would be the best way to run RetinaNet on the Jetson TX2 with TensorRT support?

Hi,

Sorry, we don’t have experience with RetinaNet on Jetson.

But it looks like you only need PyTorch and TensorRT to get the infer mode working:
[url]https://github.com/NVIDIA/retinanet-examples/blob/master/retinanet/main.py#L129[/url]

It’s worth trying whether you can run the script directly after installing the PyTorch package.
Here is a PyTorch package for JetPack 4.2 for your reference:
[url]https://devtalk.nvidia.com/default/topic/1049071/pytorch-for-jetson-nano/[/url]
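As a quick sanity check after installing the package, you can confirm that both PyTorch and TensorRT import and report the expected versions on the TX2 (a minimal sketch):

[code]
# Quick check that PyTorch and TensorRT are usable under JetPack 4.2.
import torch
import tensorrt as trt

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("TensorRT:", trt.__version__)
[/code]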

Thanks.