My goal is to run RetinaNet (PyTorch implementation) with TensorRT on the Jetson TX2 (currently limited to JetPack 4.2) for object detection.
I noticed that NVIDIA provides a RetinaNet example on GitHub (https://github.com/NVIDIA/retinanet-examples). Because NVIDIA Docker is not supported on the Jetson, I plan to train the model on a desktop machine (Ubuntu 16.04, CUDA 10.0, TensorRT 5.0.2). I was able to run that example, train the model, and do inference on the desktop. However, when I copy the TensorRT engine plan exported from the desktop onto the Jetson TX2, it fails with a TensorRT version mismatch error, and no other TensorRT version is available for the TX2.
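For reference, the step that fails on the TX2 is roughly the following deserialization of the plan file (the filename is hypothetical; this is a sketch using the TensorRT 5.x Python API, not my exact script):

```python
# Sketch: loading a serialized TensorRT engine plan on the TX2.
# An engine plan embeds the TensorRT version (and target GPU) it was
# built with, so deserializing it under a different TensorRT release
# fails with a version mismatch.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)

# "retinanet.plan" is a hypothetical filename for the plan exported
# on the desktop (TensorRT 5.0.2) and copied to the TX2.
with open("retinanet.plan", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())
# On the TX2 this is where the TensorRT version mismatch error appears.
```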
I also cannot parse the ONNX model (generated on the other machine) on the TX2 to build the engine plan there; the parse fails, apparently because the input size of the Upsampling layer is incorrect. Is it possible at all to use a TensorRT engine plan generated on another machine on the Jetson TX2?
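To clarify what I mean by parsing the ONNX model on the TX2, I am building the engine directly on the device along these lines (a sketch against the TensorRT 5.x Python API; the ONNX filename and workspace size are placeholders, not my exact values):

```python
# Sketch: building a TensorRT engine on the TX2 itself from an ONNX file,
# instead of copying a .plan serialized on another machine.
import tensorrt as trt

def build_engine_from_onnx(onnx_path, max_workspace=1 << 28):
    logger = trt.Logger(trt.Logger.INFO)
    builder = trt.Builder(logger)
    network = builder.create_network()
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            # This is where the Upsampling layer error shows up for me.
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("ONNX parse failed")
    builder.max_workspace_size = max_workspace
    builder.fp16_mode = True  # the TX2 supports FP16 inference
    return builder.build_cuda_engine(network)

if __name__ == "__main__":
    # "retinanet.onnx" is a hypothetical filename for the exported model.
    engine = build_engine_from_onnx("retinanet.onnx")
    with open("retinanet.plan", "wb") as f:
        f.write(engine.serialize())
```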
More generally, what would be the recommended way to run RetinaNet on the Jetson TX2 with TensorRT support?