Making inference faster on Jetson TX2 using TensorRT


I am a deep learning engineer, and inference time is an important aspect of my use case. I built my model in TensorFlow and optimized the inference time using TensorRT 3.0, but all of this was on my local machine. Now I want to export the model to a Jetson and use it the same way I did locally. However, TensorRT 3.0 has no Python interface on ARM architectures such as the Jetson. Is there any workaround for this?

Hi aakashnain, the Python interface for TensorRT on Jetson is currently unavailable (due to a pyCUDA-on-ARM issue). Please continue using your local machine to export the TensorFlow model to UFF, and then load the UFF on the Jetson using TensorRT’s NvUffImporter C++ interface (see my reply with the sample location here).
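For reference, the Jetson-side loading step might look like the sketch below, which uses the TensorRT 3.0 C++ UFF parser (`nvuffparser::IUffParser`). The input/output tensor names (`"input"`, `"output"`), the input dimensions, and the file name `model.uff` are placeholders — substitute the names and shapes from your own TensorFlow graph:

```cpp
// Sketch: parsing a UFF model and building a TensorRT engine on Jetson.
// Assumes the TensorRT 3.0 C++ API; tensor names/dims below are placeholders.
#include <NvInfer.h>
#include <NvUffParser.h>
#include <iostream>

using namespace nvinfer1;
using namespace nvuffparser;

// TensorRT requires a logger implementation for builder/runtime messages.
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();
    IUffParser* parser = createUffParser();

    // Register the graph's input and output tensors by name
    // (these must match the node names used when exporting to UFF).
    parser->registerInput("input", DimsCHW(3, 224, 224));
    parser->registerOutput("output");

    if (!parser->parse("model.uff", *network, DataType::kFLOAT))
    {
        std::cerr << "Failed to parse UFF file" << std::endl;
        return 1;
    }

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 20);
    ICudaEngine* engine = builder->buildCudaEngine(*network);

    // ... create an IExecutionContext from the engine and run inference ...

    engine->destroy();
    parser->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}
```

The UFF export itself still happens on the x86 host (where the TensorRT Python/UFF tooling works); only the parsed engine runs on the Jetson.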