I flashed my Jetson TX2 board with the new JetPack 3.2 developer preview. Previously I had JetPack 3.1 with TensorRT 2.1.
JetPack 3.2 ships with TensorRT 3. I know the C++ API for TensorRT can be used by including the header NvInfer.h, but I don't know how to use the Python API for TensorRT or which packages I need to import.
Can anyone please explain the workflow for using the TensorRT Python API on Jetson TX2?
Are there any sample examples available for the TensorRT Python API on Jetson TX2?
The Python API is only available on the x86 Linux platform; it can't be used on Jetson.
Here is the support information:
Thanks, but I want to use the Python API on Jetson. Is there any other way to do this?
Currently, the Python API doesn't support the Jetson platform.
Please wait for a future release.
May I know when it will be released?
Sorry, we cannot disclose our schedule here.
The Python API is not implemented in our next JetPack release, but it is in our plan.
Please pay attention to our announcement for an update.
Can I ask:
Now that the formal version of JetPack 3.2 has been released, does the Python API support the Jetson platform?
Not yet. The Python API is not currently available for Jetson.
If you want to convert a TensorFlow model into TensorRT, here is a good tutorial for your reference:
I've created a TensorRT GoogLeNet example in which I used Cython to wrap C++ code so that I could do TensorRT inferencing directly from Python. Hope it helps.
- Running TensorRT Optimized GoogLeNet on Jetson Nano: https://jkjung-avt.github.io/tensorrt-googlenet/
- jkjung-avt/tensorrt_demos: https://github.com/jkjung-avt/tensorrt_demos
The code was tested on Jetson Nano, but it should work on Jetson TX2 too.
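To make the wrapping idea above concrete: the demo uses Cython around TensorRT's C++ API, but the same "call native code from Python" pattern can be sketched with ctypes from the standard library. The snippet below is purely illustrative and does not assume TensorRT is installed; it wraps the standard C math library instead, just to show how a compiled C/C++ function is exposed to Python.

```python
# Illustrative only: the general pattern of calling compiled C/C++
# code from Python, analogous to (but much simpler than) wrapping
# TensorRT's C++ API with Cython. No TensorRT is required here;
# we load the standard C math library as a stand-in native library.
import ctypes
import ctypes.util

# Locate and load the shared library (libm on Linux).
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature so ctypes marshals arguments correctly:
# double sqrt(double);
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(9.0))  # 3.0
```

A Cython wrapper follows the same principle but generates a compiled extension module, which is why the tensorrt_demos repo can call the TensorRT C++ engine from Python even though no official Python API existed on Jetson at the time.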