When will the Jetson TX2 support the TensorRT Python API? Or how can I use onnx-tensorrt on TX2?

I succeeded in using onnx-tensorrt to accelerate my YOLOv3 model on a 1080 Ti, following this repository:

https://github.com/onnx/onnx-tensorrt
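For context, here is roughly what works for me on the 1080 Ti (a minimal sketch following the onnx-tensorrt README; the model file name and input shape are placeholders for my network):

import numpy as np
import onnx
import onnx_tensorrt.backend as backend

# Load the ONNX export of my YOLOv3 model ("yolov3.onnx" is a placeholder).
model = onnx.load("yolov3.onnx")

# Build a TensorRT engine from the ONNX graph.
engine = backend.prepare(model, device="CUDA:0")

# Run inference on a dummy input (the shape is a placeholder for the real input size).
input_data = np.random.random((1, 3, 416, 416)).astype(np.float32)
output = engine.run(input_data)[0]
print(output.shape)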

I would like to do the same on a Jetson TX2, but it seems I need the TensorRT Python API. When will the Jetson TX2 support it?

If you have experience using onnx-tensorrt on TX2, please share your suggestions with me.

Thank you!!!

Hi,

The TensorRT Python package includes both a parser and an inference library.
The inference part does not support the Jetson platform, but the parser does.
More details can be found here:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-support-matrix/index.html#platform-matrix
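As a rough sketch, the parser portion of the Python API looks like this (the model path here is a placeholder, and the exact calls depend on your TensorRT version):

import tensorrt as trt

# Standard workflow: logger -> builder -> network -> ONNX parser.
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(TRT_LOGGER)
network = builder.create_network()
parser = trt.OnnxParser(network, TRT_LOGGER)

# "model.onnx" is a placeholder path.
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))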

We are not sure which Python components the onnx-tensorrt sample you shared relies on.
But if the C++ interface is acceptable, you can try our official TensorRT sample for ONNX:
/usr/src/tensorrt/samples/sampleOnnxMNIST/

Thanks.