Jetson TX2 vs Coral

Hi

I am working on a benchmark comparing Google’s Coral against the NVIDIA Jetson TX2.
The Coral ONLY works with INT8, so I need to quantize the model on the TX2 as well. Some questions:

  • Any advice on how to do it?
  • Does the TX2 support INT8?
  • Where can I find a good tutorial that explains the whole procedure, from capturing an image (or uploading one from the console) to quantizing to INT8 and running inference?

Currently I am planning to run the inference 200 times to get an average latency (a sketch of the timing loop is below).
I also need to lock the clocks at maximum (which disables DVFS) using

sudo jetson_clocks
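
Here is a minimal sketch of the timing loop, assuming an infer() placeholder for whatever call actually runs the model (it should block until the output is ready, so no extra GPU synchronization is added):

import time

def infer():
    pass  # placeholder: run one forward pass of the model here

N_RUNS = 200

# Warm up so one-time initialization cost is not averaged in.
for _ in range(10):
    infer()

start = time.time()
for _ in range(N_RUNS):
    infer()
elapsed = time.time() - start
print("Average latency: %.2f ms" % (elapsed / N_RUNS * 1000.0))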

Thank you

Hi,

1.
Please check this slide deck for information on INT8 calibration in TensorRT (a minimal calibrator sketch is at the end of this reply):
http://on-demand.gputechconf.com/gtc/2017/presentation/s7310-8-bit-inference-with-tensorrt.pdf

2.
Sorry, the TX2 doesn’t support INT8, but Xavier does. You can find the hardware support matrix here:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt-601/tensorrt-support-matrix/index.html#hardware-precision-matrix

3.
It’s recommended to use our DeepStream SDK for the camera + INT8 inference use case.
https://developer.nvidia.com/deepstream-sdk
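
For question 1, here is a minimal sketch of a TensorRT INT8 entropy calibrator along the lines of the slide deck above. The batches list (preprocessed NCHW float32 arrays) and the cache filename are assumptions; substitute your own calibration data:

import numpy as np
import pycuda.driver as cuda
import pycuda.autoinit
import tensorrt as trt

class Calibrator(trt.IInt8EntropyCalibrator2):
    def __init__(self, batches, cache_file="calibration.cache"):
        trt.IInt8EntropyCalibrator2.__init__(self)
        self.cache_file = cache_file
        self.batch_size = batches[0].shape[0]
        self.device_input = cuda.mem_alloc(batches[0].nbytes)
        self.batches = iter(batches)

    def get_batch_size(self):
        return self.batch_size

    def get_batch(self, names):
        try:
            batch = next(self.batches)
        except StopIteration:
            return None  # no batches left: calibration is finished
        cuda.memcpy_htod(self.device_input, np.ascontiguousarray(batch))
        return [int(self.device_input)]

    def read_calibration_cache(self):
        # Reuse the scales from a previous run if a cache file exists.
        try:
            with open(self.cache_file, "rb") as f:
                return f.read()
        except IOError:
            return None

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)

# Attach it before building the engine (TensorRT 6.x style):
#   builder.int8_mode = True
#   builder.int8_calibrator = Calibrator(batches)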

Thanks.

Thank you for the answer. I have another question:

What are the latest versions of TensorRT and TensorFlow that I can use with the TX2 and Nano?
Will JetPack 4.3 work with the TX2 and Nano?

Yes, JetPack 4.3 supports both the TX2 and the Nano.

Please note that INT8 requires a GPU with compute capability 6.1 or 7.x.
In the Jetson series, you will need a Xavier for INT8 inferencing.
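
A quick way to confirm this on a given board is to query the TensorRT builder (a sketch; platform_has_fast_int8 reports whether fast INT8 kernels are available):

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
print("TensorRT version:", trt.__version__)
# Expected: False on TX2/Nano, True on Xavier.
print("Fast INT8 kernels:", builder.platform_has_fast_int8)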

Thanks.