Converting Caffe to TensorRT using INT8

Hi,

I saw here that TensorRT supports 8-bit fixed-point computations.

Currently I use TensorRT to convert my pretrained Caffe model into a TensorRT engine, based on the detectnet-console.cpp example.

My questions are:

  1. I assume the default settings convert the Caffe model from FP32 to FP16, correct?
  2. How would I go about converting to INT8?

Thanks,
Roman

Hi,

INT8 requires both software and hardware support.
Although TensorRT already supports INT8, the TX2 does not support INT8 operations.
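For reference, here is a minimal sketch of how precision selection looks with TensorRT's legacy builder API (the setters available in TX2/Xavier-era releases; newer versions move these flags onto IBuilderConfig). The `builder` and `calibrator` arguments are assumed placeholders your application would supply:

```cpp
#include "NvInfer.h"

using namespace nvinfer1;

// Configure builder precision: prefer INT8 where the hardware supports it,
// otherwise fall back to FP16, otherwise stay at the FP32 default.
void configurePrecision(IBuilder* builder, IInt8Calibrator* calibrator)
{
    if (builder->platformHasFastInt8())
    {
        // Xavier reports true here; TX2 reports false.
        builder->setInt8Mode(true);
        // INT8 also needs calibration data to map FP32 activations
        // to 8-bit ranges.
        builder->setInt8Calibrator(calibrator);
    }
    else if (builder->platformHasFastFp16())
    {
        // TX2 path: half precision is supported.
        builder->setFp16Mode(true);
    }
    // If neither flag is set, the engine is built in FP32.
}
```

This also relates to question 1: on the TX2, platformHasFastFp16() returns true while platformHasFastInt8() returns false, so FP16 is the fastest precision the builder can select there.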

If you are looking for an INT8-capable platform, you can check out our latest Jetson device:
https://developer.nvidia.com/embedded/buy/jetson-xavier-devkit

Thanks.

Thanks @aastall. So to clarify, the Jetson TX2 does NOT support INT8, but the Xavier will?

YES