About dusty-nv / jetson-inference Project

Hi,
I recently ran some demos from the repo:

https://github.com/dusty-nv/jetson-inference

I followed the instructions step by step and ran the demo successfully.
But when I run this command:

./segnet-console input.jpg output.jpg

it creates a new engine file named fcn_resnet18.onnx.1.1.GPU.FP16.engine

The name shows that this engine was built with FP16 precision.
How can I build an INT8 engine instead?
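As an aside, the precision can be read straight from the generated engine filename. A minimal sketch, assuming the naming pattern seen in the example above (`<model>.<major>.<minor>.<device>.<precision>.engine`; `engine_precision` is a hypothetical helper, not part of jetson-inference):

```python
def engine_precision(filename):
    # Split on dots; the precision field sits just before the
    # trailing ".engine" suffix in the assumed naming pattern.
    parts = filename.split(".")
    return parts[-2]

print(engine_precision("fcn_resnet18.onnx.1.1.GPU.FP16.engine"))  # FP16
```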
Many thanks~

Hi,

The Jetson Nano doesn't support INT8 operations; its GPU is compute architecture 5.3 (Maxwell).
You can find more detail in the hardware precision matrix here:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-support-matrix/index.html#hardware-precision-matrix

As a result, TensorRT falls back to FP16 for this model.
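The support-matrix check can be sketched as a small helper. This assumes the commonly documented threshold that fast INT8 math requires compute capability 6.1 or newer; `supports_fast_int8` is a hypothetical illustration, not a TensorRT API:

```python
def supports_fast_int8(sm_version):
    # sm_version as an integer, e.g. 53 for the Nano's 5.3 Maxwell GPU,
    # 72 for Xavier. Fast INT8 requires compute capability >= 6.1,
    # per the TensorRT hardware precision matrix (assumption).
    return sm_version >= 61

print(supports_fast_int8(53))  # Nano  -> False, falls back to FP16
print(supports_fast_int8(72))  # Xavier -> True
```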

Thanks.
