Requesting INT8 data type but platform has no support, ignored

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson Nano
• DeepStream Version DeepStream 6.*
• JetPack Version (valid for Jetson only) 4.6.*
• TensorRT Version 8.*
• Issue Type (questions, new requirements, bugs) questions
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used and other details for reproducing)
Hi, I tried to convert the PeopleNet .etlt model to a TensorRT engine; the CLI is shown below:

./tao-converter resnet34_peoplenet_int8.etlt -k tlt_encode -d 3,544,960 -o output_cov/Sigmoid,output_bbox/BiasAdd -c resnet34_peoplenet_int8.txt -e resnet34_int8_jetson.engine -m 8 -t int8

In DS6, I got a warning that INT8 is not supported on Jetson Nano in this version, and the final engine is much bigger than the .etlt model.

Maybe tao-converter falls back to converting the INT8 .etlt model to an FP16/FP32 TensorRT engine.
How do I keep INT8 precision and the smaller model size?

Jetson Nano doesn't support INT8 acceleration (its GPU has no Tensor Cores), but you can still use FP16 for inference.
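
If it helps, a rough sketch of the same conversion targeting FP16 instead (the output engine name is just a placeholder, and the -c calibration cache is only needed for INT8, so it is dropped here):

./tao-converter resnet34_peoplenet_int8.etlt -k tlt_encode -d 3,544,960 -o output_cov/Sigmoid,output_bbox/BiasAdd -e resnet34_fp16_jetson.engine -m 8 -t fp16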

Yes, it's a GPU HW limitation.

You could also refer to Why jetson nano not support int8 - #2 by dusty_nv
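
For the DeepStream side, a minimal sketch of the relevant Gst-nvinfer config entries for running an FP16 engine (the engine file name is a placeholder):

[property]
model-engine-file=resnet34_fp16_jetson.engine
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=2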

Hi @mchi, thank you for the information.
