Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) Jetson nano
• DeepStream Version Deepstream 6.*
• JetPack Version (valid for Jetson only) 4.6.*
• TensorRT Version 8.*
• Issue Type( questions, new requirements, bugs) questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
Hi, I tried to convert the PeopleNet .etlt model to a TensorRT engine; the command is shown below:
```
./tao-converter resnet34_peoplenet_int8.etlt -k tlt_encode -d 3,544,960 -o output_cov/Sigmoid,output_bbox/BiasAdd -c resnet34_peoplenet_int8.txt -e resnet34_int8_jetson.engine -m 8 -t int8
```
On DeepStream 6, I got a warning that INT8 is not supported on the Jetson Nano in this version, and the resulting engine file is much bigger than the .etlt model.
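As a quick sanity check on the size difference, here is a minimal stdlib-only sketch that compares the two files on disk. The filenames are taken from the command above; adjust the paths if your files live elsewhere.

```python
import os

def size_mb(path):
    """Return the file size in megabytes, rounded to one decimal place."""
    return round(os.path.getsize(path) / (1024 * 1024), 1)

# Filenames assumed from the tao-converter command above.
for f in ("resnet34_peoplenet_int8.etlt", "resnet34_int8_jetson.engine"):
    if os.path.exists(f):
        print(f, size_mb(f), "MB")
```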
Maybe tao-converter falls back and re-converts the INT8 .etlt model into an FP16/FP32 TensorRT engine.
How do I keep INT8 precision and the smaller model size?