I want to use a model trained with TLT 2.0 in INT8 mode on an NX platform, but the two platforms have different TensorRT versions (TRT 7.0.0 and TRT 7.1.3). How do I get the INT8 calibration table working on TRT 7.1.3?
I tested the TRT 7.0 calibration table, but it gave poor results.
Could you please share more info about the “poor results”?
Is this the result when running inference with DeepStream? Can you share some logs or results?
Which network did you train?
I'm afraid the INT8 cal.bin is not the root cause of the “poor results”. I suggest narrowing it down.
The network is a ResNet-18 DetectNetV2.
With the same test video and DeepStream 5.0 configuration file, the NX result (DeepStream 5.0, TRT 7.1, INT8) has missed detections compared with the PC result (DeepStream 5.0 docker, TRT 7.0, INT8). In FP16 mode, however, I get similar detection results.
Can the cal.bin generated with TRT 7.0 be used directly on TRT 7.1?
Thanks
What do you mean by “has missed detections”? No bboxes?
It fails to detect some of the bboxes.
One idea: you can try uninstalling TRT 7.0 and installing TRT 7.1 inside the docker container, so the calibration table is generated with the same TensorRT version that runs on the NX.