How to measure accuracy of a TensorRT engine

Description

Hello,
I am trying to benchmark multiple standard classification and detection models (MobileNet, ResNet, SqueezeNet, SSD, etc.) on Jetson AGX Xavier. I was able to build the engines for all the models and run inference with no problem (using either trtexec directly or the Jetson-Benchmarks wrapper).
What I am looking for now, however, is to validate the accuracy of the INT8 and FP16 engines to quantify the quantization loss. Are there any tools or scripts to measure TOP1/TOP5 accuracy for classification using a built TensorRT engine?
I searched online and in the documentation for such tools, with no success. Thanks.

Environment

TensorRT Version: 7.1.3.0
GPU Type: Jetson Xavier
Nvidia Driver Version: from JetPack 4.4.1
CUDA Version: 10.2.89
CUDNN Version: 8.0.0.180
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): Python 3.6
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag): Baremetal

Hi, could you please share your model and script so that we can help you better?

Alternatively, you can try running your model with the trtexec command; an example invocation follows the link below.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
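
For reference, a typical invocation for building and timing an INT8 engine from an ONNX model looks roughly like the following (the model and engine file names are placeholders, and note that trtexec reports latency and throughput only, not accuracy):

trtexec --onnx=resnet50.onnx --int8 --saveEngine=resnet50_int8.engine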

Thanks!

Did you even read my post? Or is this a bot answer?

Hi @youcef4tak,

Please refer to the following doc. For accuracy, you need to define the evaluation logic yourself; a rough sketch is included below.
You can use NVIDIA Nsight or the NVIDIA Visual Profiler for performance analysis.
https://docs.nvidia.com/deeplearning/tensorrt/best-practices/index.html#measure-performance
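
As a minimal sketch of that logic (not an official tool): the Python snippet below deserializes a built engine and computes TOP1/TOP5 over a validation set. It assumes an explicit-batch classification engine with a single float32 input binding and a single float32 output binding at batch size 1, and it uses pycuda for the host/device copies. The engine path resnet_int8.engine and the my_validation_set() generator (yielding a preprocessed CHW float32 image and an integer label) are placeholders for your own data pipeline.

import numpy as np
import tensorrt as trt
import pycuda.autoinit  # creates a CUDA context on import
import pycuda.driver as cuda

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize a previously built engine from disk.
with open("resnet_int8.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Page-locked host buffers and device buffers for the two bindings (input, output).
h_in = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(0)), dtype=np.float32)
h_out = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(1)), dtype=np.float32)
d_in, d_out = cuda.mem_alloc(h_in.nbytes), cuda.mem_alloc(h_out.nbytes)
stream = cuda.Stream()

top1 = top5 = total = 0
for image, label in my_validation_set():  # placeholder: yields (preprocessed image, int class id)
    np.copyto(h_in, image.ravel())
    cuda.memcpy_htod_async(d_in, h_in, stream)
    context.execute_async_v2(bindings=[int(d_in), int(d_out)], stream_handle=stream.handle)
    cuda.memcpy_dtoh_async(h_out, d_out, stream)
    stream.synchronize()
    best5 = np.argsort(h_out)[-5:]  # indices of the five highest scores, ascending
    top1 += int(best5[-1] == label)
    top5 += int(label in best5)
    total += 1

print("TOP1 = %.4f, TOP5 = %.4f" % (top1 / total, top5 / total))

Running this once with the FP16 engine and once with the INT8 engine on the same validation set gives you the accuracy drop you want to quantify.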

Thank you.