How to evaluate .etlt file

Hello sir,

I need to run tlt-evaluate on an .etlt file that was generated from INT8 quantization (tlt-export). However, I realized that there seems to be no way to evaluate an .etlt file?

In this case, what would you suggest I do to check the post-quantization accuracy drop?


Raymond Wong

Please refer to the detectnet_v2 Jupyter sample. You can run evaluation against the TRT engine instead.

!tlt-evaluate detectnet_v2 -e $SPECS_DIR/detectnet_v2_retrain_resnet18_kitti_qat.txt \
                           -m $USER_EXPERIMENT_DIR/experiment_dir_final/resnet18_detector_qat.trt.int8 \
                           -f tensorrt

11. QAT workflow

D. Evaluate a QAT trained model using the exported TensorRT engine

Thanks for the reply Morgan. I apologize for not being very specific.

I have an .etlt file exported from a classification model (not detectnet_v2). The -m option for tlt-evaluate seems to be built only for detectnet_v2 (and perhaps some other networks), but not for classification models. I went through your Metropolis documentation and couldn't find any support for evaluating a classification model from its .etlt or .trt engine files.

Thanks in advance for your help.

Raymond Wong

For classification, after pruning and retraining you can still run tlt-evaluate against the .tlt model, and tlt-infer can generate the output labels/images. For the .etlt model or TRT engine, you are right: classification does not provide the same evaluation path as detectnet_v2. Currently, you can deploy the .etlt file or TRT engine in DeepStream to verify accuracy.
I suggest you check as below:
a. After pruning and retraining, take the .tlt model and run tlt-infer on one test image.
b. Generate the .etlt file or TRT engine, then deploy and run it in DeepStream with an h264 file (generated from that same image). Check the output.
c. Compare the results of a and b.
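For more than one test image, the comparison in step c can be scripted. Below is a minimal sketch, assuming you have dumped the predicted label for each image from both runs into plain text files with one "image_name label" pair per line; that file format and the function names are illustrative, not part of TLT or DeepStream:

```python
# Sketch: compare per-image class labels from two runs, e.g. tlt-infer
# on the .tlt model (baseline) vs. DeepStream output from the INT8 engine.
# Assumed input format: one "image_name label" pair per line.

def load_labels(path):
    """Parse 'image_name label' lines into a dict."""
    labels = {}
    with open(path) as f:
        for line in f:
            name, label = line.split()
            labels[name] = label
    return labels

def agreement(labels_a, labels_b):
    """Fraction of common images on which the two runs predict the same class."""
    common = set(labels_a) & set(labels_b)
    if not common:
        return 0.0
    matches = sum(1 for name in common if labels_a[name] == labels_b[name])
    return matches / len(common)
```

A usage would be `agreement(load_labels("tlt_infer_labels.txt"), load_labels("deepstream_labels.txt"))`; an agreement close to 1.0 on a held-out set suggests the INT8 engine has little post-quantization accuracy drop relative to the baseline model.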

Hi Morgan,

OK, thank you very much for the clarification. I will follow the recommended suggestions.

Raymond Wong