TLT with TensorRT inference and evaluation

Hi.
In the Faster RCNN documentation, it states that:

If the TensorRT inference data type is not INT8, the calibration_cache sub-field that provides the path to the INT8 calibration cache is not required. In INT8 case, the calibration cache should be generated via the tlt faster_rcnn export command line in INT8 mode.
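
For reference, generating that cache in INT8 mode looks roughly like the sketch below. This is not taken from the thread: the key, spec file, and all paths are placeholders, and the exact set of export arguments depends on your TLT version.

# Sketch: tlt faster_rcnn export in INT8 mode; $KEY and all paths are placeholders.
# --cal_cache_file writes the INT8 calibration cache; --engine_file additionally
# writes a TensorRT engine that the spec sections below can point to.
tlt faster_rcnn export -m /workspace/tlt-experiments/faster_rcnn/model.tlt \
                       -k $KEY \
                       -e /workspace/tlt-experiments/faster_rcnn/spec.txt \
                       -o /workspace/tlt-experiments/faster_rcnn/model.etlt \
                       --data_type int8 \
                       --cal_image_dir /workspace/tlt-experiments/data/calibration_images \
                       --batches 10 \
                       --cal_cache_file /workspace/tlt-experiments/faster_rcnn/cal.bin \
                       --engine_file /workspace/tlt-experiments/data/faster_rcnn/trt.int8.engine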

Do we need a calibration cache for trt-inference and trt-evaluation?

In INT8 mode, if we generated the calibration cache with the export command, how do we use it in trt-inference or trt-evaluation?
I just know that for trt-inference and trt-evaluation, I have to add this config to the spec file for faster_rcnn:

trt_evaluation {
  trt_engine: '/workspace/tlt-experiments/data/faster_rcnn/trt.int8.engine'
}

and set

-m /workspace/tlt/local_dir/export/trt.engine

in other object detection algorithms.

In faster_rcnn, you do not need to set the calibration cache for trt-inference or trt-evaluation.
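
In other words, the calibration cache is consumed when the INT8 engine is built; at trt-inference or trt-evaluation time the spec only has to point at that engine. A minimal sketch, assuming the trt_inference block mirrors the trt_evaluation block shown above:

trt_inference {
  trt_engine: '/workspace/tlt-experiments/data/faster_rcnn/trt.int8.engine'
}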


What about other object detection algorithms?

Please check the user guide.
For example, for the yolo_v4 network, see YOLOv4 — Transfer Learning Toolkit 3.0 documentation:
if a TRT engine has already been generated, you do not need to set calibration for tlt inference.
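
For example, an engine-based yolo_v4 inference run would look roughly like the sketch below; the spec path, key, and image directories are placeholders, and the exact arguments depend on your TLT version.

# Sketch: tlt yolo_v4 inference against an already-exported TensorRT engine,
# so no calibration settings are needed in the spec. Paths and $KEY are placeholders.
tlt yolo_v4 inference -e /workspace/tlt/local_dir/specs/yolo_v4_spec.txt \
                      -m /workspace/tlt/local_dir/export/trt.engine \
                      -i /workspace/tlt/local_dir/data/test_images \
                      -o /workspace/tlt/local_dir/inference_output \
                      -k $KEY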

