Running int8 engine in DeepStream

This topic discusses deploying a TLT-trained model to DeepStream using an .etlt file.

In my case, I train in TLT, convert the model to an INT8 engine, and want to deploy it in DeepStream on batches of images.

Which sample can I use to run a TensorRT engine in DeepStream?

Hi,

TLT is the model format.
In DeepStream, inference runs on TensorRT.
The inference precision can be set directly in the nvinfer config file:
https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/blob/master/pgie_yolov3_tlt_config.txt#L57
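As a rough sketch, the relevant lines of the nvinfer config look like the following (key names are the standard Gst-nvinfer properties; the file paths are placeholders, not the actual paths from the repo):

```
[property]
# 0 = FP32, 1 = INT8, 2 = FP16
network-mode=1
# INT8 calibration cache exported alongside the .etlt model (placeholder path)
int8-calib-file=./calibration.bin
# TLT-encoded model and key (placeholder values)
tlt-encoded-model=./model.etlt
tlt-model-key=your_tlt_key
```

With `network-mode=1` and a valid calibration file, DeepStream builds and runs the INT8 TensorRT engine; batched inference is then controlled by the `batch-size` property in the same config.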

Thanks.