TensorRT C++ API to deploy a TLT model

Hi,

I have managed to train an LPD model in the TLT Docker container by following the documentation, and I now have the .ckzip and .tlt model files.

Is there a C++ API for TensorRT that can run inference with the TLT-trained model?

Thanks.

LPD is based on the TLT detectnet_v2 network. By default, you can run `tlt detectnet_v2 inference` against the .tlt model or the TensorRT engine. Alternatively, you can deploy the .etlt model or the TensorRT engine in DeepStream and run inference there.
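If you want to use the TensorRT C++ API directly instead of DeepStream, the rough flow is: deserialize the engine, allocate one device buffer per binding, and execute. Below is a minimal sketch, not a complete app; it assumes TensorRT 8.x and an engine file named `lpd.engine` already generated from the .etlt model (for example with tlt-converter). Preprocessing, host/device copies, and output parsing are elided.

```cpp
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cerr << msg << "\n";
    }
} gLogger;

int main() {
    // Read the serialized engine from disk.
    std::ifstream file("lpd.engine", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    // Deserialize it and create an execution context.
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(gLogger);
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), blob.size());
    nvinfer1::IExecutionContext* context = engine->createExecutionContext();

    // One device buffer per binding; for a detectnet_v2-based model such as
    // LPD the bindings are the input image plus the coverage and bbox
    // output tensors.
    std::vector<void*> buffers(engine->getNbBindings());
    for (int i = 0; i < engine->getNbBindings(); ++i) {
        nvinfer1::Dims dims = engine->getBindingDimensions(i);
        size_t count = 1;
        for (int d = 0; d < dims.nbDims; ++d) count *= dims.d[d];
        cudaMalloc(&buffers[i], count * sizeof(float));
    }

    // Copy the preprocessed input into the input buffer, then run inference.
    context->executeV2(buffers.data());

    // Copy the coverage/bbox outputs back to the host and decode them into
    // boxes; the DeepStream custom parser mentioned below shows one way.
    for (void* b : buffers) cudaFree(b);
    return 0;
}
```

Note that the raw detectnet_v2 outputs are a coverage grid and a bbox grid, so you still need the same thresholding/clustering step that DeepStream performs for you.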

By the way, there is post-processing code exposed in C++ in /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer_customparser/nvdsinfer_custombboxparser.cpp
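For reference, a custom bbox parser is just a C function with the prototype that the nvinfer plugin expects, declared in nvdsinfer_custom_impl.h. The skeleton below is only an illustrative sketch: the function name NvDsInferParseCustomLPD is made up for this example, and the body omits the real grid-decoding logic in that file.

```cpp
// Illustrative skeleton only: hypothetical name, decoding logic omitted.
#include <vector>
#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferParseCustomLPD(
    std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
    NvDsInferNetworkInfo const& networkInfo,
    NvDsInferParseDetectionParams const& detectionParams,
    std::vector<NvDsInferObjectDetectionInfo>& objectList)
{
    // outputLayersInfo exposes the raw output tensors; for detectnet_v2
    // that is a coverage (confidence) grid and a bbox grid. Walk the grid,
    // keep cells whose coverage exceeds the per-class threshold, decode the
    // four bbox offsets into image coordinates, and append one entry each:
    //
    //   NvDsInferObjectDetectionInfo obj;
    //   obj.classId = c;
    //   obj.left = ...; obj.top = ...; obj.width = ...; obj.height = ...;
    //   obj.detectionConfidence = coverage;
    //   objectList.push_back(obj);
    return true;
}

// Lets the nvinfer plugin validate the symbol's prototype at compile time.
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomLPD);
```

You would compile this into a shared library and point the nvinfer config file at it via the custom-lib-path and parse-bbox-func-name keys.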

Some other users have also written Python code to run inference against the TensorRT engine; see the topic "Run PeopleNet with tensorrt".