TLT YOLOv3 TensorRT C++ inference

The engine was built with:
./tlt-converter -k nvidia_tlt -d 3,544,960 -e trt.fp16.engine -t fp16 -p Input,1x3x544x960,8x3x544x960,16x3x544x960 yolov3_resnet18.etlt

Is there a similar C++ sample?

In the TensorRT sample support guide, you can find useful C++ examples for building and running a TRT engine.
https://docs.nvidia.com/deeplearning/tensorrt/sample-support-guide/index.html#c_samples_section

https://docs.nvidia.com/deeplearning/tensorrt/sample-support-guide/index.html#sample_ssd
GitHub: sampleSSD/README.md
→ After the engine is built, the next steps are to serialize the engine and run the inference with the deserialized engine. For more information about these steps, see Serializing A Model In C++
→ After deserializing the engine, you can perform inference. To perform inference, see Performing Inference In C++
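The steps quoted above can be sketched in C++ roughly as follows. This is a minimal outline, not a complete program: it assumes TensorRT 8 (where engine objects can be deleted directly instead of via destroy()), a CUDA-capable device, and the engine file name "trt.fp16.engine" from the tlt-converter command above; the buffer allocation and copy steps are elided.

```cpp
// Sketch: deserialize a serialized TensorRT engine and prepare inference.
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <fstream>
#include <iostream>
#include <iterator>
#include <memory>
#include <vector>

// TensorRT requires a logger implementation.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << "\n";
    }
};

int main() {
    Logger logger;

    // Read the serialized engine produced by tlt-converter from disk.
    std::ifstream file("trt.fp16.engine", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    auto runtime = std::unique_ptr<nvinfer1::IRuntime>(
        nvinfer1::createInferRuntime(logger));
    auto engine = std::unique_ptr<nvinfer1::ICudaEngine>(
        runtime->deserializeCudaEngine(blob.data(), blob.size()));
    auto context = std::unique_ptr<nvinfer1::IExecutionContext>(
        engine->createExecutionContext());

    // The engine was built with an optimization profile (-p ...), so the
    // actual input shape must be set before inference; batch size 1 here.
    context->setBindingDimensions(0, nvinfer1::Dims4{1, 3, 544, 960});

    // Remaining steps (elided): allocate a device buffer per binding,
    // copy the preprocessed input to the device, then e.g.:
    //   cudaStream_t stream; cudaStreamCreate(&stream);
    //   context->enqueueV2(bindings.data(), stream, nullptr);
    //   cudaStreamSynchronize(stream);
    // and finally copy the output bindings back to the host.
    return 0;
}
```

The sampleSSD and sampleMNIST C++ samples linked above show the full buffer-management code that this sketch omits.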

For the postprocessing of yolo_v3, please see deepstream_tlt_apps/nvdsinfer_custombboxparser_tlt.cpp at master · NVIDIA-AI-IOT/deepstream_tlt_apps · GitHub
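If you implement the postprocessing yourself instead of reusing the DeepStream parser, one standard step is non-maximum suppression over the decoded boxes. The sketch below is a generic greedy NMS for illustration only (it is not the TLT parser's code, and the Box struct and thresholds are assumptions):

```cpp
#include <algorithm>
#include <vector>

// Hypothetical box type: corner coordinates plus confidence score.
struct Box { float x1, y1, x2, y2, score; };

// Intersection-over-union of two boxes.
float iou(const Box& a, const Box& b) {
    float ix1 = std::max(a.x1, b.x1), iy1 = std::max(a.y1, b.y1);
    float ix2 = std::min(a.x2, b.x2), iy2 = std::min(a.y2, b.y2);
    float inter = std::max(0.f, ix2 - ix1) * std::max(0.f, iy2 - iy1);
    float areaA = (a.x2 - a.x1) * (a.y2 - a.y1);
    float areaB = (b.x2 - b.x1) * (b.y2 - b.y1);
    return inter / (areaA + areaB - inter);
}

// Greedy NMS: keep the highest-scoring box, drop boxes that overlap it
// beyond iouThresh, repeat for the remaining boxes.
std::vector<Box> nms(std::vector<Box> boxes, float iouThresh) {
    std::sort(boxes.begin(), boxes.end(),
              [](const Box& a, const Box& b) { return a.score > b.score; });
    std::vector<Box> kept;
    for (const Box& b : boxes) {
        bool suppressed = false;
        for (const Box& k : kept)
            if (iou(b, k) > iouThresh) { suppressed = true; break; }
        if (!suppressed) kept.push_back(b);
    }
    return kept;
}
```

Note that per-class NMS (running this separately for each class id) is the usual choice for detectors like YOLOv3.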