How to do inference in my own code?

After training detectnet_v2 in TLT, there is an inference API provided for testing from the console:

tlt-infer detectnet_v2 -e /workspace/tlt-experiments/detectnet_v2/resnet18/detectnet_v2_inference_kitti_tlt.txt -i /workspace/tlt-experiments/ObjectDetectionData/ -o /workspace/tlt-experiments/ObjectDetectionData/outputs -k xxxxxxxx

That is for testing from the console. Is the source code available, so that I can do inference from my own code?

Sorry, TLT is not open source yet. For inference, you can refer to DS (DeepStream) inference.
Alternatively, you can convert the etlt model into a TRT (TensorRT) engine and do inference against that engine. For this, you can look at the TRT samples or NVIDIA's GitHub repositories.
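For reference, below is a minimal Python sketch of loading a serialized TRT engine and running inference with the TensorRT and PyCUDA Python bindings. It assumes a static-shape engine and batch size 1; the exact calls can differ slightly between TRT versions, so treat it as a starting point rather than a drop-in solution.

```python
# Minimal sketch: run inference against a serialized TRT engine.
# Assumes a static-shape engine and batch size 1; adapt to your own engine.
import numpy as np
import tensorrt as trt
import pycuda.autoinit  # noqa: F401  -- creates a CUDA context
import pycuda.driver as cuda

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def load_engine(engine_path):
    """Deserialize a TensorRT engine file from disk."""
    with open(engine_path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        return runtime.deserialize_cuda_engine(f.read())

def infer(engine, input_array):
    """Copy one input batch to the GPU, execute, and return the raw outputs."""
    with engine.create_execution_context() as context:
        bindings, outputs = [], []
        for binding in engine:
            dtype = trt.nptype(engine.get_binding_dtype(binding))
            size = trt.volume(engine.get_binding_shape(binding))
            device_mem = cuda.mem_alloc(size * np.dtype(dtype).itemsize)
            bindings.append(int(device_mem))
            if engine.binding_is_input(binding):
                cuda.memcpy_htod(device_mem,
                                 np.ascontiguousarray(input_array, dtype=dtype))
            else:
                outputs.append((np.empty(size, dtype=dtype), device_mem))
        # Explicit-batch engines use execute_v2; implicit-batch (UFF-converted)
        # engines use context.execute(batch_size, bindings) instead.
        context.execute_v2(bindings)
        results = []
        for host_arr, device_mem in outputs:
            cuda.memcpy_dtoh(host_arr, device_mem)
            results.append(host_arr)
        return results
```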

I looked at the TRT samples and there is no sample for detectnet_v2; the object detection samples cover SSD and Faster R-CNN. How can I test detectnet_v2 in TensorRT?

After getting the etlt model or the trt engine, you can use DeepStream to do inference.

For the trt engine, you can also refer to the preprocess/postprocess code in DS.
Reference: TRT engine deployment
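As an illustration, here is a rough preprocessing sketch in Python that mirrors what a typical detectnet_v2 DeepStream config does (RGB input, planar CHW layout, pixel values scaled to [0, 1]). The 1248x384 resolution and 1/255 scaling are example values; take the real input dimensions and normalization from your own training spec or nvinfer config.

```python
# Rough preprocessing sketch for a detectnet_v2 TRT engine.
# Resolution and scaling below are examples; use your training spec's values.
import cv2
import numpy as np

def preprocess(image_path, net_w=1248, net_h=384):
    bgr = cv2.imread(image_path)
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)       # model expects RGB
    resized = cv2.resize(rgb, (net_w, net_h))        # network input resolution
    chw = resized.transpose(2, 0, 1).astype(np.float32) / 255.0  # HWC -> CHW, [0, 1]
    return np.expand_dims(chw, axis=0)               # add batch dim -> 1x3xHxW
```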

The official TLT user guide explains how to run inference with DS for all networks:
https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/index.html#intg_model_deepstream

I see, now I understand. We need to go all the way to DeepStream. OK, thanks.

Not exactly. Besides DS, users can write their own code to do inference against the trt engine.
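As a hypothetical end-to-end example combining the sketches above (file paths are placeholders):

```python
# Hypothetical glue code using the helpers sketched above; paths are placeholders.
engine = load_engine("detectnet_v2_resnet18.trt")
batch = preprocess("/workspace/test_image.jpg")
outputs = infer(engine, batch)

# detectnet_v2 typically exposes two outputs: a coverage (confidence) grid,
# e.g. output_cov/Sigmoid, and a bbox grid, e.g. output_bbox/BiasAdd.
# Thresholding the coverage map, de-normalizing the per-cell bbox offsets and
# clustering the candidates is what the DS postprocess code referenced above
# does; reuse its constants rather than guessing them here.
for out in outputs:
    print(out.shape, out.dtype)
```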