Source code/documentation for `tlt-infer` and TLT->TensorRT

Hi,
I’ve commented on the GitHub thread “I would like to know how to run tlt converter generated .trt engine file in jetson-inference” (Issue #824 · dusty-nv/jetson-inference · GitHub), discussing how I could leverage the great detectNet.cpp application in jetson-inference to help me demonstrate the use of TLT models like PeopleNet with TensorRT, which I’d ultimately integrate into an existing proprietary software application. The part I haven’t been able to figure out is the pre/post-processing discussed there.

One of the suggestions on that thread was to look at TLT’s tlt-infer and how it handles pre/post-processing. I can see how to use tlt-infer with PeopleNet at this link. However, when I look at the source for the tlt-infer script in nvcr.io/nvidia/tlt-streamanalytics:v2.0_py3, I see a reference to iva.common.magnet_infer, which points to a compiled Python file.

Is there a way to access the Python source with the relevant dependencies, so I can figure out how to leverage this example to support PeopleNet in jetson-inference?

Alternatively, can you provide the information needed to modify the preprocess source and the postprocess source? It looks like for the preprocess source I need to know the parameters to pass to cudaTensorXXX, and for the postprocess source I need to know how to map the detectNet mOutputs to detections. I’ve sketched my current understanding below.
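For reference, this is roughly what I believe the DetectNet_v2/PeopleNet pre/post-processing looks like, pieced together from the DeepStream sample parsers and the PeopleNet model card. The input size (3x544x960), the scaling to [0,1], the stride of 16, the bbox_norm of 35.0, and the coverage/bbox output layout are all my assumptions, not something I’ve been able to confirm against tlt-infer itself, which is exactly why I’d like to see its source:

```cpp
// Sketch only: my current understanding of DetectNet_v2 (PeopleNet) pre/post-processing.
// Values marked ASSUMPTION (input layout, stride, bbox_norm, thresholds) come from my
// reading of the PeopleNet card and DeepStream configs and may need correcting.
#include <cstdint>
#include <vector>

struct Detection { int classId; float conf; float x1, y1, x2, y2; };

// --- Pre-processing ---------------------------------------------------------
// ASSUMPTION: PeopleNet expects planar RGB float input, 3 x 544 x 960, scaled to [0,1]
// (pixel / 255), no mean subtraction. CPU equivalent of what I think the jetson-inference
// cudaTensorNorm* kernels would need to produce.
void preprocess(const uint8_t* rgb,    // interleaved RGB, already resized to width x height
                int width, int height, // e.g. 960 x 544
                float* tensor)         // planar CHW output, 3 * height * width floats
{
    const int plane = width * height;
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
        {
            const int px = (y * width + x) * 3;
            tensor[0 * plane + y * width + x] = rgb[px + 0] / 255.0f;  // R plane
            tensor[1 * plane + y * width + x] = rgb[px + 1] / 255.0f;  // G plane
            tensor[2 * plane + y * width + x] = rgb[px + 2] / 255.0f;  // B plane
        }
}

// --- Post-processing --------------------------------------------------------
// DetectNet_v2 appears to have two outputs:
//   coverage: [numClasses, gridH, gridW]      (output_cov/Sigmoid)
//   bbox:     [numClasses * 4, gridH, gridW]  (output_bbox/BiasAdd)
// ASSUMPTION: stride = 16 (gridW = inputW/16, gridH = inputH/16), bbox_norm = 35.0,
// offset = 0.5, as in the DeepStream detectnet_v2 example parser. Clustering
// (DBSCAN/NMS) of the raw boxes is still needed afterwards and is omitted here.
std::vector<Detection> postprocess(const float* coverage, const float* bbox,
                                   int numClasses, int gridW, int gridH,
                                   float threshold = 0.2f)
{
    const float stride = 16.0f, bboxNorm = 35.0f, offset = 0.5f;
    const int cell = gridW * gridH;
    std::vector<Detection> dets;

    for (int c = 0; c < numClasses; ++c)
        for (int gy = 0; gy < gridH; ++gy)
            for (int gx = 0; gx < gridW; ++gx)
            {
                const float conf = coverage[c * cell + gy * gridW + gx];
                if (conf < threshold)
                    continue;

                // grid-cell center, normalized by bbox_norm (ASSUMPTION)
                const float cx = (gx * stride + offset) / bboxNorm;
                const float cy = (gy * stride + offset) / bboxNorm;
                const float* b = bbox + (c * 4) * cell + gy * gridW + gx;

                Detection d;
                d.classId = c;
                d.conf    = conf;
                d.x1 = (b[0 * cell] - cx) * -bboxNorm;
                d.y1 = (b[1 * cell] - cy) * -bboxNorm;
                d.x2 = (b[2 * cell] + cx) *  bboxNorm;
                d.y2 = (b[3 * cell] + cy) *  bboxNorm;
                dets.push_back(d);
            }
    return dets;
}
```

If someone can confirm or correct the assumptions above (especially the bbox decoding constants and the exact pixel normalization), that would already get me most of the way there.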

Thanks!

Reference: Run PeopleNet with tensorrt - #21 by carlos.alvarez
