I’ve commented on a thread at https://github.com/dusty-nv/jetson-inference/issues/824 discussing how I could leverage the great detectNet.cpp application in jetson-inference to help me demonstrate the use of TLT models like PeopleNet with TensorRT, which I’d ultimately use in an existing proprietary software application. The part I haven’t been able to figure out is the pre/post-processing discussed there.
One of the suggestions on that thread was to look at TLT’s tlt-infer and how that application handles pre/post-processing. I can see how to use tlt-infer with PeopleNet at this link. However, when I look at the source for the tlt-infer script in nvcr.io/nvidia/tlt-streamanalytics:v2.0_py3, I see a reference to iva.common.magnet_infer, which points to a compiled Python file.
Is there a way to access the Python source with the relevant dependencies, so I can figure out how to leverage this example to support PeopleNet in jetson-inference?
Alternatively, can you provide the information needed to modify the preprocess and postprocess source? It looks like for the preprocess source I need to know the parameters that must be passed to cudaTensorXXX, and for the postprocess source I need to know how to map the detectNet mOutputs to detections.
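To frame the question, here is my current understanding of what the pre/post-processing might look like, sketched in Python/NumPy. The constants (960x544 input, [0,1] scaling, grid stride 16, bbox norm 35.0) and the decode formula are assumptions borrowed from DeepStream's sample DetectNet_v2 bounding-box parser, not taken from tlt-infer, so please correct anything that doesn't match what PeopleNet actually expects:

```python
import numpy as np

# All constants below are ASSUMPTIONS based on DeepStream's DetectNet_v2
# sample parser; they have not been verified against tlt-infer.
MODEL_W, MODEL_H = 960, 544   # assumed PeopleNet input resolution
STRIDE = 16                   # assumed grid-cell stride
BBOX_NORM = 35.0              # assumed bbox offset normalization factor

def preprocess(rgb_u8):
    """HWC uint8 RGB -> (1, 3, H, W) float32 scaled to [0, 1] (assumed spec)."""
    chw = rgb_u8.astype(np.float32).transpose(2, 0, 1) / 255.0
    return chw[np.newaxis]

def postprocess(cov, bbox, conf_thresh=0.5):
    """Decode coverage + bbox grids into pixel-space detections.

    cov:  (num_classes, H/STRIDE, W/STRIDE) sigmoid coverage scores
    bbox: (num_classes*4, H/STRIDE, W/STRIDE) normalized box offsets
    Returns a list of (class_id, score, x1, y1, x2, y2) tuples.
    """
    num_classes, gh, gw = cov.shape
    # Grid-cell centers in pixels, normalized by BBOX_NORM
    cx = (np.arange(gw) * STRIDE + 0.5) / BBOX_NORM
    cy = (np.arange(gh) * STRIDE + 0.5) / BBOX_NORM
    dets = []
    for c in range(num_classes):
        for y in range(gh):
            for x in range(gw):
                score = cov[c, y, x]
                if score < conf_thresh:
                    continue
                o = bbox[c * 4:(c + 1) * 4, y, x]
                # Offsets are relative to the cell center (per the
                # DeepStream sample parser's formula)
                x1 = (o[0] - cx[x]) * -BBOX_NORM
                y1 = (o[1] - cy[y]) * -BBOX_NORM
                x2 = (o[2] + cx[x]) * BBOX_NORM
                y2 = (o[3] + cy[y]) * BBOX_NORM
                dets.append((c, float(score), x1, y1, x2, y2))
    return dets
```

If this is roughly right, then my remaining questions reduce to: which cudaTensorXXX call reproduces preprocess() on the GPU, and which of the mOutputs bindings correspond to cov and bbox here.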