How to use a TLT-trained model on Jetson Nano

Hi, I trained an object-detection model in TLT following the examples/detectnet_v2 notebook. Training, pruning, inference, and export all work well, and I got the .tlt/.etlt files. How can I use these files on Jetson Nano without DeepStream? Is there a Python solution like detectnet_console.py in jetson-inference? Thanks.

Reference:

For preprocessing and postprocessing, refer to Run PeopleNet with tensorrt.
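
In short, the preprocessing there amounts to: resize to the network input resolution, convert BGR to RGB, reorder to planar CHW, and scale pixels by 1/255 (the same net-scale-factor the DeepStream sample configs use, with no mean subtraction). A minimal sketch; the 1248x384 input size below is just PeopleNet's default, so substitute the resolution your model was trained at:

```python
import cv2
import numpy as np

# Assumption: replace with your own model's input resolution.
MODEL_W, MODEL_H = 1248, 384

def preprocess(image_bgr):
    """Resize, BGR->RGB, HWC->CHW, scale to [0, 1] float32, add batch dim."""
    img = cv2.resize(image_bgr, (MODEL_W, MODEL_H))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = img.transpose((2, 0, 1)).astype(np.float32) / 255.0
    return np.ascontiguousarray(img[np.newaxis])
```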

Hi Morganh, thanks for the reference. I migrated the .trt engine into the SSD example, and after running inference I get two outputs with the correct dims. But I don't know how to translate the outputs into bboxes, labels, or anything else. Maybe I missed some documentation about detectnet_v2, or I should look into the source code of tlt-infer. Any suggestions?
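
For context, my inference step looks roughly like this. It is only a sketch: the engine file name is a placeholder, and it assumes a batch-1, implicit-batch engine such as tlt-converter produces from the .etlt:

```python
import numpy as np
import tensorrt as trt
import pycuda.autoinit  # noqa: F401 -- creates the CUDA context
import pycuda.driver as cuda

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Placeholder file name: the engine built from the .etlt by tlt-converter.
with open("detectnet_v2.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate pinned host and device buffers for the input and both outputs.
host_bufs, dev_bufs, bindings = {}, {}, []
for name in engine:
    dtype = trt.nptype(engine.get_binding_dtype(name))
    host = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(name)), dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs[name], dev_bufs[name] = host, dev
    bindings.append(int(dev))

input_name = next(n for n in engine if engine.binding_is_input(n))

def infer(chw_batch):
    """Run one preprocessed image through the engine; return both outputs."""
    np.copyto(host_bufs[input_name], chw_batch.ravel())
    cuda.memcpy_htod(dev_bufs[input_name], host_bufs[input_name])
    context.execute(batch_size=1, bindings=bindings)
    outs = {}
    for name in engine:
        if not engine.binding_is_input(name):
            cuda.memcpy_dtoh(host_bufs[name], dev_bufs[name])
            outs[name] = host_bufs[name].copy()
    return outs  # keys: 'output_cov/Sigmoid', 'output_bbox/BiasAdd'
```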

Is there any documentation on the network structure of detectnet_v2? I need to extract bboxes/labels from the TensorRT outputs.

I think https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps can help you.

2. Detectnet_v2

The model has the following two outputs:

  • output_cov/Sigmoid : a [batchSize, Class_Num, gridcell_h, gridcell_w] tensor containing, for each grid cell and class, the coverage confidence that the cell is covered by an object of that class
  • output_bbox/BiasAdd : a [batchSize, Class_Num * 4, gridcell_h, gridcell_w] tensor containing, for each grid cell and class, the normalized image coordinates of the object's bounding box, (x1, y1) top left and (x2, y2) bottom right, expressed relative to the grid-cell center (see the decode sketch below)
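
Given that layout, the decode is: keep every grid cell whose coverage exceeds a threshold, then un-normalize its four bbox values against that cell's center. A sketch assuming the stride (16) and bbox normalization (35.0) used in the PeopleNet example; both depend on your training spec, and the input resolution below is again just an example:

```python
import numpy as np

STRIDE = 16      # assumption: DetectNet_v2 default downsampling factor
BOX_NORM = 35.0  # assumption: bbox normalization from the PeopleNet example
MODEL_W, MODEL_H = 1248, 384  # assumption: your model's input resolution
GRID_W, GRID_H = MODEL_W // STRIDE, MODEL_H // STRIDE

# Grid-cell centers, pre-divided by BOX_NORM so the decode below is a
# single multiply-add per coordinate.
centers_x = (np.arange(GRID_W) * STRIDE + 0.5) / BOX_NORM
centers_y = (np.arange(GRID_H) * STRIDE + 0.5) / BOX_NORM

def decode(cov, bbox, threshold=0.4):
    """cov: [num_classes, GRID_H, GRID_W]; bbox: [num_classes * 4, GRID_H, GRID_W].
    Returns (class_id, score, x1, y1, x2, y2) tuples in input-image pixels."""
    num_classes = cov.shape[0]
    bbox = bbox.reshape(num_classes, 4, GRID_H, GRID_W)
    dets = []
    for c in range(num_classes):
        ys, xs = np.where(cov[c] >= threshold)
        for y, x in zip(ys, xs):
            o1, o2, o3, o4 = bbox[c, :, y, x]
            x1 = (o1 - centers_x[x]) * -BOX_NORM
            y1 = (o2 - centers_y[y]) * -BOX_NORM
            x2 = (o3 + centers_x[x]) * BOX_NORM
            y2 = (o4 + centers_y[y]) * BOX_NORM
            dets.append((c, float(cov[c, y, x]), x1, y1, x2, y2))
    return dets
```

The flat host buffers from TensorRT need reshaping first, e.g. `cov = outs['output_cov/Sigmoid'].reshape(num_classes, GRID_H, GRID_W)` and likewise for the bbox tensor. Note that DetectNet_v2 has no NMS layer in the network, so the raw boxes above still need clustering before use: tlt-infer applies DBSCAN, but a plain NMS (e.g. cv2.dnn.NMSBoxes) also works.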