Inference of TAO-trained YOLOv3 model (KITTI format) using ONNX Runtime on my PC

I have trained a YOLOv3 model with TAO on a KITTI-format dataset, removed the BatchedNMSDynamic_TRT layer, and then exported it to ONNX. But during inference with ONNX Runtime, the output shows many bounding boxes. What post-processing steps should I include? I have also attached my code below.


txt11.txt (5.1 KB)

I suggest you first debug the ONNX file without any trimming. After exporting, you can generate a TensorRT engine with tao-deploy. TAO Deploy provides code to generate the TensorRT engine and run inference. See tao_deploy/nvidia_tao_deploy/cv/yolo_v3/scripts/inference.py at main · NVIDIA/tao_deploy · GitHub.
You can run inference against one test image. If the inference is correct, please save the input to an .npy file.
This .npy file will be the golden input for debugging your modified ONNX file.
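
On the post-processing question itself: with BatchedNMSDynamic_TRT removed, confidence thresholding and per-class non-maximum suppression have to be done on the host side, which is why duplicate boxes show up. Below is a minimal NumPy sketch of that step. It assumes the raw network outputs have already been decoded into corner-format [x1, y1, x2, y2] boxes with per-box scores and class ids; the function names and thresholds are illustrative, not part of the TAO API.

import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    # Greedy NMS over corner-format boxes [x1, y1, x2, y2].
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Overlap of the kept box with all remaining boxes.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Drop boxes that overlap the kept box too much.
        order = order[1:][iou <= iou_threshold]
    return keep

def postprocess(boxes, scores, class_ids, conf_threshold=0.3, iou_threshold=0.5):
    # Discard low-confidence detections, then run NMS per class.
    mask = scores >= conf_threshold
    boxes, scores, class_ids = boxes[mask], scores[mask], class_ids[mask]
    detections = []
    for c in np.unique(class_ids):
        idx = np.where(class_ids == c)[0]
        for k in nms(boxes[idx], scores[idx], iou_threshold):
            detections.append((boxes[idx[k]], scores[idx[k]], c))
    return detections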

You can use polygraphy to debug both the ONNX file and the TensorRT engine.

For ONNX Runtime:
$ /usr/local/bin/polygraphy run xxx.onnx --onnxrt --data-loader-script data_loader.py --save-inputs inputs.json --onnx-outputs mark all --save-outputs onnx_output.json

For the TensorRT engine:
$ /usr/local/bin/polygraphy run xxx.onnx --trt --data-loader-script data_loader.py --trt-outputs mark all --save-outputs trt_output.json --save-engine test.engine
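
Once both runs are saved, the two JSON files can be compared against each other. One way to do this is a second run that reuses the captured inputs and compares against the saved ONNX Runtime results (note that per-layer names rarely match between ONNX Runtime and TensorRT, so start by comparing the final network outputs):

$ /usr/local/bin/polygraphy run xxx.onnx --trt --load-inputs inputs.json --load-outputs onnx_output.json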

BTW, to save an .npy file:

import numpy as np

# infer_batch is the preprocessed input tensor fed to the network
with open('xxx.npy', 'wb') as f:
    np.save(f, infer_batch)

And data_loader.py is as below. Note that polygraphy expects load_data() to yield feed dicts mapping input tensor names to arrays:

import numpy as np

def load_data():
    with open('xxx.npy', 'rb') as f:
        data = np.load(f)
    # Replace "Input" with your model's actual input tensor name.
    yield {"Input": data}
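
The key in the feed dict must match the model's input tensor name. If you are unsure what it is, polygraphy can print the model's inputs and outputs:

$ /usr/local/bin/polygraphy inspect model xxx.onnx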

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.