ONNX YOLO: config for deepstream-image-meta-test

Please provide the following information when requesting support.

• Hardware: Jetson Orin Nano 8 GB
• Network Type: YOLOv4-tiny
• TLT Version: nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5
• Training spec file: yolo_v4_tiny_retrain_kitti_seq.txt (1.9 KB)
• How to reproduce: I have trained a YOLOv4-tiny ONNX model, and it works with tao infer. How can I run inference with deepstream-image-meta-test, and what should my pgie_config.txt look like? The one I found references libnvds_infercustomparser_tlt.so, but that .so is built with TensorRT OSS. Since I am on DeepStream 6.3, do I still need libnvds_infercustomparser_tlt.so, and how can I build it?

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Please run with the official deepstream_tao_apps: GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream

The lib is now libnvds_infercustomparser_tao.so. See https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/blob/master/configs/nvinfer/yolov4_tao/pgie_yolov4_tao_config.txt#L48
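If you need to build libnvds_infercustomparser_tao.so yourself, the sources live in the post_processor directory of the deepstream_tao_apps repo. A rough sketch (the install path and CUDA version are assumptions for a Jetson on JetPack 5.x with DeepStream 6.3; adjust to your setup):

```shell
# Hedged build sketch, NOT an official procedure.
# Assumes DeepStream 6.3 is installed at the default location and
# CUDA is available on the device; check nvcc --version first.
git clone https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git
cd deepstream_tao_apps/post_processor
export CUDA_VER=11.4   # example value for JetPack 5.x; yours may differ
make                   # should produce libnvds_infercustomparser_tao.so
```

Point custom-lib-path in your nvinfer config at the resulting .so.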

For the pgie config file, please refer to https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/blob/master/configs/nvinfer/yolov4_tao/pgie_yolov4_tao_config.txt.
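For orientation, the relevant keys look roughly like the fragment below. This is a hedged sketch, not a verified config: the model filename, label file, class count, and output blob name are placeholders you must replace with the values from your own training run and from the reference config linked above.

```ini
[property]
# Placeholder paths -- substitute your own exported model and labels.
onnx-file=yolov4_tiny.onnx
labelfile-path=labels.txt
batch-size=1
network-mode=2                  ; 0=FP32, 1=INT8, 2=FP16
num-detected-classes=3          ; set to your number of classes
# Custom TAO bbox parser (names taken from the reference config above).
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=./libnvds_infercustomparser_tao.so
```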

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.