Please provide the following information when requesting support.
• Hardware (Jetson Orin Nano 8 GB)
• Network Type (YOLOv4-tiny)
• TLT Version (nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5)
• Training spec file: yolo_v4_tiny_retrain_kitti_seq.txt (1.9 KB)
• How to reproduce: I have trained a YOLOv4-tiny ONNX model, which I can run with `tao infer`. How can I run inference with deepstream-image-meta-test, and what should my pgie_config.txt contain? The example config I found references libnvds_infercustomparser_tlt.so, but that .so is built with TensorRT OSS. Since I am on DeepStream 6.3, do I still need libnvds_infercustomparser_tlt.so, and if so, how do I build it?
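For context, this is the kind of pgie config I am trying to adapt. It is a minimal sketch modeled on the TAO YOLOv4 sample configs in the NVIDIA-AI-IOT/deepstream_tao_apps repository; the file paths, label file, and class count are placeholders for my setup, and the parser function name and custom-lib path are my assumptions taken from those samples, not something I have verified:

```
[property]
gpu-id=0
# Preprocessing used by the TAO YOLOv4 samples (BGR, mean subtraction)
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
# Placeholder paths for my exported model and labels
onnx-file=./yolov4_tiny.onnx
labelfile-path=./labels.txt
batch-size=1
# 2 = FP16 on Jetson
network-mode=2
# Placeholder: set to the number of classes in my training spec
num-detected-classes=3
interval=0
gie-unique-id=1
# NMS is done inside the TAO model's BatchedNMS plugin
cluster-mode=2
output-blob-names=BatchedNMS
# Assumed parser from deepstream_tao_apps post_processor
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=./libnvds_infercustomparser_tao.so

[class-attrs-all]
pre-cluster-threshold=0.3
```

My understanding is that the custom parser library is built from the post_processor directory of deepstream_tao_apps with a plain `make` against the installed DeepStream SDK, but I would like confirmation that TensorRT OSS is no longer required for this on DeepStream 6.3.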