YOLOv4 on TAO gives BatchedNMS error

Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc)
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc)
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here)
• Training spec file (if you have one, please share it here)
• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.)

```
+--------------------+---------+------------------------------------------------------------------------------+
| Model              | Version | Status                                                                       |
+--------------------+---------+------------------------------------------------------------------------------+
| tao_yolo_v4_model  | 1       | UNAVAILABLE: Internal: onnx runtime error 10: Load model from               |
|                    |         | /models/tao_yolo_v4_model/1/model.onnx failed: This is an invalid model.    |
|                    |         | In Node, ("BatchedNMS_N", BatchedNMSDynamic_TRT, "", -1) :                  |
|                    |         | ("box": tensor(float), "cls": tensor(float),) ->                            |
|                    |         | ("BatchedNMS": tensor(int32), "BatchedNMS_1": tensor(float),                |
|                    |         | "BatchedNMS_2": tensor(float), "BatchedNMS_3": tensor(float),) ,            |
|                    |         | Error: No Op registered for BatchedNMSDynamic_TRT with domain_version of 12 |
+--------------------+---------+------------------------------------------------------------------------------+
```

This happens when we take our custom-trained YOLOv4 model and try to load it on the Triton server.

The same error appears when the model is exported with opset 15:

```
UNAVAILABLE: Internal: onnx runtime error 10: Load model from /models/tao_yolo_v4_model/1/model.onnx failed: This is an invalid model. In Node, ("BatchedNMS_N", BatchedNMSDynamic_TRT, "", -1) : ("box": tensor(float), "cls": tensor(float),) -> ("BatchedNMS": tensor(int32), "BatchedNMS_1": tensor(float), "BatchedNMS_2": tensor(float), "BatchedNMS_3": tensor(float),) , Error: No Op registered for BatchedNMSDynamic_TRT with domain_version of 15
```

Please generate a TensorRT engine (i.e., model.plan) instead of serving the ONNX file directly. Refer to GitHub - NVIDIA-AI-IOT/tao-toolkit-triton-apps: Sample app code for deploying TAO Toolkit trained models to Triton.
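For context: ONNX Runtime fails here because BatchedNMSDynamic_TRT is a TensorRT plugin op, not a standard ONNX operator, so Triton's onnxruntime backend cannot resolve the node. TensorRT itself can, once libnvinfer_plugin is registered. Below is a minimal sketch of building model.plan from the exported ONNX with the TensorRT Python API; this assumes TensorRT 8.4+ (older releases use `config.max_workspace_size` instead of `set_memory_pool_limit`), and the file paths and 1 GiB workspace are placeholders. Note that an engine is only valid on the GPU architecture and TensorRT version it was built with.

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

# Register TensorRT's built-in plugins. BatchedNMSDynamic_TRT lives in
# libnvinfer_plugin, which is exactly why ONNX Runtime cannot resolve it.
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, TRT_LOGGER)

# Placeholder path; point this at the ONNX file exported by TAO.
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GiB
# config.set_flag(trt.BuilderFlag.FP16)  # optional, if the GPU supports it

serialized = builder.build_serialized_network(network, config)
if serialized is None:
    raise SystemExit("Engine build failed")

# Drop the plan into the Triton model repository in place of the ONNX file.
with open("model.plan", "wb") as f:
    f.write(serialized)
```

The same conversion can be done from the command line with `trtexec --onnx=model.onnx --saveEngine=model.plan`.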

Just wanted to confirm: does this solution work for YOLOv4 on an AGX Orin developer kit?

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Yes, we can run inference using the TensorRT engine on the Orin device.
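One caveat: since TensorRT engines are tied to the GPU and TensorRT version they were built with, model.plan should be generated on the Orin itself rather than copied over from an x86 machine. For reference, a hypothetical Triton config.pbtxt for serving the engine might look like the sketch below. The four output tensor names and types come from the error log above; the input name, resolution, and the 200-detection limit are assumptions based on typical TAO YOLOv4 exports and should be matched to your own spec file.

```
name: "tao_yolo_v4_model"
platform: "tensorrt_plan"
max_batch_size: 1
input [
  {
    name: "Input"            # assumed TAO YOLOv4 input tensor name
    data_type: TYPE_FP32
    dims: [ 3, 384, 1248 ]   # assumed training resolution; match your spec file
  }
]
output [
  { name: "BatchedNMS",   data_type: TYPE_INT32, dims: [ 1 ] },      # num detections
  { name: "BatchedNMS_1", data_type: TYPE_FP32,  dims: [ 200, 4 ] }, # boxes
  { name: "BatchedNMS_2", data_type: TYPE_FP32,  dims: [ 200 ] },    # scores
  { name: "BatchedNMS_3", data_type: TYPE_FP32,  dims: [ 200 ] }     # class IDs
]
```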

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.