I’ve been working on training a YOLOv4 network using my own dataset and I know that I can export an onnx file using the command “tao model yolo_v4 export”.
However, I wonder whether an exported onnx file can be correctly used by the onnxruntime module?
I’d like to use an exported onnx file to label the unannotated data so that I can increase my training data faster.
A simple code snippet looks like this:
import onnxruntime as ort
detector_onnx = "exported_file.onnx"
detector = ort.InferenceSession(detector_onnx)
I encountered the following error message as a result:
onnxruntime.capi.onnxruntime_pybind11_state.InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : Load model from pretrained_model\Smoke/Face_and_hand.onnx failed:This is an invalid model. In Node, ("BatchedNMS_N", BatchedNMSDynamic_TRT, "", -1) : ("box": tensor(float),"cls": tensor(float),) -> ("BatchedNMS": tensor(int32),"BatchedNMS_1": tensor(float),"BatchedNMS_2": tensor(float),"BatchedNMS_3": tensor(float),) , Error No Op registered for BatchedNMSDynamic_TRT with domain_version of 12
I wrote a tool that successfully trims the model and have been spending some time trying to do inference on the trimmed model via onnxruntime. However, a few issues remain.
I traced the source code of inference.py to learn how images are preprocessed when I run tao yolov4 inference, and I would like to write my own inference tool on top of onnxruntime.
import cv2
import numpy as np
import os
import onnxruntime as rt
The output of the function image_preprocess(), i.e. image_data, is exactly in the NxCxHxW dimension.
I can feed the preprocessed image into sess.run and it does give me results.
But I keep getting the following warning messages. Are they a sign of something going wrong?
[W:onnxruntime:, execution_frame.cc:857 onnxruntime::ExecutionFrame::VerifyOutputSizes] Expected shape from model of {} does not match actual shape of {1,10647,1,4} for output box
[W:onnxruntime:, execution_frame.cc:857 onnxruntime::ExecutionFrame::VerifyOutputSizes] Expected shape from model of {} does not match actual shape of {1,10647,2,1} for output cls
Output shape: [(1, 10647, 1, 4), (1, 10647, 2, 1)]