Do you want me to convert the ONNX model to "tensorRT_model.bin" and try to validate that with our Python code, or simply convert ONNX to a TensorRT engine (".engine" extension) and verify that?
Yes. Convert the ONNX model to a TRT model and use the TensorRT APIs to perform inference, then compare the outputs for the same input image.
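For reference, a minimal sketch of such a comparison with the TensorRT Python API and PyCUDA (the engine path, the 640x640 input shape, and the 1x84x8400 output shape are assumptions based on YOLOv8n defaults; the TRT 8 execute_v2 binding-list style is used):

import numpy as np
import tensorrt as trt
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context
import pycuda.driver as cuda

# Deserialize the engine built by trtexec (path is illustrative)
logger = trt.Logger(trt.Logger.WARNING)
with open("/home/nvidia/yolo8n.trt", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Reuse the exact buffer dumped from the Python repo; shapes are assumptions
input_tensor = np.fromfile("input.dat", dtype=np.float32).reshape(1, 3, 640, 640)
output = np.empty((1, 84, 8400), dtype=np.float32)

# Copy input to the device, run inference, copy the result back
d_input = cuda.mem_alloc(input_tensor.nbytes)
d_output = cuda.mem_alloc(output.nbytes)
cuda.memcpy_htod(d_input, input_tensor)
context.execute_v2([int(d_input), int(d_output)])
cuda.memcpy_dtoh(output, d_output)
print(output)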
I had already converted ONNX to TRT (.engine extension) and validated it. The behaviour is the same for ONNX and TRT.
The issue is observed when I convert ONNX to tensorRT_model.bin using trtexec and use it with sample_object_detector_tracker.
I have fed the same input to both the TRT model and your repo. I added the lines below in the prepare_input() method of ONNX-YOLOv8-Object-Detection/yolov8/YOLOv8.py to dump the input buffer data:
input_tensor = input_img[np.newaxis, :, :, :].astype(np.float32)  # add batch dim, cast to FP32 (NCHW)
input_tensor.tofile("input.dat")  # dump raw bytes for trtexec --loadInputs
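A quick way to sanity-check the dump (a sketch meant to sit in the same method; the 1x3x640x640 shape is an assumption based on the default YOLOv8 input size):

# Reload the raw dump and confirm a byte-exact round trip
loaded = np.fromfile("input.dat", dtype=np.float32).reshape(1, 3, 640, 640)
assert np.array_equal(loaded, input_tensor)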
Then I fed the same buffer as input to the TRT model using trtexec:
/usr/src/tensorrt/bin/trtexec --onnx=/home/nvidia/yolov8/ONNX-YOLOv8-Object-Detection/yolov8n.onnx --saveEngine=/home/nvidia/yolo8n.trt
/usr/src/tensorrt/bin/trtexec --loadInputs='images:input.dat' --loadEngine=/home/nvidia/yolo8n.trt --dumpOutput
and noticed that the TRT model outputs match, apart from small differences in the decimal places.
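To put a number on those differences, a comparison helper can be used (a sketch; onnx_out and trt_out are hypothetical arrays holding the repo's output and the values parsed from trtexec --dumpOutput):

import numpy as np

def compare_outputs(onnx_out: np.ndarray, trt_out: np.ndarray) -> None:
    # Report worst-case absolute error and a tolerance-based verdict
    print("max abs diff:", np.max(np.abs(onnx_out - trt_out)))
    print("allclose:", np.allclose(onnx_out, trt_out, rtol=1e-3, atol=1e-4))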
def inference(self, input_tensor):
    start = time.perf_counter()
    outputs = self.session.run(self.output_names, {self.input_names[0]: input_tensor})
    # Dump every value of the (1, 84, 8400) output tensor so it can be
    # compared element-by-element against trtexec --dumpOutput
    print("output")
    for i in range(84):
        for j in range(8400):
            print(outputs[0][0][i][j])
Please double-check the postprocessing steps.
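For reference, the usual decode for YOLOv8's (1, 84, 8400) output looks roughly like this (a minimal sketch, not the repo's exact code; rows are cx, cy, w, h plus 80 class scores, and the thresholds are illustrative):

import cv2
import numpy as np

def postprocess(output, conf_thres=0.5, iou_thres=0.5):
    # (1, 84, 8400) -> (8400, 84): one row per candidate box
    preds = np.squeeze(output).T
    # Best class score per box, then drop low-confidence candidates
    scores = np.max(preds[:, 4:], axis=1)
    keep = scores > conf_thres
    preds, scores = preds[keep], scores[keep]
    class_ids = np.argmax(preds[:, 4:], axis=1)
    # Convert center-format (cx, cy, w, h) to top-left (x, y, w, h)
    boxes = preds[:, :4].copy()
    boxes[:, 0] -= boxes[:, 2] / 2
    boxes[:, 1] -= boxes[:, 3] / 2
    # Non-maximum suppression; returned indices refer to the filtered arrays
    idxs = cv2.dnn.NMSBoxes(boxes.tolist(), scores.tolist(), conf_thres, iou_thres)
    idxs = np.array(idxs).reshape(-1)
    return boxes[idxs], scores[idxs], class_ids[idxs]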