Description
According to the following snippet:
Custom trained SSD inception model in tensorRT c++ version - #14 by AastaLLL
it should be possible to convert OpenCV cv::Mat images so that the TensorRT engine can work with them. However, my ONNX models do not produce the same output when run in TensorRT as when run with onnxruntime (only the shape is relevant; the colors differ due to different plot methods):
onnxruntime:
tensorRT:
Environment
TensorRT Version: 7.1.3-1
GPU Type: Jetson Nano
Runtime: nvcr.io/nvidia/l4t-base:r32.2
Steps To Reproduce
Here is the code I used to process the input images for TensorRT:
// Prepare input according to:
// - https://forums.developer.nvidia.com/t/custom-trained-ssd-inception-model-in-tensorrt-c-version/143048/14
float* image_1 = static_cast<float*>(buffers.getHostBuffer("image_1:0"));
float* image_2 = static_cast<float*>(buffers.getHostBuffer("image_2:0"));
cv::Vec3b bgr;
unsigned i, j, k, volImg, volChl;
volChl = inputH * inputW;  // elements per channel plane
volImg = 3 * volChl;       // elements per image (3 channel planes)
for (i = 0; i < batchSize; i++)
{
    for (j = 0; j < inputH; j++)
    {
        for (k = 0; k < inputW; k++)
        {
            // Write planar CHW, converting BGR (cv::Mat order) to RGB
            bgr = prevImage.at<cv::Vec3b>(j, k);
            image_1[i * volImg + 0 * volChl + j * inputW + k] = float(bgr[2]);
            image_1[i * volImg + 1 * volChl + j * inputW + k] = float(bgr[1]);
            image_1[i * volImg + 2 * volChl + j * inputW + k] = float(bgr[0]);
            bgr = currImage.at<cv::Vec3b>(j, k);
            image_2[i * volImg + 0 * volChl + j * inputW + k] = float(bgr[2]);
            image_2[i * volImg + 1 * volChl + j * inputW + k] = float(bgr[1]);
            image_2[i * volImg + 2 * volChl + j * inputW + k] = float(bgr[0]);
        }
    }
}
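To sanity-check the index arithmetic of the loop above, here is a minimal pure-Python sketch of the same planar CHW (RGB) fill on a hypothetical 2x2 image (the pixel values are made up for illustration; no OpenCV involved):

```python
# Pure-Python sketch of the CHW index arithmetic used in the C++ loop above.
inputH, inputW = 2, 2
volChl = inputH * inputW   # elements per channel plane
volImg = 3 * volChl        # elements per image (3 channels)

# HWC/BGR pixels, as cv::Mat stores them: image[row][col] = (B, G, R)
image = [[(10, 20, 30), (11, 21, 31)],
         [(12, 22, 32), (13, 23, 33)]]

buf = [0.0] * volImg
for j in range(inputH):
    for k in range(inputW):
        b, g, r = image[j][k]
        buf[0 * volChl + j * inputW + k] = float(r)  # R plane first
        buf[1 * volChl + j * inputW + k] = float(g)  # then G plane
        buf[2 * volChl + j * inputW + k] = float(b)  # then B plane

# Planar result: all R values, then all G, then all B
print(buf)
# [30.0, 31.0, 32.0, 33.0, 20.0, 21.0, 22.0, 23.0, 10.0, 11.0, 12.0, 13.0]
```

The point of the sketch is that this fill produces a channel-first buffer, which only matches the model if the ONNX graph actually expects channel-first input.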
And here is the Python code that produces the correct output with the same ONNX model:
import numpy as np
import onnx
import onnxruntime
import cv2
input_1 = 'target/data/1.jpg'
input_2 = 'target/data/2.jpg'
size = 128
session = onnxruntime.InferenceSession('m1.onnx', None)
input_name_1 = session.get_inputs()[0].name
input_name_2 = session.get_inputs()[1].name
output_name = session.get_outputs()[0].name
print(input_name_1)
print(input_name_2)
print(output_name)
prev = cv2.imread(input_2)
curr = cv2.imread(input_1)
curr = cv2.resize(curr, dsize=(size, size), interpolation=cv2.INTER_AREA)
prev = cv2.resize(prev, dsize=(size, size), interpolation=cv2.INTER_AREA)
# adds a batch dimension in place; data stays in HWC (channel-last) order
curr.resize((1, size, size, 3))
prev.resize((1, size, size, 3))
curr = np.array(curr).astype('float32')
prev = np.array(prev).astype('float32')
result = session.run([output_name], {input_name_1:curr, input_name_2:prev})
mask = result[0].reshape(size, size)
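Note that this script hands onnxruntime (1, H, W, 3) arrays straight from cv2.imread, i.e. NHWC with BGR channels, while the C++ loop fills a planar channel-first buffer. A small sketch with a hypothetical 2x2 image (plain Python instead of NumPy) shows that the two flattenings place the same pixels at different offsets:

```python
H, W = 2, 2
# image[row][col] = (B, G, R), as cv2.imread returns it
image = [[(10, 20, 30), (11, 21, 31)],
         [(12, 22, 32), (13, 23, 33)]]

# NHWC flattening: pixels stay interleaved, exactly the cv::Mat memory order
nhwc = [float(c) for row in image for px in row for c in px]

# CHW flattening: one full channel plane after another
chw = [float(image[j][k][c]) for c in range(3) for j in range(H) for k in range(W)]

print(nhwc[:6])  # [10.0, 20.0, 30.0, 11.0, 21.0, 31.0]
print(chw[:6])   # [10.0, 11.0, 12.0, 13.0, 20.0, 21.0]
```

So if the engine's input binding is channel-last like the onnxruntime feed, a channel-first fill (with or without the BGR-to-RGB swap) would scramble the data even though the buffer size is identical.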
Any information about how to correctly put data into the TensorRT engine buffers would be appreciated!