Description
Environment
TensorRT Version: 8.0.1-1+cuda11.3
GPU Type: RTX 3090
Nvidia Driver Version: 510.85.02 (CUDA Version: 11.6)
Operating System + Version: Ubuntu 20.04
Steps To Reproduce
I exported a TAO UNet binary semantic segmentation model to run with TensorRT, with 1280x704 grayscale input images.
Before loading the image frames into the CUDA buffer to feed the inference engine, I normalize each frame as follows:
cv::Mat image(cv::Size(w, h), CV_8UC3, (void*)video.get_data(), cv::Mat::AUTO_STEP);
cv::Mat imageGray, imageFloat;
cv::cvtColor(image, imageGray, cv::COLOR_BGR2GRAY);
cv::resize(imageGray, imageGray, cv::Size(1280, 704));
// Normalize the resized grayscale frame, not the original 3-channel image,
// and write to a CV_32F destination: with dtype -1 the result stays 8-bit
// unsigned and all negative values are clipped to 0.
cv::subtract(imageGray, cv::Scalar(127.5), imageFloat, cv::noArray(), CV_32F);
cv::divide(imageFloat, cv::Scalar(127.5), imageFloat);
The result is then copied into the host data buffer, but the model is not producing any usable segmentation classes…
Is the problem the normalization of the image, or the order in which the cv::Mat data is loaded into the host data buffer?
I have another multiclass semantic segmentation model, fed color images, that works well with a similar architecture.
Many thanks!!