sampleUffMaskRCNN - Get input image from OpenCV (cv::Mat) instead of PPM file

Hi,

I’m trying to change the example code “sampleUffMaskRCNN” to run inference on an OpenCV input image (cv::Mat) instead of a PPM file.

The code runs, but it doesn’t return any detections when the input image comes from a cv::Mat. If I load the image from a PPM file, it works well.

My processInput function is this (based on Custom trained SSD inception model in tensorRT c++ version - #14 by AastaLLL):

bool MaskRCNN::processInput(const samplesCommon::BufferManager& buffers)
{
	try
	{
		const int inputC = mInputDims.d[0];
		const int inputH = mInputDims.d[1];
		const int inputW = mInputDims.d[2];
		const int batchSize = MaskRCNNConfig::BATCH_SIZE;

		cv::Mat origin_image = cv::imread(MaskRCNNConfig::IMAGE_TEST);
		if (!origin_image.data)
		{
			std::cout << "Error: could not load image." << std::endl;
			return false;
		}

		// cv::Size is (width, height), and the interpolation flag is the
		// 6th argument of cv::resize, not the 4th
		cv::Mat resize_image;
		cv::resize(origin_image, resize_image, cv::Size(inputW, inputH), 0, 0, cv::INTER_CUBIC);

		// Fill the host buffer in CHW order, converting BGR -> RGB and
		// scaling each pixel to [-1, 1]
		float* hostDataBuffer = static_cast<float*>(buffers.getHostBuffer(MaskRCNNConfig::MODEL_INPUT));
		const int volChl = inputH * inputW;       // stride between channels
		const int volImg = inputC * volChl;       // stride between batch images
		for (int i = 0; i < batchSize; ++i)
		{
			for (int j = 0; j < inputH; ++j)
			{
				for (int k = 0; k < inputW; ++k)
				{
					const cv::Vec3b bgr = resize_image.at<cv::Vec3b>(j, k);
					hostDataBuffer[i * volImg + 0 * volChl + j * inputW + k] = (2.0f / 255.0f) * float(bgr[2]) - 1.0f;
					hostDataBuffer[i * volImg + 1 * volChl + j * inputW + k] = (2.0f / 255.0f) * float(bgr[1]) - 1.0f;
					hostDataBuffer[i * volImg + 2 * volChl + j * inputW + k] = (2.0f / 255.0f) * float(bgr[0]) - 1.0f;
				}
			}
		}

		return true;
	}
	catch (const std::exception& e)
	{
		return false;
	}
}

Thanks,
Luis Silva

Hi,
Please refer to the installation steps at the link below in case you are missing anything:
https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html
However, the suggested approach is to use the TRT NGC containers to avoid any system-dependency issues:
https://ngc.nvidia.com/catalog/containers/nvidia:tensorrt

In order to run the Python samples while using the NGC container, make sure the TRT Python packages are installed:
/opt/tensorrt/python/python_setup.sh

If you are trying to run a custom model, please share your model and script with us so that we can assist you better.
Thanks!

Hi NVES,

Thanks for the quick response.

The TensorRT samples are working; it’s not a problem with the installation.

In the “sampleUffMaskRCNN” example, the images used for inference have the .ppm extension. Instead of loading images from a .ppm file, I want to load them from a cv::Mat.

I’m trying to change the “processInput” function to achieve that goal.

The only changes I’ve made to the “sampleUffMaskRCNN” example are the function in the first post and the OpenCV includes:

//OpenCV
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/core/utils/logger.hpp>
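Since the PPM path works and the cv::Mat path does not, one way to localize the difference is to run the same image through both versions of processInput and diff the two host buffers before enqueueing. A small helper for that comparison (the function name is illustrative, not part of the sample):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Compare two preprocessed input buffers element-wise and return the first
// index where they diverge beyond a tolerance, or -1 if they match
// everywhere. From the index, i / volChl gives the channel and the
// remainder gives the pixel position, which helps spot channel-order or
// normalization mismatches.
long firstMismatch(const std::vector<float>& a, const std::vector<float>& b,
                   float tol = 1e-4f)
{
    assert(a.size() == b.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        if (std::fabs(a[i] - b[i]) > tol)
            return static_cast<long>(i);
    return -1;
}
```

A mismatch at index 0 already, for example, would point at normalization; a mismatch only in the second or third channel plane would point at BGR/RGB ordering.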

Thanks,
Luis

Hi @luissilva.lfrs,

Hope following will help you.
https://docs.opencv.org/4.5.2/d4/da8/group__imgcodecs.html

If you need further assistance, we recommend you post on an OpenCV-related platform to get better help.

Thank you.