Accuracy drops when using FP32 in Faster R-CNN

Hi,

I am using Faster R-CNN in FP32 mode with TensorRT 3.0.

However, compared to the Caffe framework, the accuracy drops by about 10%.

As far as I know, FP32 is full precision, and Caffe also uses FP32 by default, so I have no idea why the accuracy differs so much from Caffe's.

I can’t find any accuracy benchmark for Faster R-CNN on TensorRT.

So I am not sure whether this is a problem with TensorRT.

Thanks.

Hi,

Are you using our Faster R-CNN sample, located at /usr/src/tensorrt/samples/?

This sample only supports PPM images; other image formats may lead to incorrect decoding results.
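
For reference, a PPM (P6) file is just a short ASCII header followed by raw interleaved 8-bit RGB pixels. A minimal reader sketch, assuming an 8-bit binary PPM without comment lines (illustrative only, not the sample's exact code):

#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// read a binary (P6) PPM into an interleaved 8-bit RGB buffer
bool readPPM(const std::string& path, std::vector<uint8_t>& rgb, int& w, int& h)
{
  std::ifstream file(path, std::ios::binary);
  std::string magic;
  int maxVal = 0;
  file >> magic >> w >> h >> maxVal;
  if (!file || magic != "P6" || maxVal != 255)
    return false;               // not an 8-bit binary PPM
  file.get();                   // consume the single whitespace after the header
  rgb.resize(3 * w * h);
  file.read(reinterpret_cast<char*>(rgb.data()), rgb.size());
  return static_cast<bool>(file);
}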

Hi,

I am using OpenCV to read JPEG images and then converting them into a planar BGR array.

The code is something like:

#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat img_in = cv::imread("sample.jpg");   // decoded as 8-bit interleaved BGR
img_in.convertTo(img_in, CV_32FC3);

// subtract the per-channel pixel means (B, G, R order)
cv::Mat mean(img_in.rows, img_in.cols, CV_32FC3, cv::Scalar(102.9801f, 115.9465f, 122.7717f));
cv::subtract(img_in, mean, img_in);

// wrap the planar BGR buffer with one cv::Mat header per channel
float* data = new float[3 * img_in.rows * img_in.cols];
float* plane = data;   // keep 'data' pointing at the start of the buffer

std::vector<cv::Mat> input_channels;
for (int i = 0; i < 3; ++i) {
  input_channels.push_back(cv::Mat(img_in.rows, img_in.cols, CV_32FC1, plane));
  plane += img_in.rows * img_in.cols;
}

// split the interleaved image into the three planes backed by 'data'
cv::split(img_in, input_channels);

// forward data to inference
// ...

Is something wrong with the above code?

By the way, have you tested Faster R-CNN with TensorRT on the VOC benchmark? How does the accuracy compare to the paper?

Hi,

We don’t have an accuracy score, since this sample is meant to demonstrate the plugin API.
The results should be similar to the original Faster R-CNN's.
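
For context, a custom layer in TensorRT 3.x implements the nvinfer1::IPlugin interface; the sample uses this for its RPN/ROI-pooling layers. A minimal sketch of a trivial identity plugin (illustrative only, not the sample's actual plugins):

#include <cstring>
#include <cuda_runtime.h>
#include <NvInfer.h>

// a do-nothing plugin that copies its single input to its single output
class IdentityPlugin : public nvinfer1::IPlugin
{
public:
  int getNbOutputs() const override { return 1; }

  nvinfer1::Dims getOutputDimensions(int index, const nvinfer1::Dims* inputs, int nbInputDims) override
  {
    return inputs[0];  // output has the same shape as the input
  }

  void configure(const nvinfer1::Dims* inputDims, int nbInputs,
                 const nvinfer1::Dims* outputDims, int nbOutputs, int maxBatchSize) override
  {
    mCount = 1;
    for (int d = 0; d < inputDims[0].nbDims; ++d)
      mCount *= inputDims[0].d[d];
  }

  int initialize() override { return 0; }
  void terminate() override {}
  size_t getWorkspaceSize(int maxBatchSize) const override { return 0; }

  int enqueue(int batchSize, const void* const* inputs, void** outputs,
              void* workspace, cudaStream_t stream) override
  {
    // device-to-device copy on the given stream
    cudaMemcpyAsync(outputs[0], inputs[0], batchSize * mCount * sizeof(float),
                    cudaMemcpyDeviceToDevice, stream);
    return 0;
  }

  size_t getSerializationSize() override { return sizeof(mCount); }
  void serialize(void* buffer) override { std::memcpy(buffer, &mCount, sizeof(mCount)); }

private:
  size_t mCount{0};
};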

For feeding images into TensorRT, please check our jetson-inference sample for details:
https://github.com/dusty-nv/jetson-inference/blob/master/imageNet.cpp#L295
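
As a rough sketch of the buffer → engine step in TensorRT 3.x (the binding names "data" and "prob" are placeholders, and a real Faster R-CNN engine has additional bindings such as im_info and rois):

#include <cuda_runtime.h>
#include <NvInfer.h>

// copy a host float buffer to the device, run the engine, copy the result back;
// assumes the engine has exactly one input and one output binding
void infer(nvinfer1::ICudaEngine& engine,
           const float* hostInput, size_t inputCount,
           float* hostOutput, size_t outputCount)
{
  nvinfer1::IExecutionContext* context = engine.createExecutionContext();

  const int inputIndex  = engine.getBindingIndex("data");  // placeholder name
  const int outputIndex = engine.getBindingIndex("prob");  // placeholder name

  void* buffers[2] = {nullptr, nullptr};
  cudaMalloc(&buffers[inputIndex],  inputCount  * sizeof(float));
  cudaMalloc(&buffers[outputIndex], outputCount * sizeof(float));

  cudaMemcpy(buffers[inputIndex], hostInput,
             inputCount * sizeof(float), cudaMemcpyHostToDevice);
  context->execute(1 /* batch size */, buffers);
  cudaMemcpy(hostOutput, buffers[outputIndex],
             outputCount * sizeof(float), cudaMemcpyDeviceToHost);

  cudaFree(buffers[inputIndex]);
  cudaFree(buffers[outputIndex]);
  context->destroy();
}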