Hello,
I am trying to run batched inference with my custom Caffe model.
I downloaded this GitHub repository, https://github.com/yankailab/jetson-inference-batch, and it works fine with a standard model like GoogLeNet, but not with mine, so I may have configured something incorrectly.
Let me explain: when I infer with a batch of 1 image, the output is fine, but I noticed that when I infer with a batch of 2 or more images, there is an offset of 18 in the array mOutputs[0].CUDA, even though its length is correct.
I have 6 labels, so the offset is 3 times my number of labels. I also noticed a factor of 3 in the code of the function gpuPreImageNetMeanBatch, which I tried to remove. That made my model work, but then GoogLeNet no longer did.
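To show what I mean by the factor of 3, here is a plain-CPU sketch of what I understand the batched preprocessing in gpuPreImageNetMeanBatch to be doing (my own reconstruction, not the repo's actual kernel; the function name and the RGBA struct are just for illustration):

#include <cstddef>

struct RGBA { float r, g, b, a; };

// My reconstruction of a packed-RGBA -> planar-CHW conversion with
// mean subtraction. The hard-coded "channels = 3" is the factor I
// removed: it sets the stride between consecutive images in the
// planar destination buffer.
void preMeanBatchSketch(const RGBA* in, float* out,
                        int width, int height, int batch,
                        const float mean[3])
{
    const int channels = 3;                  // assumes an RGB network
    const size_t plane  = (size_t)width * height;
    const size_t stride = channels * plane;  // per-image offset in 'out'

    for (int b = 0; b < batch; ++b)
        for (size_t p = 0; p < plane; ++p)
        {
            const RGBA px = in[b * plane + p];
            float* dst = out + b * stride + p;
            dst[0 * plane] = px.r - mean[0];
            dst[1 * plane] = px.g - mean[1];
            dst[2 * plane] = px.b - mean[2];
        }
}

If a network has only 1 input channel, that hard-coded stride of 3 * width * height seemed wrong to me, which is why I tried removing the factor.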
So I assume that GoogLeNet should work fine and that I have set something up incorrectly. I am basically using the source code of the GitHub repo as-is, except for the image loading, which I do with OpenCV.
My images are 80x80 and grayscale; the GoogLeNet images have a different size and 3 or 4 color channels, but I process both with the same code.
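One thing I noticed while writing this up: as far as I know, cv::imread with default flags decodes even a grayscale file into a 3-channel BGR Mat, which is why the same conversion code below runs for both. Keeping a single channel would need an explicit flag, for example:

cv::Mat gray = cv::imread(filename, cv::IMREAD_GRAYSCALE);   // CV_8UC1, single channel

I am not sure whether that matters here.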
My OpenCV loading code:
#include <opencv2/opencv.hpp>

// I get the filename(s) from the command line and do this for each one
cv::Mat image = cv::imread(filename);     // decodes to 8-bit, 3-channel BGR

// convert packed BGR to packed RGBA (cvtColor allocates the destination)
cv::Mat imgRGBA;
cv::cvtColor(image, imgRGBA, cv::COLOR_BGR2RGBA);

// widen each channel to 32-bit float
cv::Mat imgFloat;
imgRGBA.convertTo(imgFloat, CV_32F);

// Then I use imgFloat to set the image width/height and to fill cpuPtr,
// like the original loadImageRGBA() function does
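The copy into cpuPtr (elided above) is essentially the following, simplified; the Float4 struct is just a stand-in for CUDA's float4, and cpuPtr is the mapped host pointer allocated by the network:

struct Float4 { float x, y, z, w; };   // stand-in for CUDA's float4

// simplified version of my copy loop, following what the original
// loadImageRGBA() does: one packed float4 pixel per image pixel
Float4* cpu = reinterpret_cast<Float4*>(cpuPtr);
for (int y = 0; y < imgFloat.rows; ++y)
    for (int x = 0; x < imgFloat.cols; ++x)
    {
        const cv::Vec4f px = imgFloat.at<cv::Vec4f>(y, x);
        cpu[y * imgFloat.cols + x] = { px[0], px[1], px[2], px[3] };
    }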
Is this where I am doing something wrong?
I can provide more details; I just don't know which ones would be useful right now.
Thanks in advance.