I am using sampleUffMNIST.cpp as a reference to develop a C++ ROS node that accepts images (a uint8 array of length C*H*W), passes each one into the inference engine generated from a UFF model, and then publishes the output.
I see that the image is being passed here:
buffers[bindingIdxInput] = createMnistCudaBuffer(bufferSizesInput.first, bufferSizesInput.second, run);
and inside createMnistCudaBuffer, the image is read, normalized, and copied to the device here:
assert(eltCount == INPUT_H * INPUT_W);
float* inputs = new float[eltCount];
...
uint8_t fileData[INPUT_H * INPUT_W];
readPGMFile(std::to_string(run) + ".pgm", fileData);
...
for (int i = 0; i < eltCount; i++)
    inputs[i] = 1.0 - float(fileData[i]) / 255.0;
CHECK(cudaMemcpy(deviceMem, inputs, memSize, cudaMemcpyHostToDevice));
The MNIST sample only handles a single-channel image, so the flattening order never comes up. For a three-channel RGB image, in what order should I flatten the pixels into the flat input array — interleaved HWC or planar CHW?