In what format (array dimensions, object) should I give an image to the CNN?

I am using sampleUffMNIST.cpp as a reference to develop a C++ ROS node that accepts an image (a uint8 array of length C*H*W), passes it to the inference engine generated from a UFF model, and then publishes the output.

I see that the image is being passed here:

buffers[bindingIdxInput] = createMnistCudaBuffer(bufferSizesInput.first, bufferSizesInput.second, run);

and inside the function createMnistCudaBuffer, the image is prepared and copied to the device here:

assert(eltCount == INPUT_H * INPUT_W);
float* inputs = new float[eltCount];
...
uint8_t fileData[INPUT_H * INPUT_W];
readPGMFile(std::to_string(run) + ".pgm", fileData);
...
for (int i = 0; i < eltCount; i++)
    inputs[i] = 1.0 - float(fileData[i]) / 255.0;
CHECK(cudaMemcpy(deviceMem, inputs, memSize, cudaMemcpyHostToDevice));

The MNIST sample only handles a single-channel image. In what order should I flatten my RGB image into this one-dimensional array?

Hi,

It's recommended to use the NCHW format, although NHWC is also supported.
Check this topic for more information:
[url]https://devtalk.nvidia.com/default/topic/1036701/jetson-tx2/tensorrt-support-nhwc-model-/post/5267369/#5267369[/url]

Thanks.