How to feed a JPEG image to TensorRT

I have read the samples on how to read a PPM image into TensorRT. What about JPEG images?

Hi,

Please check our jetson-inference example:
[url]https://github.com/dusty-nv/jetson-inference/blob/master/imagenet-console/imagenet-console.cpp#L68[/url]

Thanks.

Hi AastaLLL,

I just want to organize the code like the example:

void* createMnistCudaBuffer(int64_t eltCount, DataType dtype, int run)
{
    /* in that specific case, eltCount == INPUT_H * INPUT_W */
    assert(eltCount == INPUT_H * INPUT_W);
    assert(elementSize(dtype) == sizeof(float));

    size_t memSize = eltCount * elementSize(dtype);
    float* inputs = new float[eltCount];

    /* read PGM file */
    uint8_t fileData[INPUT_H * INPUT_W];
    readPGMFile(std::to_string(run) + ".pgm", fileData);

    /* display the number in an ASCII representation */
    std::cout << "\n\n\n---------------------------" << "\n\n\n" << std::endl;
    for (int i = 0; i < eltCount; i++)
        std::cout << (" .:-=+*#%@"[fileData[i] / 26]) << (((i + 1) % INPUT_W) ? "" : "\n");

    /* initialize the inputs buffer (invert and scale the 8-bit pixels to [0,1]) */
    for (int i = 0; i < eltCount; i++)
        inputs[i] = 1.0 - float(fileData[i]) / 255.0;

    /* copy the host buffer to device memory for TensorRT */
    void* deviceMem = safeCudaMalloc(memSize);
    CHECK(cudaMemcpy(deviceMem, inputs, memSize, cudaMemcpyHostToDevice));

    delete[] inputs;
    return deviceMem;
}

I want to put the JPEG image data into float* data = new float[N * INPUT_C * INPUT_H * INPUT_W];
and then use CHECK(cudaMemcpy(deviceMem, data, memSize, cudaMemcpyHostToDevice));
but I do not know how to put the RGB data into the allocated data memory.

Hi,

JPEG is a compressed image format; you need to decode the image first to get the correct RGB data.
The function you posted is for uncompressed image formats, such as PPM.

If you want to read an image with this function, please convert it from JPEG to PPM first.
If you want to read images in JPEG format directly, please refer to our jetson-inference example, which uses Qt as the decoder.
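For the first option, a command-line tool such as ImageMagick can do the conversion offline (for example, convert image.jpg image.ppm). For the second option, any JPEG decoder works; below is a minimal sketch that uses OpenCV's cv::imread for the decode and then fills a planar CHW float buffer in the same style as createMnistCudaBuffer above. The INPUT_C/INPUT_H/INPUT_W constants and the safeCudaMalloc/CHECK helpers are assumed to come from the sample, and the [0,1] scaling is a placeholder for whatever preprocessing your network actually expects:

#include <opencv2/opencv.hpp>   /* cv::imread decodes JPEG into an 8-bit BGR cv::Mat */
#include <cuda_runtime.h>
#include <cassert>
#include <string>
#include <vector>

/* Sketch only: decode a JPEG, convert interleaved HWC uint8 to planar CHW float,
   and upload it to a device buffer for TensorRT. */
void* createJpegCudaBuffer(const std::string& filename)
{
    cv::Mat img = cv::imread(filename, cv::IMREAD_COLOR);   /* JPEG decode happens here */
    assert(!img.empty());

    cv::resize(img, img, cv::Size(INPUT_W, INPUT_H));        /* match the network input size */
    cv::cvtColor(img, img, cv::COLOR_BGR2RGB);               /* assuming the network expects RGB */

    const size_t eltCount = INPUT_C * INPUT_H * INPUT_W;
    const size_t memSize  = eltCount * sizeof(float);
    std::vector<float> inputs(eltCount);

    /* interleaved HWC uint8 -> planar CHW float in [0,1] */
    for (int c = 0; c < INPUT_C; c++)
        for (int y = 0; y < INPUT_H; y++)
            for (int x = 0; x < INPUT_W; x++)
                inputs[c * INPUT_H * INPUT_W + y * INPUT_W + x] =
                    float(img.at<cv::Vec3b>(y, x)[c]) / 255.0f;

    void* deviceMem = safeCudaMalloc(memSize);
    CHECK(cudaMemcpy(deviceMem, inputs.data(), memSize, cudaMemcpyHostToDevice));
    return deviceMem;
}

The batch dimension N is left out of this sketch; for a batch you would allocate N * eltCount floats and repeat the fill for each image before the single cudaMemcpy.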

Thanks.