TensorRT ONNX Image Classification sample

Hi, I have an ONNX model and I want to classify images using TensorRT. My target OS is Windows and I will code in C++. Unfortunately, the only sample available is the MNIST one: https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/sampleOnnxMNIST .
In my case I have to process an image file (JPG or PNG) instead of a PGM file. I've tried modifying the code to accept an image file but failed, and I couldn't find any image-classification sample that uses TensorRT with ONNX. Is there a guide I can follow?
Thank you

Hi,

Please check our jetson-inference example for feeding JPEG images to TensorRT:
https://github.com/dusty-nv/jetson-inference/blob/master/examples/imagenet-console/imagenet-console.cpp

For sample code showing how to use TensorRT in various use cases, please refer to the link below:

Thanks

@SunilJB Hi, the first link doesn't show how to process an ONNX model. The second link is exactly the same as the link I gave in my first post. The example doesn't explain how to process JPG images and use them with the ONNX model.

Hi,

You should use a third-party library to read the JPEG image into bytes and store it as TensorRT's input during image preprocessing.

Please refer to the link below to work with dynamic shapes:

If you just want to try your model, you can use the trtexec command-line tool: https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec#example-4-running-an-onnx-model-with-full-dimensions-and-dynamic-shapes

Thanks

@SunilJB do you have a recommendation for a third-party library that actually works? I have tried different methods to preprocess the image, but they all give wrong and strange results.

About dynamic shapes: if I can resize the image with OpenCV to the network's input size, then I won't need to use them, right?

Hi,
You can use command-line tools such as ImageMagick to resize JPEG images and convert them to PPM.
If you use off-the-shelf image-processing tools like ImageMagick to preprocess the inputs, make sure the TensorRT inference engine sees the input data in exactly the form it expects.
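For example, after something like `convert input.jpg -resize 224x224 input.ppm` with ImageMagick, the PPM file can be loaded in C++ in the same spirit as the PGM loader in the MNIST sample. A minimal sketch of a binary-PPM (P6) reader, assuming no comment lines in the header and an 8-bit max value:

```cpp
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Minimal reader for binary PPM ("P6"), the format ImageMagick writes
// when converting a JPEG. Returns interleaved RGB bytes; width and
// height are written through the out-parameters. Header comment lines
// ("#...") are not handled in this sketch.
bool readPPM(const std::string& path, std::vector<std::uint8_t>& rgb,
             int& width, int& height)
{
    std::ifstream in(path, std::ios::binary);
    std::string magic;
    int maxVal = 0;
    in >> magic >> width >> height >> maxVal;
    if (!in || magic != "P6" || maxVal != 255)
        return false;
    in.get(); // consume the single whitespace byte after the header
    rgb.resize(static_cast<std::size_t>(width) * height * 3);
    in.read(reinterpret_cast<char*>(rgb.data()),
            static_cast<std::streamsize>(rgb.size()));
    return static_cast<bool>(in);
}
```

The interleaved RGB bytes returned here still need the CHW/float conversion and normalization described earlier in the thread before they can be fed to the engine.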

Thanks