About --loadInputs in trtexec

I have tried --loadInputs as described in the reference below, but it does not appear to be working.

First, I converted the image to a .dat file with the code below.
The image resolution is 4K.

import PIL.Image
import numpy as np

im = PIL.Image.open('/images/01.jpg')
data = np.asarray(im, dtype=np.float32)
data.tofile('/images/01.dat')

Then I ran trtexec.
/usr/src/tensorrt/bin/trtexec --loadEngine=$engine_path/model.engine --fp16 --batch=1 --useSpinWait --loadInputs='Input_tensor_1:/images/01_P8260008.dat'

However, the throughput is almost the same as when no input image is specified.
(It is no different from using a random image.)
I expected the throughput to be lower because I am feeding in 4K images. Am I doing something wrong?

Environment

TensorRT Version: 7.0.0


Hi,
Please refer to the links below related to custom plugin implementation and samples:

While the IPluginV2 and IPluginV2Ext interfaces are still supported for backward compatibility with TensorRT 5.1 and 6.0.x respectively, we recommend that you write new plugins or refactor existing ones to target the IPluginV2DynamicExt or IPluginV2IOExt interfaces instead.

Thanks!

I looked at the documentation but could not resolve the issue.

I want to run a trtexec benchmark given an input image.
I am aware that --loadInputs can be used for this purpose, but could you please tell me how to use it?

Hi,

trtexec --loadInputs expects the input file to contain raw binary data.
In practice, you can save the binary data from a numpy array using array.tofile(file):
https://numpy.org/doc/stable/reference/generated/numpy.ndarray.tofile.html

For example, if the input is an image, you could use a python script like this:

import PIL.Image
import numpy as np
im = PIL.Image.open("input_image.jpg").resize((512, 512))
data = np.asarray(im, dtype=np.float32)
data.tofile("input_tensor.dat")

This converts the image to a .dat file, which is simply a raw binary buffer of float32 values. If your input is not an image, the same approach works for any other data source: load it with numpy, cast it to the data type and shape that the TensorRT engine expects for its input (usually, but not always, float32), and write it out with numpy's .tofile() as above.
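For example, here is a minimal sketch of the non-image case; the file name, shape, and dtype below are placeholders and must be replaced with whatever your engine's input binding actually expects:

import numpy as np

# Hypothetical source data; replace with however you load your own data.
data = np.load("features.npy")
# Cast and reshape to match the engine's input binding exactly (placeholder shape).
data = data.astype(np.float32).reshape(1, 3, 256, 256)
# Write the raw buffer with no header, as --loadInputs expects.
data.tofile("input_tensor.dat")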
Then on trtexec, you can load it like this:
trtexec ... ... ... --loadInputs='input_tensor:input_tensor.dat'

Where input_tensor is the name of the input binding/tensor in the TensorRT engine, and input_tensor.dat is the path to the file generated from numpy above.
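If you are not sure what the input binding is called, or what shape and dtype it expects, one way to check is with the TensorRT Python API. This is only a sketch, assuming the Python bindings are installed; model.engine and input_tensor.dat are placeholder paths:

import os
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("model.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# Print every binding's name, shape, and dtype so the --loadInputs name
# and the .dat file contents can be matched against them.
for i in range(engine.num_bindings):
    kind = "input" if engine.binding_is_input(i) else "output"
    print(kind, engine.get_binding_name(i), engine.get_binding_shape(i), engine.get_binding_dtype(i))

# The .dat file should hold exactly volume(shape) * dtype-size bytes;
# if it does not, check the trtexec log, since the file may be ignored.
print(os.path.getsize("input_tensor.dat"), "bytes on disk")

If the file size does not match what the input binding requires, trtexec may not actually use the data, which would make the results look the same as with random inputs.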

Thank you.