Wanted to know how to use the --loadInputs=spec option in trtexec

Hi all, I want to know how to use the --loadInputs=spec option in trtexec
Its description as per trtexec --help is as below:
--loadInputs=spec Load input values from files (default = generate random inputs). Input names can be wrapped with single quotes (ex: 'Input:0')
Input values spec ::= Ival[","spec]
Ival ::= name":"file
Are these inputs filenames with extensions like .jpg, .jpeg, etc.?

I have the following clarifications:

  1. I am not clear about what is meant by "Input names can be wrapped with single quotes (ex: 'Input:0')".
  2. I want more clarification about
     Input values spec ::= Ival[","spec]
     Ival ::= name":"file
     What is meant by spec ::= Ival[","spec], and what is meant by Ival ::= name":"file?
  3. Does it mean that we can mention either only one file name or more than one file name?
  4. Suppose I have an image to infer of shape [1, 28, 28]. Do I need to first store a tensor of this shape in a file and then mention the file name? What if I have more than one image to infer; in that case, do I need to store them in sequence?
  5. What should be the extension of the file name?
  6. Suppose I have 10 images to infer. How should I specify on the command line that there are 10 images to be inferred?
  7. Are these file names image file names, like .jpg, .jpeg, etc.?

Please clarify these doubts with one practical example.

Thanks and Regards

Nagaraj Trivedi

Dear @trivedi.nagaraj,
Please see About --loadInputs in trtexec - #5 by spolisetty if it helps.

The input file is a raw binary data file. If the input to the model is [1,28,28], you can only infer one image at a time, so the input file contains only one image's data. If you want to infer 10 images, you have to call trtexec separately for each.
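A minimal sketch of how such a raw binary file could be produced in Python. This is an illustration, not from the thread: the file name input_tensor.dat and the placeholder pixel values are assumptions, and the key point is that trtexec expects raw little-endian float32 values with no header.

```python
import struct

# Hypothetical example: serialize a [1, 28, 28] float32 tensor
# into a raw binary file usable with trtexec --loadInputs.
C, H, W = 1, 28, 28
tensor = [0.5] * (C * H * W)  # placeholder pixel values, flattened in row-major order

with open("input_tensor.dat", "wb") as f:
    # Raw little-endian float32 values, no header or metadata.
    f.write(struct.pack(f"<{len(tensor)}f", *tensor))

# Expected file size: 1 * 28 * 28 * 4 bytes = 3136 bytes.
```

In practice the placeholder values would be replaced by the preprocessed (resized, normalized) pixel data of the actual image.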

Thank you SivaRamKrishnan. I will try it and let you know the result.

Thanks and Regards

Nagaraj Trivedi

Hi SivaRamaKrishna, I tried the link you have mentioned and it worked.

But I have a clarification. It only said PASSED and never mentioned whether the image was predicted successfully. How do I verify this?

I tried inferencing a ResNet50 ONNX model. The command and its output are below. Let me know how to confirm whether it has predicted successfully. Please see the word PASSED in bold letters.

&&&& PASSED TensorRT.trtexec [TensorRT v8001] # ./trtexec --onnx=…/data/resnet50/ResNet50.onnx --int8 --loadInputs=~/program/nagaraj/tensor_rt_practice/pytorch_to_trt/input_tensor.dat

Thanks and Regards

Nagaraj Trivedi

Hi, please update me on this.

Thanks and Regards

Nagaraj Trivedi

Dear @trivedi.nagaraj,
Could you check using the --dumpOutput flag to get the outputs from the output tensor and verify them against the expected output?

Hi SivaRamaKrishna, thank you for your response, it worked.

These appear to be [1 x 1000] probability values. How do I convert them to the predicted label and display it? May I know where the code for this is, so that I can make use of it?

Thanks and Regards

Nagaraj Trivedi

Hi SivaRamaKrishna, please update me on this.

Thanks and Regards

Nagaraj Trivedi

Dear @trivedi.nagaraj,
You can find the MNIST post-processing function at https://github.com/NVIDIA/TensorRT/blob/release/8.6/samples/sampleOnnxMNIST/sampleOnnxMNIST.cpp#L296. The post-processing steps vary from model to model, and you are expected to implement the post-processing as per your needs.
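For a classification output like the [1 x 1000] vector above, the post-processing is typically a softmax followed by an argmax into a label table. A minimal sketch in plain Python; the scores and label names here are placeholders standing in for the 1000 ImageNet classes, not values from the actual model:

```python
import math

def softmax(scores):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict_label(scores, labels):
    # Return the label and probability of the highest-scoring class.
    probs = softmax(scores)
    idx = max(range(len(probs)), key=lambda i: probs[i])
    return labels[idx], probs[idx]

# Placeholder 4-class example; a real ResNet50 run would use the
# 1000 raw scores dumped by trtexec and the ImageNet label list.
scores = [0.1, 2.5, 0.3, 1.2]
labels = ["cat", "dog", "fish", "bird"]
label, prob = predict_label(scores, labels)
print(label)
```

The same argmax idea is what the linked MNIST sample does in C++ over its 10 digit classes.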

Thanks. Sure I will read and try it out.

Thanks and Regards

Nagaraj Trivedi

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.