How to feed test images to trtexec for inference

Hi all, I am writing this query to understand the following:

  1. When we run inference on a model using trtexec, how can we ensure that it has performed inference on a particular test image? For example, assume that I have this trtexec invocation:
    /usr/src/tensorrt/bin/trtexec --loadEngine=data/resnet50/resnet_engine_pytorch.trt

How can I find out which image it used for inference?

  2. How can I make trtexec run inference on more than one image, say 1000 different images?

  3. How can I feed a batch of (say 100) images at once for inference?

  4. What is the difference between the --loadEngine and --deploy options?
    In one of the examples it is given as
    ./bin/trtexec --deploy=data/mnist/mnist.prototxt --output=prob --useDLACore=0 --fp16 --allowGPUFallback
    which works for the mnist.prototxt model, but when I use the same --deploy option to run inference on a ResNet model it gives an error.
    However, inference on the ResNet model works with the --loadEngine option, as below:
    ./bin/trtexec --loadEngine=data/resnet50/resnet_engine_pytorch.trt

I would like to know more about this. Please clarify.

Thanks and Regards

Nagaraj Trivedi

Dear @trivedi.nagaraj,
--loadEngine is used to load an already generated TRT engine. --deploy is used for Caffe models; support for Caffe models is deprecated, and we suggest moving to ONNX-based models.
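For example, a ResNet engine can be built once from an ONNX export and then reused with --loadEngine (the paths below are illustrative, not from this thread):

    /usr/src/tensorrt/bin/trtexec --onnx=data/resnet50/resnet50.onnx --saveEngine=data/resnet50/resnet_engine_pytorch.trt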
Did you check the --loadInputs parameter to feed input data?
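Note that by default trtexec fills the input tensors with random values, so it does not actually run on any real image unless you pass one in via --loadInputs. Below is a minimal sketch of preparing a raw input file; it assumes a ResNet-50 style input tensor named 'input' with shape 1x3x224x224 (NCHW, float32) and standard ImageNet preprocessing, so adjust the tensor name, shape, and normalization to your model (trtexec prints the actual binding names in its log).

    # Minimal sketch (assumptions noted above, not from this thread):
    # preprocess one image into the raw float32 layout read by --loadInputs.
    import numpy as np
    from PIL import Image

    def image_to_bin(image_path, out_path):
        img = Image.open(image_path).convert("RGB").resize((224, 224))
        x = np.asarray(img, dtype=np.float32) / 255.0        # HWC, values in [0, 1]
        mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
        std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
        x = (x - mean) / std                                 # per-channel normalization
        x = np.ascontiguousarray(x.transpose(2, 0, 1))[None] # NCHW, 1x3x224x224
        x.astype(np.float32).tofile(out_path)                # raw bytes, no header

    image_to_bin("cat.jpg", "cat.bin")  # "cat.jpg" is a placeholder file name

The resulting file can then be fed to the engine, and --dumpOutput prints the output tensor so you can verify the inference really ran on your image:

    /usr/src/tensorrt/bin/trtexec --loadEngine=data/resnet50/resnet_engine_pytorch.trt --loadInputs=input:cat.bin --dumpOutput

Regarding a batch of 100 images: if the engine was built with a dynamic batch dimension (--minShapes/--optShapes/--maxShapes at build time), you can pass --shapes=input:100x3x224x224 and point --loadInputs at a single raw file containing the 100 preprocessed images concatenated. Regarding 1000 separate images: trtexec accepts one input file per binding per run, so a shell loop over the images is the usual approach.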

but when I use the same --deploy option to run inference on a ResNet model it gives an error

Did you try with a ResNet Caffe prototxt file?
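If you do want to stay on the deprecated --deploy path, it only accepts Caffe prototxt files, so the command would look something like the following (the prototxt file name is hypothetical; 'prob' is the usual output layer name in the Caffe ResNet-50 definition):

    ./bin/trtexec --deploy=data/resnet50/ResNet-50-deploy.prototxt --output=prob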

Hi SivaRamaKrishnan, thank you for your reply. I will try the --loadInputs option and also try using a ResNet Caffe prototxt file.

Thanks and Regards

Nagaraj Trivedi
