Hi all, I am writing this query to understand the following:
- When we run inference on a model using trtexec, how can we ensure that it has run inference on a particular test image? For example, assume I have this trtexec invocation:
/usr/src/tensorrt/bin/trtexec --loadEngine=data/resnet50/resnet_engine_pytorch.trt
How can I find out which image it used for inference?
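From the trtexec --help output, my understanding is that trtexec generates random input data by default, and that the --loadInputs option can supply a specific input from a raw binary file. If that is right, I imagine something like the line below, where the binding name "input" and the file input.dat are my assumptions (the file would need to contain the preprocessed image as raw values matching the input shape):

/usr/src/tensorrt/bin/trtexec --loadEngine=data/resnet50/resnet_engine_pytorch.trt --loadInputs=input:input.dat

Is this the intended way to control which image is used for inference?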
- How can I make trtexec run inference on more than one image, say, for example, 1000 different images?
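My guess is that trtexec benchmarks one set of inputs per run, so for many distinct images I would have to invoke it once per image, for example with a shell loop like this sketch (assuming each image has already been preprocessed and dumped to a .dat file, and that the input binding is named "input"):

for f in data/images/*.dat; do
    /usr/src/tensorrt/bin/trtexec --loadEngine=data/resnet50/resnet_engine_pytorch.trt --loadInputs=input:"$f"
done

Is there a supported way to sweep over a dataset like this, or is a custom application using the TensorRT runtime API the expected approach?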
- How can I feed a batch of (say 100) images at once for inference?
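If the engine was built with a dynamic batch dimension, I would expect the --shapes option to request a batch of 100 at run time, roughly as below (the binding name "input" and the 3x224x224 input size are assumptions based on ResNet-50); for an implicit-batch engine I believe the older --batch option applies instead:

/usr/src/tensorrt/bin/trtexec --loadEngine=data/resnet50/resnet_engine_pytorch.trt --shapes=input:100x3x224x224

Is that correct?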
- What is the difference between the --loadEngine and --deploy options?
I ask because one of the examples uses
./bin/trtexec --deploy=data/mnist/mnist.prototxt --output=prob --useDLACore=0 --fp16 --allowGPUFallback
which works for the mnist.prototxt model, but when I use the same --deploy option to run inference on a ResNet model, it gives an error.
However, inference on the ResNet model works with the --loadEngine option, as shown below:
./bin/trtexec --loadEngine=data/resnet50/resnet_engine_pytorch.trt
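My current understanding is that --deploy builds an engine on the fly from a Caffe prototxt (with --model supplying the weights), which is why it only works for the Caffe-based mnist.prototxt, whereas --loadEngine expects an already serialized TensorRT engine, for example one built beforehand from an ONNX export of the PyTorch ResNet with something like the line below (the file name resnet50.onnx is my assumption):

/usr/src/tensorrt/bin/trtexec --onnx=resnet50.onnx --saveEngine=data/resnet50/resnet_engine_pytorch.trt

Is that the right way to think about the two options, and is that why --deploy fails for the ResNet model?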
I am interested in knowing more about this. Please clarify.
Thanks and Regards
Nagaraj Trivedi