Jetson Nano benchmarking using TensorRT. What does TensorRT do with a ‘prototxt’ file?

I benchmarked the Jetson Nano by referring to this site.

Following the site above, I ran ‘TensorRT’ with multiple ‘prototxt’ files, such as ‘inception_v4.prototxt’, to get the inference time.
ex) ./trtexec --output=prob --deploy=../data/googlenet/inception_v4.prototxt --fp16 --batch=1

What I’m curious about is how the ‘prototxt’ file is used.
I was wondering what the ‘prototxt’ file actually was, so I opened it and found the structure of the model.
As far as I could tell, it contained nothing trained, such as weights; only the model structure.
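To give an idea, the file contained only layer definitions, something like this (a shortened, made-up fragment for illustration, not the exact contents):

name: "ExampleNet"
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 3 dim: 224 dim: 224 } }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param { num_output: 64 kernel_size: 7 stride: 2 }
}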

How was an inference time produced with no weights, just the structure of the model?
Does it set random weights and then measure how long computations such as inference take?
I can’t understand how the ‘prototxt’ file is used.

Hi,

If you use a Caffe prototxt file and a model is not supplied, random weights are generated.
Please refer to the section below for more details:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec#description
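
So a structure-only run like yours can be read as follows (a sketch; the path is illustrative):

# No --model flag is given, so trtexec fills every layer with random weights,
# builds the engine, and reports the timing of the resulting computation.
./trtexec --output=prob --deploy=../data/googlenet/inception_v4.prototxt --fp16 --batch=1

The timing is still representative because the amount of computation does not depend on the weight values; only the network output (‘prob’) is meaningless.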

Please refer to the example below to run with a model/weight file:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec#example-1-simple-mnist-model-from-caffe
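
For example, with the MNIST sample the command adds --model to point at the trained ‘.caffemodel’ file (paths illustrative, following the sample above):

./trtexec --deploy=data/mnist/mnist.prototxt --model=data/mnist/mnist.caffemodel --output=prob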

Thanks