I am benchmarking the Jetson Nano by referring to this site.
Following the site above, I ran TensorRT (trtexec) with several ‘prototxt’ files, such as ‘inception_v4.prototxt’, to get the inference time.
ex) ./trtexec --output=prob --deploy=../data/googlenet/inception_v4.prototxt --fp16 --batch=1
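For comparison, I believe trtexec can also take the trained weights alongside the deploy file via the --model flag; the .caffemodel path below is a hypothetical example, not a file from the sample data:

```shell
# Benchmark with trained Caffe weights supplied explicitly, instead of
# running from the prototxt (network structure) alone.
# NOTE: the .caffemodel path is a hypothetical example -- substitute your own.
./trtexec --output=prob \
          --deploy=../data/googlenet/inception_v4.prototxt \
          --model=../data/googlenet/inception_v4.caffemodel \
          --fp16 --batch=1
```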
What I’m curious about is how this ‘prototxt’ file is actually used.
I wondered what the ‘prototxt’ file contained, so I opened it and found the structure of the model.
As far as I can see, it contains no trained parameters such as weights, only the model’s architecture.
How can trtexec output an inference time with no weights, only the structure of the model?
Does it fill the network with random weights and then measure the time the inference computation takes?
I can’t understand how the ‘prototxt’ file is being used here.