The number of parameters in a TensorRT model

Greetings. I understand that the number of FLOPs does not change when converting a neural network from PyTorch or ONNX to TensorRT. Source: Number of operations in a TensorRT model - #2 by cmehrshad

Question: what about the number of parameters? Is it reduced when converting a PyTorch or ONNX model to TensorRT?

My understanding is that the number of parameters does not change and only layers are fused, but I may have misunderstood. Thank you in advance!

Hi,
Could you share the ONNX model and the script, if not shared already, so that we can assist you better?
In the meantime, you can try a few things:

  1. Validate your model with the snippet below.

check_model.py

import sys
import onnx

# Usage: python check_model.py yourONNXmodel
filename = sys.argv[1]
model = onnx.load(filename)
onnx.checker.check_model(model)  # raises an exception if the model is invalid
print("The ONNX model is valid.")
  2. Try running your model with the trtexec command.

In case you are still facing issues, please share the trtexec --verbose log for further debugging.
Thanks!

I am able to calculate the number of parameters for my ONNX model. How about for the TensorRT model? I checked all the flags for trtexec, but it seems this is not supported. I don't think the engine builder API helps either, unless it has been updated regarding this issue.
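As a sketch of the kind of introspection I was hoping for (just my assumption: this needs TensorRT >= 8.2 and, as far as I know, an engine built with --profilingVerbosity=detailed, and it only dumps per-layer metadata rather than an actual parameter count):

import tensorrt as trt

# Deserialize the saved engine and dump per-layer information as JSON.
logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)
with open("yolov5s-test.engine", "rb") as f:  # path from my trtexec command below
    engine = runtime.deserialize_cuda_engine(f.read())

inspector = engine.create_engine_inspector()
print(inspector.get_engine_information(trt.LayerInformationFormat.JSON))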

I think whether TensorRT optimizations change the number of parameters or not is kind of basic knowledge, and I am missing that basic knowledge. That's why I'm here :)

My ONNX model is attached: yolov5s.onnx. It has 7.2M parameters. I downloaded it from the YOLOv5 GitHub repo.

yolov5s.onnx (28.0 MB)

I was able to calculate the exact number of parameters with this tool.
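For reference, a minimal sketch of what that calculation looks like if done by hand; it just sums the elements of every initializer (weight/bias tensor) in the graph, so it may slightly overcount if non-weight constants are stored as initializers:

# count_onnx_params.py
import onnx
from onnx import numpy_helper

model = onnx.load("yolov5s.onnx")
# Initializers hold the weights and biases; sum their element counts.
total = sum(numpy_helper.to_array(init).size for init in model.graph.initializer)
print(f"~{total / 1e6:.1f}M parameters")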

I built the TensorRT engine from the YOLOv5 ONNX model with this command:

/usr/src/tensorrt/bin/trtexec --onnx=yolov5s.onnx --saveEngine=yolov5s-test.engine --fp16 --workspace=16000 --buildOnly --verbose

The output:

trtexec.txt (7.0 MB)

It doesn't contain exact information about the number of parameters. There is a lot of information about the layers, but it would be a lot of work to calculate the number of parameters from the layers manually. Is there a tutorial somewhere on how to interpret the output? For example, I'm not exactly sure how many parameters Conv_304[Half(1,512,20,20)] has, even though I know the size of its output and the precision of its elements.
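My current understanding (please correct me if this is wrong): the log only shows the output shape, i.e. C_out = 512, while the parameter count of a convolution is C_out * C_in * k_h * k_w (+ C_out if there is a bias), so the input channel count and kernel size would still have to be looked up in the ONNX graph. A toy example with assumed values:

# Hypothetical numbers: only C_out = 512 is visible in the trtexec log;
# C_in and the kernel size are assumptions for illustration.
c_out, c_in, k_h, k_w = 512, 256, 3, 3
params = c_out * c_in * k_h * k_w + c_out  # weights + bias
print(params)  # 1180160 with these assumed values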

Just as bonus info, I am using TensorRT 8.4 GA.

Hi,

Please refer to the following similar post, which may help you.

Thank you.