I find that the detection model converted with ONNX-TensorRT 6.0 on my desktop (49 MB) is only slightly larger than the source ONNX model (44 MB). However, after converting the same ONNX model (44 MB) on a Jetson Nano using ONNX-TensorRT 6.0-full-dims, the output TensorRT engine file grows to 170 MB. What could cause this significant file size increase on the Jetson Nano?
The command I am using is:
onnx2trt -o detection_model.trt -b 1 -d 16 -l model.onnx
Here I have set the max batch size to 1 and the model data type to float16.
The onnx-tensorrt code I am using:
The Jetson Nano runs JetPack 4.3, which includes TensorRT 6.