Description
A clear and concise description of the bug or issue.
I keep getting this error no matter which model I convert:
The bias tensor is required to be an initializer for the Conv operator.
I’m in the process of researching/selecting “toy” segmentation model architectures by following the concepts in this guide: https://github.com/NVIDIA-AI-IOT/jetson_dla_tutorial
Here’s the pseudocode I used to convert the PyTorch models to ONNX models.
I’m using pretrained models from https://smp.readthedocs.io/en/latest/index.html and https://huggingface.co/docs/transformers/model_doc/segformer#segformer
Is there something I’m missing? How can I fix this?
import torch
import segmentation_models_pytorch as smp
from transformers import SegformerForSemanticSegmentation

# Placeholder label mappings for the 14 classes (defined elsewhere in the real script)
id2label = {i: str(i) for i in range(14)}
label2id = {v: k for k, v in id2label.items()}

"""
1) Instantiate the models to be exported
"""
models = {
    "model_segformer": SegformerForSemanticSegmentation.from_pretrained(
        "nvidia/mit-b0", num_labels=14, id2label=id2label, label2id=label2id).eval(),
    "model_transformerunet": smp.Unet(encoder_name="mit_b0", encoder_weights="imagenet", in_channels=3, classes=14).eval(),
    "model_mobilenetv2unet": smp.Unet(encoder_name="mobilenet_v2", encoder_weights="imagenet", in_channels=3, classes=14).eval(),
    "model_mobilenetv3unet": smp.Unet(encoder_name="timm-mobilenetv3_large_100", encoder_weights="imagenet", in_channels=3, classes=14).eval(),
    "model_resnetunet": smp.Unet(encoder_name="resnet18", encoder_weights="imagenet", in_channels=3, classes=14).eval(),
}

"""
2) Create a dummy input with the same shape as the real input
   (batch, channels, height, width)
"""
data = torch.zeros(1, 3, 576, 960)

"""
3) Export each torch model to ONNX
"""
for name, model in models.items():
    torch.onnx.export(
        model, data, name + '.onnx',
        opset_version=11,
        input_names=['input'],
        output_names=['output'],
        dynamic_axes={
            'input': {0: 'batch_size'},
            'output': {0: 'batch_size'},
        },
    )
Here’s a sample of the command line I use to convert an exported ONNX model to an engine file:
trtexec --onnx=model_segformer.onnx --shapes=input:1x3x576x960 --saveEngine=model_segformer.engine --exportProfile=model_segformer.json --int8 --useDLACore=0 --allowGPUFallback --useSpinWait --separateProfileRun > model_segformer.log
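A workaround that is often suggested for this error (an assumption here, not verified on these exact models) is to constant-fold the ONNX graph before handing it to trtexec, so that computed Conv biases are baked into initializers. A sketch using Polygraphy’s `surgeon sanitize` subcommand; the `_folded` filename is just an example:

```shell
# Fold constant subgraphs so computed Conv biases become initializers
# (requires polygraphy: pip install polygraphy)
polygraphy surgeon sanitize model_segformer.onnx --fold-constants -o model_segformer_folded.onnx

# Then build the engine from the folded model with the same trtexec flags
trtexec --onnx=model_segformer_folded.onnx --shapes=input:1x3x576x960 \
        --saveEngine=model_segformer.engine --int8 --useDLACore=0 --allowGPUFallback
```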
Environment
TensorRT Version: 8.4.1.5
GPU Type: ORIN AGX
Nvidia Driver Version:
CUDA Version: cuda_11.4.r11.4
CUDNN Version: 8.4.1.50
Operating System + Version: Jetpack 5.0.2
Python Version (if applicable): 3.8.10
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.12.0a0+2c916ef.nv22.3
Baremetal or Container (if container which image + tag):
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
model_mobilenetv2unet.onnx (25.2 MB)
model_mobilenetv3unet.onnx (25.5 MB)
model_resnetunet.onnx (54.7 MB)
model_segformer.onnx (14.3 MB)
model_transformerunet.onnx (21.3 MB)
Steps To Reproduce
Please include:
- Exact steps/commands to build your repro
- Exact steps/commands to run your repro
- Full traceback of errors encountered

model_mobilenetv2unet.log (5.0 KB)
model_mobilenetv3unet.log (5.0 KB)
model_resnetunet.log (5.0 KB)
model_segformer.log (5.0 KB)
model_transformerunet.log (5.0 KB)