opened 03:55AM - 19 Feb 21 UTC
closed 06:44PM - 21 Mar 22 UTC
duplicate
triaged
Hi, I'm trying to convert an ONNX model with the normal float32 data type, not a QAT model. But it gives me this error message:
```
[8] Assertion failed: ctx->network()->hasExplicitPrecision() && "TensorRT only supports multi-input conv for explicit precision QAT networks!"
```
And I can reproduce this error with this minimal code:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MG(nn.Module):
    def __init__(self):
        super().__init__()
        # for testing whether torch.cat([bool, bool]) can convert

    def forward(self, x, b):
        # the conv weight `b` is a model input, not a constant
        preds = F.conv2d(x, b, stride=1)
        preds = preds.to(torch.float)
        preds = preds.sigmoid().float()
        seg_masks = preds > torch.tensor(0.03, dtype=torch.float)
        return seg_masks

torch_model = MG()
x = torch.randn([1, 4, 24, 24])
b = torch.randn([8, 4, 3, 3])
torch_out = torch_model(x, b)

# Export the model
torch.onnx.export(torch_model,          # model being run
                  (x, b),               # model inputs
                  "a.onnx",             # output file
                  export_params=True,   # store the trained parameter weights inside the model file
                  opset_version=11,     # the ONNX opset version to export to
                  do_constant_folding=True,
                  verbose=True)
print('Done!')
```
If you export the ONNX model with PyTorch 1.7 and try to convert it to a TRT engine, it shows this error:
```
[8] Assertion failed: ctx->network()->hasExplicitPrecision() && "TensorRT only supports multi-input conv for explicit precision QAT networks!"
```
You might ask why I use `torch.tensor(0.03, dtype=torch.float)` in the `>` op. It is **because otherwise the exporter casts the float to double and introduces a double data type in the ONNX graph**, which makes onnx2trt raise another error: `unsupported datatype 11`.
So how should we solve this awkward situation?
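For what it's worth, the multi-input-conv assertion goes away when the conv weight is baked into the model as a constant rather than passed as a second graph input, so ONNX export stores it as an initializer. A minimal sketch of that variant (an assumption on my part: the weight `b` is fixed at export time, which may not hold for your real model):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MGFixed(nn.Module):
    """Variant of MG where the conv weight is a module parameter, so
    torch.onnx.export stores it as an initializer instead of leaving the
    Conv node with two dynamic inputs."""
    def __init__(self, weight):
        super().__init__()
        # assumption: the weight is constant, so register it as a
        # non-trainable parameter that becomes an ONNX initializer
        self.weight = nn.Parameter(weight, requires_grad=False)

    def forward(self, x):
        preds = F.conv2d(x, self.weight, stride=1)
        preds = preds.sigmoid()
        # keep the threshold an explicit float32 tensor to avoid a
        # double-typed constant in the exported graph
        return preds > torch.tensor(0.03, dtype=torch.float32)

x = torch.randn(1, 4, 24, 24)
b = torch.randn(8, 4, 3, 3)
out = MGFixed(b)(x)
print(out.shape, out.dtype)  # torch.Size([1, 8, 22, 22]) torch.bool
```

With this shape, `torch.onnx.export(MGFixed(b), (x,), "a.onnx", opset_version=11)` produces a Conv whose weight is a constant initializer, which is the form TensorRT accepts for non-QAT networks.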
I hit the error in the title: a simple ONNX model cannot be converted to a TRT engine using onnx2trt.
I posted a reproduction script in a git repo, but no one responded…
Please help!!
Hi @LucasJin ,
The weights need to be initialized in the ONNX model. You can try ONNX Optimizer and ONNX Simplifier on the ONNX model, then run TensorRT with the processed model.
For your reference,
Thank you.
@spolisetty I tried; onnx-simplifier and the optimizer do not optimize anything on this model — in other words, the optimized model is identical to the original.
And the simplified model raises the same error.
Hi @LucasJin ,
We are looking into this issue. Please follow up here to get further help.