DLA bindOutputTensor failed when inferring. TensorRT 7.1.3

After converting our model from ONNX using TensorRT 7.1.3 on Jetson AGX, we get errors like the following when inferring:

NVMEDIA_DLA :  495, ERROR: bindOutputTensor failed. err: 0xB
NVMEDIA_DLA : 1920, ERROR: BindOutputTensorArgs failed (Output). status: 0x7.
../rtExt/dla/native/dlaUtils.cpp (194) - DLA Error in submit: 7 (Failure to submit program to DLA engine.)
FAILED_EXECUTION: std::exception
NVMEDIA_DLA :  885, ERROR: runtime registerEvent failed. err: 0x4.
NVMEDIA_DLA : 1849, ERROR: RequestSubmitEvents failed. status: 0x7.
../rtExt/dla/native/dlaUtils.cpp (194) - DLA Error in submit: 7 (Failure to submit program to DLA engine.)
FAILED_EXECUTION: std::exception
...

The structure of our model is like this:

import torch
import torch.nn as nn
import torch.nn.functional as F
# define network
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(256, 2, 3, 1, 1)
        self.conv2 = nn.Conv2d(256, 2, 3, 1, 1)
    def forward(self, x):
        x = torch.relu(x)
        x1 = self.conv1(x)
        x2 = self.conv2(x)
        x = torch.cat([x1, x2], 1)
        return x
net = Net()
a = torch.randn(1,256,40,40)
torch.onnx.export(net, a, "concat.onnx", verbose=True, opset_version=11)

The generated ONNX model is here:

error1.onnx (36.6 KB)
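For reference, the engine can be built and run on a DLA core with trtexec along these lines (flags assumed from the TensorRT 7 trtexec help; adjust the path as needed):

```shell
# Build the ONNX model for DLA core 0, letting unsupported layers
# fall back to the GPU; --verbose shows which layers run where.
trtexec --onnx=concat.onnx --useDLACore=0 --allowGPUFallback --verbose
```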

Hi,
Please check the below links, as they might answer your concerns.
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#dla_topic
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#dla_layers
Thanks!

Thanks for your reply.
Maybe we are careless, but although we checked both links, we cannot tell which layer or resource causes this error.
After we remove the ReLU op, or the two conv ops, or the concat op, the error disappears.
Also, canRunOnDLA returns true for every layer.
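Incidentally, since both convolutions share the same input, kernel size, stride, and padding, the concat could in principle be avoided by fusing them into a single conv whose filters are the two filter banks stacked. This is just our suggested workaround, not from the docs; a quick NumPy check of the equivalence (small shapes used for illustration, naive convolution helper is our own):

```python
import numpy as np

def conv2d(x, w):
    """Naive stride-1, same-padding 2D convolution (cross-correlation).
    x: (C, H, W) input, w: (O, C, kH, kW) filters."""
    C, H, W = x.shape
    O, _, kH, kW = w.shape
    pad_h, pad_w = kH // 2, kW // 2
    xp = np.pad(x, ((0, 0), (pad_h, pad_h), (pad_w, pad_w)))
    out = np.zeros((O, H, W))
    for i in range(H):
        for j in range(W):
            patch = xp[:, i:i + kH, j:j + kW]            # (C, kH, kW)
            out[:, i, j] = np.einsum('ocij,cij->o', w, patch)
    return out

rng = np.random.default_rng(0)
x  = rng.standard_normal((4, 5, 5))      # small stand-in for the (256, 40, 40) input
w1 = rng.standard_normal((2, 4, 3, 3))   # weights of conv1
w2 = rng.standard_normal((2, 4, 3, 3))   # weights of conv2

# concat(conv1(x), conv2(x)) along the channel axis ...
separate = np.concatenate([conv2d(x, w1), conv2d(x, w2)], axis=0)
# ... equals a single conv whose filters are the two banks stacked.
fused = conv2d(x, np.concatenate([w1, w2], axis=0))
print(np.allclose(separate, fused))
```

In PyTorch terms this corresponds to replacing the two `nn.Conv2d(256, 2, 3, 1, 1)` layers plus `torch.cat` with one `nn.Conv2d(256, 4, 3, 1, 1)`, which removes the concat from the exported graph.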

I already checked the links, but I can’t find the answer. (T_T)

@spolisetty Please help.

Hi @opluss,

We recommend you post your query on the Jetson AGX dev forum. You may get better help there.

Thank you.

1 Like