[Need help] An error occurred while using DLA

When I enabled DLA to run my own model on my Jetson NX, an error occurred:
[TRT Warning] DLA supports only 8 subgraphs per DLA core. Switching to GPU for layer (Unnamed Layer* 157) [Convolution]
[TRT Warning] DLA supports only 8 subgraphs per DLA core. Switching to GPU for layer (Unnamed Layer* 1) [Convolution]
[TRT Warning] DLA supports only 8 subgraphs per DLA core. Switching to GPU for layer (Unnamed Layer* 185) [Fully Connected]
[TRT Error] …/rtSafe/cuda/cudaActivationRunner.cpp (38) - Assertion Error in getCudnnActivationMode: 0 (This activation type is not handled in this layer)


Hi,

Please note that DLA has limited resources.
The warning indicates that the model's computation exceeds the DLA capacity, so those layers fall back to GPU resources instead.

The last error implies that a particular layer may have incorrect parameters.
Since we haven't seen this error before, would you mind enabling VERBOSE logging and sharing the output log with us?

$ /usr/src/tensorrt/bin/trtexec --onnx=[file] --useDLACore=0 --allowGPUFallback --verbose

Also, could you test whether TensorRT + GPU mode runs normally?
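For example, the same trtexec command without the DLA flags builds a GPU-only engine:

$ /usr/src/tensorrt/bin/trtexec --onnx=[file] --verbose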

Thanks.

Thanks for your quick reply.

Here is the log: trace.log (275.9 KB)

The model runs correctly with TensorRT + GPU.

By the way, what should I do if I want to use both DLA cores on the NX at the same time?

Thanks.

Hi,

It seems there is an issue when falling back the activation layer to the GPU.
Would you mind sharing the ONNX model with us so we can look into it further?

To use both DLA cores, you will need to create a separate TensorRT engine for each core.
The simplest way is to launch two trtexec instances, one with --useDLACore=0 and one with --useDLACore=1, as shown below.
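A sketch, with placeholder engine file names (adjust the paths for your model):

$ /usr/src/tensorrt/bin/trtexec --onnx=[file] --useDLACore=0 --allowGPUFallback --saveEngine=model_dla0.engine
$ /usr/src/tensorrt/bin/trtexec --onnx=[file] --useDLACore=1 --allowGPUFallback --saveEngine=model_dla1.engine

Your application can then deserialize the two engines and run inference on each DLA core concurrently.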

Thanks.

The model is too big, so the upload failed.

In fact, I found that the error comes from MobileNetV2, which is a sub-network of my model.
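For reference, with the sub-network exported on its own as mobilenetv2.onnx (a placeholder name), the failure should be reproducible with:

$ /usr/src/tensorrt/bin/trtexec --onnx=mobilenetv2.onnx --useDLACore=0 --allowGPUFallback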

Thanks.

Hi,

Would you mind uploading the model to a third-party drive and sharing the link with us?

Thanks.

Of course not. You can get the model from this link:

Hi,

Thanks for sharing.

We confirmed that we can reproduce this issue in our environment.
Our internal team is working on it now.

We will share more information with you later.

Hi,

Thanks for your patience.
We have confirmed that this issue is fixed in the next JetPack release.
We will let you know once it is available.

Thanks.

The fix will be included in JetPack 4.6, which will be available in late July 2021.