DLA layer fusion problem

Hello, I am running an ONNX model via DLA & GPU.

The network is a small UNet built from conv/bn/relu layers.

When I run it on the GPU, everything works normally.
However, when I run it on the DLA, it tries to fuse all of the layers into one node, like this:

[10/12/2020-19:40:28] [E] [TRT] …/builder/tacticOptimizer.cpp (1715) - TRTInternal Error in computeCosts: 0 (Could not find any implementation for node {(Unnamed Layer* 0) [Convolution],(Unnamed Layer* 1) [Scale],(Unnamed Layer* 2) [Activation],(Unnamed Layer* 3) [Convolution],(Unnamed Layer* 4) [Scale],(Unnamed Layer* 5) [Activation],(Unnamed Layer* 6) [Convolution],(Unnamed Layer* 7) [Scale],(Unnamed Layer* 8) [Activation],(Unnamed Layer* 9) [Convolution],(Unnamed Layer* 10) [Scale],(Unnamed Layer* 11) [Activation],(Unnamed Layer* 12) [Convolution],(Unnamed Layer* 13) [Scale],(Unnamed Layer* 14) [Activation],(Unnamed Layer* 15) [Concatenation],(Unnamed Layer* 16) [Convolution],(Unnamed Layer* 17) [Scale],(Unnamed Layer* 18) [Activation],(Unnamed Layer* 19) [Convolution],(Unnamed Layer* 20) [Scale],(Unnamed Layer* 21) [Activation],(Unnamed Layer* 22) [Concatenation],(Unnamed Layer* 23) [Convolution],(Unnamed Layer* 24) [Scale],(Unnamed Layer* 25) [Activation],(Unnamed Layer* 26) [Concatenation],(Unnamed Layer* 27) [Convolution],(Unnamed Layer* 28) [Scale],(Unnamed Layer* 29) [Activation],(Unnamed Layer* 31) [Pooling],(Unnamed Layer* 32) [Convolution],(Unnamed Layer* 33) [Scale],(Unnamed Layer* 34) [Activation],(Unnamed Layer* 35) [Concatenation],(Unnamed Layer* 36) [Convolution],(Unnamed Layer* 37) [Scale],(Unnamed Layer* 38) [Activation],(Unnamed Layer* 39) [Convolution],(Unnamed Layer* 40) [Scale],(Unnamed Layer* 41) [Activation],(Unnamed Layer* 42) [Concatenation],(Unnamed Layer* 43) [Convolution],(Unnamed Layer* 44) [Scale],(Unnamed Layer* 45) [Activation],(Unnamed Layer* 46) [Concatenation],(Unnamed Layer* 47) [Convolution],(Unnamed Layer* 48) [Scale],(Unnamed Layer* 49) [Activation],(Unnamed Layer* 51) [Pooling],(Unnamed Layer* 52) [Convolution],(Unnamed Layer* 53) [Scale],(Unnamed Layer* 54) [Activation],(Unnamed Layer* 55) [Concatenation],(Unnamed Layer* 56) [Convolution],(Unnamed Layer* 57) [Scale],(Unnamed Layer* 58) [Activation],(Unnamed Layer* 59) [Convolution],(Unnamed Layer* 60) [Scale],(Unnamed Layer* 61) [Activation],(Unnamed Layer* 62) [Concatenation],(Unnamed Layer* 63) [Convolution],(Unnamed Layer* 64) [Scale],(Unnamed Layer* 65) [Activation],(Unnamed Layer* 66) [Convolution],(Unnamed Layer* 67) [Scale],(Unnamed Laye

…and so on (the rest of the log is similar).

That leads to a memory crash.
How can I solve it?
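
For reference, a conv/bn/relu block of the kind described above might look roughly like this in PyTorch (purely an illustrative sketch, not my actual model code); each such block shows up in the TensorRT log as a Convolution / Scale / Activation triple:

import torch.nn as nn

# Illustrative conv/bn/relu building block; not the real network.
class ConvBNReLU(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Conv -> BatchNorm -> ReLU, exported to ONNX and then parsed by TensorRT
        return self.relu(self.bn(self.conv(x)))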

Hi,

Would you mind sharing the detailed log from trtexec with the --verbose flag with us first?

/usr/src/tensorrt/bin/trtexec --onnx=[your/model] --verbose --useDLACore=0 --allowGPUFallback
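
For reference, the same DLA placement with GPU fallback can also be configured through the TensorRT Python API. Below is a minimal sketch (assuming TensorRT 7.x as shipped with JetPack 4.x; the helper name build_dla_engine is just illustrative):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)

def build_dla_engine(onnx_path):
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            # Print parser errors if the ONNX file cannot be imported
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            return None

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30            # 1 GiB, adjust as needed
    config.set_flag(trt.BuilderFlag.FP16)          # DLA requires FP16 or INT8
    config.set_flag(trt.BuilderFlag.GPU_FALLBACK)  # same as --allowGPUFallback
    config.default_device_type = trt.DeviceType.DLA
    config.DLA_core = 0                            # same as --useDLACore=0
    return builder.build_engine(network, config)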

Thanks.

The log is too long.

Replies only support up to 99,000 characters.

Which part do you need?

log.txt (336.1 KB)

Hi,

Based on the log, this is a known issue: Cannot build a TensorRT engine for DLA from a large ONNX file.
We have already fixed this issue in our internal package, and the fix will be included in a future release.

Thanks.

Is that a software issue or a chip issue?

Will I be able to use the fix without changing my hardware?

Hi,

It is a software issue, and it is fixed in a future release of our DLA library.
Thanks.

Hi,

To verify that your issue is the same as Cannot build a TensorRT engine for DLA from a large ONNX file, would you mind sharing a model with us so we can check it internally?

Thanks.

Could you share a private email address with me?

You can share it via private message.

Hi,

We didn’t receive your model in the private message.
Have you fixed this issue?

Thanks.

“We have already fixed this issue in our internal package, and the fix will be included in a future release.”

@AastaLLL, I’m also seeing this. Is there an expected release date or version for this fix?

thanks
Eyal

Hi,

We did fix an issue related to the original post:

[10/12/2020-19:40:28] [E] [TRT] …/builder/tacticOptimizer.cpp (1715) - TRTInternal Error in computeCosts: 0 (Could not find any implementation for node ...

Since DLA errors usually produce similar logs, it would be good if we could verify this directly on the user’s model.
However, we haven’t received the model yet.

Thanks.