Why is LEAKY_RELU not supported for DLA?

Description

When converting an ONNX model to a TensorRT engine with DLA support, I got the following warning.
Could you give me some advice?

LeakyRelu_2: ActivationLayer (with ActivationType = LEAKY_RELU) not supported for DLA.
Default DLA is enabled but layer LeakyRelu_2 is not supported on DLA, falling back to GPU.
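For context, a typical trtexec invocation that enables DLA with GPU fallback (the behavior the warning reports) looks roughly like this; the model path, engine path, and DLA core index are placeholders, not my exact command:

```shell
# Illustrative sketch only: build a TensorRT engine targeting DLA,
# allowing unsupported layers to fall back to the GPU.
trtexec --onnx=model.onnx \
        --fp16 \
        --useDLACore=0 \
        --allowGPUFallback \
        --saveEngine=model.trt
```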

According to the Developer Guide :: NVIDIA Deep Learning TensorRT Documentation,
I think LEAKY_RELU should be supported by default.

Environment

Xavier
TensorRT Version: 7.1.3-1
CUDA Version: 10.2
CUDNN Version: 8.0

Hi,
Please check the below links, as they might answer your concerns.
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#dla_topic
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#dla_layers
Thanks!

Hi, the links mention that:

Layer specific restrictions

Activation layer

  • Only 2 spatial dimension operations are supported.
  • Both FP16 and INT8 are supported.
  • Functions supported: ReLU, Sigmoid, TanH, Clipped ReLU and Leaky ReLU.
    • Negative slope is not supported for ReLU.
    • Clipped ReLU only supports values in the range [1, 127].
  • TanH, Sigmoid INT8 support is supported by auto-upgrading to FP16.

So I think Leaky ReLU should be supported by DLA.
But then why did I get an unsupported-layer warning?

LeakyRelu_2: ActivationLayer (with ActivationType = LEAKY_RELU) not supported for DLA.
Default DLA is enabled but layer LeakyRelu_2 is not supported on DLA, falling back to GPU.

It seems that TensorRT 7.1.3-1 doesn’t support LEAKY_RELU on DLA.

Can I upgrade TensorRT to 8.x on my Xavier?

Hi @zhaofengming.zfm,

Upgrading to TensorRT 8.0 EA may not help either; support may be added in a future release. Please refer to the following link:
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#dla_layers
In the Activation layer section of that doc, it is mentioned that negative slope is not supported for ReLU on DLA hardware.

Since this is an unsupported operation, execution falls back to the GPU, which may cause a performance loss. If you’re concerned about performance, try training the network with ReLU instead of LeakyReLU.
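A minimal sketch in plain Python (no TensorRT required) of why retraining is suggested rather than simply substituting the layer at inference time: the two activations agree on non-negative inputs but disagree on every negative input, so swapping them changes the network’s outputs. The `negative_slope` default of 0.01 is an assumption matching common framework defaults, not necessarily your model’s value.

```python
def relu(x: float) -> float:
    """ReLU: zeroes out negative inputs."""
    return max(0.0, x)


def leaky_relu(x: float, negative_slope: float = 0.01) -> float:
    """LeakyReLU: scales negative inputs by a small slope instead of zeroing them."""
    return x if x > 0.0 else negative_slope * x


# The two activations agree on non-negative inputs...
assert relu(2.0) == leaky_relu(2.0) == 2.0

# ...but disagree on every negative input, which is why the network must be
# retrained with ReLU rather than patched after the fact.
assert relu(-3.0) == 0.0
assert abs(leaky_relu(-3.0) - (-0.03)) < 1e-12
```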

Regarding upgrading TensorRT, you may need to wait for a JetPack update.
We recommend posting your concern on the Jetson Xavier forum to get better help: Jetson AGX Xavier - NVIDIA Developer Forums

Thank you.

Thanks @spolisetty, that really helped me.