How to debug "Internal DLA error" when Conv2D or Conv2DTranspose layers fall back to the GPU

I am working on the following platform:

Hardware: Xavier
CUDA: 10.0
cuDNN version:
Python version [if using Python]:
TensorFlow version:
TensorRT: 5.1.5

My model has only an input layer, a Conv2D layer, and then a Conv2DTranspose layer, in channels-first format.
According to the documentation, TensorRT 5.1.5 supports Conv2DTranspose on the DLA; however, TensorRT still cannot run the Conv2DTranspose layer on the DLA, even though I am using FP16 format.

It throws the following error:

Internal DLA error for layer conv2d_transpose_1/conv2d_transpose. Switching to GPU fallback

The Keras model (with TensorFlow backend) is very simple, as follows:

    from keras.layers import Input, Conv2D, Conv2DTranspose
    from keras.models import Model

    used_data_format = 'channels_first'

    input_tensor = Input((3, 100, 200))
    dec = Conv2D(16, (3, 3), padding='valid', strides=(2, 2), data_format=used_data_format)(input_tensor)
    dec = Conv2DTranspose(13, kernel_size=(4, 4), strides=(2, 2), padding='valid', data_format=used_data_format)(dec)

    model = Model(inputs=[input_tensor], outputs=[dec])

In the attachment you can find the converted UFF file of the model.

I would also like to know what is meant by "output maps" in the following comment from the documentation:

Number of output maps must be in the range [1, 8192]

Does it mean the output dimensions of the layer should be at most 8192, i.e. width * height * batch < 8192?
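For reference, here is the shape arithmetic for the model above, computed in plain Python using the standard 'valid'-padding convolution formulas (no TensorFlow needed). Note that the output channel counts here (16 and 13) are what convolution layers usually call feature maps, which may be what the restriction refers to, though that should be confirmed:

```python
# Shape arithmetic for the model above (channels-first, 'valid' padding).
# Standard formulas:
#   Conv2D:          out = floor((in - kernel) / stride) + 1
#   Conv2DTranspose: out = (in - 1) * stride + kernel

def conv_out(size, kernel, stride):
    return (size - kernel) // stride + 1

def deconv_out(size, kernel, stride):
    return (size - 1) * stride + kernel

h, w = 100, 200
# Conv2D(16, kernel (3, 3), stride (2, 2))
h1, w1 = conv_out(h, 3, 2), conv_out(w, 3, 2)
# Conv2DTranspose(13, kernel (4, 4), stride (2, 2))
h2, w2 = deconv_out(h1, 4, 2), deconv_out(w1, 4, 2)

print((16, h1, w1))  # (16, 49, 99)
print((13, h2, w2))  # (13, 100, 200)
```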

I would also appreciate it if someone could tell me how to debug these internal DLA errors, because a lot of layers are throwing the same error and I would like to make them work on the DLA.
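One way to investigate which layers the DLA will accept is to query the builder layer by layer while parsing the UFF file. The sketch below assumes the TensorRT 5.x Python API; the attribute names (`can_run_on_dla`, `default_device_type`, `fp16_mode`) and the input/output tensor names are assumptions from memory and should be checked against the installed tensorrt version:

```python
# Sketch: ask the TensorRT builder which layers the DLA can run,
# assuming the TensorRT 5.x Python API on a DLA-capable platform.
import tensorrt as trt

# A verbose logger also prints the builder's per-layer device decisions.
TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network()

parser = trt.UffParser()
# Tensor names below are assumptions; take them from the converted UFF model.
parser.register_input("input_1", (3, 100, 200))
parser.register_output("conv2d_transpose_1/conv2d_transpose")
parser.parse("model.uff", network)

builder.fp16_mode = True  # DLA requires FP16 (or INT8) precision
builder.default_device_type = trt.DeviceType.DLA

# Query each layer individually instead of waiting for a fallback warning.
for i in range(network.num_layers):
    layer = network.get_layer(i)
    print(layer.name, "-> DLA-capable:", builder.can_run_on_dla(layer))
```

Layers reported as not DLA-capable can then be compared one by one against the DLA restrictions listed in the TensorRT documentation.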


Could you please check whether the layers satisfy the DLA-specific layer restrictions?
Please refer to the link below for more details: