I am working on the following platform:
DISTRIB_DESCRIPTION="Ubuntu 18.04.3 LTS"
Python version [if using python]
My model has only an input layer, a Conv2D layer, and then a Conv2DTranspose layer, in channels-first format.
According to the support matrix at https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt-515/tensorrt-support-matrix/index.html, TensorRT 5.1.5 supports convTranspose (deconvolution) on the DLA. However, TensorRT still cannot run the convTranspose layer on the DLA, even though I am using the fp16 format.
It throws the following error:
Internal DLA error for layer conv2d_transpose_1/conv2d_transpose. Switching to GPU fallback
The Keras model (TensorFlow backend) is very simple:
from keras import layers
from keras.models import Model

used_data_format = "channels_first"

input_tensor = layers.Input((3, 100, 200))
dec = layers.Conv2D(16, (3, 3), padding='valid', strides=(2, 2),
                    data_format=used_data_format)(input_tensor)
dec = layers.Conv2DTranspose(13, kernel_size=(4, 4), strides=(2, 2),
                             padding="valid",
                             data_format=used_data_format)(dec)
model = Model(inputs=[input_tensor], outputs=[dec])
model.summary()
model.save("/home/models/keras_mobilenetv3_model_convTranspose_chFirst.h5")
exit()
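For reference, the output shapes of the two layers can be worked out by hand with the standard 'valid'-padding formulas (a quick sketch without Keras; the helper names are mine):

```python
# Shape arithmetic for the model above (channels-first, batch omitted).
# Conv2D, padding='valid':          out = floor((in - k) / s) + 1
# Conv2DTranspose, padding='valid': out = (in - 1) * s + k

def conv_out(size, k, s):
    return (size - k) // s + 1

def deconv_out(size, k, s):
    return (size - 1) * s + k

h, w = 100, 200
h1, w1 = conv_out(h, 3, 2), conv_out(w, 3, 2)        # after Conv2D
h2, w2 = deconv_out(h1, 4, 2), deconv_out(w1, 4, 2)  # after Conv2DTranspose

print((16, h1, w1))  # Conv2D output:          (16, 49, 99)
print((13, h2, w2))  # Conv2DTranspose output: (13, 100, 200)
```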
In the attachment you can find the converted uff file of the model.
I would also like to know what "output maps" means in the following comment from the documentation:
Number of output maps must be in the range [1, 8192]
Does it mean the output dimensions of the layer should be at most 8192, i.e. width*height*batch < 8192?
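My current guess (an assumption, please correct me if wrong) is that "output maps" refers to the number of output feature maps, i.e. the layer's output channel count, not width*height*batch. For my model that would be:

```python
# Assuming "output maps" = number of output feature maps (channels),
# the Conv2DTranspose above has 13 filters, so 13 output maps.
out_channels = 13
assert 1 <= out_channels <= 8192  # within the documented DLA range
print(out_channels)
```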
I would also appreciate it if someone could tell me how to debug these internal DLA errors; many layers throw the same error, and I would like to make them work on the DLA.
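For reproduction, the build command I use looks roughly like this (a sketch: the UFF input/output tensor names below are assumptions based on the default Keras layer names, and paths are placeholders):

```shell
# Build the UFF model for DLA core 0 in fp16; GPU fallback is enabled so
# the build completes even when a layer is rejected by the DLA.
trtexec --uff=convTranspose_chFirst.uff \
        --uffInput=input_1,3,100,200 \
        --output=conv2d_transpose_1/conv2d_transpose \
        --useDLACore=0 --fp16 --allowGPUFallback
```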
convTranspose_chFirst.zip (14.9 KB)