Missing dynamic range for tensor output_bbox/BiasAdd. DLA requires all tensors dynamic range to be known

NVIDIA Jetson AGX Xavier [16GB]
L4T 32.4.3 [ JetPack 4.4 ]
Ubuntu 18.04.4 LTS
Kernel Version: 4.9.140-tegra
CUDA 10.2.89
CUDA Architecture: 7.2
OpenCV version: 4.1.1
OpenCV Cuda: NO
CUDNN: 8.0.0.180
TensorRT: 7.1.3.0
VisionWorks: 1.6.0.501
VPI: 0.4.4
Vulkan: 1.2.70

I am trying to run LPDNet on Xavier with INT8 precision on DLA0 as SGIE0. My config file looks like this:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=1
#tlt-encoded-model=LPDnet/resnet18_detector.etlt
#model-engine-file=usa_pruned.etlt_b8_gpu0_fp16.engine
#model-engine-file=usa_pruned.etlt_b8_dla0_int8.engine
labelfile-path=labels.txt
tlt-encoded-model=usa_pruned.etlt
tlt-model-key=nvidia_tlt
int8-calib-file=usa_lpd_cal.bin
# For the US model, set to 3;480;640;0; for the CCPD model, set to 3;1168;720;0
uff-input-dims=3;480;640;0
#uff-input-dims=3;384;1248;0  
uff-input-blob-name=input_1
batch-size=8
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=1
## 1=Primary, 2=Secondary
process-mode=2
interval=0
gie-unique-id=4
## 0=Detector, 1=Classifier
network-type=0
operate-on-gie-id=1
operate-on-class-ids=0
cluster-mode=2
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
input-object-min-height=30
input-object-min-width=40
#enable-dla=1
enable-dla=1
use-dla-core=0
scaling-compute-hw=0


[class-attrs-0]
threshold=0.1
#eps=0.2
#minBoxes=1
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=18
detected-min-h=14
detected-max-w=600
detected-max-h=600

When building the engine, DeepStream returns the following error:

0:00:03.324709123  3877   0x55bff72d30 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 4]: Trying to create engine from model files
INFO: [FullDims Engine Info]: layers num: 3
0   INPUT  kFLOAT image_input     3x48x96         min: 1x3x48x96       opt: 4x3x48x96       Max: 16x3x48x96      
1   OUTPUT kINT32 tf_op_layer_ArgMax 24              min: 0               opt: 0               Max: 0               
2   OUTPUT kFLOAT tf_op_layer_Max 24              min: 0               opt: 0               Max: 0               

WARNING: [TRT]: Default DLA is enabled but layer output_bbox/bias is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer conv1/kernel is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer conv1/bias is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer bn_conv1/moving_variance is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer bn_conv1/Reshape_1/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer bn_conv1/batchnorm/add/y is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer bn_conv1/gamma is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer bn_conv1/Reshape_3/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer bn_conv1/beta is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer bn_conv1/Reshape_2/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer bn_conv1/moving_mean is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer bn_conv1/Reshape/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_conv_1/kernel is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_conv_1/bias is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_bn_1/moving_variance is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_bn_1/Reshape_1/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_bn_1/batchnorm/add/y is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_bn_1/gamma is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_bn_1/Reshape_3/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_bn_1/beta is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_bn_1/Reshape_2/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_bn_1/moving_mean is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_bn_1/Reshape/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_conv_2/kernel is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_conv_2/bias is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_bn_2/moving_variance is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_bn_2/Reshape_1/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_bn_2/batchnorm/add/y is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_bn_2/gamma is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_bn_2/Reshape_3/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_bn_2/beta is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_bn_2/Reshape_2/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_bn_2/moving_mean is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_bn_2/Reshape/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_conv_shortcut/kernel is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_conv_shortcut/bias is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_bn_shortcut/moving_variance is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_bn_shortcut/Reshape_1/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_bn_shortcut/batchnorm/add/y is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_bn_shortcut/gamma is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_bn_shortcut/Reshape_3/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_bn_shortcut/beta is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_bn_shortcut/Reshape_2/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_bn_shortcut/moving_mean is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1a_bn_shortcut/Reshape/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1b_conv_1/kernel is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1b_conv_1/bias is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1b_bn_1/moving_variance is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1b_bn_1/Reshape_1/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1b_bn_1/batchnorm/add/y is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1b_bn_1/gamma is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1b_bn_1/Reshape_3/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1b_bn_1/beta is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1b_bn_1/Reshape_2/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1b_bn_1/moving_mean is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1b_bn_1/Reshape/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1b_conv_2/kernel is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1b_conv_2/bias is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1b_bn_2/moving_variance is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1b_bn_2/Reshape_1/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1b_bn_2/batchnorm/add/y is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1b_bn_2/gamma is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1b_bn_2/Reshape_3/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1b_bn_2/beta is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1b_bn_2/Reshape_2/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1b_bn_2/moving_mean is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_1b_bn_2/Reshape/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_conv_1/kernel is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_conv_1/bias is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_bn_1/moving_variance is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_bn_1/Reshape_1/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_bn_1/batchnorm/add/y is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_bn_1/gamma is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_bn_1/Reshape_3/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_bn_1/beta is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_bn_1/Reshape_2/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_bn_1/moving_mean is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_bn_1/Reshape/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_conv_2/kernel is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_conv_2/bias is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_bn_2/moving_variance is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_bn_2/Reshape_1/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_bn_2/batchnorm/add/y is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_bn_2/gamma is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_bn_2/Reshape_3/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_bn_2/beta is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_bn_2/Reshape_2/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_bn_2/moving_mean is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_bn_2/Reshape/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_conv_shortcut/kernel is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_conv_shortcut/bias is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_bn_shortcut/moving_variance is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_bn_shortcut/Reshape_1/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_bn_shortcut/batchnorm/add/y is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_bn_shortcut/gamma is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_bn_shortcut/Reshape_3/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_bn_shortcut/beta is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_bn_shortcut/Reshape_2/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_bn_shortcut/moving_mean is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2a_bn_shortcut/Reshape/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2b_conv_1/kernel is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2b_conv_1/bias is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2b_bn_1/moving_variance is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2b_bn_1/Reshape_1/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2b_bn_1/batchnorm/add/y is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2b_bn_1/gamma is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2b_bn_1/Reshape_3/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2b_bn_1/beta is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2b_bn_1/Reshape_2/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2b_bn_1/moving_mean is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2b_bn_1/Reshape/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2b_conv_2/kernel is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2b_conv_2/bias is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2b_bn_2/moving_variance is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2b_bn_2/Reshape_1/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2b_bn_2/batchnorm/add/y is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2b_bn_2/gamma is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2b_bn_2/Reshape_3/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2b_bn_2/beta is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2b_bn_2/Reshape_2/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2b_bn_2/moving_mean is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_2b_bn_2/Reshape/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_conv_1/kernel is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_conv_1/bias is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_bn_1/moving_variance is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_bn_1/Reshape_1/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_bn_1/batchnorm/add/y is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_bn_1/gamma is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_bn_1/Reshape_3/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_bn_1/beta is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_bn_1/Reshape_2/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_bn_1/moving_mean is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_bn_1/Reshape/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_conv_2/kernel is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_conv_2/bias is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_bn_2/moving_variance is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_bn_2/Reshape_1/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_bn_2/batchnorm/add/y is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_bn_2/gamma is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_bn_2/Reshape_3/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_bn_2/beta is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_bn_2/Reshape_2/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_bn_2/moving_mean is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_bn_2/Reshape/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_conv_shortcut/kernel is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_conv_shortcut/bias is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_bn_shortcut/moving_variance is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_bn_shortcut/Reshape_1/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_bn_shortcut/batchnorm/add/y is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_bn_shortcut/gamma is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_bn_shortcut/Reshape_3/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_bn_shortcut/beta is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_bn_shortcut/Reshape_2/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_bn_shortcut/moving_mean is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3a_bn_shortcut/Reshape/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3b_conv_1/kernel is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3b_conv_1/bias is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3b_bn_1/moving_variance is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3b_bn_1/Reshape_1/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3b_bn_1/batchnorm/add/y is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3b_bn_1/gamma is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3b_bn_1/Reshape_3/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3b_bn_1/beta is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3b_bn_1/Reshape_2/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3b_bn_1/moving_mean is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3b_bn_1/Reshape/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3b_conv_2/kernel is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3b_conv_2/bias is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3b_bn_2/moving_variance is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3b_bn_2/Reshape_1/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3b_bn_2/batchnorm/add/y is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3b_bn_2/gamma is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3b_bn_2/Reshape_3/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3b_bn_2/beta is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3b_bn_2/Reshape_2/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3b_bn_2/moving_mean is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_3b_bn_2/Reshape/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_conv_1/kernel is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_conv_1/bias is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_bn_1/moving_variance is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_bn_1/Reshape_1/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_bn_1/batchnorm/add/y is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_bn_1/gamma is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_bn_1/Reshape_3/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_bn_1/beta is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_bn_1/Reshape_2/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_bn_1/moving_mean is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_bn_1/Reshape/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_conv_2/kernel is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_conv_2/bias is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_bn_2/moving_variance is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_bn_2/Reshape_1/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_bn_2/batchnorm/add/y is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_bn_2/gamma is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_bn_2/Reshape_3/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_bn_2/beta is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_bn_2/Reshape_2/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_bn_2/moving_mean is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_bn_2/Reshape/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_conv_shortcut/kernel is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_conv_shortcut/bias is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_bn_shortcut/moving_variance is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_bn_shortcut/Reshape_1/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_bn_shortcut/batchnorm/add/y is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_bn_shortcut/gamma is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_bn_shortcut/Reshape_3/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_bn_shortcut/beta is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_bn_shortcut/Reshape_2/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_bn_shortcut/moving_mean is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4a_bn_shortcut/Reshape/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4b_conv_1/kernel is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4b_conv_1/bias is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4b_bn_1/moving_variance is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4b_bn_1/Reshape_1/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4b_bn_1/batchnorm/add/y is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4b_bn_1/gamma is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4b_bn_1/Reshape_3/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4b_bn_1/beta is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4b_bn_1/Reshape_2/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4b_bn_1/moving_mean is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4b_bn_1/Reshape/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4b_conv_2/kernel is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4b_conv_2/bias is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4b_bn_2/moving_variance is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4b_bn_2/Reshape_1/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4b_bn_2/batchnorm/add/y is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4b_bn_2/gamma is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4b_bn_2/Reshape_3/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4b_bn_2/beta is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4b_bn_2/Reshape_2/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4b_bn_2/moving_mean is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer block_4b_bn_2/Reshape/shape is not supported on DLA, falling back to GPU.
ERROR: [TRT]: Missing dynamic range for tensor output_bbox/BiasAdd. DLA requires all tensors dynamic range to be known.
ERROR: [TRT]: ../builder/cudnnBuilder2.cpp (3575) - Misc Error in createRegionRanges: -1 (Could not find dynamic range for tensor output_bbox/BiasAdd. )
ERROR: [TRT]: ../builder/cudnnBuilder2.cpp (3575) - Misc Error in createRegionRanges: -1 (Could not find dynamic range for tensor output_bbox/BiasAdd. )
ERROR: Build engine failed from config file

When I run it in FP16 mode, the engine builds fine:

WARNING: [TRT]: Default DLA is enabled but layer block_4b_bn_2/Reshape/shape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer output_bbox/kernel is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer output_cov/kernel is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer output_cov/bias is not supported on DLA, falling back to GPU.
INFO: [TRT]: 
INFO: [TRT]: --------------- Layers running on DLA: 
INFO: [TRT]: {conv1/convolution,conv1/BiasAdd,bn_conv1/batchnorm/mul_1,bn_conv1/batchnorm/add_1,activation_1/Relu6,block_1a_conv_1/convolution,block_1a_conv_1/BiasAdd,block_1a_bn_1/batchnorm/mul_1,block_1a_bn_1/batchnorm/add_1,block_1a_relu_1/Relu6,block_1a_conv_2/convolution,block_1a_conv_2/BiasAdd,block_1a_bn_2/batchnorm/mul_1,block_1a_bn_2/batchnorm/add_1,block_1a_conv_shortcut/convolution,block_1a_conv_shortcut/BiasAdd,block_1a_bn_shortcut/batchnorm/mul_1,block_1a_bn_shortcut/batchnorm/add_1,add_1/add,block_1a_relu/Relu6,block_1b_conv_1/convolution,block_1b_conv_1/BiasAdd,block_1b_bn_1/batchnorm/mul_1,block_1b_bn_1/batchnorm/add_1,block_1b_relu_1/Relu6,block_1b_conv_2/convolution,block_1b_conv_2/BiasAdd,block_1b_bn_2/batchnorm/mul_1,block_1b_bn_2/batchnorm/add_1,add_2/add,block_1b_relu/Relu6,block_2a_conv_1/convolution,block_2a_conv_1/BiasAdd,block_2a_bn_1/batchnorm/mul_1,block_2a_bn_1/batchnorm/add_1,block_2a_relu_1/Relu6,block_2a_conv_2/convolution,block_2a_conv_2/BiasAdd,block_2a_bn_2/batchnorm/mul_1,block_2a_bn_2/batchnorm/add_1,block_2a_conv_shortcut/convolution,block_2a_conv_shortcut/BiasAdd,block_2a_bn_shortcut/batchnorm/mul_1,block_2a_bn_shortcut/batchnorm/add_1,add_3/add,block_2a_relu/Relu6,block_2b_conv_1/convolution,block_2b_conv_1/BiasAdd,block_2b_bn_1/batchnorm/mul_1,block_2b_bn_1/batchnorm/add_1,block_2b_relu_1/Relu6,block_2b_conv_2/convolution,block_2b_conv_2/BiasAdd,block_2b_bn_2/batchnorm/mul_1,block_2b_bn_2/batchnorm/add_1,add_4/add,block_2b_relu/Relu6,block_3a_conv_1/convolution,block_3a_conv_1/BiasAdd,block_3a_bn_1/batchnorm/mul_1,block_3a_bn_1/batchnorm/add_1,block_3a_relu_1/Relu6,block_3a_conv_2/convolution,block_3a_conv_2/BiasAdd,block_3a_bn_2/batchnorm/mul_1,block_3a_bn_2/batchnorm/add_1,block_3a_conv_shortcut/convolution,block_3a_conv_shortcut/BiasAdd,block_3a_bn_shortcut/batchnorm/mul_1,block_3a_bn_shortcut/batchnorm/add_1,add_5/add,block_3a_relu/Relu6,block_3b_conv_1/convolution,block_3b_conv_1/BiasAdd,block_3b_bn_1/batchnorm/mul_1,block_3b_bn_1/batchnorm/add_1,block_3b_relu_1/Relu6,block_3b_conv_2/convolution,block_3b_conv_2/BiasAdd,block_3b_bn_2/batchnorm/mul_1,block_3b_bn_2/batchnorm/add_1,add_6/add,block_3b_relu/Relu6,block_4a_conv_1/convolution,block_4a_conv_1/BiasAdd,block_4a_bn_1/batchnorm/mul_1,block_4a_bn_1/batchnorm/add_1,block_4a_relu_1/Relu6,block_4a_conv_2/convolution,block_4a_conv_2/BiasAdd,block_4a_bn_2/batchnorm/mul_1,block_4a_bn_2/batchnorm/add_1,block_4a_conv_shortcut/convolution,block_4a_conv_shortcut/BiasAdd,block_4a_bn_shortcut/batchnorm/mul_1,block_4a_bn_shortcut/batchnorm/add_1,add_7/add,block_4a_relu/Relu6,block_4b_conv_1/convolution,block_4b_conv_1/BiasAdd,block_4b_bn_1/batchnorm/mul_1,block_4b_bn_1/batchnorm/add_1,block_4b_relu_1/Relu6,block_4b_conv_2/convolution,block_4b_conv_2/BiasAdd,block_4b_bn_2/batchnorm/mul_1,block_4b_bn_2/batchnorm/add_1,add_8/add,block_4b_relu/Relu6,output_bbox/convolution,output_bbox/BiasAdd,output_cov/convolution,output_cov/BiasAdd,output_cov/Sigmoid}, 
INFO: [TRT]: --------------- Layers running on GPU: 
INFO: [TRT]: 
INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x480x640       
1   OUTPUT kFLOAT output_bbox/BiasAdd 4x30x40         
2   OUTPUT kFLOAT output_cov/Sigmoid 1x30x40

Am I missing something?

Hi,

It looks like some layers are not supported by DLA.
Does it work correctly with the GPU backend?

Thanks.

Yes, it works correctly on the GPU. It also works well on DLA in FP16 mode, but why not in INT8 mode on DLA? I see there is a usa_lpd_cal_dla.bin on NGC, which I use when building the engine in INT8 mode.
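
For context, the calibration cache is a plain-text file: after a header line, each entry maps a tensor name to its INT8 scale, stored as a big-endian IEEE-754 float in hex. Below is a minimal sketch to inspect it (assuming the standard EntropyCalibration2 cache format; a missing entry for output_bbox/BiasAdd would explain the dynamic-range error):

import struct

# Print every tensor scale recorded in the calibration cache.
with open("usa_lpd_cal_dla.bin") as f:
    for line in f:
        name, sep, hexval = line.strip().partition(": ")
        if not sep:
            continue  # header line, e.g. "TRT-7103-EntropyCalibration2"
        # each scale is stored as 8 hex digits of a big-endian float
        scale = struct.unpack("!f", bytes.fromhex(hexval))[0]
        print(name, scale)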

Thanks for your feedback.

We are trying to reproduce this issue internally.
Will share more information with you later.

Hi,

We tested usa_pruned.etlt with tao-converter on JetPack 4.5.1, and it works correctly.
Would you mind upgrading your environment and giving it a try?

$ wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/tao/lpdnet/versions/pruned_v1.0/zip -O lpdnet_pruned_v1.0.zip
$ unzip lpdnet_pruned_v1.0.zip 

$ wget https://developer.nvidia.com/tao-converter-jp4.5
$ mv tao-converter-jp4.5 tao-converter-jp4.5.zip
$ unzip tao-converter-jp4.5.zip 
$ sudo chmod +x jp4.5/tao-converter

$ ./jp4.5/tao-converter usa_pruned.etlt -k nvidia_tlt -d 3,480,640 -u 0 -t int8 -c usa_lpd_cal_dla.bin 
[WARNING] Default DLA is enabled but layer output_bbox/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer conv1/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer conv1/bias is not supported on DLA, falling back to GPU.
...
[WARNING] Default DLA is enabled but layer block_4b_bn_2/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer output_bbox/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer output_cov/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer output_cov/bias is not supported on DLA, falling back to GPU.
[INFO] Reading Calibration Cache for calibrator: EntropyCalibration2
[INFO] Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
[INFO] To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
[INFO] 
[INFO] --------------- Layers running on DLA: 
[INFO] {conv1/convolution,conv1/BiasAdd,bn_conv1/batchnorm/mul_1,bn_conv1/batchnorm/add_1,activation_1/Relu6,block_1a_conv_1/convolution,block_1a_conv_1/BiasAdd,block_1a_bn_1/batchnorm/mul_1,block_1a_bn_1/batchnorm/add_1,block_1a_relu_1/Relu6,block_1a_conv_2/convolution,block_1a_conv_2/BiasAdd,block_1a_bn_2/batchnorm/mul_1,block_1a_bn_2/batchnorm/add_1,block_1a_conv_shortcut/convolution,block_1a_conv_shortcut/BiasAdd,block_1a_bn_shortcut/batchnorm/mul_1,block_1a_bn_shortcut/batchnorm/add_1,add_1/add,block_1a_relu/Relu6,block_1b_conv_1/convolution,block_1b_conv_1/BiasAdd,block_1b_bn_1/batchnorm/mul_1,block_1b_bn_1/batchnorm/add_1,block_1b_relu_1/Relu6,block_1b_conv_2/convolution,block_1b_conv_2/BiasAdd,block_1b_bn_2/batchnorm/mul_1,block_1b_bn_2/batchnorm/add_1,add_2/add,block_1b_relu/Relu6,block_2a_conv_1/convolution,block_2a_conv_1/BiasAdd,block_2a_bn_1/batchnorm/mul_1,block_2a_bn_1/batchnorm/add_1,block_2a_relu_1/Relu6,block_2a_conv_2/convolution,block_2a_conv_2/BiasAdd,block_2a_bn_2/batchnorm/mul_1,block_2a_bn_2/batchnorm/add_1,block_2a_conv_shortcut/convolution,block_2a_conv_shortcut/BiasAdd,block_2a_bn_shortcut/batchnorm/mul_1,block_2a_bn_shortcut/batchnorm/add_1,add_3/add,block_2a_relu/Relu6,block_2b_conv_1/convolution,block_2b_conv_1/BiasAdd,block_2b_bn_1/batchnorm/mul_1,block_2b_bn_1/batchnorm/add_1,block_2b_relu_1/Relu6,block_2b_conv_2/convolution,block_2b_conv_2/BiasAdd,block_2b_bn_2/batchnorm/mul_1,block_2b_bn_2/batchnorm/add_1,add_4/add,block_2b_relu/Relu6,block_3a_conv_1/convolution,block_3a_conv_1/BiasAdd,block_3a_bn_1/batchnorm/mul_1,block_3a_bn_1/batchnorm/add_1,block_3a_relu_1/Relu6,block_3a_conv_2/convolution,block_3a_conv_2/BiasAdd,block_3a_bn_2/batchnorm/mul_1,block_3a_bn_2/batchnorm/add_1,block_3a_conv_shortcut/convolution,block_3a_conv_shortcut/BiasAdd,block_3a_bn_shortcut/batchnorm/mul_1,block_3a_bn_shortcut/batchnorm/add_1,add_5/add,block_3a_relu/Relu6,block_3b_conv_1/convolution,block_3b_conv_1/BiasAdd,block_3b_bn_1/batchnorm/mul_1,block_3b_bn_1/batchnorm/add_1,block_3b_relu_1/Relu6,block_3b_conv_2/convolution,block_3b_conv_2/BiasAdd,block_3b_bn_2/batchnorm/mul_1,block_3b_bn_2/batchnorm/add_1,add_6/add,block_3b_relu/Relu6,block_4a_conv_1/convolution,block_4a_conv_1/BiasAdd,block_4a_bn_1/batchnorm/mul_1,block_4a_bn_1/batchnorm/add_1,block_4a_relu_1/Relu6,block_4a_conv_2/convolution,block_4a_conv_2/BiasAdd,block_4a_bn_2/batchnorm/mul_1,block_4a_bn_2/batchnorm/add_1,block_4a_conv_shortcut/convolution,block_4a_conv_shortcut/BiasAdd,block_4a_bn_shortcut/batchnorm/mul_1,block_4a_bn_shortcut/batchnorm/add_1,add_7/add,block_4a_relu/Relu6,block_4b_conv_1/convolution,block_4b_conv_1/BiasAdd,block_4b_bn_1/batchnorm/mul_1,block_4b_bn_1/batchnorm/add_1,block_4b_relu_1/Relu6,block_4b_conv_2/convolution,block_4b_conv_2/BiasAdd,block_4b_bn_2/batchnorm/mul_1,block_4b_bn_2/batchnorm/add_1,add_8/add,block_4b_relu/Relu6,output_bbox/convolution,output_bbox/BiasAdd,output_cov/convolution,output_cov/BiasAdd,output_cov/Sigmoid}, 
[INFO] --------------- Layers running on GPU: 
[INFO] 
[INFO] Detected 1 inputs and 2 output network tensors.

You will then find the serialized TensorRT engine file, saved.engine, in the working directory.
You can use it with DeepStream directly, without converting again.
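
For example, the relevant lines of the nvinfer config would then look like this (a sketch; adjust the engine path and batch size to your setup, and keep the DLA keys so DeepStream validates the engine against the right device):

[property]
model-engine-file=saved.engine
int8-calib-file=usa_lpd_cal_dla.bin
network-mode=1
enable-dla=1
use-dla-core=0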

Thanks.

Is it normal to see these warnings? Are all the LPDNet layers unsupported on DLA?

Hi,

These are layer-level warning messages.
Some LPDNet layers are not supported by DLA, so those layers fall back to the GPU.

Since DLA is a hardware-based engine with limited capacity, not all TensorRT layers can run on DLA. In addition, INT8 on DLA requires a known dynamic range for every tensor, so the calibration cache must contain a scale for each one; this is why the FP16 build succeeds while the INT8 build fails on output_bbox/BiasAdd.
You can find more details in the TensorRT documentation on DLA supported layers.
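
If you build the network through the TensorRT API yourself, you can also query DLA support per layer before building. Below is a minimal sketch with the TensorRT Python API (it assumes `network` already holds the parsed model, and that your TensorRT version exposes IBuilderConfig.can_run_on_dla in the Python bindings):

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# ... parse the model into `network` here (e.g. with the ONNX or UFF parser) ...

config = builder.create_builder_config()
config.default_device_type = trt.DeviceType.DLA  # try to place every layer on DLA
config.DLA_core = 0
config.set_flag(trt.BuilderFlag.GPU_FALLBACK)    # allow per-layer fallback to GPU
config.set_flag(trt.BuilderFlag.INT8)            # match the intended precision

for i in range(network.num_layers):
    layer = network.get_layer(i)
    if not config.can_run_on_dla(layer):
        print("GPU fallback:", layer.name)

Separately, each network tensor exposes an ITensor.dynamic_range property that can be set manually before building as a workaround for the missing-dynamic-range error, although regenerating the calibration cache so it covers every tensor is the cleaner fix.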

Thanks.
