Failed to convert .uff to .engine file when DLA is enabled (out of memory)

Hi,

I have encountered errors when trying to run a GStreamer pipeline with DeepStream inference running on the DLA.

0:00:00.422954019 22974   0x5582881a90 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
WARNING: DLA does not support FP32 precision type, using FP16 mode.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_0/weights/read/_166__cf__169 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/BatchNorm/batchnorm/sub/_165__cf__168 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_1_depthwise/depthwise_weights/read/_163__cf__166 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/BatchNorm/batchnorm/mul/_161__cf__164 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/BatchNorm/batchnorm/sub/_162__cf__165 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_1_pointwise/weights/read/_160__cf__163 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_pointwise/BatchNorm/batchnorm/sub/_159__cf__162 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_2_depthwise/depthwise_weights/read/_157__cf__160 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_2_depthwise/BatchNorm/batchnorm/mul/_155__cf__158 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_2_depthwise/BatchNorm/batchnorm/sub/_156__cf__159 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_2_pointwise/weights/read/_154__cf__157 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_2_pointwise/BatchNorm/batchnorm/sub/_153__cf__156 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_3_depthwise/depthwise_weights/read/_151__cf__154 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_3_depthwise/BatchNorm/batchnorm/mul/_149__cf__152 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_3_depthwise/BatchNorm/batchnorm/sub/_150__cf__153 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_3_pointwise/weights/read/_148__cf__151 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_3_pointwise/BatchNorm/batchnorm/sub/_147__cf__150 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_4_depthwise/depthwise_weights/read/_145__cf__148 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_4_depthwise/BatchNorm/batchnorm/mul/_143__cf__146 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_4_depthwise/BatchNorm/batchnorm/sub/_144__cf__147 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_4_pointwise/weights/read/_142__cf__145 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_4_pointwise/BatchNorm/batchnorm/sub/_141__cf__144 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_5_depthwise/depthwise_weights/read/_139__cf__142 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_5_depthwise/BatchNorm/batchnorm/mul/_137__cf__140 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_5_depthwise/BatchNorm/batchnorm/sub/_138__cf__141 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_5_pointwise/weights/read/_136__cf__139 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_5_pointwise/BatchNorm/batchnorm/sub/_135__cf__138 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_6_depthwise/depthwise_weights/read/_133__cf__136 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_6_depthwise/BatchNorm/batchnorm/mul/_131__cf__134 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_6_depthwise/BatchNorm/batchnorm/sub/_132__cf__135 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_6_pointwise/weights/read/_130__cf__133 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_6_pointwise/BatchNorm/batchnorm/sub/_129__cf__132 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_7_depthwise/depthwise_weights/read/_127__cf__130 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_7_depthwise/BatchNorm/batchnorm/mul/_125__cf__128 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_7_depthwise/BatchNorm/batchnorm/sub/_126__cf__129 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_7_pointwise/weights/read/_124__cf__127 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_7_pointwise/BatchNorm/batchnorm/sub/_123__cf__126 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_8_depthwise/depthwise_weights/read/_121__cf__124 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_8_depthwise/BatchNorm/batchnorm/mul/_119__cf__122 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_8_depthwise/BatchNorm/batchnorm/sub/_120__cf__123 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_8_pointwise/weights/read/_118__cf__121 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_8_pointwise/BatchNorm/batchnorm/sub/_117__cf__120 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_9_depthwise/depthwise_weights/read/_115__cf__118 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_9_depthwise/BatchNorm/batchnorm/mul/_113__cf__116 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_9_depthwise/BatchNorm/batchnorm/sub/_114__cf__117 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_9_pointwise/weights/read/_112__cf__115 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_9_pointwise/BatchNorm/batchnorm/sub/_111__cf__114 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_10_depthwise/depthwise_weights/read/_109__cf__112 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_10_depthwise/BatchNorm/batchnorm/mul/_107__cf__110 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_10_depthwise/BatchNorm/batchnorm/sub/_108__cf__111 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_10_pointwise/weights/read/_106__cf__109 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_10_pointwise/BatchNorm/batchnorm/sub/_105__cf__108 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_11_depthwise/depthwise_weights/read/_103__cf__106 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_11_depthwise/BatchNorm/batchnorm/mul/_101__cf__104 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_11_depthwise/BatchNorm/batchnorm/sub/_102__cf__105 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_11_pointwise/weights/read/_100__cf__103 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_11_pointwise/BatchNorm/batchnorm/sub/_99__cf__102 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_0/BoxEncodingPredictor/weights/read/_178__cf__181 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_0/BoxEncodingPredictor/biases/read/_177__cf__180 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_0/strided_slice/stack is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_0/strided_slice/stack_1 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_0/strided_slice/stack_2 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_0/stack/1 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_0/stack/2 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_0/stack/3 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer (Unnamed Layer* 147) [Shuffle] is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_0/Reshape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_12_depthwise/depthwise_weights/read/_97__cf__100 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_12_depthwise/BatchNorm/batchnorm/mul/_95__cf__98 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_12_depthwise/BatchNorm/batchnorm/sub/_96__cf__99 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_12_pointwise/weights/read/_94__cf__97 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_12_pointwise/BatchNorm/batchnorm/sub/_93__cf__96 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_13_depthwise/depthwise_weights/read/_91__cf__94 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_13_depthwise/BatchNorm/batchnorm/mul/_89__cf__92 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_13_depthwise/BatchNorm/batchnorm/sub/_90__cf__93 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_13_pointwise/weights/read/_88__cf__91 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_13_pointwise/BatchNorm/batchnorm/sub/_87__cf__90 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_1/BoxEncodingPredictor/weights/read/_176__cf__179 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_1/BoxEncodingPredictor/biases/read/_175__cf__178 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_1/strided_slice/stack is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_1/strided_slice/stack_1 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_1/strided_slice/stack_2 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_1/stack/1 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_1/stack/2 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_1/stack/3 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer (Unnamed Layer* 183) [Shuffle] is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_1/Reshape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_1_Conv2d_2_1x1_256/weights/read/_85__cf__88 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_1_Conv2d_2_1x1_256/BatchNorm/batchnorm/sub/_84__cf__87 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_2_Conv2d_2_3x3_s2_512/weights/read/_82__cf__85 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_2_Conv2d_2_3x3_s2_512/BatchNorm/batchnorm/sub/_81__cf__84 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_2/BoxEncodingPredictor/weights/read/_174__cf__177 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_2/BoxEncodingPredictor/biases/read/_173__cf__176 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_2/strided_slice/stack is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_2/strided_slice/stack_1 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_2/strided_slice/stack_2 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_2/stack/1 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_2/stack/2 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_2/stack/3 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer (Unnamed Layer* 205) [Shuffle] is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_2/Reshape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_1_Conv2d_3_1x1_128/weights/read/_79__cf__82 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_1_Conv2d_3_1x1_128/BatchNorm/batchnorm/sub/_78__cf__81 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_2_Conv2d_3_3x3_s2_256/weights/read/_76__cf__79 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_2_Conv2d_3_3x3_s2_256/BatchNorm/batchnorm/sub/_75__cf__78 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_3/BoxEncodingPredictor/weights/read/_172__cf__175 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_3/BoxEncodingPredictor/biases/read/_171__cf__174 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_3/strided_slice/stack is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_3/strided_slice/stack_1 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_3/strided_slice/stack_2 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_3/stack/1 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_3/stack/2 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_3/stack/3 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer (Unnamed Layer* 227) [Shuffle] is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_3/Reshape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_1_Conv2d_4_1x1_128/weights/read/_73__cf__76 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_1_Conv2d_4_1x1_128/BatchNorm/batchnorm/sub/_72__cf__75 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_2_Conv2d_4_3x3_s2_256/weights/read/_70__cf__73 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_2_Conv2d_4_3x3_s2_256/BatchNorm/batchnorm/sub/_69__cf__72 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_4/BoxEncodingPredictor/weights/read/_170__cf__173 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_4/BoxEncodingPredictor/biases/read/_169__cf__172 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_4/strided_slice/stack is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_4/strided_slice/stack_1 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_4/strided_slice/stack_2 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_4/stack/1 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_4/stack/2 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_4/stack/3 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer (Unnamed Layer* 249) [Shuffle] is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_4/Reshape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_1_Conv2d_5_1x1_64/weights/read/_67__cf__70 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_1_Conv2d_5_1x1_64/BatchNorm/batchnorm/sub/_66__cf__69 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_2_Conv2d_5_3x3_s2_128/weights/read/_64__cf__67 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer FeatureExtractor/MobilenetV1/Conv2d_13_pointwise_2_Conv2d_5_3x3_s2_128/BatchNorm/batchnorm/sub/_63__cf__66 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_5/BoxEncodingPredictor/weights/read/_168__cf__171 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_5/BoxEncodingPredictor/biases/read/_167__cf__170 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_5/strided_slice/stack is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_5/strided_slice/stack_1 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_5/strided_slice/stack_2 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_5/stack/1 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_5/stack/2 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_5/stack/3 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer (Unnamed Layer* 271) [Shuffle] is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_5/Reshape is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer Squeeze is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer strided_slice_6/stack is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer strided_slice_6/stack_1 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer strided_slice_6/stack_2 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer strided_slice_7/stack is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer strided_slice_7/stack_1 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer strided_slice_7/stack_2 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer GridAnchor is not supported on DLA, falling back to GPU.
WARNING: [TRT]: concat_priorbox: DLA only supports concatenation on the C dimension.
WARNING: [TRT]: Default DLA is enabled but layer concat_priorbox is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_0/ClassPredictor/weights/read/_52__cf__55 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_0/ClassPredictor/biases/read/_51__cf__54 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_0/stack_1/1 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_0/stack_1/2 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer (Unnamed Layer* 288) [Shuffle] is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_0/Reshape_1 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_1/ClassPredictor/weights/read/_50__cf__53 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_1/ClassPredictor/biases/read/_49__cf__52 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_1/stack_1/1 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_1/stack_1/2 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer (Unnamed Layer* 296) [Shuffle] is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_1/Reshape_1 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_2/ClassPredictor/weights/read/_48__cf__51 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_2/ClassPredictor/biases/read/_47__cf__50 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_2/stack_1/1 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_2/stack_1/2 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer (Unnamed Layer* 304) [Shuffle] is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_2/Reshape_1 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_3/ClassPredictor/weights/read/_46__cf__49 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_3/ClassPredictor/biases/read/_45__cf__48 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_3/stack_1/1 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_3/stack_1/2 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer (Unnamed Layer* 312) [Shuffle] is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_3/Reshape_1 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_4/ClassPredictor/weights/read/_44__cf__47 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_4/ClassPredictor/biases/read/_43__cf__46 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_4/stack_1/1 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_4/stack_1/2 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer (Unnamed Layer* 320) [Shuffle] is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_4/Reshape_1 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_5/ClassPredictor/weights/read/_42__cf__45 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_5/ClassPredictor/biases/read/_41__cf__44 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_5/stack_1/1 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_5/stack_1/2 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer (Unnamed Layer* 328) [Shuffle] is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer BoxPredictor_5/Reshape_1 is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer concat_box_conf is not supported on DLA, falling back to GPU.
WARNING: [TRT]: Default DLA is enabled but layer NMS is not supported on DLA, falling back to GPU.
INFO: [TRT]: 
INFO: [TRT]: --------------- Layers running on DLA: 
INFO: [TRT]: {FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/BatchNorm/batchnorm/mul_1,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/BatchNorm/batchnorm/add_1,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Relu6,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/depthwise,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/BatchNorm/batchnorm/mul_1,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/BatchNorm/batchnorm/add_1,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/Relu6,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_pointwise/BatchNorm/batchnorm/mul_1,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_pointwise/BatchNorm/batchnorm/add_1,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_pointwise/Relu6,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_2_depthwise/depthwise,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_2_depthwise/BatchNorm/batchnorm/mul_1,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_2_depthwise/BatchNorm/batchnorm/add_1,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_2_depthwise/Relu6,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_2_pointwise/BatchNorm/batchnorm/mul_1,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_2_pointwise/BatchNorm/batchnorm/add_1,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_2_pointwise/Relu6,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_3_depthwise/depthwise,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_3_depthwise/BatchNorm/batchnorm/mul_1,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_3_depthwise/BatchNorm/batchnorm/add_1,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_3_depthwise/Relu6,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_3_pointwise/BatchNorm/batchnorm/mul_1,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_3_pointwise/BatchNorm/batchnorm/add_1,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_3_pointwise/Relu6,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_4_depthwise/depthwise,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_4_depthwise/BatchNorm/batchnorm/mul_1,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_4_depthwise/BatchNorm/batchnorm/add_1,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_4_depthwise/Relu6,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_4_pointwise/BatchNorm/batchnorm/mul_1,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_4_pointwise/BatchNorm/batchnorm/add_1,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_4_pointwise/Relu6,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_5_depthwise/depthwise,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_5_depthwise/BatchNorm/batchnorm/mul_1,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_5_depthwise/BatchNorm/batchnorm/add_1,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_5_depthwise/Relu6,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_5_pointwise/BatchNorm/batchnorm/mul_1,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_5_pointwise/BatchNorm/batchnorm/add_1,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_5_pointwise/Relu6,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_6_depthwise/depthwise,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_6_depthwise/BatchNorm/batchnorm/mul_1,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_6_depthwise/BatchNorm/batchnorm/add_1,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_6_depthwise/Relu6,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_6_pointwise/BatchNorm/batchnorm/mul_1,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_6_pointwise/BatchNorm/batchnorm/add_1,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_6_pointwise/Relu6,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_7_depthwi
se/depthwise,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_7_depthwise/BatchNorm/batchnorm/mul_1,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_7_depthwise/BatchNorm/batchnorm/add_1,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_7_depthwise/Relu6,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_7_pointwise/BatchNorm/batchnorm/mul_1,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_7_pointwise/BatchNorm/batchnorm/add_1,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_7_pointwise/Relu6,FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_8_depthw
INFO: [TRT]: --------------- Layers running on GPU: 
INFO: [TRT]: GridAnchor, GridAnchor copy, GridAnchor_1 copy, GridAnchor_2 copy, GridAnchor_3 copy, GridAnchor_4 copy, GridAnchor_5 copy, (Unnamed Layer* 147) [Shuffle] + BoxPredictor_0/Reshape, (Unnamed Layer* 183) [Shuffle] + BoxPredictor_1/Reshape, (Unnamed Layer* 205) [Shuffle] + BoxPredictor_2/Reshape, (Unnamed Layer* 227) [Shuffle] + BoxPredictor_3/Reshape, (Unnamed Layer* 249) [Shuffle] + BoxPredictor_4/Reshape, (Unnamed Layer* 271) [Shuffle] + BoxPredictor_5/Reshape, Squeeze, (Unnamed Layer* 288) [Shuffle] + BoxPredictor_0/Reshape_1, (Unnamed Layer* 296) [Shuffle] + BoxPredictor_1/Reshape_1, (Unnamed Layer* 304) [Shuffle] + BoxPredictor_2/Reshape_1, (Unnamed Layer* 312) [Shuffle] + BoxPredictor_3/Reshape_1, (Unnamed Layer* 320) [Shuffle] + BoxPredictor_4/Reshape_1, (Unnamed Layer* 328) [Shuffle] + BoxPredictor_5/Reshape_1, concat_box_conf, NMS, 
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc

From the error, it seems there is not enough memory for the conversion to complete.
Any ideas are much appreciated.

Thanks,
Vincent

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or sample application, and the function description.)

Hi,

Hardware Platform: NVIDIA Jetson AGX Xavier (8-core, 32 GB)
DeepStream version: 5.0
JetPack Version: R32 (release), REVISION: 4.3, GCID: 21589087, BOARD: t186ref, EABI: aarch64, DATE: Fri Jun 26 04:34:27 UTC 2020
TensorRT Version: 7.1.3-1+cuda10.2
Issue: Failed to convert .uff to .engine with DLA enabled

config.txt:

[property]
gpu-id=0
net-scale-factor=0.0078431372
offsets=127.5;127.5;127.5
model-color-format=0

#model-engine-file=frozen_inference_graph.uff_b1_dla0_fp16.engine
uff-file=frozen_inference_graph.uff
uff-input-dims=3;300;300;0
uff-input-blob-name=Input
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=91
interval=0
gie-unique-id=1
is-classifier=0
output-blob-names=NMS
#output-blob-names=scores;boxes
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=libnvdsinfer_custom_impl_ssd.so
enable-dla=1
allowGPUFallback=0
useDLA=1
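
For reference, a minimal sketch of the DLA-related keys as documented for Gst-nvinfer (enable-dla and use-dla-core); I am not certain the allowGPUFallback and useDLA lines above are recognized keys, so treat this as an assumption rather than a confirmed fix:

# documented Gst-nvinfer DLA keys (values assumed for this setup)
enable-dla=1
use-dla-core=0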

GStreamer + DeepStream pipeline:

    gst-launch-1.0 -v -e \
    filesrc location=$VIDEO_0 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 \
    nvstreammux name=m batch-size=8 width=1920 height=1080 ! \
    nvinfer config-file-path=$CONFIG_FILE_PATH batch-size=4 unique-id=1 ! \
    nvmultistreamtiler rows=2 columns=4 width=960 height=540 ! nvvideoconvert ! nvdsosd ! nvegltransform ! fpsdisplaysink video-sink=nveglglessink text-overlay=false sync=false \
    filesrc location=$VIDEO_1 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_1 \
    filesrc location=$VIDEO_2 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_2 \
    filesrc location=$VIDEO_3 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_3 \
    filesrc location=$VIDEO_4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_4 \
    filesrc location=$VIDEO_5 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_5 \
    filesrc location=$VIDEO_6 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_6 \
    filesrc location=$VIDEO_7 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_7

How to reproduce the issue: run the pipeline without an existing engine file, which forces regeneration of the engine with DLA enabled.
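
For example (a sketch; the engine file name is taken from the commented-out model-engine-file entry above, so it is an assumption):

    # delete any cached engine so nvinfer has to rebuild it with DLA enabled
    rm -f frozen_inference_graph.uff_b1_dla0_fp16.engine
    # then launch the gst-launch-1.0 pipeline above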

Affected Plugin: nvinfer

Thanks and best regards,
Vincent

Hi,

The error is generated by TensorRT.
Would you mind testing your model with trtexec using the --verbose flag and sharing the output with us?

$ /usr/src/tensorrt/bin/trtexec --verbose ...

Thanks.

Hi,

Below is the output from trtexec:

output.txt (566.9 KB)

The output is uploaded as a file due to the post text limit.

Thanks,
Vincent

Hi,

Thanks for sharing.
Would you mind generating the log with DLA enabled?

Thanks.

Hi,

This is the error from running the following command:

/usr/src/tensorrt/bin/trtexec --verbose --uff=frozen_inference_graph.uff --batch=1 --uffInput=Input,3,300,300 --output=NMS --saveEngine=mobilenetssd.trt --useDLACore=1 2>&1 > output_DLA.txt
[W] [TRT] concat_priorbox: DLA only supports concatenation on the C dimension.
[E] [TRT] Default DLA is enabled but layer GridAnchor copy is not supported on DLA and falling back to GPU is not enabled.
[E] [TRT] Default DLA is enabled but layer GridAnchor_1 copy is not supported on DLA and falling back to GPU is not enabled.
[E] [TRT] Default DLA is enabled but layer GridAnchor_2 copy is not supported on DLA and falling back to GPU is not enabled.
[E] [TRT] Default DLA is enabled but layer GridAnchor_3 copy is not supported on DLA and falling back to GPU is not enabled.
[E] [TRT] Default DLA is enabled but layer GridAnchor_4 copy is not supported on DLA and falling back to GPU is not enabled.
[E] [TRT] Default DLA is enabled but layer GridAnchor_5 copy is not supported on DLA and falling back to GPU is not enabled.
[E] Engine creation failed
[E] Engine set up failed

The output file is attached.
output_DLA.txt (175.6 KB)

Thanks,
Vincent

Hi,

Based on the log, some operations are not supported by the DLA (e.g. GridAnchor).
Please add the --allowGPUFallback flag to enable automatic fallback to the GPU.
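
For example, your earlier command with the fallback flag added (a sketch; same model, paths, and DLA core as before):

/usr/src/tensorrt/bin/trtexec --verbose --uff=frozen_inference_graph.uff --batch=1 --uffInput=Input,3,300,300 --output=NMS --saveEngine=mobilenetssd.trt --useDLACore=1 --allowGPUFallback

With fallback enabled, the unsupported layers (GridAnchor, the Shuffle/Reshape layers, concat_box_conf, NMS) should run on the GPU while the rest of the network stays on the DLA.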

Thanks.

Hi Aasta,

Thanks a lot for your help.

Vincent