Running TensorRT prints a lot of warning messages about DLA

I am using the TensorRT sample code under /usr/src/tensorrt/samples to run LaneNet (https://github.com/MaybeShewill-CV/lanenet-lane-detection). However, it prints a lot of warning messages, shown below. I have two questions:

  1. It looks like the DLA was not fully used. Is there anything I can do to run my network on the DLA?
  2. Is there a way to suppress these warning messages at run time?

[W] [TRT] DLA supports only 8 subgraphs per DLA core. Switching to GPU for layer lanenet_model/vgg_frontend/vgg16_decode_module/instance_seg_decode/decode_stage_1_fuse/deconv_bn/FusedBatchNorm
[W] [TRT] DLA supports only 8 subgraphs per DLA core. Switching to GPU for layer lanenet_model/vgg_frontend/vgg16_decode_module/binary_seg_decode/decode_stage_4_fuse/deconv_bn/FusedBatchNorm
[W] [TRT] DLA supports only 8 subgraphs per DLA core. Switching to GPU for layer lanenet_model/vgg_frontend/vgg16_decode_module/binary_seg_decode/decode_stage_4_fuse/fuse_gn/FusedBatchNorm
[W] [TRT] DLA supports only 8 subgraphs per DLA core. Switching to GPU for layer lanenet_model/vgg_frontend/vgg16_decode_module/binary_seg_decode/decode_stage_3_fuse/deconv_bn/FusedBatchNorm
[W] [TRT] DLA supports only 8 subgraphs per DLA core. Switching to GPU for layer lanenet_model/vgg_frontend/vgg16_decode_module/binary_seg_decode/decode_stage_3_fuse/fuse_gn/FusedBatchNorm
[W] [TRT] DLA supports only 8 subgraphs per DLA core. Switching to GPU for layer lanenet_model/vgg_frontend/vgg16_decode_module/binary_seg_decode/decode_stage_2_fuse/deconv_bn/FusedBatchNorm
[W] [TRT] DLA supports only 8 subgraphs per DLA core. Switching to GPU for layer lanenet_model/vgg_frontend/vgg16_decode_module/binary_seg_decode/decode_stage_2_fuse/fuse_gn/FusedBatchNorm
[W] [TRT] DLA supports only 8 subgraphs per DLA core. Switching to GPU for layer lanenet_model/vgg_frontend/vgg16_decode_module/binary_seg_decode/decode_stage_1_fuse/deconv_bn/FusedBatchNorm

Hi,

1. This warning indicates that the network exceeds the DLA's capacity: each DLA core accepts at most 8 subgraphs, so the layers that do not fit are redirected to the GPU.
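
If you want to keep as much of the network on the DLA as possible, request DLA with GPU fallback when you build the engine. Below is a minimal sketch using the C++ builder API, assuming you already have an nvinfer1::IBuilderConfig named config (exact API placement varies a little between TensorRT versions):

    config->setDefaultDeviceType(nvinfer1::DeviceType::kDLA); // prefer DLA for every layer
    config->setDLACore(0);                                    // run on DLA core 0
    config->setFlag(nvinfer1::BuilderFlag::kGPU_FALLBACK);    // unsupported layers fall back to the GPU

With trtexec the equivalent options are --useDLACore=0 and --allowGPUFallback. Note that fallback only relaxes the restriction; a network that splits into more than 8 DLA subgraphs will still have the extra subgraphs placed on the GPU, as the warnings above show.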

2. Yes.
Suppose you are using the binary located at /usr/src/tensorrt/bin/trtexec.
You can patch the file below to the log level you prefer; the added line forces the reportable severity to kERROR, which suppresses warning and info messages:

diff --git a/samples/trtexec/trtexec.cpp b/samples/trtexec/trtexec.cpp
index 5f46881..e3dc23e 100644
--- a/samples/trtexec/trtexec.cpp
+++ b/samples/trtexec/trtexec.cpp
@@ -257,6 +257,7 @@ int main(int argc, char** argv)
     {
         setReportableSeverity(Severity::kVERBOSE);
     }
+    setReportableSeverity(Severity::kERROR);
 
     cudaSetDevice(options.system.device);
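
After saving the change, rebuild the sample so the trtexec binary picks up the new severity (for the stock samples, running make inside /usr/src/tensorrt/samples/trtexec is usually sufficient).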

The available severity levels are documented in the ILogger API reference:
https://docs.nvidia.com/deeplearning/tensorrt/api/c_api/classnvinfer1_1_1_i_logger.html#a9c6b909485471b60b4f7aaa78c27a389
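
If you call TensorRT from your own application rather than through trtexec, you can get the same effect with a custom logger that drops everything below the severity you care about. A minimal sketch (the exact log() signature differs slightly between versions; the noexcept qualifier is required from TensorRT 8 onward):

    #include <iostream>
    #include "NvInfer.h"

    // Logger that forwards only errors, silencing the DLA fallback warnings.
    class QuietLogger : public nvinfer1::ILogger
    {
    public:
        void log(Severity severity, const char* msg) noexcept override
        {
            // Severity ordering: kINTERNAL_ERROR < kERROR < kWARNING < kINFO < kVERBOSE
            if (severity <= Severity::kERROR)
            {
                std::cerr << msg << std::endl;
            }
        }
    };

    // Usage: pass the logger when creating the builder or runtime, e.g.
    //   QuietLogger gLogger;
    //   auto builder = nvinfer1::createInferBuilder(gLogger);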

Thanks.