Transfer Learning Toolkit DLA warning messages

Hi,

We are working with the Transfer Learning Toolkit (TLT) to develop an object detection model (retraining a DashCamNet model) to run on the DLA of a Jetson Xavier NX.

For some time we have been investigating warning messages that appear when building the model on the Jetson Xavier NX, which indicate that some layers are not supported by the DLA (warning_messages.png).

After reading the documentation [1], we tried changing the precision from FP32 to INT8, changing the structure of the model, using QAT during training followed by post-training quantization, and even using DashCamNet's own pruned model [2], but we keep getting the same messages. We have also searched the forums for similar problems, and the only answer we found was the same as what the documentation already says. We would like to know clearly where the layers of the model run, not only when the engine is built but also at execution time. Finally, we tried the tao-converter command, which reports which layers run on the DLA and on the GPU at engine-build time, but we found the following information (tao_converter.png).
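
For reference, the QAT/INT8 export step mentioned above looks roughly as follows; the paths are illustrative, and the flag set is the one we understand the TLT 3.0 DetectNet_v2 export documentation describes for QAT-trained models, where the calibration scales are extracted from the model into the cache file:

tlt detectnet_v2 export -m /workspace/models/dashcamnet_retrained_qat.tlt \
                        -o /workspace/models/model_1.etlt \
                        -k tlt_encode \
                        --data_type int8 \
                        --cal_cache_file /workspace/models/model_1_int8.txt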

Seeing this, we wonder why no layer is reported as running on the GPU, when the previous WARNING messages say that some layers are not supported by the DLA and will therefore fall back to the GPU. We have also looked at trtexec, but since our models are .tlt files it is not possible to convert them [3] to the ONNX/Caffe models that trtexec needs for what we want. In any case, we would like to know whether you have any tool to see where the layers are being executed at runtime, and whether you could clarify the tao-converter output with respect to the warnings mentioned at the beginning.
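
One option we have not been able to verify on our JetPack version is that, as far as we understand, newer trtexec builds (TensorRT 8.2 or later) can load an already-built engine directly, which would sidestep the missing ONNX/Caffe model. A sketch with an illustrative engine path:

/usr/src/tensorrt/bin/trtexec --loadEngine=/home/models/model_1.engine \
                              --useDLACore=0 \
                              --dumpLayerInfo \
                              --exportLayerInfo=layer_info.json \
                              --dumpProfile \
                              --separateProfileRun

Here --dumpProfile would report per-layer timings from an actual inference run, and --dumpLayerInfo would print the layer information stored in the engine, although as far as we understand that information may be limited unless the engine was built with detailed profiling verbosity.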

Thank you very much for your time and dedication.

Links:

[1] Developer Guide :: NVIDIA Deep Learning TensorRT Documentation
[2] https://catalog.ngc.nvidia.com/orgs/nvidia/models/tlt_dashcamnet/files?version=pruned_v1.0
[3] Convert RN TLT to onnx

Could you please share the full command you use when you run tao-converter?

Hi,

Of course, here is the command we used:

./tao-converter -k tlt_encode \
                -d 3,480,640 \
                -o output_bbox/BiasAdd,output_cov/Sigmoid \
                -e /home/models/model_1.engine \
                -t int8 \
                -c /home/models/model_1_int8.txt \
                -u 0 \
                /home/models/model_1.etlt

And this is its output:

[INFO] [MemUsageChange] Init CUDA: CPU +354, GPU +0, now: CPU 372, GPU 5660 (MiB)

[WARNING] Default DLA is enabled but layer output_bbox/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer conv1/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer conv1/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer bn_conv1/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer bn_conv1/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer bn_conv1/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer bn_conv1/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer bn_conv1/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer bn_conv1/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer bn_conv1/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer bn_conv1/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer bn_conv1/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_conv_1/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_conv_1/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_1/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_1/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_1/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_1/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_1/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_1/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_1/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_1/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_1/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_conv_2/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_conv_2/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_2/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_2/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_2/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_2/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_2/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_2/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_2/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_2/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_2/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_conv_shortcut/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_conv_shortcut/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_shortcut/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_shortcut/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_shortcut/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_shortcut/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_shortcut/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_shortcut/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_shortcut/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_shortcut/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_shortcut/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_conv_1/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_conv_1/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_1/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_1/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_1/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_1/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_1/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_1/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_1/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_1/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_1/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_conv_2/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_conv_2/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_2/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_2/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_2/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_2/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_2/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_2/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_2/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_2/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_2/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_conv_1/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_conv_1/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_1/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_1/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_1/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_1/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_1/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_1/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_1/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_1/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_1/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_conv_2/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_conv_2/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_2/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_2/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_2/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_2/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_2/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_2/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_2/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_2/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_2/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_conv_shortcut/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_conv_shortcut/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_shortcut/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_shortcut/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_shortcut/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_shortcut/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_shortcut/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_shortcut/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_shortcut/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_shortcut/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_shortcut/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_conv_1/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_conv_1/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_1/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_1/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_1/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_1/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_1/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_1/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_1/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_1/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_1/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_conv_2/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_conv_2/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_2/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_2/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_2/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_2/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_2/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_2/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_2/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_2/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_2/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_conv_1/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_conv_1/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_1/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_1/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_1/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_1/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_1/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_1/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_1/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_1/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_1/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_conv_2/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_conv_2/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_2/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_2/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_2/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_2/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_2/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_2/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_2/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_2/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_2/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_conv_shortcut/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_conv_shortcut/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_shortcut/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_shortcut/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_shortcut/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_shortcut/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_shortcut/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_shortcut/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_shortcut/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_shortcut/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_shortcut/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_conv_1/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_conv_1/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_1/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_1/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_1/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_1/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_1/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_1/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_1/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_1/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_1/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_conv_2/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_conv_2/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_2/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_2/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_2/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_2/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_2/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_2/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_2/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_2/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_2/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_conv_1/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_conv_1/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_1/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_1/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_1/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_1/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_1/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_1/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_1/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_1/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_1/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_conv_2/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_conv_2/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_2/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_2/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_2/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_2/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_2/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_2/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_2/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_2/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_2/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_conv_shortcut/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_conv_shortcut/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_shortcut/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_shortcut/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_shortcut/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_shortcut/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_shortcut/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_shortcut/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_shortcut/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_shortcut/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_shortcut/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_conv_1/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_conv_1/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_1/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_1/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_1/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_1/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_1/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_1/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_1/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_1/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_1/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_conv_2/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_conv_2/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_2/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_2/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_2/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_2/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_2/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_2/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_2/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_2/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_2/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer output_bbox/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer output_cov/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer output_cov/bias is not supported on DLA, falling back to GPU.
[INFO] [MemUsageSnapshot] Builder begin: CPU 458 MiB, GPU 5792 MiB
[INFO] Reading Calibration Cache for calibrator: EntropyCalibration2
[INFO] Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
[INFO] To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
[WARNING] Missing scale and zero-point for tensor output_bbox/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor conv1/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor conv1/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor conv1/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor conv1/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor bn_conv1/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor bn_conv1/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor bn_conv1/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor bn_conv1/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor bn_conv1/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor bn_conv1/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor bn_conv1/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor bn_conv1/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor bn_conv1/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor bn_conv1/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor bn_conv1/batchnorm/add_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_conv_1/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_conv_1/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_conv_1/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_conv_1/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_1/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_1/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_1/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_1/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_1/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_1/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_1/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_1/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_1/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_1/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_1/batchnorm/add_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_conv_2/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_conv_2/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_conv_2/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_conv_2/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_2/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_2/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_2/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_2/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_2/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_2/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_2/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_2/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_2/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_2/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_conv_shortcut/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_conv_shortcut/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_conv_shortcut/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_conv_shortcut/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_shortcut/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_shortcut/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_shortcut/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_shortcut/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_shortcut/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_shortcut/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_shortcut/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_shortcut/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_shortcut/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_shortcut/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_conv_1/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_conv_1/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_conv_1/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_conv_1/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_1/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_1/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_1/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_1/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_1/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_1/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_1/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_1/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_1/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_1/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_1/batchnorm/add_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_conv_2/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_conv_2/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_conv_2/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_conv_2/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_2/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_2/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_2/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_2/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_2/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_2/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_2/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_2/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_2/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_2/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_conv_1/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_conv_1/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_conv_1/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_conv_1/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_1/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_1/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_1/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_1/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_1/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_1/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_1/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_1/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_1/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_1/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_1/batchnorm/add_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_conv_2/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_conv_2/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_conv_2/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_conv_2/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_2/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_2/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_2/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_2/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_2/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_2/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_2/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_2/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_2/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_2/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_conv_shortcut/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_conv_shortcut/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_conv_shortcut/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_conv_shortcut/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_shortcut/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_shortcut/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_shortcut/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_shortcut/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_shortcut/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_shortcut/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_shortcut/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_shortcut/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_shortcut/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_shortcut/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_conv_1/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_conv_1/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_conv_1/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_conv_1/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_1/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_1/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_1/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_1/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_1/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_1/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_1/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_1/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_1/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_1/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_1/batchnorm/add_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_conv_2/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_conv_2/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_conv_2/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_conv_2/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_2/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_2/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_2/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_2/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_2/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_2/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_2/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_2/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_2/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_2/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_conv_1/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_conv_1/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_conv_1/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_conv_1/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_1/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_1/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_1/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_1/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_1/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_1/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_1/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_1/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_1/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_1/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_1/batchnorm/add_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_conv_2/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_conv_2/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_conv_2/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_conv_2/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_2/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_2/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_2/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_2/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_2/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_2/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_2/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_2/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_2/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_2/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_conv_shortcut/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_conv_shortcut/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_conv_shortcut/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_conv_shortcut/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_shortcut/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_shortcut/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_shortcut/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_shortcut/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_shortcut/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_shortcut/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_shortcut/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_shortcut/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_shortcut/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_shortcut/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_conv_1/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_conv_1/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_conv_1/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_conv_1/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_1/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_1/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_1/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_1/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_1/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_1/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_1/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_1/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_1/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_1/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_1/batchnorm/add_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_conv_2/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_conv_2/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_conv_2/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_conv_2/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_2/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_2/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_2/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_2/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_2/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_2/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_2/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_2/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_2/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_2/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_conv_1/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_conv_1/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_conv_1/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_conv_1/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_1/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_1/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_1/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_1/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_1/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_1/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_1/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_1/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_1/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_1/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_1/batchnorm/add_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_conv_2/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_conv_2/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_conv_2/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_conv_2/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_2/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_2/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_2/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_2/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_2/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_2/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_2/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_2/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_2/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_2/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_conv_shortcut/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_conv_shortcut/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_conv_shortcut/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_conv_shortcut/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_shortcut/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_shortcut/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_shortcut/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_shortcut/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_shortcut/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_shortcut/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_shortcut/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_shortcut/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_shortcut/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_shortcut/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_conv_1/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_conv_1/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_conv_1/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_conv_1/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_1/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_1/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_1/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_1/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_1/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_1/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_1/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_1/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_1/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_1/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_1/batchnorm/add_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_conv_2/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_conv_2/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_conv_2/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_conv_2/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_2/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_2/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_2/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_2/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_2/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_2/batchnorm/mul_1, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_2/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_2/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_2/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_2/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor output_bbox/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor output_bbox/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor output_bbox/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor output_cov/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor output_cov/convolution, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor output_cov/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor output_cov/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor output_cov/Sigmoid, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[INFO] ---------- Layers Running on DLA ----------
[INFO] [DlaLayer] {ForeignNode[conv1/convolution...output_cov/Sigmoid]}
[INFO] ---------- Layers Running on GPU ----------
[INFO] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +226, GPU +59, now: CPU 728, GPU 6024 (MiB)
[INFO] [MemUsageChange] Init cuDNN: CPU +307, GPU +310, now: CPU 1035, GPU 6334 (MiB)
[WARNING] Detected invalid timing cache, setup a local cache instead
[INFO] Detected 1 inputs and 2 output network tensors.
[INFO] Total Host Persistent Memory: 864
[INFO] Total Device Persistent Memory: 0
[INFO] Total Scratch Memory: 0
[INFO] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 127 MiB, GPU 210 MiB
[INFO] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 1084, GPU 6621 (MiB)
[INFO] [MemUsageChange] Init cuDNN: CPU +1, GPU +0, now: CPU 1085, GPU 6621 (MiB)
[INFO] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 1084, GPU 6621 (MiB)
[INFO] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 1084, GPU 6605 (MiB)
[INFO] [MemUsageSnapshot] Builder end: CPU 1041 MiB, GPU 6605 MiB
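
A note on checking placement after the build: assuming TensorRT 8.2 or newer (which added the IEngineInspector API), the engine produced above can be deserialized and queried for a per-layer report; DLA subgraphs show up as a single ForeignNode, matching the builder summary printed above. Below is a minimal sketch, reusing the (hypothetical) engine path and DLA core from the tao-converter command:

import tensorrt as trt

ENGINE_PATH = "/home/models/model_1.engine"  # engine built by tao-converter above

logger = trt.Logger(trt.Logger.INFO)
runtime = trt.Runtime(logger)
runtime.DLA_core = 0  # same DLA core the engine was built for (-u 0)

with open(ENGINE_PATH, "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

# Per-layer report for the built engine; with the default profiling
# verbosity the detail is coarse, but DLA vs GPU placement is visible.
inspector = engine.create_engine_inspector()
print(inspector.get_engine_information(trt.LayerInformationFormat.JSON))

At run time, the same engine can also be profiled with trtexec (--loadEngine=..., --useDLACore=0, --allowGPUFallback, --dumpProfile): trtexec's onnx/caffemodel limitation applies only to parsing a model, not to loading an already-built engine.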

Thank you for your time.

This log is not expected. Have you tried an official etlt model and its corresponding cal file for DLA?
For example, https://catalog.ngc.nvidia.com/orgs/nvidia/models/tlt_peoplenet/files?version=pruned_v2.0

I have run both models, resnet18 and resnet34, but I keep getting the same warning messages.
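
A side note on the "Missing scale and zero-point" warnings: most of the tensors they name (kernel, bias, gamma, beta, moving_mean, Reshape/shape, and so on) are weights and constants, and an INT8 calibration cache normally stores scales only for activation tensors, so constants like these will always be reported as missing. One way to see which tensors the cache actually covers is to parse it directly; a minimal sketch, assuming the usual plain-text cache layout of a header line followed by one "tensor_name: hex_float_scale" entry per line:

import struct

CACHE_PATH = "/home/models/resnet18_peoplenet_int8_dla.txt"  # cache passed via -c

with open(CACHE_PATH) as f:
    print("header:", f.readline().strip())  # e.g. "TRT-...-EntropyCalibration2"
    for line in f:
        name, sep, hexval = line.strip().rpartition(": ")
        if not sep:
            continue  # skip anything that is not a name/scale pair
        # Scales are stored as the big-endian hex encoding of an IEEE-754 float32.
        scale = struct.unpack(">f", bytes.fromhex(hexval.zfill(8)))[0]
        print(f"{name}: {scale:.6g}")

If the activation tensors feeding the DLA-eligible layers do appear in the cache, that helps narrow down whether the cache itself is the problem.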

The command used for resnet18:

./tao-converter -k tlt_encode \
           -d 3,544,960 \
           -o output_bbox/BiasAdd,output_cov/Sigmoid \
           -e /home/models/resnet18_peoplenet.engine \
           -t int8 \
           -c /home/models/resnet18_peoplenet_int8_dla.txt \
           -u 0 \
           /home/models/resnet18_peoplenet_pruned.etlt

The output of resnet18:

[INFO] [MemUsageChange] Init CUDA: CPU +353, GPU +0, now: CPU 371, GPU 5474 (MiB)
[WARNING] Default DLA is enabled but layer output_bbox/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer conv1/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer conv1/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer bn_conv1/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer bn_conv1/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer bn_conv1/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer bn_conv1/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer bn_conv1/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer bn_conv1/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer bn_conv1/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer bn_conv1/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer bn_conv1/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_conv_1/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_conv_1/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_1/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_1/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_1/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_1/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_1/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_1/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_1/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_1/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_1/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_conv_2/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_conv_2/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_2/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_2/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_2/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_2/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_2/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_2/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_2/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_2/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_2/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_conv_shortcut/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_conv_shortcut/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_shortcut/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_shortcut/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_shortcut/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_shortcut/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_shortcut/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_shortcut/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_shortcut/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_shortcut/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1a_bn_shortcut/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_conv_1/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_conv_1/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_1/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_1/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_1/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_1/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_1/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_1/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_1/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_1/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_1/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_conv_2/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_conv_2/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_2/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_2/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_2/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_2/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_2/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_2/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_2/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_2/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_2/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_conv_shortcut/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_conv_shortcut/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_shortcut/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_shortcut/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_shortcut/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_shortcut/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_shortcut/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_shortcut/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_shortcut/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_shortcut/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_1b_bn_shortcut/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_conv_1/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_conv_1/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_1/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_1/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_1/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_1/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_1/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_1/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_1/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_1/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_1/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_conv_2/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_conv_2/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_2/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_2/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_2/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_2/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_2/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_2/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_2/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_2/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_2/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_conv_shortcut/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_conv_shortcut/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_shortcut/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_shortcut/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_shortcut/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_shortcut/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_shortcut/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_shortcut/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_shortcut/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_shortcut/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2a_bn_shortcut/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_conv_1/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_conv_1/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_1/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_1/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_1/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_1/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_1/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_1/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_1/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_1/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_1/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_conv_2/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_conv_2/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_2/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_2/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_2/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_2/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_2/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_2/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_2/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_2/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_2/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_conv_shortcut/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_conv_shortcut/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_shortcut/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_shortcut/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_shortcut/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_shortcut/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_shortcut/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_shortcut/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_shortcut/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_shortcut/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_2b_bn_shortcut/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_conv_1/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_conv_1/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_1/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_1/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_1/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_1/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_1/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_1/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_1/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_1/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_1/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_conv_2/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_conv_2/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_2/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_2/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_2/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_2/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_2/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_2/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_2/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_2/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_2/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_conv_shortcut/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_conv_shortcut/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_shortcut/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_shortcut/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_shortcut/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_shortcut/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_shortcut/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_shortcut/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_shortcut/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_shortcut/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3a_bn_shortcut/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_conv_1/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_conv_1/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_1/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_1/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_1/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_1/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_1/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_1/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_1/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_1/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_1/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_conv_2/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_conv_2/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_2/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_2/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_2/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_2/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_2/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_2/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_2/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_2/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_2/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_conv_shortcut/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_conv_shortcut/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_shortcut/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_shortcut/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_shortcut/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_shortcut/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_shortcut/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_shortcut/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_shortcut/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_shortcut/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_3b_bn_shortcut/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_conv_1/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_conv_1/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_1/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_1/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_1/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_1/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_1/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_1/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_1/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_1/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_1/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_conv_2/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_conv_2/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_2/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_2/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_2/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_2/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_2/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_2/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_2/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_2/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_2/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_conv_shortcut/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_conv_shortcut/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_shortcut/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_shortcut/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_shortcut/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_shortcut/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_shortcut/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_shortcut/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_shortcut/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_shortcut/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4a_bn_shortcut/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_conv_1/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_conv_1/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_1/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_1/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_1/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_1/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_1/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_1/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_1/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_1/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_1/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_conv_2/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_conv_2/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_2/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_2/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_2/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_2/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_2/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_2/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_2/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_2/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_2/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_conv_shortcut/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_conv_shortcut/bias is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_shortcut/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_shortcut/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_shortcut/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_shortcut/gamma is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_shortcut/Reshape_3/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_shortcut/beta is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_shortcut/Reshape_2/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_shortcut/moving_mean is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer block_4b_bn_shortcut/Reshape/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer output_bbox/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer output_cov/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer output_cov/bias is not supported on DLA, falling back to GPU.
[INFO] [MemUsageSnapshot] Builder begin: CPU 394 MiB, GPU 5508 MiB
[INFO] Reading Calibration Cache for calibrator: EntropyCalibration2
[INFO] Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
[INFO] To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
[WARNING] Missing scale and zero-point for tensor output_bbox/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor conv1/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor conv1/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor bn_conv1/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor bn_conv1/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor bn_conv1/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor bn_conv1/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor bn_conv1/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor bn_conv1/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor bn_conv1/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor bn_conv1/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor bn_conv1/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_conv_1/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_conv_1/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_1/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_1/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_1/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_1/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_1/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_1/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_1/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_1/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_1/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_conv_2/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_conv_2/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_2/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_2/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_2/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_2/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_2/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_2/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_2/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_2/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_2/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_conv_shortcut/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_conv_shortcut/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_shortcut/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_shortcut/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_shortcut/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_shortcut/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_shortcut/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_shortcut/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_shortcut/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_shortcut/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1a_bn_shortcut/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_conv_1/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_conv_1/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_1/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_1/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_1/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_1/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_1/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_1/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_1/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_1/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_1/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_conv_2/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_conv_2/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_2/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_2/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_2/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_2/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_2/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_2/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_2/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_2/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_2/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_conv_shortcut/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_conv_shortcut/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_shortcut/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_shortcut/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_shortcut/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_shortcut/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_shortcut/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_shortcut/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_shortcut/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_shortcut/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_1b_bn_shortcut/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_conv_1/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_conv_1/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_1/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_1/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_1/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_1/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_1/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_1/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_1/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_1/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_1/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_conv_2/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_conv_2/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_2/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_2/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_2/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_2/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_2/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_2/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_2/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_2/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_2/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_conv_shortcut/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_conv_shortcut/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_shortcut/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_shortcut/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_shortcut/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_shortcut/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_shortcut/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_shortcut/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_shortcut/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_shortcut/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2a_bn_shortcut/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_conv_1/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_conv_1/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_1/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_1/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_1/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_1/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_1/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_1/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_1/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_1/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_1/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_conv_2/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_conv_2/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_2/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_2/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_2/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_2/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_2/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_2/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_2/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_2/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_2/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_conv_shortcut/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_conv_shortcut/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_shortcut/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_shortcut/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_shortcut/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_shortcut/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_shortcut/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_shortcut/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_shortcut/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_shortcut/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_2b_bn_shortcut/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_conv_1/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_conv_1/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_1/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_1/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_1/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_1/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_1/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_1/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_1/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_1/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_1/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_conv_2/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_conv_2/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_2/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_2/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_2/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_2/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_2/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_2/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_2/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_2/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_2/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_conv_shortcut/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_conv_shortcut/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_shortcut/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_shortcut/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_shortcut/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_shortcut/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_shortcut/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_shortcut/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_shortcut/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_shortcut/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3a_bn_shortcut/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_conv_1/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_conv_1/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_1/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_1/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_1/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_1/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_1/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_1/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_1/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_1/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_1/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_conv_2/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_conv_2/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_2/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_2/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_2/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_2/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_2/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_2/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_2/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_2/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_2/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_conv_shortcut/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_conv_shortcut/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_shortcut/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_shortcut/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_shortcut/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_shortcut/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_shortcut/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_shortcut/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_shortcut/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_shortcut/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_3b_bn_shortcut/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_conv_1/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_conv_1/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_1/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_1/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_1/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_1/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_1/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_1/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_1/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_1/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_1/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_conv_2/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_conv_2/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_2/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_2/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_2/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_2/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_2/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_2/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_2/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_2/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_2/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_conv_shortcut/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_conv_shortcut/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_shortcut/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_shortcut/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_shortcut/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_shortcut/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_shortcut/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_shortcut/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_shortcut/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_shortcut/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4a_bn_shortcut/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_conv_1/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_conv_1/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_1/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_1/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_1/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_1/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_1/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_1/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_1/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_1/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_1/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_conv_2/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_conv_2/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_2/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_2/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_2/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_2/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_2/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_2/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_2/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_2/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_2/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_conv_shortcut/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_conv_shortcut/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_shortcut/moving_variance, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_shortcut/Reshape_1/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_shortcut/batchnorm/add/y, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_shortcut/gamma, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_shortcut/Reshape_3/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_shortcut/beta, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_shortcut/Reshape_2/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_shortcut/moving_mean, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor block_4b_bn_shortcut/Reshape/shape, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor output_bbox/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor output_cov/kernel, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[WARNING] Missing scale and zero-point for tensor output_cov/bias, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
[INFO] ---------- Layers Running on DLA ----------
[INFO] [DlaLayer] {ForeignNode[conv1/convolution...output_cov/Sigmoid]}
[INFO] ---------- Layers Running on GPU ----------
[INFO] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +226, GPU +172, now: CPU 633, GPU 5743 (MiB)
[INFO] [MemUsageChange] Init cuDNN: CPU +307, GPU +308, now: CPU 940, GPU 6051 (MiB)
[WARNING] Detected invalid timing cache, setup a local cache instead
[INFO] Detected 1 inputs and 2 output network tensors.
[INFO] Total Host Persistent Memory: 864
[INFO] Total Device Persistent Memory: 0
[INFO] Total Scratch Memory: 0
[INFO] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 32 MiB, GPU 354 MiB
[INFO] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +1, GPU +0, now: CPU 987, GPU 6482 (MiB)
[INFO] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 987, GPU 6482 (MiB)
[INFO] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 986, GPU 6482 (MiB)
[INFO] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 986, GPU 6467 (MiB)
[INFO] [MemUsageSnapshot] Builder end: CPU 975 MiB, GPU 6467 MiB

The command used for resnet34:

./tao-converter -k tlt_encode
           -d 3,544,960
           -o output_bbox/BiasAdd,output_cov/Sigmoid 
           -e /home/models/resnet34_peoplenet.engine
           -t int8
           -c /home/models/resnet34_peoplenet_int8_dla.txt
           -u 0
           /home/models/resnet34_peoplenet_pruned.etlt

The output for resnet34 is very similar to the one shown above; the warning messages and the info about layers are the same.

Thank you for your time

Sorry for the late reply. After checking further, you can actually ignore the warning info.
The layers have been fused into other layers which DLA supports.
To verify this, you can generate the TensorRT engine with and without the "-c cal_file" option, then run trtexec to compare the inference speed.
$ /usr/src/tensorrt/bin/trtexec --loadEngine=xxx.engine --int8 --batch=1 --useSpinWait --avgRuns=10000 --useDLACore=0
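
As an illustration, a minimal comparison could look like the following (a sketch reusing the key and paths from the model_1 command above; the *_with_cal/*_no_cal engine names are just illustrative, and if the INT8 build is rejected without a calibration cache, building with "-t fp16" instead gives a similar comparison point):

# Engine built with the calibration cache
./tao-converter -k tlt_encode
           -d 3,480,640
           -o output_bbox/BiasAdd,output_cov/Sigmoid
           -e /home/models/model_1_with_cal.engine
           -t int8
           -c /home/models/model_1_int8.txt
           -u 0
           /home/models/model_1.etlt

# Engine built without the calibration cache
./tao-converter -k tlt_encode
           -d 3,480,640
           -o output_bbox/BiasAdd,output_cov/Sigmoid
           -e /home/models/model_1_no_cal.engine
           -t int8
           -u 0
           /home/models/model_1.etlt

# Compare the inference speed of the two engines on DLA core 0
$ /usr/src/tensorrt/bin/trtexec --loadEngine=/home/models/model_1_with_cal.engine --int8 --batch=1 --useSpinWait --avgRuns=10000 --useDLACore=0
$ /usr/src/tensorrt/bin/trtexec --loadEngine=/home/models/model_1_no_cal.engine --int8 --batch=1 --useSpinWait --avgRuns=10000 --useDLACore=0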

You can also find the info below for DLA. These are the layers running on DLA:

[INFO] [DlaLayer] {ForeignNode[conv1/convolution...output_cov/Sigmoid]}

The "..." part means that the layers in the middle are omitted (so that the layer name is not too long).
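
If you want per-layer visibility at runtime as well, trtexec can dump a per-layer profile while running the engine; everything fused into the DLA ForeignNode shows up as a single entry, which is another way to confirm those layers stayed on DLA. A minimal sketch using standard trtexec flags:

$ /usr/src/tensorrt/bin/trtexec --loadEngine=/home/models/model_1.engine --int8 --batch=1 --useDLACore=0 --dumpProfile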

Hi,

OK, thanks for the clarification about where the layers are really running. When I execute the command you propose, I get the following error:

[04/19/2022-13:24:18] [I] Created input binding for input_1 with dimensions 3x480x640
[04/19/2022-13:24:18] [I] Created output binding for output_bbox/BiasAdd with dimensions 8x30x40
[04/19/2022-13:24:18] [I] Created output binding for output_cov/Sigmoid with dimensions 2x30x40
[04/19/2022-13:24:18] [I] Starting inference
Module_id 33 Severity 2 : NVMEDIA_DLA 424
Module_id 33 Severity 2 : Failed to set input tensor descriptor
Module_id 33 Severity 2 : NVMEDIA_DLA 712
Module_id 33 Severity 2 : Failed to set input tensor descriptor
Module_id 33 Severity 2 : Input tensor:  0
Module_id 33 Severity 2 : status:  0x000007
Module_id 33 Severity 2 : NVMEDIA_DLA 2866
Module_id 33 Severity 2 : Failed to bind input tensor args. status:  0x000007
[04/19/2022-13:24:18] [E] Error[1]: [nvdlaUtils.cpp::submit::198] Error Code 1: DLA (Failure to submit program to DLA engine.)
[04/19/2022-13:24:18] [E] Error occurred during inference
&&&& FAILED TensorRT.trtexec [TensorRT v8001] # /usr/src/tensorrt/bin/trtexec --loadEngine=model.engine --int8 --batch=1 --useSpinWait --avgRuns=10000 --useDLACore=0
[04/19/2022-13:24:18] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 941, GPU 7156 (MiB)
terminate called after throwing an instance of 'nvinfer1::InternalError'
  what():  Assertion !mCudaMemory || !mNvmTensor failed.

I have seen in another forum that this is a problem that will be fixed in future updates; can you confirm this for me?

Thanks for everything

For the new error, please try the latest JetPack to check if the issue is gone.
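
For reference, a quick way to confirm which JetPack/TensorRT release the device is running (assuming a standard JetPack install on the Xavier NX):

$ cat /etc/nv_tegra_release
$ dpkg -l | grep -i tensorrt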


Update: For DLA, please set the same batch size in both tao-converter and trtexec.
For example, “-m 1” in tao-converter and “--batch=1” in trtexec.
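
A sketch of the matched pair, reusing the model_1 command from earlier in the thread (only "-m 1" is new):

./tao-converter -k tlt_encode
           -d 3,480,640
           -o output_bbox/BiasAdd,output_cov/Sigmoid
           -e /home/models/model_1.engine
           -t int8
           -c /home/models/model_1_int8.txt
           -u 0
           -m 1
           /home/models/model_1.etlt

$ /usr/src/tensorrt/bin/trtexec --loadEngine=/home/models/model_1.engine --int8 --batch=1 --useSpinWait --avgRuns=10000 --useDLACore=0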