Hi,

That works perfectly. But I do have another question:

I trained a ResNet-34 classification model and would like to run it on the DLA (Jetson Xavier).

When converting, I get:

…

```
[WARNING] Default DLA is enabled but layer conv1/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer bn_conv1/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer bn_conv1/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer bn_conv1/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer bn_conv1/gamma is not supported on DLA, falling back to GPU.
...
```

(The same message is repeated for a large number of `block_XXX` layers.)

followed by:

```
[INFO] --------------- Layers running on DLA:
[INFO] {conv1/convolution,bn_conv1/batchnorm/mul_1,bn_conv1/batchnorm/add_1,activation_1/Relu,block_1a_conv_1/convolution,block_1a_bn_1/batchnorm/mul_1,block_1a_bn_1/batchnorm/add_1,block_1a_relu_1/Relu,block_1a_conv_2/convolution,block_1a_bn_2/batchnorm/mul_1,block_1a_bn_2/batchnorm/add_1,block_1a_conv_shortcut/convolution,block_1a_bn_shortcut/batchnorm/mul_1,block_1a_bn_shortcut/batchnorm/add_1,add_1/add,block_1a_relu/Relu,block_1b_conv_1/convolution,block_1b_bn_1/batchnorm/mul_1,block_1b_bn_1/batchnorm/add_1,block_1b_relu_1/Relu,block_1b_conv_2/convolution,block_1b_bn_2/batchnorm/mul_1,block_1b_bn_2/batchnorm/add_1,block_1b_conv_shortcut/convolution,block_1b_bn_shortcut/batchnorm/mul_1,block_1b_bn_shortcut/batchnorm/add_1,add_2/add,block_1b_relu/Relu,block_1c_conv_1/convolution,block_1c_bn_1/batchnorm/mul_1,block_1c_bn_1/batchnorm/add_1,block_1c_relu_1/Relu,block_1c_conv_2/convolution,block_1c_bn_2/batchnorm/mul_1,block_1c_bn_2/batchnorm/add_1,block_1c_conv_shortcut/convolution,block_1c_bn_shortcut/batchnorm/mul_1,block_1c_bn_shortcut/batchnorm/add_1,add_3/add,block_1c_relu/Relu,block_2a_conv_1/convolution,block_2a_bn_1/batchnorm/mul_1,block_2a_bn_1/batchnorm/add_1,block_2a_relu_1/Relu,block_2a_conv_2/convolution,block_2a_bn_2/batchnorm/mul_1,block_2a_bn_2/batchnorm/add_1,block_2a_conv_shortcut/convolution,block_2a_bn_shortcut/batchnorm/mul_1,block_2a_bn_shortcut/batchnorm/add_1,add_4/add,block_2a_relu/Relu,block_2b_conv_1/convolution,block_2b_bn_1/batchnorm/mul_1,block_2b_bn_1/batchnorm/add_1,block_2b_relu_1/Relu,block_2b_conv_2/convolution,block_2b_bn_2/batchnorm/mul_1,block_2b_bn_2/batchnorm/add_1,block_2b_conv_shortcut/convolution,block_2b_bn_shortcut/batchnorm/mul_1,block_2b_bn_shortcut/batchnorm/add_1,add_5/add,block_2b_relu/Relu,block_2c_conv_1/convolution,block_2c_bn_1/batchnorm/mul_1,block_2c_bn_1/batchnorm/add_1,block_2c_relu_1/Relu,block_2c_conv_2/convolution,block_2c_bn_2/batchnorm/mul_1,block_2c_bn_2/batchnorm/add_1,block_2c_conv_shortcut/convolution,block_2c_bn_shortcut/batchnorm/mul_1,block_2c_bn_shortcut/batchnorm/add_1,add_6/add,block_2c_relu/Relu,block_2d_conv_1/convolution,block_2d_bn_1/batchnorm/mul_1,block_2d_bn_1/batchnorm/add_1,block_2d_relu_1/Relu,block_2d_conv_2/convolution,block_2d_bn_2/batchnorm/mul_1,block_2d_bn_2/batchnorm/add_1,block_2d_conv_shortcut/convolution,block_2d_bn_shortcut/batchnorm/mul_1,block_2d_bn_shortcut/batchnorm/add_1,add_7/add,block_2d_relu/Relu,block_3a_conv_1/convolution,block_3a_bn_1/batchnorm/mul_1,block_3a_bn_1/batchnorm/add_1,block_3a_relu_1/Relu,block_3a_conv_2/convolution,block_3a_bn_2/batchnorm/mul_1,block_3a_bn_2/batchnorm/add_1,block_3a_conv_shortcut/convolution,block_3a_bn_shortcut/batchnorm/mul_1,block_3a_bn_shortcut/batchnorm/add_1,add_8/add,block_3a_relu/Relu,block_3b_conv_1/convolution,block_3b_bn_1/batchnorm/mul_1,block_3b_bn_1/batchnorm/add_1,block_3b_relu_1/Relu,block_3b_conv_2/convolution,block_3b_bn_2/batchnorm/mul_1,block_3b_bn_2/batchnorm/add_1,block_3b_conv_shortcut/convolution,block_3b_bn_shortcut/batchnorm/mul_1,block_3b_bn_shortcut/batchnorm/add_1,add_9/add,block_3b_relu/Relu,block_3c_conv_1/convolution,block_3c_bn_1/batchnorm/mul_1,block_3c_bn_1/batchnorm/add_1,block_3c_relu_1/Relu,block_3c_conv_2/convolution,block_3c_bn_2/batchnorm/mul_1,block_3c_bn_2/batchnorm/add_1,block_3c_conv_shortcut/convolution,block_3c_bn_shortcut/batchnorm/mul_1,block_3c_bn_shortcut/batchnorm/add_1,add_10/add,block_3c_relu/Relu,block_3d_conv_1/convolution,block_3d_bn_1/batchnorm/mul_1,block_3d_bn_1/batchnorm/add_1,block_3d_relu_1/Relu,block_3d_conv_2/convolution,block_3d_bn_2/batchnorm/mul_1,block_3d_bn_2/batchnorm/add_1,block_3d_conv_shortcut/convolution,block_3d_bn_shortcut/batchnorm/mul_1,block_3d_bn_shortcut/batchnorm/add_1,add_11/add,block_3d_relu/Relu,block_3e_conv_1/convolution,block_3e_bn_1/batchnorm/mul_1,block_3e_bn_1/batchnorm/add_1,block_3e_relu_1/Relu,block_3e_conv_2/convolution,block_3e_bn_2/batchnorm/mul_1,block_3e_bn_2/batchnorm/add_1,block_3e_conv_shortcut/convolution,block_3e_bn_shortcut/batchnorm/mul_1,block_3e_bn_shortcut/batchnorm/add_1,add_12/add,block_3e_relu/Relu,block_3f_conv_1/convolution,block_3f_bn_1/batchnorm/mul_1,block_3f_bn_1/batchnorm/add_1,block_3f_relu_1/Relu,block_3f_conv_2/convolution,block_3f_bn_2/batchnorm/mul_1,block_3f_bn_2/batchnorm/add_1,block_3f_conv_shortcut/convolution,block_3f_bn_shortcut/batchnorm/mul_1,block_3f_bn_shortcut/batchnorm/add_1,add_13/add,block_3f_relu/Relu,block_4a_conv_1/convolution,block_4a_bn_1/batchnorm/mul_1,block_4a_bn_1/batchnorm/add_1,block_4a_relu_1/Relu,block_4a_conv_2/convolution,block_4a_bn_2/batchnorm/mul_1,block_4a_bn_2/batchnorm/add_1,block_4a_conv_shortcut/convolution,block_4a_bn_shortcut/batchnorm/mul_1,block_4a_bn_shortcut/batchnorm/add_1,add_14/add,block_4a_relu/Relu,block_4b_conv_1/convolution,block_4b_bn_1/batchnorm/mul_1,block_4b_bn_1/batchnorm/add_1,block_4b_relu_1/Relu,block_4b_conv_2/convolution,block_4b_bn_2/batchnorm/mul_1,block_4b_bn_2/batchnorm/add_1,block_4b_conv_shortcut/convolution,block_4b_bn_shortcut/batchnorm/mul_1,block_4b_bn_shortcut/batchnorm/add_1,add_15/add,block_4b_relu/Relu,block_4c_conv_1/convolution,block_4c_bn_1/batchnorm/mul_1,block_4c_bn_1/batchnorm/add_1,block_4c_relu_1/Relu,block_4c_conv_2/convolution,block_4c_bn_2/batchnorm/mul_1,block_4c_bn_2/batchnorm/add_1,block_4c_conv_shortcut/convolution,block_4c_bn_shortcut/batchnorm/mul_1,block_4c_bn_shortcut/batchnorm/add_1,add_16/add,block_4c_relu/Relu},
{predictions/MatMul,predictions/BiasAdd},
[INFO] --------------- Layers running on GPU:
[INFO] flatten/Reshape, predictions/Softmax,
```

My question: during inference, are **all** layers except `flatten/Reshape` and `predictions/Softmax` running on the DLA? And what about the layers mentioned in the warnings?
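In case it is useful for answering, this is roughly how I understand one could double-check the final placement after building the engine with `trtexec` (a sketch only; the model filename is a placeholder, not my actual file):

```shell
# Build with the DLA as the default device and GPU fallback allowed;
# --verbose makes trtexec log the "Layers running on DLA / GPU" report,
# so the final per-layer placement can be inspected.
# "resnet34.onnx" is an illustrative placeholder path.
trtexec --onnx=resnet34.onnx --useDLACore=0 --allowGPUFallback --fp16 --verbose
```

(`--fp16` is there because, as far as I know, the DLA only runs FP16 or INT8 precision.)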

Thanks for the help!