[WARNING] Default DLA is enabled but the following layers are not supported on DLA, falling back to GPU:
  - mask_fcn_logits/bias
  - conv1/kernel; bn_conv1/{gamma, beta, moving_mean, moving_variance}
  - every residual block block_1a-1c, block_2a-2d, block_3a-3f, block_4a-4c: conv_1/kernel, conv_2/kernel, conv_3/kernel, and bn_1, bn_2, bn_3 each with {gamma, beta, moving_mean, moving_variance}; blocks 1a, 2a, 3a, and 4a additionally report conv_shortcut/kernel and bn_shortcut/{gamma, beta, moving_mean, moving_variance}
  - l2, l3, l4, l5: kernel and bias
  - nearest_upsampling, nearest_upsampling_1, nearest_upsampling_2
  - post_hoc_d2, post_hoc_d3, post_hoc_d4, post_hoc_d5: kernel and bias
  - rpn, rpn-box, rpn-class: kernel and bias
  - permute/transpose, and permute_1 through permute_7 and permute_9 (each /transpose)
  - MLP/multilevel_propose_rois/level_2 through level_5: Reshape/shape, Reshape, Reshape_1/shape, Reshape_1; level_6: Reshape_1/shape (the log is cut off at this point)
  - (Unnamed Layer* 475/483/497/503/517/523/537/543/554) [Shuffle]
[WARNING] Default DLA is enabled but layer MLP/multilevel_propose_rois/level_6/Reshape_1 is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer permute_8/transpose is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer MLP/multilevel_propose_rois/level_6/Reshape/shape is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer (Unnamed Layer* 560) [Shuffle] is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer MLP/multilevel_propose_rois/level_6/Reshape is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer multilevel_propose_rois is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer pyramid_crop_and_resize_box is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer fc6/kernel is not supported on DLA, falling back to GPU. [WARNING] DLA LAYER: Batch size (combined volume except for CHW dimensions) 16000 for layer fc6/MatMul exceeds max batch size allowed of 32. [WARNING] Default DLA is enabled but layer fc6/MatMul is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer fc6/bias is not supported on DLA, falling back to GPU. [WARNING] DLA LAYER: Batch size (combined volume except for CHW dimensions) 16000 for layer fc6/BiasAdd exceeds max batch size allowed of 32. [WARNING] Default DLA is enabled but layer fc6/BiasAdd is not supported on DLA, falling back to GPU. [WARNING] DLA LAYER: Batch size (combined volume except for CHW dimensions) 16000 for layer fc6/Relu exceeds max batch size allowed of 32. [WARNING] Default DLA is enabled but layer fc6/Relu is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer fc7/kernel is not supported on DLA, falling back to GPU. 
[WARNING] DLA LAYER: Batch size (combined volume except for CHW dimensions) 16000 for layer fc7/MatMul exceeds max batch size allowed of 32. [WARNING] Default DLA is enabled but layer fc7/MatMul is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer fc7/bias is not supported on DLA, falling back to GPU. [WARNING] DLA LAYER: Batch size (combined volume except for CHW dimensions) 16000 for layer fc7/BiasAdd exceeds max batch size allowed of 32. [WARNING] Default DLA is enabled but layer fc7/BiasAdd is not supported on DLA, falling back to GPU. [WARNING] DLA LAYER: Batch size (combined volume except for CHW dimensions) 16000 for layer fc7/Relu exceeds max batch size allowed of 32. [WARNING] Default DLA is enabled but layer fc7/Relu is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer box-predict/kernel is not supported on DLA, falling back to GPU. [WARNING] DLA LAYER: Batch size (combined volume except for CHW dimensions) 16000 for layer box-predict/MatMul exceeds max batch size allowed of 32. [WARNING] Default DLA is enabled but layer box-predict/MatMul is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer box-predict/bias is not supported on DLA, falling back to GPU. [WARNING] DLA LAYER: Batch size (combined volume except for CHW dimensions) 16000 for layer box-predict/BiasAdd exceeds max batch size allowed of 32. [WARNING] Default DLA is enabled but layer box-predict/BiasAdd is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer class-predict/kernel is not supported on DLA, falling back to GPU. [WARNING] DLA LAYER: Batch size (combined volume except for CHW dimensions) 16000 for layer class-predict/MatMul exceeds max batch size allowed of 32. [WARNING] Default DLA is enabled but layer class-predict/MatMul is not supported on DLA, falling back to GPU. 
[WARNING] Default DLA is enabled but layer class-predict/bias is not supported on DLA, falling back to GPU. [WARNING] DLA LAYER: Batch size (combined volume except for CHW dimensions) 16000 for layer class-predict/BiasAdd exceeds max batch size allowed of 32. [WARNING] Default DLA is enabled but layer class-predict/BiasAdd is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer box_head_softmax is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer generate_detections is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer mrcnn_detection_bboxes is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer pyramid_crop_and_resize_mask is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer mask-conv-l0/kernel is not supported on DLA, falling back to GPU. [WARNING] DLA LAYER: Batch size (combined volume except for CHW dimensions) 1600 for layer mask-conv-l0/Conv2D exceeds max batch size allowed of 32. [WARNING] Default DLA is enabled but layer mask-conv-l0/Conv2D is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer mask-conv-l0/bias is not supported on DLA, falling back to GPU. [WARNING] DLA LAYER: Batch size (combined volume except for CHW dimensions) 1600 for layer mask-conv-l0/BiasAdd exceeds max batch size allowed of 32. [WARNING] Default DLA is enabled but layer mask-conv-l0/BiasAdd is not supported on DLA, falling back to GPU. [WARNING] DLA LAYER: Batch size (combined volume except for CHW dimensions) 1600 for layer mask-conv-l0/Relu exceeds max batch size allowed of 32. [WARNING] Default DLA is enabled but layer mask-conv-l0/Relu is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer mask-conv-l1/kernel is not supported on DLA, falling back to GPU. 
[WARNING] DLA LAYER: Batch size (combined volume except for CHW dimensions) 1600 for layer mask-conv-l1/Conv2D exceeds max batch size allowed of 32. [WARNING] Default DLA is enabled but layer mask-conv-l1/Conv2D is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer mask-conv-l1/bias is not supported on DLA, falling back to GPU. [WARNING] DLA LAYER: Batch size (combined volume except for CHW dimensions) 1600 for layer mask-conv-l1/BiasAdd exceeds max batch size allowed of 32. [WARNING] Default DLA is enabled but layer mask-conv-l1/BiasAdd is not supported on DLA, falling back to GPU. [WARNING] DLA LAYER: Batch size (combined volume except for CHW dimensions) 1600 for layer mask-conv-l1/Relu exceeds max batch size allowed of 32. [WARNING] Default DLA is enabled but layer mask-conv-l1/Relu is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer mask-conv-l2/kernel is not supported on DLA, falling back to GPU. [WARNING] DLA LAYER: Batch size (combined volume except for CHW dimensions) 1600 for layer mask-conv-l2/Conv2D exceeds max batch size allowed of 32. [WARNING] Default DLA is enabled but layer mask-conv-l2/Conv2D is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer mask-conv-l2/bias is not supported on DLA, falling back to GPU. [WARNING] DLA LAYER: Batch size (combined volume except for CHW dimensions) 1600 for layer mask-conv-l2/BiasAdd exceeds max batch size allowed of 32. [WARNING] Default DLA is enabled but layer mask-conv-l2/BiasAdd is not supported on DLA, falling back to GPU. [WARNING] DLA LAYER: Batch size (combined volume except for CHW dimensions) 1600 for layer mask-conv-l2/Relu exceeds max batch size allowed of 32. [WARNING] Default DLA is enabled but layer mask-conv-l2/Relu is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer mask-conv-l3/kernel is not supported on DLA, falling back to GPU. 
[WARNING] DLA LAYER: Batch size (combined volume except for CHW dimensions) 1600 for layer mask-conv-l3/Conv2D exceeds max batch size allowed of 32. [WARNING] Default DLA is enabled but layer mask-conv-l3/Conv2D is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer mask-conv-l3/bias is not supported on DLA, falling back to GPU. [WARNING] DLA LAYER: Batch size (combined volume except for CHW dimensions) 1600 for layer mask-conv-l3/BiasAdd exceeds max batch size allowed of 32. [WARNING] Default DLA is enabled but layer mask-conv-l3/BiasAdd is not supported on DLA, falling back to GPU. [WARNING] DLA LAYER: Batch size (combined volume except for CHW dimensions) 1600 for layer mask-conv-l3/Relu exceeds max batch size allowed of 32. [WARNING] Default DLA is enabled but layer mask-conv-l3/Relu is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer conv5-mask/kernel is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer conv5-mask/Shape is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer conv5-mask/strided_slice/stack is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer conv5-mask/strided_slice/stack_1 is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer conv5-mask/strided_slice/stack_2 is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer conv5-mask/stack/1 is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer conv5-mask/strided_slice_1/stack is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer conv5-mask/strided_slice_1/stack_1 is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer conv5-mask/strided_slice_1/stack_2 is not supported on DLA, falling back to GPU. 
[WARNING] Default DLA is enabled but layer conv5-mask/mul/y is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer conv5-mask/add/y is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer conv5-mask/strided_slice_2/stack is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer conv5-mask/strided_slice_2/stack_1 is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer conv5-mask/strided_slice_2/stack_2 is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer conv5-mask/mul_1/y is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer conv5-mask/add_1/y is not supported on DLA, falling back to GPU. [WARNING] DLA LAYER: Batch size (combined volume except for CHW dimensions) 1600 for layer (Unnamed Layer* 623) [Deconvolution] exceeds max batch size allowed of 32. [WARNING] Default DLA is enabled but layer (Unnamed Layer* 623) [Deconvolution] is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer conv5-mask/conv2d_transpose is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer conv5-mask/bias is not supported on DLA, falling back to GPU. [WARNING] DLA LAYER: Batch size (combined volume except for CHW dimensions) 1600 for layer conv5-mask/BiasAdd exceeds max batch size allowed of 32. [WARNING] Default DLA is enabled but layer conv5-mask/BiasAdd is not supported on DLA, falling back to GPU. [WARNING] DLA LAYER: Batch size (combined volume except for CHW dimensions) 1600 for layer conv5-mask/Relu exceeds max batch size allowed of 32. [WARNING] Default DLA is enabled but layer conv5-mask/Relu is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer mask_fcn_logits/kernel is not supported on DLA, falling back to GPU. 
[WARNING] DLA LAYER: Batch size (combined volume except for CHW dimensions) 1600 for layer mask_fcn_logits/Conv2D exceeds max batch size allowed of 32. [WARNING] Default DLA is enabled but layer mask_fcn_logits/Conv2D is not supported on DLA, falling back to GPU. [WARNING] DLA LAYER: Batch size (combined volume except for CHW dimensions) 1600 for layer mask_fcn_logits/BiasAdd exceeds max batch size allowed of 32. [WARNING] Default DLA is enabled but layer mask_fcn_logits/BiasAdd is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer rpn-class/bias_0 is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer rpn/bias_0 is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer rpn-box/bias_0 is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer rpn-class/bias_1 is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer rpn/bias_1 is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer rpn-box/bias_1 is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer rpn-class/bias_2 is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer rpn/bias_2 is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer rpn-box/bias_2 is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer rpn-class/bias_3 is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer rpn/bias_3 is not supported on DLA, falling back to GPU. [WARNING] Default DLA is enabled but layer rpn-box/bias_3 is not supported on DLA, falling back to GPU. [WARNING] Internal DLA error for layer post_hoc_d2/Conv2D. Switching to GPU fallback. [WARNING] Internal DLA error for layer post_hoc_d2/Conv2D. Switching to GPU fallback. [WARNING] Internal DLA error for layer rpn/Conv2D. 
Switching to GPU fallback. [WARNING] Internal DLA error for layer rpn/Conv2D. Switching to GPU fallback. [INFO] [INFO] --------------- Layers running on DLA: [INFO] {conv1/Conv2D,bn_conv1/FusedBatchNormV3,activation/Relu,max_pooling2d/MaxPool,block_1a_conv_1/Conv2D,block_1a_bn_1/FusedBatchNormV3,block_1a_relu_1/Relu,block_1a_conv_2/Conv2D,block_1a_bn_2/FusedBatchNormV3,block_1a_relu_2/Relu,block_1a_conv_3/Conv2D,block_1a_bn_3/FusedBatchNormV3,block_1a_conv_shortcut/Conv2D,block_1a_bn_shortcut/FusedBatchNormV3,add/add,block_1a_relu/Relu,block_1b_conv_1/Conv2D,block_1b_bn_1/FusedBatchNormV3,block_1b_relu_1/Relu,block_1b_conv_2/Conv2D,block_1b_bn_2/FusedBatchNormV3,block_1b_relu_2/Relu,block_1b_conv_3/Conv2D,block_1b_bn_3/FusedBatchNormV3,add_1/add,block_1b_relu/Relu,block_1c_conv_1/Conv2D,block_1c_bn_1/FusedBatchNormV3,block_1c_relu_1/Relu,block_1c_conv_2/Conv2D,block_1c_bn_2/FusedBatchNormV3,block_1c_relu_2/Relu,block_1c_conv_3/Conv2D,block_1c_bn_3/FusedBatchNormV3,add_2/add,block_1c_relu/Relu,l2/Conv2D,l2/BiasAdd,block_2a_conv_1/Conv2D,block_2a_bn_1/FusedBatchNormV3,block_2a_relu_1/Relu,block_2a_conv_2/Conv2D,block_2a_bn_2/FusedBatchNormV3,block_2a_relu_2/Relu,block_2a_conv_3/Conv2D,block_2a_bn_3/FusedBatchNormV3,block_2a_conv_shortcut/Conv2D,block_2a_bn_shortcut/FusedBatchNormV3,add_3/add,block_2a_relu/Relu,block_2b_conv_1/Conv2D,block_2b_bn_1/FusedBatchNormV3,block_2b_relu_1/Relu,block_2b_conv_2/Conv2D,block_2b_bn_2/FusedBatchNormV3,block_2b_relu_2/Relu,block_2b_conv_3/Conv2D,block_2b_bn_3/FusedBatchNormV3,add_4/add,block_2b_relu/Relu,block_2c_conv_1/Conv2D,block_2c_bn_1/FusedBatchNormV3,block_2c_relu_1/Relu,block_2c_conv_2/Conv2D,block_2c_bn_2/FusedBatchNormV3,block_2c_relu_2/Relu,block_2c_conv_3/Conv2D,block_2c_bn_3/FusedBatchNormV3,add_5/add,block_2c_relu/Relu,block_2d_conv_1/Conv2D,block_2d_bn_1/FusedBatchNormV3,block_2d_relu_1/Relu,block_2d_conv_2/Conv2D,block_2d_bn_2/FusedBatchNormV3,block_2d_relu_2/Relu,block_2d_conv_3/Conv2D,block_2d_bn_3/FusedBatchNormV3,
add_6/add,block_2d_relu/Relu,l3/Conv2D,l3/BiasAdd,
block_3a_conv_1/Conv2D,block_3a_bn_1/FusedBatchNormV3,block_3a_relu_1/Relu,block_3a_conv_2/Conv2D,block_3a_bn_2/FusedBatchNormV3,block_3a_relu_2/Relu,block_3a_conv_3/Conv2D,block_3a_bn_3/FusedBatchNormV3,block_3a_conv_shortcut/Conv2D,block_3a_bn_shortcut/FusedBatchNormV3,add_7/add,block_3a_relu/Relu,
block_3b_conv_1/Conv2D,block_3b_bn_1/FusedBatchNormV3,block_3b_relu_1/Relu,block_3b_conv_2/Conv2D,block_3b_bn_2/FusedBatchNormV3,block_3b_relu_2/Relu,block_3b_conv_3/Conv2D,block_3b_bn_3/FusedBatchNormV3,add_8/add,block_3b_relu/Relu,
block_3c_conv_1/Conv2D,block_3c_bn_1/FusedBatchNormV3,block_3c_relu_1/Relu,block_3c_conv_2/Conv2D,block_3c_bn_2/FusedBatchNormV3,block_3c_relu_2/Relu,block_3c_conv_3/Conv2D,block_3c_bn_3/FusedBatchNormV3,add_9/add,block_3c_relu/Relu,
block_3d_conv_1/Conv2D,block_3d_bn_1/FusedBatchNormV3,block_3d_relu_1/Relu,block_3d_conv_2/Conv2D,block_3d_bn_2/FusedBatchNormV3,block_3d_relu_2/Relu,block_3d_conv_3/Conv2D,block_3d_bn_3/FusedBatchNormV3,add_10/add,block_3d_relu/Relu,
block_3e_conv_1/Conv2D,block_3e_bn_1/FusedBatchNormV3,block_3e_relu_1/Relu,block_3e_conv_2/Conv2D,block_3e_bn_2/FusedBatchNormV3,block_3e_relu_2/Relu,block_3e_conv_3/Conv2D,block_3e_bn_3/FusedBatchNormV3,add_11/add,block_3e_relu/Relu,
block_3f_conv_1/Conv2D,block_3f_bn_1/FusedBatchNormV3,block_3f_relu_1/Relu,block_3f_conv_2/Conv2D,block_3f_bn_2/FusedBatchNormV3,block_3f_relu_2/Relu,block_3f_conv_3/Conv2D,block_3f_bn_3/FusedBatchNormV3,add_12/add,block_3f_relu/Relu,
l4/Conv2D,l4/BiasAdd,
block_4a_conv_1/Conv2D,block_4a_bn_1/FusedBatchNormV3,block_4a_relu_1/Relu,block_4a_conv_2/Conv2D,block_4a_bn_2/FusedBatchNormV3,block_4a_relu_2/Relu,block_4a_conv_3/Conv2D,block_4a_bn_3/FusedBatchNormV3,block_4a_conv_shortcut/Conv2D,block_4a_bn_shortcut/FusedBatchNormV3,add_13/add,block_4a_relu/Relu,
block_4b_conv_1/Conv2D,block_4b_bn_1/FusedBatchNormV3,block_4b_relu_1/Relu,block_4b_conv_2/Conv2D,block_4b_bn_2/FusedBatchNormV3,block_4b_relu_2/Relu,block_4b_conv_3/Conv2D,block_4b_bn_3/FusedBatchNormV3,add_14/add,block_4b_relu/Relu,
block_4c_conv_1/Conv2D,block_4c_bn_1/FusedBatchNormV3,block_4c_relu_1/Relu,block_4c_conv_2/Conv2D,block_4c_bn_2/FusedBatchNormV3,block_4c_relu_2/Relu,block_4c_conv_3/Conv2D,block_4c_bn_3/FusedBatchNormV3,add_15/add,block_4c_relu/Relu,
l5/Conv2D,l5/BiasAdd,post_hoc_d5/Conv2D,post_hoc_d5/BiasAdd,rpn_3/Conv2D,rpn_3/BiasAdd,rpn_3/Relu,rpn-box_3/Conv2D,rpn-box_3/BiasAdd,rpn-class_3/Conv2D,rpn-class_3/BiasAdd,p6/MaxPool,rpn_4/Conv2D,rpn_4/BiasAdd,rpn_4/Relu,rpn-box_4/Conv2D,rpn-box_4/BiasAdd,rpn-class_4/Conv2D,rpn-class_4/BiasAdd},
{FPN_add_4/add,post_hoc_d4/Conv2D,post_hoc_d4/BiasAdd,rpn_2/Conv2D,rpn_2/BiasAdd,rpn_2/Relu,rpn-box_2/Conv2D,rpn-box_2/BiasAdd,rpn-class_2/Conv2D,rpn-class_2/BiasAdd,MLP/multilevel_propose_rois/level_5/Sigmoid,MLP/multilevel_propose_rois/level_6/Sigmoid},
{FPN_add_3/add,post_hoc_d3/Conv2D,post_hoc_d3/BiasAdd,rpn_1/Conv2D,rpn_1/BiasAdd,rpn_1/Relu,rpn-box_1/Conv2D,rpn-box_1/BiasAdd,rpn-class_1/Conv2D,rpn-class_1/BiasAdd},
{FPN_add_2/add}, {post_hoc_d2/BiasAdd},
{rpn/BiasAdd,rpn/Relu,rpn-box/Conv2D,rpn-box/BiasAdd,rpn-class/Conv2D,rpn-class/BiasAdd},
[INFO] --------------- Layers running on GPU:
[INFO] nearest_upsampling_2,
permute_7/transpose + (Unnamed Layer* 537) [Shuffle] + MLP/multilevel_propose_rois/level_5/Reshape_1,
permute_6/transpose + (Unnamed Layer* 543) [Shuffle] + MLP/multilevel_propose_rois/level_5/Reshape,
permute_9/transpose + (Unnamed Layer* 554) [Shuffle] + MLP/multilevel_propose_rois/level_6/Reshape_1,
permute_8/transpose + (Unnamed Layer* 560) [Shuffle] + MLP/multilevel_propose_rois/level_6/Reshape,
nearest_upsampling_1,
permute_5/transpose + (Unnamed Layer* 517) [Shuffle] + MLP/multilevel_propose_rois/level_4/Reshape_1,
permute_4/transpose + (Unnamed Layer* 523) [Shuffle] + MLP/multilevel_propose_rois/level_4/Reshape,
MLP/multilevel_propose_rois/level_4/Sigmoid, nearest_upsampling,
permute_3/transpose + (Unnamed Layer* 497) [Shuffle] + MLP/multilevel_propose_rois/level_3/Reshape_1,
permute_2/transpose + (Unnamed Layer* 503) [Shuffle] + MLP/multilevel_propose_rois/level_3/Reshape,
MLP/multilevel_propose_rois/level_3/Sigmoid, post_hoc_d2/Conv2D, rpn/Conv2D,
permute_1/transpose + (Unnamed Layer* 475) [Shuffle] + MLP/multilevel_propose_rois/level_2/Reshape_1,
permute/transpose + (Unnamed Layer* 483) [Shuffle] + MLP/multilevel_propose_rois/level_2/Reshape,
MLP/multilevel_propose_rois/level_2/Sigmoid, multilevel_propose_rois, pyramid_crop_and_resize_box,
fc6/MatMul, fc6/BiasAdd + fc6/Relu, fc7/MatMul, fc7/BiasAdd + fc7/Relu,
box-predict/MatMul, box-predict/BiasAdd, class-predict/MatMul, class-predict/BiasAdd,
box_head_softmax, generate_detections, mrcnn_detection_bboxes, pyramid_crop_and_resize_mask,
mask-conv-l0/Conv2D + mask-conv-l0/Relu, mask-conv-l1/Conv2D + mask-conv-l1/Relu,
mask-conv-l2/Conv2D + mask-conv-l2/Relu, mask-conv-l3/Conv2D + mask-conv-l3/Relu,
(Unnamed Layer* 623) [Deconvolution], conv5-mask/BiasAdd + conv5-mask/Relu, mask_fcn_logits/Conv2D,
[WARNING] DLA Node compilation Failed.
[WARNING] DLA Node compilation Failed.
[ERROR] Try increasing the workspace size with IBuilderConfig::setMaxWorkspaceSize() if using IBuilder::buildEngineWithConfig, or IBuilder::setMaxWorkspaceSize() if using IBuilder::buildCudaEngine.
[ERROR] ../builder/tacticOptimizer.cpp (1715) - TRTInternal Error in computeCosts: 0 (Could not find any implementation for node
{conv1/Conv2D,bn_conv1/FusedBatchNormV3,activation/Relu,max_pooling2d/MaxPool,
block_1a_conv_1/Conv2D,block_1a_bn_1/FusedBatchNormV3,block_1a_relu_1/Relu,block_1a_conv_2/Conv2D,block_1a_bn_2/FusedBatchNormV3,block_1a_relu_2/Relu,block_1a_conv_3/Conv2D,block_1a_bn_3/FusedBatchNormV3,block_1a_conv_shortcut/Conv2D,block_1a_bn_shortcut/FusedBatchNormV3,add/add,block_1a_relu/Relu,
block_1b_conv_1/Conv2D,block_1b_bn_1/FusedBatchNormV3,block_1b_relu_1/Relu,block_1b_conv_2/Conv2D,block_1b_bn_2/FusedBatchNormV3,block_1b_relu_2/Relu,block_1b_conv_3/Conv2D,block_1b_bn_3/FusedBatchNormV3,add_1/add,block_1b_relu/Relu,
block_1c_conv_1/Conv2D,block_1c_bn_1/FusedBatchNormV3,block_1c_relu_1/Relu,block_1c_conv_2/Conv2D,block_1c_bn_2/FusedBatchNormV3,block_1c_relu_2/Relu,block_1c_conv_3/Conv2D,block_1c_bn_3/FusedBatchNormV3,add_2/add,block_1c_relu/Relu,
l2/Conv2D,l2/BiasAdd,
block_2a_conv_1/Conv2D,block_2a_bn_1/FusedBatchNormV3,block_2a_relu_1/Relu,block_2a_conv_2/Conv2D,block_2a_bn_2/FusedBatchNormV3,block_2a_relu_2/Relu,block_2a_conv_3/Conv2D,block_2a_bn_3/FusedBatchNormV3,block_2a_conv_shortcut/Conv2D,block_2a_bn_shortcut/FusedBatchNormV3,add_3/add,block_2a_relu/Relu,
block_2b_conv_1/Conv2D,block_2b_bn_1/FusedBatchNormV3,block_2b_relu_1/Relu,block_2b_conv_2/Conv2D,block_2b_bn_2/FusedBatchNormV3,block_2b_relu_2/Relu,block_2b_conv_3/Conv2D,block_2b_bn_3/FusedBatchNormV3,add_4/add,block_2b_relu/Relu,
block_2c_conv_1/Conv2D,block_2c_bn_1/FusedBatchNormV3,block_2c_relu_1/Relu,block_2c_conv_2/Conv2D,block_2c_bn_2/FusedBatchNormV3,block_2c_relu_2/Relu,block_2c_conv_3/Conv2D,block_2c_bn_3/FusedBatchNormV3,add_5/add,block_2c_relu/Relu,
block_2d_conv_1/Conv2D,block_2d_bn_1/FusedBatchNormV3,block_2d_relu_1/Relu,block_2d_conv_2/Conv2D,block_2d_bn_2/FusedBatchNormV3,block_2d_relu_2/Relu,block_2d_conv_3/Conv2D,block_2d_bn_3/FusedBatchNormV3,add_6/add,block_2d_relu/Relu,
l3/Conv2D,l3/BiasAdd,
block_3a_conv_1/Conv2D,block_3a_bn_1/FusedBatchNormV3,block_3a_relu_1/Relu,block_3a_conv_2/Conv2D,block_3a_bn_2/FusedBatchNormV3,block_3a_relu_2/Relu,block_3a_conv_3/Conv2D,block_3a_bn_3/FusedBatchNormV3,block_3a_conv_shortcut/Conv2D,block_3a_bn_shortcut/FusedBatchNormV3,add_7/add,block_3a_relu/Relu,
block_3b_conv_1/Conv2D,block_3b_bn_1/FusedBatchNormV3,block_3b_relu_1/Relu,block_3b_conv_2/Conv2D,block_3b_bn_2/FusedBatchNormV3,block_3b_relu_2/Relu,block_3b_conv_3/Conv2D,block_3b_bn_3/FusedBatchNormV3,add_8/add,block_3b_relu/Relu,
block_3c_conv_1/Conv2D,block_3c_bn_1/FusedBatchNormV3,block_3c_relu_1/Relu,block_3c_conv_2/Conv2D,block_3c_bn_2/FusedBatchNormV3,block_3c_relu_2/Relu,block_3c_conv_3/Conv2D,block_3c_bn_3/FusedBatchNormV3,add_9/add,block_3c_relu/Relu,
block_3d_conv_1/Conv2D,block_3d_bn_1/FusedBatchNormV3,block_3d_relu_1/Relu,block_3d_conv_2/Conv2D,block_3d_bn_2/FusedBatchNormV3,block_3d_relu_2/Relu,block_3d_conv_3/Conv2D,block_3d_bn_3/FusedBatchNormV3,add_10/add,block_3d_relu/Relu,
block_3e_conv_1/Conv2D,block_3e_bn_1/FusedBatchNormV3,block_3e_relu_1/Relu,block_3e_conv_2/Conv2D,block_3e_bn_2/FusedBatchNormV3,block_3e_relu_2/Relu,block_3e_conv_3/Conv2D,block_3e_bn_3/FusedBatchNormV3,add_11/add,block_3e_relu/Relu,
block_3f_conv_1/Conv2D,block_3f_bn_1/FusedBatchNormV3,block_3f_relu_1/Relu,block_3f_conv_2/Conv2D,block_3f_bn_2/FusedBatchNormV3,block_3f_relu_2/Relu,block_3f_conv_3/Conv2D,block_3f_bn_3/FusedBatchNormV3,add_12/add,block_3f_relu/Relu,
l4/Conv2D,l4/BiasAdd,
block_4a_conv_1/Conv2D,block_4a_bn_1/FusedBatchNormV3,block_4a_relu_1/Relu,block_4a_conv_2/Conv2D,block_4a_bn_2/FusedBatchNormV3,block_4a_relu_2/Relu,block_4a_conv_3/Conv2D,block_4a_bn_3/FusedBatchNormV3,block_4a_conv_shortcut/Conv2D,block_4a_bn_shortcut/FusedBatchNormV3,add_13/add,block_4a_relu/Relu,
block_4b_conv_1/Conv2D,block_4b_bn_1/FusedBatchNormV3,block_4b_relu_1/Relu,block_4b_conv_2/Conv2D,block_4b_bn_2/FusedBatchNormV3,block_4b_relu_2/Relu,block_4b_conv_3/Conv2D,block_4b_bn_3/FusedBatchNormV3,add_14/add,block_4b_relu/Relu,
block_4c_conv_1/Conv2D,block_4c_bn_1/FusedBatchNormV3,block_4c_relu_1/Relu,block_4c_conv_2/Conv2D,block_4c_bn_2/FusedBatchNormV3,block_4c_relu_2/Relu,block_4c_conv_3/Conv2D,block_4c_bn_3/FusedBatchNormV3,add_15/add,block_4c_relu/Relu,
l5/Conv2D,l5/BiasAdd,post_hoc_d5/Conv2D,post_hoc_d5/BiasAdd,rpn_3/Conv2D,rpn_3/BiasAdd,rpn_3/Relu,rpn-box_3/Conv2D,rpn-box_3/BiasAdd,rpn-class_3/Conv2D,rpn-class_3/BiasAdd,p6/MaxPool,rpn_4/Conv2D,rpn_4/BiasAdd,rpn_4/Relu,rpn-box_4/Conv2D,rpn-box_4/BiasAdd,rpn-class_4/Conv2D,rpn-class_4/BiasAdd}.)
[ERROR] ../builder/tacticOptimizer.cpp (1715) - TRTInternal Error in computeCosts: 0 ()
[ERROR] Unable to create engine
Segmentation fault (core dumped)
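The log itself names the two remedies: enlarge the builder workspace (the `[ERROR]` explicitly suggests `IBuilderConfig::setMaxWorkspaceSize()`), and allow GPU fallback for the many layers DLA cannot run (the `BuilderFlag::kGPU_FALLBACK` flag in the C++ API). A minimal sketch of a rebuild with `trtexec`, assuming an ONNX model file; `model.onnx`, `model.engine`, and the workspace value are placeholders, and the original engine here may have been produced by a different converter with its own flags:

```shell
# Rebuild targeting DLA core 0 with explicit GPU fallback and a larger
# builder workspace (in MiB). Raise --workspace until the
# "DLA Node compilation Failed" / computeCosts errors go away.
trtexec --onnx=model.onnx \
        --useDLACore=0 \
        --allowGPUFallback \
        --workspace=2048 \
        --saveEngine=model.engine

# Sanity check: if the build still fails, drop the DLA flags entirely to
# confirm the model builds as GPU-only before debugging DLA placement.
trtexec --onnx=model.onnx --workspace=2048 --saveEngine=model_gpu.engine
```

Since almost every layer of this network already falls back to GPU, a GPU-only build also sidesteps the failing DLA node compilation at the cost of losing the DLA offload for the backbone.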