Can we adjust TensorRT optimization to improve results?

Hi,

When I test my Caffe model on TensorRT 3, parsing in kFLOAT mode,

I find the accuracy is worse than when testing in Caffe.

I wonder whether we can control the weight fusing (percentage or scale)

to reduce the degree of layer optimization in TensorRT and improve the accuracy.

Thx!

Hi,

May I know how you use TensorRT? Through a native TensorRT sample or jetson-inference?

Usually, an accuracy drop like this comes from different mean-subtraction handling.
Could you check whether the network input is identical between Caffe and TensorRT inference?
(Is there a mean.binaryproto for Caffe?)

Thanks.

Hi,

I use my own model and network (ResNet-101 based).

Given 10 test images, TensorRT returns only about 4 detections (the results are correct, but the confidences are low),

while Caffe, with the same model and network,

returns about 9 detections with high confidences.

Could you explain “mean subtraction handling” in more detail?

Thx!

Hi,

May I know how you launch TensorRT?

Mean subtraction is a data pre-processing step; you can find more information on this wiki:
http://ufldl.stanford.edu/wiki/index.php/Data_Preprocessing

In Caffe, mean subtraction is applied here:
https://github.com/BVLC/caffe/blob/master/python/caffe/classifier.py#L34

If you are using Jetson_inference, mean subtraction is implemented here:
https://github.com/dusty-nv/jetson-inference/blob/master/detectNet.cpp#L329

Please make sure you are using the same mean-handling first.
Thanks.

Hi,

I just modified sampleFasterRCNN and added some custom layer parser rules,

and I tested another network (ResNet-50 based); its accuracy is almost

the same as when tested in Caffe, so I am confused about this…

Thx!

Hi,

sampleFasterRCNN handles mean subtraction with these values:

float pixelMean[3]{ 102.9801f, 115.9465f, 122.7717f }; // also in BGR order

Please remember to use identical image input to get the same results between Caffe and TensorRT.
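For reference, applying that mean subtraction to a planar (CHW) BGR float buffer can be sketched as below. This is a minimal stand-alone example using the pixelMean values quoted above, not the exact sampleFasterRCNN code; `subtractMean` is a hypothetical helper name:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Subtract the per-channel BGR means from a planar (CHW) float buffer.
// The buffer holds the B plane, then the G plane, then the R plane.
void subtractMean(std::vector<float>& data, int height, int width)
{
    const float pixelMean[3] = {102.9801f, 115.9465f, 122.7717f}; // B, G, R
    for (int c = 0; c < 3; ++c)
        for (int i = 0; i < height * width; ++i)
            data[c * height * width + i] -= pixelMean[c];
}
```

Both frameworks must apply the same means, in the same channel order, to the same layout, or confidences will shift.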

Thanks.

Hi,

I did use the same input image in both tests and handled it with mean subtraction!

Thx!


Hi,

Could you share some information to help us reproduce this issue?

Including:

  • .prototxt
  • .caffemodel
  • Input image
  • The expected result from Caffe
  • Your TRT source code

Thanks.

Hi,

In the BatchNorm layer,

does TensorRT support “use_global_stats: false”?

Thanks!

Hi,

TensorRT doesn’t support the use_global_stats=false flag.
That mode is normally used only during training, which is why TensorRT doesn’t support it.
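For background on why only the inference form is supported: with use_global_stats: true, BatchNorm followed by Scale reduces to a fixed per-channel affine transform, which an inference engine can fold into neighboring layers. A minimal sketch of that folding (`foldBatchNormScale` is a hypothetical helper, not a TensorRT API):

```cpp
#include <cassert>
#include <cmath>

// At inference time, BatchNorm (global mean/var) + Scale (gamma, beta)
// collapse into y = a * x + b with
//   a = gamma / sqrt(var + eps)
//   b = beta - a * mean
struct FoldedBN {
    float a;
    float b;
};

FoldedBN foldBatchNormScale(float mean, float var, float gamma, float beta,
                            float eps)
{
    float a = gamma / std::sqrt(var + eps);
    return {a, beta - a * mean};
}
```

With use_global_stats: false, the mean and variance depend on the current batch, so no fixed a and b exist to fold.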

Thanks.

Hi!
I have met the same problem. Did you solve it?
Thank you.

Hi,

I finally found that

there is an unsupported parameter in one of my layers,

and it affects the accuracy.

Could you give a little more detail?
I use ResNet-50; which parameter is the “unsupported param”?

name: “ResNet-50”
#input: “data”
#input_dim: 1
#input_dim: 3
#input_dim: 224
#input_dim: 224

layer {
name: “data”
type: “MemoryData”
top: “data”
top: “label”
memory_data_param {
batch_size: 1
channels: 3
height: 224
width: 224
}

}

layer {
bottom: “data”
top: “conv1”
name: “conv1”
type: “Convolution”
convolution_param {
num_output: 64
kernel_size: 7
pad: 3
stride: 2
}
}

layer {
bottom: “conv1”
top: “conv1”
name: “bn_conv1”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “conv1”
top: “conv1”
name: “scale_conv1”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “conv1”
top: “conv1”
name: “conv1_relu”
type: “ReLU”
}

layer {
bottom: “conv1”
top: “pool1”
name: “pool1”
type: “Pooling”
pooling_param {
kernel_size: 3
stride: 2
pool: MAX
}
}

layer {
bottom: “pool1”
top: “res2a_branch1”
name: “res2a_branch1”
type: “Convolution”
convolution_param {
num_output: 256
kernel_size: 1
pad: 0
stride: 1
bias_term: false
}
}

layer {
bottom: “res2a_branch1”
top: “res2a_branch1”
name: “bn2a_branch1”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res2a_branch1”
top: “res2a_branch1”
name: “scale2a_branch1”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “pool1”
top: “res2a_branch2a”
name: “res2a_branch2a”
type: “Convolution”
convolution_param {
num_output: 64
kernel_size: 1
pad: 0
stride: 1
bias_term: false
}
}

layer {
bottom: “res2a_branch2a”
top: “res2a_branch2a”
name: “bn2a_branch2a”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res2a_branch2a”
top: “res2a_branch2a”
name: “scale2a_branch2a”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res2a_branch2a”
top: “res2a_branch2a”
name: “res2a_branch2a_relu”
type: “ReLU”
}

layer {
bottom: “res2a_branch2a”
top: “res2a_branch2b”
name: “res2a_branch2b”
type: “Convolution”
convolution_param {
num_output: 64
kernel_size: 3
pad: 1
stride: 1
bias_term: false
}
}

layer {
bottom: “res2a_branch2b”
top: “res2a_branch2b”
name: “bn2a_branch2b”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res2a_branch2b”
top: “res2a_branch2b”
name: “scale2a_branch2b”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res2a_branch2b”
top: “res2a_branch2b”
name: “res2a_branch2b_relu”
type: “ReLU”
}

layer {
bottom: “res2a_branch2b”
top: “res2a_branch2c”
name: “res2a_branch2c”
type: “Convolution”
convolution_param {
num_output: 256
kernel_size: 1
pad: 0
stride: 1
bias_term: false
}
}

layer {
bottom: “res2a_branch2c”
top: “res2a_branch2c”
name: “bn2a_branch2c”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res2a_branch2c”
top: “res2a_branch2c”
name: “scale2a_branch2c”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res2a_branch1”
bottom: “res2a_branch2c”
top: “res2a”
name: “res2a”
type: “Eltwise”
}

layer {
bottom: “res2a”
top: “res2a”
name: “res2a_relu”
type: “ReLU”
}

layer {
bottom: “res2a”
top: “res2b_branch2a”
name: “res2b_branch2a”
type: “Convolution”
convolution_param {
num_output: 64
kernel_size: 1
pad: 0
stride: 1
bias_term: false
}
}

layer {
bottom: “res2b_branch2a”
top: “res2b_branch2a”
name: “bn2b_branch2a”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res2b_branch2a”
top: “res2b_branch2a”
name: “scale2b_branch2a”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res2b_branch2a”
top: “res2b_branch2a”
name: “res2b_branch2a_relu”
type: “ReLU”
}

layer {
bottom: “res2b_branch2a”
top: “res2b_branch2b”
name: “res2b_branch2b”
type: “Convolution”
convolution_param {
num_output: 64
kernel_size: 3
pad: 1
stride: 1
bias_term: false
}
}

layer {
bottom: “res2b_branch2b”
top: “res2b_branch2b”
name: “bn2b_branch2b”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res2b_branch2b”
top: “res2b_branch2b”
name: “scale2b_branch2b”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res2b_branch2b”
top: “res2b_branch2b”
name: “res2b_branch2b_relu”
type: “ReLU”
}

layer {
bottom: “res2b_branch2b”
top: “res2b_branch2c”
name: “res2b_branch2c”
type: “Convolution”
convolution_param {
num_output: 256
kernel_size: 1
pad: 0
stride: 1
bias_term: false
}
}

layer {
bottom: “res2b_branch2c”
top: “res2b_branch2c”
name: “bn2b_branch2c”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res2b_branch2c”
top: “res2b_branch2c”
name: “scale2b_branch2c”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res2a”
bottom: “res2b_branch2c”
top: “res2b”
name: “res2b”
type: “Eltwise”
}

layer {
bottom: “res2b”
top: “res2b”
name: “res2b_relu”
type: “ReLU”
}

layer {
bottom: “res2b”
top: “res2c_branch2a”
name: “res2c_branch2a”
type: “Convolution”
convolution_param {
num_output: 64
kernel_size: 1
pad: 0
stride: 1
bias_term: false
}
}

layer {
bottom: “res2c_branch2a”
top: “res2c_branch2a”
name: “bn2c_branch2a”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res2c_branch2a”
top: “res2c_branch2a”
name: “scale2c_branch2a”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res2c_branch2a”
top: “res2c_branch2a”
name: “res2c_branch2a_relu”
type: “ReLU”
}

layer {
bottom: “res2c_branch2a”
top: “res2c_branch2b”
name: “res2c_branch2b”
type: “Convolution”
convolution_param {
num_output: 64
kernel_size: 3
pad: 1
stride: 1
bias_term: false
}
}

layer {
bottom: “res2c_branch2b”
top: “res2c_branch2b”
name: “bn2c_branch2b”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res2c_branch2b”
top: “res2c_branch2b”
name: “scale2c_branch2b”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res2c_branch2b”
top: “res2c_branch2b”
name: “res2c_branch2b_relu”
type: “ReLU”
}

layer {
bottom: “res2c_branch2b”
top: “res2c_branch2c”
name: “res2c_branch2c”
type: “Convolution”
convolution_param {
num_output: 256
kernel_size: 1
pad: 0
stride: 1
bias_term: false
}
}

layer {
bottom: “res2c_branch2c”
top: “res2c_branch2c”
name: “bn2c_branch2c”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res2c_branch2c”
top: “res2c_branch2c”
name: “scale2c_branch2c”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res2b”
bottom: “res2c_branch2c”
top: “res2c”
name: “res2c”
type: “Eltwise”
}

layer {
bottom: “res2c”
top: “res2c”
name: “res2c_relu”
type: “ReLU”
}

layer {
bottom: “res2c”
top: “res3a_branch1”
name: “res3a_branch1”
type: “Convolution”
convolution_param {
num_output: 512
kernel_size: 1
pad: 0
stride: 2
bias_term: false
}
}

layer {
bottom: “res3a_branch1”
top: “res3a_branch1”
name: “bn3a_branch1”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res3a_branch1”
top: “res3a_branch1”
name: “scale3a_branch1”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res2c”
top: “res3a_branch2a”
name: “res3a_branch2a”
type: “Convolution”
convolution_param {
num_output: 128
kernel_size: 1
pad: 0
stride: 2
bias_term: false
}
}

layer {
bottom: “res3a_branch2a”
top: “res3a_branch2a”
name: “bn3a_branch2a”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res3a_branch2a”
top: “res3a_branch2a”
name: “scale3a_branch2a”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res3a_branch2a”
top: “res3a_branch2a”
name: “res3a_branch2a_relu”
type: “ReLU”
}

layer {
bottom: “res3a_branch2a”
top: “res3a_branch2b”
name: “res3a_branch2b”
type: “Convolution”
convolution_param {
num_output: 128
kernel_size: 3
pad: 1
stride: 1
bias_term: false
}
}

layer {
bottom: “res3a_branch2b”
top: “res3a_branch2b”
name: “bn3a_branch2b”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res3a_branch2b”
top: “res3a_branch2b”
name: “scale3a_branch2b”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res3a_branch2b”
top: “res3a_branch2b”
name: “res3a_branch2b_relu”
type: “ReLU”
}

layer {
bottom: “res3a_branch2b”
top: “res3a_branch2c”
name: “res3a_branch2c”
type: “Convolution”
convolution_param {
num_output: 512
kernel_size: 1
pad: 0
stride: 1
bias_term: false
}
}

layer {
bottom: “res3a_branch2c”
top: “res3a_branch2c”
name: “bn3a_branch2c”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res3a_branch2c”
top: “res3a_branch2c”
name: “scale3a_branch2c”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res3a_branch1”
bottom: “res3a_branch2c”
top: “res3a”
name: “res3a”
type: “Eltwise”
}

layer {
bottom: “res3a”
top: “res3a”
name: “res3a_relu”
type: “ReLU”
}

layer {
bottom: “res3a”
top: “res3b_branch2a”
name: “res3b_branch2a”
type: “Convolution”
convolution_param {
num_output: 128
kernel_size: 1
pad: 0
stride: 1
bias_term: false
}
}

layer {
bottom: “res3b_branch2a”
top: “res3b_branch2a”
name: “bn3b_branch2a”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res3b_branch2a”
top: “res3b_branch2a”
name: “scale3b_branch2a”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res3b_branch2a”
top: “res3b_branch2a”
name: “res3b_branch2a_relu”
type: “ReLU”
}

layer {
bottom: “res3b_branch2a”
top: “res3b_branch2b”
name: “res3b_branch2b”
type: “Convolution”
convolution_param {
num_output: 128
kernel_size: 3
pad: 1
stride: 1
bias_term: false
}
}

layer {
bottom: “res3b_branch2b”
top: “res3b_branch2b”
name: “bn3b_branch2b”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res3b_branch2b”
top: “res3b_branch2b”
name: “scale3b_branch2b”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res3b_branch2b”
top: “res3b_branch2b”
name: “res3b_branch2b_relu”
type: “ReLU”
}

layer {
bottom: “res3b_branch2b”
top: “res3b_branch2c”
name: “res3b_branch2c”
type: “Convolution”
convolution_param {
num_output: 512
kernel_size: 1
pad: 0
stride: 1
bias_term: false
}
}

layer {
bottom: “res3b_branch2c”
top: “res3b_branch2c”
name: “bn3b_branch2c”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res3b_branch2c”
top: “res3b_branch2c”
name: “scale3b_branch2c”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res3a”
bottom: “res3b_branch2c”
top: “res3b”
name: “res3b”
type: “Eltwise”
}

layer {
bottom: “res3b”
top: “res3b”
name: “res3b_relu”
type: “ReLU”
}

layer {
bottom: “res3b”
top: “res3c_branch2a”
name: “res3c_branch2a”
type: “Convolution”
convolution_param {
num_output: 128
kernel_size: 1
pad: 0
stride: 1
bias_term: false
}
}

layer {
bottom: “res3c_branch2a”
top: “res3c_branch2a”
name: “bn3c_branch2a”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res3c_branch2a”
top: “res3c_branch2a”
name: “scale3c_branch2a”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res3c_branch2a”
top: “res3c_branch2a”
name: “res3c_branch2a_relu”
type: “ReLU”
}

layer {
bottom: “res3c_branch2a”
top: “res3c_branch2b”
name: “res3c_branch2b”
type: “Convolution”
convolution_param {
num_output: 128
kernel_size: 3
pad: 1
stride: 1
bias_term: false
}
}

layer {
bottom: “res3c_branch2b”
top: “res3c_branch2b”
name: “bn3c_branch2b”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res3c_branch2b”
top: “res3c_branch2b”
name: “scale3c_branch2b”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res3c_branch2b”
top: “res3c_branch2b”
name: “res3c_branch2b_relu”
type: “ReLU”
}

layer {
bottom: “res3c_branch2b”
top: “res3c_branch2c”
name: “res3c_branch2c”
type: “Convolution”
convolution_param {
num_output: 512
kernel_size: 1
pad: 0
stride: 1
bias_term: false
}
}

layer {
bottom: “res3c_branch2c”
top: “res3c_branch2c”
name: “bn3c_branch2c”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res3c_branch2c”
top: “res3c_branch2c”
name: “scale3c_branch2c”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res3b”
bottom: “res3c_branch2c”
top: “res3c”
name: “res3c”
type: “Eltwise”
}

layer {
bottom: “res3c”
top: “res3c”
name: “res3c_relu”
type: “ReLU”
}

layer {
bottom: “res3c”
top: “res3d_branch2a”
name: “res3d_branch2a”
type: “Convolution”
convolution_param {
num_output: 128
kernel_size: 1
pad: 0
stride: 1
bias_term: false
}
}

layer {
bottom: “res3d_branch2a”
top: “res3d_branch2a”
name: “bn3d_branch2a”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res3d_branch2a”
top: “res3d_branch2a”
name: “scale3d_branch2a”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res3d_branch2a”
top: “res3d_branch2a”
name: “res3d_branch2a_relu”
type: “ReLU”
}

layer {
bottom: “res3d_branch2a”
top: “res3d_branch2b”
name: “res3d_branch2b”
type: “Convolution”
convolution_param {
num_output: 128
kernel_size: 3
pad: 1
stride: 1
bias_term: false
}
}

layer {
bottom: “res3d_branch2b”
top: “res3d_branch2b”
name: “bn3d_branch2b”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res3d_branch2b”
top: “res3d_branch2b”
name: “scale3d_branch2b”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res3d_branch2b”
top: “res3d_branch2b”
name: “res3d_branch2b_relu”
type: “ReLU”
}

layer {
bottom: “res3d_branch2b”
top: “res3d_branch2c”
name: “res3d_branch2c”
type: “Convolution”
convolution_param {
num_output: 512
kernel_size: 1
pad: 0
stride: 1
bias_term: false
}
}

layer {
bottom: “res3d_branch2c”
top: “res3d_branch2c”
name: “bn3d_branch2c”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res3d_branch2c”
top: “res3d_branch2c”
name: “scale3d_branch2c”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res3c”
bottom: “res3d_branch2c”
top: “res3d”
name: “res3d”
type: “Eltwise”
}

layer {
bottom: “res3d”
top: “res3d”
name: “res3d_relu”
type: “ReLU”
}

layer {
bottom: “res3d”
top: “res4a_branch1”
name: “res4a_branch1”
type: “Convolution”
convolution_param {
num_output: 1024
kernel_size: 1
pad: 0
stride: 2
bias_term: false
}
}

layer {
bottom: “res4a_branch1”
top: “res4a_branch1”
name: “bn4a_branch1”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res4a_branch1”
top: “res4a_branch1”
name: “scale4a_branch1”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res3d”
top: “res4a_branch2a”
name: “res4a_branch2a”
type: “Convolution”
convolution_param {
num_output: 256
kernel_size: 1
pad: 0
stride: 2
bias_term: false
}
}

layer {
bottom: “res4a_branch2a”
top: “res4a_branch2a”
name: “bn4a_branch2a”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res4a_branch2a”
top: “res4a_branch2a”
name: “scale4a_branch2a”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res4a_branch2a”
top: “res4a_branch2a”
name: “res4a_branch2a_relu”
type: “ReLU”
}

layer {
bottom: “res4a_branch2a”
top: “res4a_branch2b”
name: “res4a_branch2b”
type: “Convolution”
convolution_param {
num_output: 256
kernel_size: 3
pad: 1
stride: 1
bias_term: false
}
}

layer {
bottom: “res4a_branch2b”
top: “res4a_branch2b”
name: “bn4a_branch2b”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res4a_branch2b”
top: “res4a_branch2b”
name: “scale4a_branch2b”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res4a_branch2b”
top: “res4a_branch2b”
name: “res4a_branch2b_relu”
type: “ReLU”
}

layer {
bottom: “res4a_branch2b”
top: “res4a_branch2c”
name: “res4a_branch2c”
type: “Convolution”
convolution_param {
num_output: 1024
kernel_size: 1
pad: 0
stride: 1
bias_term: false
}
}

layer {
bottom: “res4a_branch2c”
top: “res4a_branch2c”
name: “bn4a_branch2c”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res4a_branch2c”
top: “res4a_branch2c”
name: “scale4a_branch2c”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res4a_branch1”
bottom: “res4a_branch2c”
top: “res4a”
name: “res4a”
type: “Eltwise”
}

layer {
bottom: “res4a”
top: “res4a”
name: “res4a_relu”
type: “ReLU”
}

layer {
bottom: “res4a”
top: “res4b_branch2a”
name: “res4b_branch2a”
type: “Convolution”
convolution_param {
num_output: 256
kernel_size: 1
pad: 0
stride: 1
bias_term: false
}
}

layer {
bottom: “res4b_branch2a”
top: “res4b_branch2a”
name: “bn4b_branch2a”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res4b_branch2a”
top: “res4b_branch2a”
name: “scale4b_branch2a”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res4b_branch2a”
top: “res4b_branch2a”
name: “res4b_branch2a_relu”
type: “ReLU”
}

layer {
bottom: “res4b_branch2a”
top: “res4b_branch2b”
name: “res4b_branch2b”
type: “Convolution”
convolution_param {
num_output: 256
kernel_size: 3
pad: 1
stride: 1
bias_term: false
}
}

layer {
bottom: “res4b_branch2b”
top: “res4b_branch2b”
name: “bn4b_branch2b”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res4b_branch2b”
top: “res4b_branch2b”
name: “scale4b_branch2b”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res4b_branch2b”
top: “res4b_branch2b”
name: “res4b_branch2b_relu”
type: “ReLU”
}

layer {
bottom: “res4b_branch2b”
top: “res4b_branch2c”
name: “res4b_branch2c”
type: “Convolution”
convolution_param {
num_output: 1024
kernel_size: 1
pad: 0
stride: 1
bias_term: false
}
}

layer {
bottom: “res4b_branch2c”
top: “res4b_branch2c”
name: “bn4b_branch2c”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res4b_branch2c”
top: “res4b_branch2c”
name: “scale4b_branch2c”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res4a”
bottom: “res4b_branch2c”
top: “res4b”
name: “res4b”
type: “Eltwise”
}

layer {
bottom: “res4b”
top: “res4b”
name: “res4b_relu”
type: “ReLU”
}

layer {
bottom: “res4b”
top: “res4c_branch2a”
name: “res4c_branch2a”
type: “Convolution”
convolution_param {
num_output: 256
kernel_size: 1
pad: 0
stride: 1
bias_term: false
}
}

layer {
bottom: “res4c_branch2a”
top: “res4c_branch2a”
name: “bn4c_branch2a”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res4c_branch2a”
top: “res4c_branch2a”
name: “scale4c_branch2a”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res4c_branch2a”
top: “res4c_branch2a”
name: “res4c_branch2a_relu”
type: “ReLU”
}

layer {
bottom: “res4c_branch2a”
top: “res4c_branch2b”
name: “res4c_branch2b”
type: “Convolution”
convolution_param {
num_output: 256
kernel_size: 3
pad: 1
stride: 1
bias_term: false
}
}

layer {
bottom: “res4c_branch2b”
top: “res4c_branch2b”
name: “bn4c_branch2b”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res4c_branch2b”
top: “res4c_branch2b”
name: “scale4c_branch2b”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res4c_branch2b”
top: “res4c_branch2b”
name: “res4c_branch2b_relu”
type: “ReLU”
}

layer {
bottom: “res4c_branch2b”
top: “res4c_branch2c”
name: “res4c_branch2c”
type: “Convolution”
convolution_param {
num_output: 1024
kernel_size: 1
pad: 0
stride: 1
bias_term: false
}
}

layer {
bottom: “res4c_branch2c”
top: “res4c_branch2c”
name: “bn4c_branch2c”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res4c_branch2c”
top: “res4c_branch2c”
name: “scale4c_branch2c”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res4b”
bottom: “res4c_branch2c”
top: “res4c”
name: “res4c”
type: “Eltwise”
}

layer {
bottom: “res4c”
top: “res4c”
name: “res4c_relu”
type: “ReLU”
}

layer {
bottom: “res4c”
top: “res4d_branch2a”
name: “res4d_branch2a”
type: “Convolution”
convolution_param {
num_output: 256
kernel_size: 1
pad: 0
stride: 1
bias_term: false
}
}

layer {
bottom: “res4d_branch2a”
top: “res4d_branch2a”
name: “bn4d_branch2a”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res4d_branch2a”
top: “res4d_branch2a”
name: “scale4d_branch2a”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res4d_branch2a”
top: “res4d_branch2a”
name: “res4d_branch2a_relu”
type: “ReLU”
}

layer {
bottom: “res4d_branch2a”
top: “res4d_branch2b”
name: “res4d_branch2b”
type: “Convolution”
convolution_param {
num_output: 256
kernel_size: 3
pad: 1
stride: 1
bias_term: false
}
}

layer {
bottom: “res4d_branch2b”
top: “res4d_branch2b”
name: “bn4d_branch2b”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res4d_branch2b”
top: “res4d_branch2b”
name: “scale4d_branch2b”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res4d_branch2b”
top: “res4d_branch2b”
name: “res4d_branch2b_relu”
type: “ReLU”
}

layer {
bottom: “res4d_branch2b”
top: “res4d_branch2c”
name: “res4d_branch2c”
type: “Convolution”
convolution_param {
num_output: 1024
kernel_size: 1
pad: 0
stride: 1
bias_term: false
}
}

layer {
bottom: “res4d_branch2c”
top: “res4d_branch2c”
name: “bn4d_branch2c”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res4d_branch2c”
top: “res4d_branch2c”
name: “scale4d_branch2c”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res4c”
bottom: “res4d_branch2c”
top: “res4d”
name: “res4d”
type: “Eltwise”
}

layer {
bottom: “res4d”
top: “res4d”
name: “res4d_relu”
type: “ReLU”
}

layer {
bottom: “res4d”
top: “res4e_branch2a”
name: “res4e_branch2a”
type: “Convolution”
convolution_param {
num_output: 256
kernel_size: 1
pad: 0
stride: 1
bias_term: false
}
}

layer {
bottom: “res4e_branch2a”
top: “res4e_branch2a”
name: “bn4e_branch2a”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res4e_branch2a”
top: “res4e_branch2a”
name: “scale4e_branch2a”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res4e_branch2a”
top: “res4e_branch2a”
name: “res4e_branch2a_relu”
type: “ReLU”
}

layer {
bottom: “res4e_branch2a”
top: “res4e_branch2b”
name: “res4e_branch2b”
type: “Convolution”
convolution_param {
num_output: 256
kernel_size: 3
pad: 1
stride: 1
bias_term: false
}
}

layer {
bottom: “res4e_branch2b”
top: “res4e_branch2b”
name: “bn4e_branch2b”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res4e_branch2b”
top: “res4e_branch2b”
name: “scale4e_branch2b”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res4e_branch2b”
top: “res4e_branch2b”
name: “res4e_branch2b_relu”
type: “ReLU”
}

layer {
bottom: “res4e_branch2b”
top: “res4e_branch2c”
name: “res4e_branch2c”
type: “Convolution”
convolution_param {
num_output: 1024
kernel_size: 1
pad: 0
stride: 1
bias_term: false
}
}

layer {
bottom: “res4e_branch2c”
top: “res4e_branch2c”
name: “bn4e_branch2c”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res4e_branch2c”
top: “res4e_branch2c”
name: “scale4e_branch2c”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res4d”
bottom: “res4e_branch2c”
top: “res4e”
name: “res4e”
type: “Eltwise”
}

layer {
bottom: “res4e”
top: “res4e”
name: “res4e_relu”
type: “ReLU”
}

layer {
bottom: “res4e”
top: “res4f_branch2a”
name: “res4f_branch2a”
type: “Convolution”
convolution_param {
num_output: 256
kernel_size: 1
pad: 0
stride: 1
bias_term: false
}
}

layer {
bottom: “res4f_branch2a”
top: “res4f_branch2a”
name: “bn4f_branch2a”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res4f_branch2a”
top: “res4f_branch2a”
name: “scale4f_branch2a”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res4f_branch2a”
top: “res4f_branch2a”
name: “res4f_branch2a_relu”
type: “ReLU”
}

layer {
bottom: “res4f_branch2a”
top: “res4f_branch2b”
name: “res4f_branch2b”
type: “Convolution”
convolution_param {
num_output: 256
kernel_size: 3
pad: 1
stride: 1
bias_term: false
}
}

layer {
bottom: “res4f_branch2b”
top: “res4f_branch2b”
name: “bn4f_branch2b”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res4f_branch2b”
top: “res4f_branch2b”
name: “scale4f_branch2b”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res4f_branch2b”
top: “res4f_branch2b”
name: “res4f_branch2b_relu”
type: “ReLU”
}

layer {
bottom: “res4f_branch2b”
top: “res4f_branch2c”
name: “res4f_branch2c”
type: “Convolution”
convolution_param {
num_output: 1024
kernel_size: 1
pad: 0
stride: 1
bias_term: false
}
}

layer {
bottom: “res4f_branch2c”
top: “res4f_branch2c”
name: “bn4f_branch2c”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res4f_branch2c”
top: “res4f_branch2c”
name: “scale4f_branch2c”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res4e”
bottom: “res4f_branch2c”
top: “res4f”
name: “res4f”
type: “Eltwise”
}

layer {
bottom: “res4f”
top: “res4f”
name: “res4f_relu”
type: “ReLU”
}

layer {
bottom: “res4f”
top: “res5a_branch1”
name: “res5a_branch1”
type: “Convolution”
convolution_param {
num_output: 2048
kernel_size: 1
pad: 0
stride: 2
bias_term: false
}
}

layer {
bottom: “res5a_branch1”
top: “res5a_branch1”
name: “bn5a_branch1”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res5a_branch1”
top: “res5a_branch1”
name: “scale5a_branch1”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res4f”
top: “res5a_branch2a”
name: “res5a_branch2a”
type: “Convolution”
convolution_param {
num_output: 512
kernel_size: 1
pad: 0
stride: 2
bias_term: false
}
}

layer {
bottom: “res5a_branch2a”
top: “res5a_branch2a”
name: “bn5a_branch2a”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res5a_branch2a”
top: “res5a_branch2a”
name: “scale5a_branch2a”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res5a_branch2a”
top: “res5a_branch2a”
name: “res5a_branch2a_relu”
type: “ReLU”
}

layer {
bottom: “res5a_branch2a”
top: “res5a_branch2b”
name: “res5a_branch2b”
type: “Convolution”
convolution_param {
num_output: 512
kernel_size: 3
pad: 1
stride: 1
bias_term: false
}
}

layer {
bottom: “res5a_branch2b”
top: “res5a_branch2b”
name: “bn5a_branch2b”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res5a_branch2b”
top: “res5a_branch2b”
name: “scale5a_branch2b”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res5a_branch2b”
top: “res5a_branch2b”
name: “res5a_branch2b_relu”
type: “ReLU”
}

layer {
bottom: “res5a_branch2b”
top: “res5a_branch2c”
name: “res5a_branch2c”
type: “Convolution”
convolution_param {
num_output: 2048
kernel_size: 1
pad: 0
stride: 1
bias_term: false
}
}

layer {
bottom: “res5a_branch2c”
top: “res5a_branch2c”
name: “bn5a_branch2c”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res5a_branch2c”
top: “res5a_branch2c”
name: “scale5a_branch2c”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res5a_branch1”
bottom: “res5a_branch2c”
top: “res5a”
name: “res5a”
type: “Eltwise”
}

layer {
bottom: “res5a”
top: “res5a”
name: “res5a_relu”
type: “ReLU”
}

layer {
bottom: “res5a”
top: “res5b_branch2a”
name: “res5b_branch2a”
type: “Convolution”
convolution_param {
num_output: 512
kernel_size: 1
pad: 0
stride: 1
bias_term: false
}
}

layer {
bottom: “res5b_branch2a”
top: “res5b_branch2a”
name: “bn5b_branch2a”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res5b_branch2a”
top: “res5b_branch2a”
name: “scale5b_branch2a”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res5b_branch2a”
top: “res5b_branch2a”
name: “res5b_branch2a_relu”
type: “ReLU”
}

layer {
bottom: “res5b_branch2a”
top: “res5b_branch2b”
name: “res5b_branch2b”
type: “Convolution”
convolution_param {
num_output: 512
kernel_size: 3
pad: 1
stride: 1
bias_term: false
}
}

layer {
bottom: “res5b_branch2b”
top: “res5b_branch2b”
name: “bn5b_branch2b”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res5b_branch2b”
top: “res5b_branch2b”
name: “scale5b_branch2b”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res5b_branch2b”
top: “res5b_branch2b”
name: “res5b_branch2b_relu”
type: “ReLU”
}

layer {
bottom: “res5b_branch2b”
top: “res5b_branch2c”
name: “res5b_branch2c”
type: “Convolution”
convolution_param {
num_output: 2048
kernel_size: 1
pad: 0
stride: 1
bias_term: false
}
}

layer {
bottom: “res5b_branch2c”
top: “res5b_branch2c”
name: “bn5b_branch2c”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res5b_branch2c”
top: “res5b_branch2c”
name: “scale5b_branch2c”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res5a”
bottom: “res5b_branch2c”
top: “res5b”
name: “res5b”
type: “Eltwise”
}

layer {
bottom: “res5b”
top: “res5b”
name: “res5b_relu”
type: “ReLU”
}

layer {
bottom: “res5b”
top: “res5c_branch2a”
name: “res5c_branch2a”
type: “Convolution”
convolution_param {
num_output: 512
kernel_size: 1
pad: 0
stride: 1
bias_term: false
}
}

layer {
bottom: “res5c_branch2a”
top: “res5c_branch2a”
name: “bn5c_branch2a”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res5c_branch2a”
top: “res5c_branch2a”
name: “scale5c_branch2a”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res5c_branch2a”
top: “res5c_branch2a”
name: “res5c_branch2a_relu”
type: “ReLU”
}

layer {
bottom: “res5c_branch2a”
top: “res5c_branch2b”
name: “res5c_branch2b”
type: “Convolution”
convolution_param {
num_output: 512
kernel_size: 3
pad: 1
stride: 1
bias_term: false
}
}

layer {
bottom: “res5c_branch2b”
top: “res5c_branch2b”
name: “bn5c_branch2b”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res5c_branch2b”
top: “res5c_branch2b”
name: “scale5c_branch2b”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res5c_branch2b”
top: “res5c_branch2b”
name: “res5c_branch2b_relu”
type: “ReLU”
}

layer {
bottom: “res5c_branch2b”
top: “res5c_branch2c”
name: “res5c_branch2c”
type: “Convolution”
convolution_param {
num_output: 2048
kernel_size: 1
pad: 0
stride: 1
bias_term: false
}
}

layer {
bottom: “res5c_branch2c”
top: “res5c_branch2c”
name: “bn5c_branch2c”
type: “BatchNorm”
batch_norm_param {
use_global_stats: true
}
}

layer {
bottom: “res5c_branch2c”
top: “res5c_branch2c”
name: “scale5c_branch2c”
type: “Scale”
scale_param {
bias_term: true
}
}

layer {
bottom: “res5b”
bottom: “res5c_branch2c”
top: “res5c”
name: “res5c”
type: “Eltwise”
}

layer {
bottom: “res5c”
top: “res5c”
name: “res5c_relu”
type: “ReLU”
}

layer {
bottom: “res5c”
top: “pool5”
name: “pool5”
type: “Pooling”
pooling_param {
kernel_size: 7
stride: 1
pool: AVE
}
}

layer {
bottom: “pool5”
top: “fc1000”
name: “fc1000”
type: “InnerProduct”
inner_product_param {
num_output: 1000
}
}

layer {
bottom: “fc1000”
top: “prob”
name: “prob”
type: “Softmax”
}

Hi,

In my layer,

I had changed “use_global_stats” to false, and that setting is not supported;

that is not the case in your prototxt.

Besides, I also tried ResNet-50, and there was only a slight

accuracy reduction.

Thank you for your reply!

I set “use_global_stats” to true or false, and the results are the same, and correct, in Caffe.
But TensorRT does not support this parameter. How should “use_global_stats” be set?
Thank you!

It seems the cause is not “use_global_stats”.

Which version of TensorRT do you use?

And is your input image 224x224?

Sorry, I made a mistake.
On PC:
with use_global_stats set to true, the result is correct;
with use_global_stats set to false, the result is wrong.

But both differ from the TensorRT result.
I use TensorRT 2.1.2,
and the input image is 224x224.

Hi! Excuse me,
you said “Besides, I also tried Res50 but there are just a little bit accuracy reduction”; which platform was that? PX2 or another?
Would you show me how to do it? Thanks.

Next, here is what I did on the PX2:
1. Input image
I use OpenCV to load a JPG image like this; the means are already subtracted before this code runs.
Just one image per test.

for (int h = 0; h < height; ++h) {
  for (int w = 0; w < width; ++w) {
    data[(0 * height + h) * width + w] = float(cv_resized.at<cv::Vec3f>(cv::Point(w, h))[0]); // Blue
    data[(1 * height + h) * width + w] = float(cv_resized.at<cv::Vec3f>(cv::Point(w, h))[1]); // Green
    data[(2 * height + h) * width + w] = float(cv_resized.at<cv::Vec3f>(cv::Point(w, h))[2]); // Red
  }
}
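The repacking that the loop above performs (interleaved HWC BGR pixels into the planar CHW layout TensorRT expects) can be sketched self-contained, without OpenCV (`hwcToChw` is a hypothetical helper name, not from the original post):

```cpp
#include <cassert>
#include <vector>

// Repack an interleaved HWC (BGR) float image into planar CHW order:
// output holds the full B plane, then the G plane, then the R plane.
std::vector<float> hwcToChw(const std::vector<float>& hwc, int height, int width)
{
    std::vector<float> chw(3 * height * width);
    for (int h = 0; h < height; ++h)
        for (int w = 0; w < width; ++w)
            for (int c = 0; c < 3; ++c)
                chw[(c * height + h) * width + w] = hwc[(h * width + w) * 3 + c];
    return chw;
}
```

Getting this layout wrong (e.g. feeding HWC data to a CHW input binding, or swapping BGR/RGB) produces exactly the kind of low-confidence, wrong-output behavior described in this thread.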

Then I set data as the network input like this:
doInference(*context, data, imInfo, bboxPreds, clsProbs, rois, 1);

2. Get the output of cls_prob or “res5c”:
they are all different from each other!
Any suggestion is greatly appreciated!

Detailed information is available at:
https://devtalk.nvidia.com/default/topic/1030909/general/resnet50-get-error-result-on-px2-with-tensorrt2-1-2/
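When the two frameworks disagree like this, a common way to localize the problem is to dump the same intermediate blob (e.g. “res5c”) from both Caffe and TensorRT and compare them layer by layer until the first divergence. A minimal comparison helper might look like this (`maxAbsDiff` is a hypothetical name, not part of either API):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>

// Return the largest element-wise absolute difference between two
// equally sized float buffers, e.g. the same blob dumped from Caffe
// and from TensorRT.
float maxAbsDiff(const float* a, const float* b, std::size_t n)
{
    float m = 0.0f;
    for (std::size_t i = 0; i < n; ++i)
        m = std::max(m, std::fabs(a[i] - b[i]));
    return m;
}
```

A small difference (~1e-4 in FP32) is normal numerical noise; a large one at some layer points to the first place the two pipelines diverge, such as input layout, mean subtraction, or an unsupported parameter.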