TensorRT crash during Resnet34 optimization

Hello, I’m trying to optimize the Resnet34 model mentioned in NVIDIA/caffe/models/modelBuilder.
I’ve successfully generated the .caffemodel and the updated .prototxt for deployment, and can run it via Caffe.
However, when I try to load and optimize this model using TensorRT, it crashes with the following logs:

TENSORRT INFO: Applying generic optimizations to the graph for inference.
TENSORRT INFO: Original: 124 layers
TENSORRT INFO: After dead-layer removal: 124 layers
TENSORRT INFO: Fusing convolution weights from conv1 with scale conv1/bn
TENSORRT INFO: Fusing convolution weights from res2.1.conv1 with scale res2.1.conv1/bn
TENSORRT INFO: Fusing convolution weights from res2.1.conv2 with scale res2.1.conv2/bn
TENSORRT INFO: Fusing convolution weights from res2.2.conv1 with scale res2.2.conv1/bn
TENSORRT INFO: Fusing convolution weights from res2.2.conv2 with scale res2.2.conv2/bn
TENSORRT INFO: Fusing convolution weights from res2.3.conv1 with scale res2.3.conv1/bn
TENSORRT INFO: Fusing convolution weights from res2.3.conv2 with scale res2.3.conv2/bn
TENSORRT INFO: Fusing convolution weights from res3.1.conv1 with scale res3.1.conv1/bn
TENSORRT INFO: Fusing convolution weights from res3.1.conv2 with scale res3.1.conv2/bn
TENSORRT INFO: Fusing convolution weights from res3.1.skipConv with scale res3.1.skipConv/bn
TENSORRT INFO: Fusing convolution weights from res3.2.conv1 with scale res3.2.conv1/bn
TENSORRT INFO: Fusing convolution weights from res3.2.conv2 with scale res3.2.conv2/bn
TENSORRT INFO: Fusing convolution weights from res3.3.conv1 with scale res3.3.conv1/bn
TENSORRT INFO: Fusing convolution weights from res3.3.conv2 with scale res3.3.conv2/bn
TENSORRT INFO: Fusing convolution weights from res3.4.conv1 with scale res3.4.conv1/bn
TENSORRT INFO: Fusing convolution weights from res3.4.conv2 with scale res3.4.conv2/bn
TENSORRT INFO: Fusing convolution weights from res4.1.conv1 with scale res4.1.conv1/bn
TENSORRT INFO: Fusing convolution weights from res4.1.conv2 with scale res4.1.conv2/bn
TENSORRT INFO: Fusing convolution weights from res4.1.skipConv with scale res4.1.skipConv/bn
TENSORRT INFO: Fusing convolution weights from res4.2.conv1 with scale res4.2.conv1/bn
TENSORRT INFO: Fusing convolution weights from res4.2.conv2 with scale res4.2.conv2/bn
TENSORRT INFO: Fusing convolution weights from res4.3.conv1 with scale res4.3.conv1/bn
TENSORRT INFO: Fusing convolution weights from res4.3.conv2 with scale res4.3.conv2/bn
TENSORRT INFO: Fusing convolution weights from res4.4.conv1 with scale res4.4.conv1/bn
TENSORRT INFO: Fusing convolution weights from res4.4.conv2 with scale res4.4.conv2/bn
TENSORRT INFO: Fusing convolution weights from res4.5.conv1 with scale res4.5.conv1/bn
TENSORRT INFO: Fusing convolution weights from res4.5.conv2 with scale res4.5.conv2/bn
TENSORRT INFO: Fusing convolution weights from res4.6.conv1 with scale res4.6.conv1/bn
TENSORRT INFO: Fusing convolution weights from res4.6.conv2 with scale res4.6.conv2/bn
TENSORRT INFO: Fusing convolution weights from res5.1.conv1 with scale res5.1.conv1/bn
TENSORRT INFO: Fusing convolution weights from res5.1.conv2 with scale res5.1.conv2/bn

My generated model files:
https://drive.google.com/open?id=1RmGk244vhz1lcLdXDlNMkSz24BdV_LJD
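
For reference, my loading code is essentially the standard TensorRT Caffe-parser flow. Below is a minimal sketch of it (the file names, the output blob name "prob", and the batch/workspace settings are placeholders, not my exact code):

#include <iostream>
#include "NvInfer.h"
#include "NvCaffeParser.h"

using namespace nvinfer1;
using namespace nvcaffeparser1;

// Simple logger that prints TensorRT messages, similar to the log shown above.
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kINFO)
            std::cout << "TENSORRT: " << msg << std::endl;
    }
} gLogger;

int main()
{
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();
    ICaffeParser* parser = createCaffeParser();

    // Parse the deployment prototxt and the trained weights (placeholder names).
    const IBlobNameToTensor* blobNameToTensor =
        parser->parse("deploy.prototxt", "resnet34.caffemodel", *network, DataType::kFLOAT);
    network->markOutput(*blobNameToTensor->find("prob"));

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 28); // 256 MB

    // The crash happens during engine building, while the graph optimizations
    // (the layer fusions in the log above) are being applied.
    ICudaEngine* engine = builder->buildCudaEngine(*network);

    // Cleanup (never reached because of the crash).
    engine->destroy();
    parser->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}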

Hi,

Could you please share the detailed error you are getting while optimizing the model?
Also, the linked model file is not accessible. Please share the script and model file needed to reproduce the issue so we can help more effectively.

Meanwhile, please try the “trtexec” command for model optimization:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
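
For a Caffe model the invocation looks something like this (the file names and the output blob name are placeholders for your own files):

trtexec --deploy=deploy.prototxt --model=resnet34.caffemodel --output=prob --batch=1

If that also crashes, please include the full log it prints.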

Thanks

I have the same problem. Did you manage to solve it?

Sadly, no. I was using a laptop at the time, but I don’t have access to it right now. I did manage to run the same model on another GPU, so it could be a hardware/driver issue.