TensorRT inference from Caffe model and prototxt


I am building an inference engine to run segmentation, following the sample in the TensorRT folder.
I have successfully built the C++ file and can run the executable.
But there is a problem with the prototxt settings that may produce NaN in the output.
Here’s the situation:

input: "data"
input_shape { dim: 1 dim: 3 dim: 192 dim: 256 }
layer {
  name: "down1"
  type: "Convolution"
  bottom: "data"
  top: "down1"
}
layer {
  name: "down1/bn"
  type: "BatchNorm"
  bottom: "down1"
  top: "down1/bn"
}
layer {
  name: "down1/relu"
  type: "ReLU"
  bottom: "down1/bn"
  top: "down1/relu"
}
layer {
  name: "down2"
  type: "Convolution"
  bottom: "down1"       # -> leads to a non-NaN output
  bottom: "down1/relu"  # -> leads to a NaN output
  top: "down2"
}
I know that TensorRT will merge some operations into a single layer.
Is that the reason why using down1 as the bottom of down2 works, but down1/relu doesn't?
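To illustrate what I mean by NaN reaching down2: once the BatchNorm statistics are bad (for example, a garbage negative variance from mis-loaded weights), the NaN is not filtered by the ReLU, because max(NaN, 0) is still NaN. A minimal NumPy sketch (not TensorRT code; the blob values here are made up for illustration):

```python
import numpy as np

def caffe_batchnorm_inference(x, mean, var, eps=1e-5):
    # Caffe-style BatchNorm at inference: normalize with stored statistics.
    return (x - mean) / np.sqrt(var + eps)

def relu(x):
    # ReLU does not filter NaN: np.maximum(NaN, 0) is still NaN.
    return np.maximum(x, 0.0)

x = np.ones((1, 3, 2, 2), dtype=np.float32)

# Well-formed statistics -> finite output after ReLU.
ok = relu(caffe_batchnorm_inference(x, mean=0.0, var=1.0))

# Garbage statistics (hypothetical negative variance) -> sqrt of a
# negative number -> NaN, which the ReLU passes straight through.
with np.errstate(invalid="ignore"):
    bad = relu(caffe_batchnorm_inference(x, mean=0.0, var=-1.0))

print(np.isfinite(ok).all())  # True
print(np.isnan(bad).all())    # True
```

So if down1/bn ever emits a NaN, every layer fed from down1/relu will also see NaN, while bypassing the BatchNorm/ReLU branch (bottom: "down1") stays finite.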


TensorRT Version: 7.0.0-1
GPU Type: Tesla V100
Nvidia Driver Version: 450.51.05
CUDA Version: 11.0
CUDNN Version:
Operating System + Version: ubuntu 18.04
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

We have deprecated the Caffe Parser and UFF Parser in TensorRT 7.
Could you please try the following conversion flow instead:
Caffe -> ONNX -> TRT
Please refer to the link below for the ONNX conversion.

Also, we recommend using the latest TRT version.
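Once the model is converted to ONNX, the TensorRT engine can be built with the trtexec tool that ships with TensorRT. A command sketch (the file names are placeholders for your converted model):

```shell
# Build a TensorRT engine from the converted ONNX model.
# "segmentation.onnx" / "segmentation.engine" are placeholder names.
trtexec --onnx=segmentation.onnx \
        --saveEngine=segmentation.engine \
        --explicitBatch
```

This also serves as a quick sanity check: if trtexec reports parsing errors, the problem is in the ONNX export rather than in your inference code.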