TensorRT Engine build problem on Windows10

Hi.

Recently, TensorRT for Windows was released, so I’m testing TensorRT on Windows 10.
I am using the C++ CaffeParser to build a TensorRT engine from a Caffe model,
but the following error comes up.

Error location
engine = builder->buildCudaEngine(*network);

Error
[2018-10-18 07:45:04 ERROR] c:\p4sw\sw\gpgpu\MachineLearning\DIT\release\5.0\builder\cudnnBuilderUtils.cpp (255) - Cuda Error in nvinfer1::cudnn::findFastestTactic: 77
[2018-10-18 07:45:04 ERROR] c:\p4sw\sw\gpgpu\MachineLearning\DIT\release\5.0\engine\runtime.cpp (30) - Cuda Error in nvinfer1::`anonymous-namespace'::DefaultAllocator::free: 77

By the way, the same code works fine on Ubuntu.

Is there any solution?

My Environment:
Windows 10 64-bit
GeForce 1080 Ti
Nvidia Driver Version: 416.16
TensorRT 5RC for Windows
CUDA10, cuDNN7.3.1

Hello,

It’d help us debug this if you could provide a small repro package containing the source, model, and dataset that exhibits the symptom.

Hi.

Thank you for the reply.

I’m sorry, it was my mistake.
It was a misconfiguration of Caffe’s Deconvolution layer.

Thanks.

Hi,

I have the same problem. Could you explain your solution in a little more detail?

Thank you very much.

Hello.

I converted an ONNX model to a Caffe model using onnx2caffe, and used that for TensorRT.

I solved it by setting the weight filler of Caffe’s deconvolution layer to “bilinear”.
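For reference, Caffe’s “bilinear” weight filler initializes each deconvolution kernel with bilinear-interpolation coefficients. Here is a minimal numpy sketch of that computation (my reading of Caffe’s filler; the helper name is mine):

```python
import numpy as np

def bilinear_kernel(size):
    """Sketch of Caffe's "bilinear" weight filler for a size x size kernel."""
    f = np.ceil(size / 2.0)
    c = (2 * f - 1 - f % 2) / (2.0 * f)
    x = np.arange(size)
    w1d = 1 - np.abs(x / f - c)   # 1-D bilinear coefficients
    return np.outer(w1d, w1d)     # separable 2-D kernel

# For the 2x-upsampling case (kernel_size 4), the 1-D coefficients
# are [0.25, 0.75, 0.75, 0.25].
print(bilinear_kernel(4))
```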

Thanks.

Hi,

My Caffe model’s deconvolution weight filler type is already “bilinear”, but I still have this problem.

This is my layer:

  layer {
    name: "upscore"
    type: "Deconvolution"
    bottom: "score_fr"
    top: "upscore"
    param { lr_mult: 0.0 }
    convolution_param {
      num_output: 21
      bias_term: false
      kernel_size: 63
      group: 21
      stride: 32
      weight_filler { type: "bilinear" }
    }
  }

Hmm… I don’t know what to do. It feels like TensorRT is at fault here.

Hi.

I set the Deconvolution parameters as follows.

import numpy as np

# Upsampling factor taken from the ONNX node's height_scale attribute.
factor = int(node.attrs["height_scale"])
node_name = node.name
input_name = str(node.inputs[0])
output_name = str(node.outputs[0])
channels = graph.channel_dims[input_name]

# Depthwise Deconvolution layer with a bilinear weight filler;
# kernel_size, stride, and pad are chosen so the layer performs
# bilinear upsampling by `factor`.
layer = myf("Deconvolution", node_name, [input_name], [output_name],
            convolution_param=dict(
                num_output=channels,
                kernel_size=(2 * factor - factor % 2),
                stride=factor,
                pad=int(np.ceil((factor - 1) / 2.)),
                group=channels,
                bias_term=False,
                weight_filler=dict(type="bilinear")
            ),
            param=dict(
                lr_mult=0,
                decay_mult=0,
            ))
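As a sanity check on the kernel arithmetic above, here is a small sketch (the helper name is mine) that shows the values it produces for common upscale factors:

```python
import numpy as np

def deconv_params(factor):
    # Mirror the arithmetic used for the Deconvolution layer above:
    # kernel size, stride, and padding for bilinear upsampling
    # by an integer factor.
    kernel_size = 2 * factor - factor % 2
    stride = factor
    pad = int(np.ceil((factor - 1) / 2.0))
    return kernel_size, stride, pad

print(deconv_params(2))   # 2x upsample  -> (4, 2, 1)
print(deconv_params(32))  # 32x upsample -> (64, 32, 16)
```

Note that for factor 32 this gives kernel_size 64, not the 63 used in the layer posted earlier in the thread, which may be worth checking.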

Please check.

Thanks.