Caffe DenseNet concat layer error using TRT3.0

When deploying a Caffe DenseNet model, the concat layer throws this error:
ERROR: genericReformat.cu (2068) - Cuda Error in callMemcpyCodepath: 11
With the newly provided createConcatPlugin(1, false) API, however, it runs fine and the output is correct.

But DenseNet only uses channel-wise concat operations, so shouldn't the default Concat layer work?

Hi,

Cross-channel concatenation is supported by TensorRT.
You can find the detailed support information here:
Developer Guide :: NVIDIA Deep Learning TensorRT Documentation

CUDA error 11 indicates an invalid value (cudaErrorInvalidValue).
Please first check whether your application has any unexpected behaviour:
CUDA Runtime API :: CUDA Toolkit Documentation

Thanks.

Thanks for your prompt reply.

Here are the details of the bug:

I shortened my prototxt to reproduce this error, with:

Conv1 → relu → pool1 → Conv2 → relu
Concat1(pool1+conv2) → Conv3 → relu
concat2(pool1+conv2+conv3)

All Concat layers have no parameters (the default axis=1 means channel-wise).
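For reference, a Concat layer with default (channel-wise) behaviour looks like this in prototxt; the layer and bottom names follow the shortened network above, and the concat_param block can be omitted entirely since axis defaults to 1:

```protobuf
layer {
  name: "concat2"
  type: "Concat"
  bottom: "pool1"
  bottom: "conv2"
  bottom: "conv3"
  top: "concat2"
  concat_param { axis: 1 }  # default; channel-wise in NCHW
}
```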

When running under FP32, there is no error and the output is correct.
When running under FP16, the error comes up.
When running under FP16 with Concat implemented by the plugin createConcatPlugin, there is no error and the output is correct.
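The workaround can be wired up through the Caffe parser's plugin factory, so every Concat layer is handled by createConcatPlugin instead of the native implementation. A minimal sketch, assuming TensorRT 3.0 headers and a naming scheme where concat layer names contain "concat" (adapt isPlugin to your prototxt); this is not an official sample, just an illustration of the API calls mentioned above:

```cpp
#include <NvInfer.h>
#include <NvCaffeParser.h>
#include <NvInferPlugin.h>
#include <cstring>

// Routes Caffe "Concat" layers to the built-in concat plugin.
class ConcatPluginFactory : public nvcaffeparser1::IPluginFactory
{
public:
    bool isPlugin(const char* name) override
    {
        // Assumption: concat layers are named "concat1", "concat2", ...
        return std::strstr(name, "concat") != nullptr;
    }

    nvinfer1::IPlugin* createPlugin(const char* /*layerName*/,
                                    const nvinfer1::Weights* /*weights*/,
                                    int /*nbWeights*/) override
    {
        // axis = 1 (channel-wise), ignoreBatch = false,
        // matching createConcatPlugin(1, false) from the post above.
        mPlugin = nvinfer1::plugin::createConcatPlugin(1, false);
        return mPlugin;
    }

private:
    nvinfer1::plugin::INvPlugin* mPlugin{nullptr};
};

// Usage: call parser->setPluginFactory(&factory) before parser->parse(...).
```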

Thanks again.

Hi,

We have discussed this issue internally.

This is a known issue and is fixed in TensorRT 3.0 GA.
Please wait for our announcement to get an updated library.

Thanks.

Hi, @jiangshimiao1. I have faced the same problem. Did you solve it? Please help me.

Hi, @jiangshimiao1 @1130445121, I also have this error. Did you solve it? Please help me. Thanks in advance!