Caffe DenseNet concat layer error using TRT3.0

When deploying a Caffe DenseNet model, the concat layer throws this error:
ERROR: genericReformat.cu (2068) - Cuda Error in callMemcpyCodepath: 11
With the newly provided createConcatPlugin(1, false) API, however, it runs fine and the output is correct.

But DenseNet's concat is a channel-wise operation, so shouldn't the default Concat layer work?

Hi,

Cross-channel concatenation is supported by TensorRT.
You can find the detailed support information here:
http://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#layers

CUDA error 11 indicates an invalid value.
Please first check whether your application has any unexpected behaviour:
http://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html#group__CUDART__TYPES_1g3f51e3575c2178246db0a94a430e0038

Thanks.

Thanks first for your prompt reply.

Here are the details of the bug.

I shortened my prototxt to reproduce the error:

conv1 -> relu -> pool1 -> conv2 -> relu
concat1(pool1 + conv2) -> conv3 -> relu
concat2(pool1 + conv2 + conv3)

All concat layers have no parameters (the default axis = 1 means channel-wise).
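For reference, each of those concat layers would look roughly like this in the prototxt (layer and blob names here are illustrative, matching the shortened topology above):

```
layer {
  name: "concat1"
  type: "Concat"
  bottom: "pool1"
  bottom: "conv2"
  top: "concat1"
  # no concat_param block: Caffe defaults to axis: 1, i.e. channel-wise
}
```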

When running under FP32, there is no error and the output is correct.
When running under FP16, the error comes up.
When running under FP16 with Concat implemented by the createConcatPlugin plugin, there is no error and the output is correct.
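For anyone hitting the same issue before the fixed release: the workaround above can be wired in through the Caffe parser's plugin factory. This is only a sketch under my assumptions (that your concat layers share a recognizable name prefix, and using the TensorRT 3.x nvcaffeparser1::IPluginFactory / createConcatPlugin APIs); it is not an official recipe.

```cpp
#include <cstring>
#include "NvInfer.h"
#include "NvCaffeParser.h"
#include "NvInferPlugin.h"

// Sketch: replace the built-in Concat with the plugin version while parsing.
// Assumption: all concat layers in the prototxt are named "concat1", "concat2", ...
class ConcatPluginFactory : public nvcaffeparser1::IPluginFactory
{
public:
    bool isPlugin(const char* layerName) override
    {
        // Route only the concat layers through the plugin path.
        return std::strstr(layerName, "concat") != nullptr;
    }

    nvinfer1::IPlugin* createPlugin(const char* /*layerName*/,
                                    const nvinfer1::Weights* /*weights*/,
                                    int /*nbWeights*/) override
    {
        // axis = 1 (channel-wise), ignoreBatch = false, as in the post above.
        mPlugin = nvinfer1::plugin::createConcatPlugin(1, false);
        return mPlugin;
    }

private:
    nvinfer1::plugin::INvPlugin* mPlugin{nullptr};
};

// Usage sketch: register the factory before parsing the deploy prototxt, e.g.
//   ConcatPluginFactory factory;
//   parser->setPluginFactory(&factory);
//   parser->parse(deployFile, modelFile, *network, nvinfer1::DataType::kHALF);
```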

Thanks again.

Hi,

We have discussed this issue internally.

This is a known issue and is fixed in TensorRT 3.0 GA.
Please wait for our announcement to get an updated library.

Thanks.

Hi, @jiangshimiao1. I have faced the same problem. Did you solve it? Please help me.

Hi, @jiangshimiao1 @1130445121, I also have this error. Did you solve it? Please help me. Thanks in advance!