cuDNN: Merging parallel layers?

Suppose I have two separate layers, say layer L1 and layer L2, that receive the same or different inputs. Would it be possible to merge the outputs of L1 and L2 to form the input for the next layer, L3?

As far as I can see, I can only merge layers by first shifting the output data of L1 into certain regions and then injecting the output data of L2 into the resulting empty regions. However, this would be a time-consuming process. Did the designers take this into account? Is there some internal mechanism to deal with this?

What do you mean by merging, exactly?
If you want to concatenate features, for example, you can use the strides in the tensor descriptors to interleave the features, or to concatenate the features of one tensor at the end of the other.
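A minimal NumPy sketch of the idea (not actual cuDNN code): each layer writes its output directly into its own channel region of one preallocated buffer, so no separate shift-and-inject pass is needed afterwards. In cuDNN, the same effect comes from describing each region as a sub-tensor of the big buffer via explicit strides and an offset data pointer; the shapes and names here are illustrative assumptions.

```python
import numpy as np

# Hypothetical shapes: L1 produces C1 channels, L2 produces C2 channels.
N, C1, C2, H, W = 2, 3, 5, 4, 4

# One buffer sized for the concatenated result (N, C1 + C2, H, W).
merged = np.empty((N, C1 + C2, H, W), dtype=np.float32)

# Views into that buffer: same memory, channel-offset "sub-tensors".
region_l1 = merged[:, :C1]   # L1's output region
region_l2 = merged[:, C1:]   # L2's output region

# Stand-ins for the outputs computed by layers L1 and L2.
out_l1 = np.random.rand(N, C1, H, W).astype(np.float32)
out_l2 = np.random.rand(N, C2, H, W).astype(np.float32)

# Each "layer" writes straight into its region of the shared buffer.
region_l1[...] = out_l1
region_l2[...] = out_l2

# The buffer now equals an explicit channel-wise concatenation.
assert np.array_equal(merged, np.concatenate([out_l1, out_l2], axis=1))
```

The buffer `merged` can then be handed to L3 as a single input tensor, with no extra copy.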

If by merging you mean adding the values, you can simply use the ACCUMULATE mode of the convolution.
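A quick NumPy sketch of the accumulate idea (again, not cuDNN code): the second layer adds its result into the existing output buffer instead of overwriting it, which is the effect of a convolution's accumulate option. The arrays stand in for the two layers' outputs.

```python
import numpy as np

# Hypothetical outputs of two parallel layers with matching shapes.
N, C, H, W = 2, 4, 3, 3
out_l1 = np.random.rand(N, C, H, W).astype(np.float32)
out_l2 = np.random.rand(N, C, H, W).astype(np.float32)

# Shared output buffer for the merged result.
y = np.zeros((N, C, H, W), dtype=np.float32)

y += out_l1   # L1 writes its result into the buffer
y += out_l2   # L2 accumulates on top: y = L1 + L2, no extra merge pass

assert np.allclose(y, out_l1 + out_l2)
```

Note this only works when the two outputs have identical shapes, unlike concatenation.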