How to implement fork and join in a non-linear neural network?

While trying to implement ResNet-101, I have some questions. ResNet-101 has non-linear dependencies, i.e. shortcut connections that skip over some layers. A non-linear network topology can contain one-to-many (fork) and many-to-one (join) inter-layer dependencies. For example, in the forward pass one layer's output can be the input to two different layers, i.e. a fork. How do I implement these non-linear operations using cuDNN?

Could you please let us know if you are still facing this issue?

Thanks

Yes.

For a fork, you basically call the two operations on the two branches using the same pointer to the tensor at the fork point (note that you should not do in-place operations at that point, otherwise the tensor data may already have been overwritten by the time the second operation runs).

If needed, you can use cudaMemcpy (or the async version) to copy the tensor to a new place for the second branch.
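To make that concrete, here is a minimal sketch of a fork (not from this thread, just an illustration). It assumes two stand-in branch ops (a ReLU and a sigmoid activation, so all tensors share one shape) and made-up buffer names (`d_x`, `d_branchA`, `d_branchB`); the point is only that both forward calls read the same input pointer `d_x` and each writes to its own output buffer:

```cpp
// fork_sketch.cu  (build with: nvcc fork_sketch.cu -lcudnn)
#include <cudnn.h>
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

#define CHECK_CUDNN(call)                                          \
  do {                                                             \
    cudnnStatus_t s_ = (call);                                     \
    if (s_ != CUDNN_STATUS_SUCCESS) {                              \
      std::printf("cuDNN error: %s\n", cudnnGetErrorString(s_));   \
      std::exit(1);                                                \
    }                                                              \
  } while (0)

int main() {
  cudnnHandle_t handle;
  CHECK_CUDNN(cudnnCreate(&handle));

  // One shared tensor x feeding two branches (example shape 1x64x56x56, NCHW float).
  const int n = 1, c = 64, h = 56, w = 56;
  const size_t bytes = (size_t)n * c * h * w * sizeof(float);

  cudnnTensorDescriptor_t desc;
  CHECK_CUDNN(cudnnCreateTensorDescriptor(&desc));
  CHECK_CUDNN(cudnnSetTensor4dDescriptor(desc, CUDNN_TENSOR_NCHW,
                                         CUDNN_DATA_FLOAT, n, c, h, w));

  float *d_x, *d_branchA, *d_branchB;   // hypothetical buffer names
  cudaMalloc(&d_x, bytes);
  cudaMalloc(&d_branchA, bytes);
  cudaMalloc(&d_branchB, bytes);

  // Two activation ops standing in for the two branches of the fork.
  cudnnActivationDescriptor_t relu, sigmoid;
  CHECK_CUDNN(cudnnCreateActivationDescriptor(&relu));
  CHECK_CUDNN(cudnnCreateActivationDescriptor(&sigmoid));
  CHECK_CUDNN(cudnnSetActivationDescriptor(relu, CUDNN_ACTIVATION_RELU,
                                           CUDNN_NOT_PROPAGATE_NAN, 0.0));
  CHECK_CUDNN(cudnnSetActivationDescriptor(sigmoid, CUDNN_ACTIVATION_SIGMOID,
                                           CUDNN_NOT_PROPAGATE_NAN, 0.0));

  const float alpha = 1.0f, beta = 0.0f;

  // Fork: both branches read the SAME input pointer d_x.
  // Each branch writes to its own output buffer (out-of-place),
  // so d_x is left untouched for the second call.
  CHECK_CUDNN(cudnnActivationForward(handle, relu,
                                     &alpha, desc, d_x,
                                     &beta, desc, d_branchA));
  CHECK_CUDNN(cudnnActivationForward(handle, sigmoid,
                                     &alpha, desc, d_x,
                                     &beta, desc, d_branchB));

  // Optional: give one branch its own copy of x instead of sharing the pointer.
  // cudaMemcpyAsync(d_xCopy, d_x, bytes, cudaMemcpyDeviceToDevice, stream);

  cudnnDestroyActivationDescriptor(relu);
  cudnnDestroyActivationDescriptor(sigmoid);
  cudnnDestroyTensorDescriptor(desc);
  cudaFree(d_x); cudaFree(d_branchA); cudaFree(d_branchB);
  cudnnDestroy(handle);
  return 0;
}
```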

For a join, you just call the joining operation (usually an element-wise add) with two pointers, one to the output of each side branch.
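Here is a corresponding minimal sketch of the join, again with assumed buffer names (`d_branchA`, `d_branchB`, `d_sum`). It uses cudnnOpTensor with CUDNN_OP_TENSOR_ADD to sum the two branch outputs; cudnnAddTensor works too if you are happy to accumulate into one of the branch buffers in place:

```cpp
// join_sketch.cu  (build with: nvcc join_sketch.cu -lcudnn)
#include <cudnn.h>
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

#define CHECK_CUDNN(call)                                          \
  do {                                                             \
    cudnnStatus_t s_ = (call);                                     \
    if (s_ != CUDNN_STATUS_SUCCESS) {                              \
      std::printf("cuDNN error: %s\n", cudnnGetErrorString(s_));   \
      std::exit(1);                                                \
    }                                                              \
  } while (0)

int main() {
  cudnnHandle_t handle;
  CHECK_CUDNN(cudnnCreate(&handle));

  // Both branch outputs share one shape (example: 1x64x56x56, NCHW float).
  const int n = 1, c = 64, h = 56, w = 56;
  const size_t bytes = (size_t)n * c * h * w * sizeof(float);

  cudnnTensorDescriptor_t desc;
  CHECK_CUDNN(cudnnCreateTensorDescriptor(&desc));
  CHECK_CUDNN(cudnnSetTensor4dDescriptor(desc, CUDNN_TENSOR_NCHW,
                                         CUDNN_DATA_FLOAT, n, c, h, w));

  // d_branchA and d_branchB hold the outputs of the two side branches;
  // d_sum receives the joined result (hypothetical names).
  float *d_branchA, *d_branchB, *d_sum;
  cudaMalloc(&d_branchA, bytes);
  cudaMalloc(&d_branchB, bytes);
  cudaMalloc(&d_sum, bytes);

  // Element-wise add descriptor for the join.
  cudnnOpTensorDescriptor_t addOp;
  CHECK_CUDNN(cudnnCreateOpTensorDescriptor(&addOp));
  CHECK_CUDNN(cudnnSetOpTensorDescriptor(addOp, CUDNN_OP_TENSOR_ADD,
                                         CUDNN_DATA_FLOAT,
                                         CUDNN_NOT_PROPAGATE_NAN));

  const float one = 1.0f, zero = 0.0f;

  // Join: d_sum = 1*d_branchA + 1*d_branchB (the residual "add").
  CHECK_CUDNN(cudnnOpTensor(handle, addOp,
                            &one, desc, d_branchA,
                            &one, desc, d_branchB,
                            &zero, desc, d_sum));

  // Alternative, in-place accumulate: d_branchB = 1*d_branchA + 1*d_branchB
  // CHECK_CUDNN(cudnnAddTensor(handle, &one, desc, d_branchA, &one, desc, d_branchB));

  cudnnDestroyOpTensorDescriptor(addOp);
  cudnnDestroyTensorDescriptor(desc);
  cudaFree(d_branchA); cudaFree(d_branchB); cudaFree(d_sum);
  cudnnDestroy(handle);
  return 0;
}
```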

Thanks

Thanks, I got it working. I was overcomplicating it before.