Hi,
UPDATE: see also the discussion at:
[url]https://devtalk.nvidia.com/default/topic/1041741/cudnn/use-of-cudnn-rnn-forwardtraining-and-backwardtraining/?offset=4#5284745[/url]
(original content removed)
I think it's undefined buffer content somewhere, but I still haven't found the reason…
UPDATE: the output dimension was wrong (see the edit in the post above), and some transpose handling was also missing.
Hi m1,
I have some questions.
1. What is the meaning of “values 0 and 4 reference the input gate” in the docs at section 4.106?
2. For example, in Caffe the LSTM layer's first input_dim is [17, 4], the second input_dim is [17, 4, 256], and the out_dim is [17, 4, 128]. How do I use the cuDNN RNN APIs to implement the forward pass?
Thanks a lot!
There is an RNN sample in cudnn_samples_v7 (cuDNN v7.2); the use of cudnnGetRNNLinLayerMatrixParams is shown there.
See my answer:
[url]https://devtalk.nvidia.com/default/topic/1042215/cudnn/is-there-any-samples-using-cudnn-to-create-a-neutral-network-/post/5286471/#5286471[/url]
Maybe that helps.
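On question 1: in the cuDNN v7 documentation, an LSTM cell exposes eight parameter matrices through cudnnGetRNNLinLayerMatrixParams. IDs 0–3 are the input weight matrices and IDs 4–7 the recurrent weight matrices, each group in the order input gate, forget gate, new-memory gate, output gate; that is why 0 and 4 both "reference the input gate". A small Python sketch of that mapping (the helper name is mine, not cuDNN's):

```python
# Gate layout for CUDNN_LSTM linLayerID values (per the cuDNN v7 docs):
# IDs 0-3 are the input (W) matrices, IDs 4-7 the recurrent (R)
# matrices, each in the order: input, forget, new-memory, output gate.
GATES = ["input", "forget", "new memory", "output"]

def lin_layer_info(lin_layer_id):
    """Map a cuDNN LSTM linLayerID (0-7) to (matrix kind, gate name).

    Illustrative helper only; not part of the cuDNN API.
    """
    if not 0 <= lin_layer_id <= 7:
        raise ValueError("LSTM linLayerID must be in 0..7")
    kind = "input (W)" if lin_layer_id < 4 else "recurrent (R)"
    return kind, GATES[lin_layer_id % 4]
```

So linLayerID 0 addresses the input-gate weights applied to the layer input, and linLayerID 4 the input-gate weights applied to the previous hidden state.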
Here, too, see the RNN sample; it shows forward and backward propagation (through cudnnRNNForwardTraining/cudnnRNNBackwardData).
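The cell math those calls implement can be sketched in NumPy for cross-checking results against cuDNN or Caffe. This is a minimal reference of one LSTM time step using the standard gate equations, with the gates in the i, f, g, o order the cuDNN docs describe; it is not the cuDNN code itself, and the function name and weight layout here are my own convention:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, R, b):
    """One LSTM time step; gates stacked in the order i, f, g, o.

    x: (X,) input, h: (H,) previous hidden state, c: (H,) previous cell
    state, W: (4H, X) input weights, R: (4H, H) recurrent weights,
    b: (4H,) combined bias. NumPy reference sketch, not a cuDNN call.
    """
    H = h.shape[0]
    z = W @ x + R @ h + b          # all four pre-activations at once
    i = sigmoid(z[0 * H:1 * H])    # input gate
    f = sigmoid(z[1 * H:2 * H])    # forget gate
    g = np.tanh(z[2 * H:3 * H])    # new-memory (cell candidate) gate
    o = sigmoid(z[3 * H:4 * H])    # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```

Running such a reference step over the sequence with the matrices extracted via cudnnGetRNNLinLayerMatrixParams is one way to verify that the cuDNN output matches expectations.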
Hi, m1
Thank you very much! Following the sample, I implemented the LSTM layer using the cuDNN API, and the results match Caffe. Thanks a lot.