This is in relation to:
https://devtalk.nvidia.com/default/topic/1046795/jetson-tx2/nvcaffe-0-17-used-in-two-plugins-in-the-same-pipe-crashes/
I did more digging and found that test_mem_req_all_grps_ is a static data member of CuDNNConvolutionLayer.
So my question is:
Is NVCaffe's cudnn_conv_layer (.cu, .hpp, .cpp) safe to use in two separate Net objects running inference in separate threads?
Also, is there a better forum for this question?
I checked the mainline Caffe codebase; that version of CuDNNConvolutionLayer has no static data members.
Hi,
From the cuDNN documentation:
https://docs.nvidia.com/deeplearning/sdk/cudnn-developer-guide/index.html#thread-safety
[i]----------------------------------------------------------------------------------------------------
2.5. Thread Safety
The library is thread safe and its functions can be called from multiple host threads, as long as threads do not share the same cuDNN handle simultaneously.
[/i]
Maybe you can check whether a cuDNN handle is shared between threads inside your program.
Thanks.
Thanks for taking the time to look at the problem.
I continued this thread on GitHub. It appears the problem may come from cudnn_conv_layer. See:
https://github.com/NVIDIA/caffe/issues/555