Multiple Caffe models on single GPU

Hi,

We have already trained multiple Caffe models.

We want to use these models to make predictions simultaneously on a single GPU.

Is this possible, and if so, how do we do it?

We get the following error when we try it:

Check failed: status == CUDNN_STATUS_SUCCESS (8 vs. 0) CUDNN_STATUS_EXECUTION_FAILED

Can anyone help resolve this error?

Thanks in advance for the help.

Thanks, Abhinav

I can't really provide an answer,

but it is clear to me that the error message is not very helpful:
it gives little indication of why the execution failed.
Did an instance fail to allocate sufficient memory, etc.?

If you are invoking the prediction functions from multiple CPU processes, it should work; each process gets its own CUDA context.

If you are invoking the prediction functions from multiple CPU threads (in one application), then it might be because the cuDNN functions are not CPU-thread-safe (due to internal use of global state such as constant memory and texture references in the CUDA kernels).

See https://devtalk.nvidia.com/default/topic/491350/constant-memory-not-thread-safe-in-cuda-4-0/ or http://stackoverflow.com/questions/19662388/how-to-get-gpu-kernels-using-global-texture-references-thread-safe-for-multiple or https://devtalk.nvidia.com/default/topic/711438/are-npp-routines-cpu-thread-safe-/

Generally, I conservatively assume that none of the black-box CUDA libraries (cuBLAS, cuFFT, NPP, cuDNN, …) is CPU-thread-safe. In our framework, for example, that means we never call their functions from multiple CPU threads simultaneously.
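If you do need multiple threads in one application, one conservative pattern is to guard every call into the (assumed non-thread-safe) library with a single global lock, so at most one thread is inside it at a time. A sketch, where `gpu_library_call` is a hypothetical stand-in for a cuDNN-backed prediction routine:

```python
import threading

gpu_lock = threading.Lock()       # serializes all calls into the GPU library
results = []
results_lock = threading.Lock()   # protects the shared results list

def gpu_library_call(x):
    # Placeholder for the real (assumed non-thread-safe) library routine.
    return x * x

def worker(inputs):
    for x in inputs:
        with gpu_lock:            # only one thread inside the library at a time
            y = gpu_library_call(x)
        with results_lock:
            results.append(y)

threads = [threading.Thread(target=worker, args=([i, i + 10],))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))
```

This obviously sacrifices concurrency on the GPU side; the per-process approach above avoids that at the cost of duplicating model memory per process.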