pycuda._driver.LogicError: explicit_context_dependent failed: invalid device context - no currently active context?

When I run TensorRT with Python, this error appears at:
d_input = cuda.mem_alloc(batch_size * input_img.size * input_img.dtype.itemsize)
pycuda._driver.LogicError: explicit_context_dependent failed: invalid device context - no currently active context?

It seems it cannot find the GPU device, but I can run TensorRT with C++.
I have no idea how to solve this.
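For what it's worth, PyCUDA raises exactly this `LogicError` when no CUDA context is active on the calling thread; importing `pycuda.autoinit` (or pushing a context yourself) before any allocation is the usual fix. A minimal sketch, using a placeholder NumPy array in place of the real input image:

```python
import numpy as np

# pycuda.autoinit creates a context on the default GPU at import time;
# without an active context, cuda.mem_alloc raises the LogicError above.
try:
    import pycuda.autoinit  # noqa: F401  (side effect: creates the context)
    import pycuda.driver as cuda
    HAVE_GPU = True
except ImportError:
    HAVE_GPU = False  # no PyCUDA/GPU here; the size math below still runs

batch_size = 1
input_img = np.zeros((3, 224, 224), dtype=np.float32)  # placeholder input

# Same size computation as in the failing line:
nbytes = batch_size * input_img.size * input_img.dtype.itemsize
print(nbytes)  # 602112 bytes for this placeholder shape

if HAVE_GPU:
    d_input = cuda.mem_alloc(nbytes)  # succeeds once a context exists
```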

Hello,

You may have a configuration issue. Can you run the following Python code?

root@d8326872c382:/workspace# python

Python 2.7.12 (default, Dec  4 2017, 14:50:18)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf

>>> from tensorflow.python.client import device_lib

>>> print(device_lib.list_local_devices())

2018-10-15 17:51:03.557931: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-10-15 17:51:03.982264: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1411] Found device 0 with properties:
name: Tesla P100-SXM2-16GB major: 6 minor: 0 memoryClockRate(GHz): 1.4805
pciBusID: 0000:06:00.0
totalMemory: 15.89GiB freeMemory: 506.12MiB
2018-10-15 17:51:04.326804: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1411] Found device 1 with properties:
name: Tesla P100-SXM2-16GB major: 6 minor: 0 memoryClockRate(GHz): 1.4805
pciBusID: 0000:07:00.0
totalMemory: 15.89GiB freeMemory: 506.25MiB
2018-10-15 17:51:04.687152: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1411] Found device 2 with properties:
name: Tesla P100-SXM2-16GB major: 6 minor: 0 memoryClockRate(GHz): 1.4805
pciBusID: 0000:0a:00.0
totalMemory: 15.89GiB freeMemory: 506.25MiB
2018-10-15 17:51:05.066520: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1411] Found device 3 with properties:
name: Tesla P100-SXM2-16GB major: 6 minor: 0 memoryClockRate(GHz): 1.4805
pciBusID: 0000:0b:00.0
totalMemory: 15.89GiB freeMemory: 506.25MiB
2018-10-15 17:51:05.453332: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1411] Found device 4 with properties:
name: Tesla P100-SXM2-16GB major: 6 minor: 0 memoryClockRate(GHz): 1.4805
pciBusID: 0000:85:00.0
totalMemory: 15.89GiB freeMemory: 506.25MiB
2018-10-15 17:51:05.855407: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1411] Found device 5 with properties:
name: Tesla P100-SXM2-16GB major: 6 minor: 0 memoryClockRate(GHz): 1.4805
pciBusID: 0000:86:00.0
totalMemory: 15.89GiB freeMemory: 506.25MiB
2018-10-15 17:51:06.268557: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1411] Found device 6 with properties:
name: Tesla P100-SXM2-16GB major: 6 minor: 0 memoryClockRate(GHz): 1.4805
pciBusID: 0000:89:00.0
totalMemory: 15.89GiB freeMemory: 506.25MiB
2018-10-15 17:51:06.695135: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1411] Found device 7 with properties:
name: Tesla P100-SXM2-16GB major: 6 minor: 0 memoryClockRate(GHz): 1.4805
pciBusID: 0000:8a:00.0
totalMemory: 15.89GiB freeMemory: 506.25MiB

I ran print(device_lib.list_local_devices()) and it prints:

2018-10-16 16:23:51.175907: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2018-10-16 16:23:51.266274: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:897] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-10-16 16:23:51.266875: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1405] Found device 0 with properties:
name: GeForce GTX 1050 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.62
pciBusID: 0000:01:00.0
totalMemory: 3.94GiB freeMemory: 3.28GiB
2018-10-16 16:23:51.266891: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1484] Adding visible gpu devices: 0
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/zf/anaconda3/envs/python35/lib/python3.5/site-packages/tensorflow/python/client/device_lib.py", line 41, in list_local_devices
for s in pywrap_tensorflow.list_devices(session_config=session_config)
File "/home/zf/anaconda3/envs/python35/lib/python3.5/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 1679, in list_devices
return ListDevices(status)
File "/home/zf/anaconda3/envs/python35/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py", line 519, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InternalError: cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version

I ran print(device_lib.list_local_devices()) again and got the same output and traceback as above; it seems the CUDA driver version is wrong.

Yes, it looks like you have a CUDA configuration issue.

If you want to reinstall CUDA, start with a clean OS reboot and get your installers from here:

and follow the instructions in the Linux install guide carefully:

http://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html
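As a quick sanity check for the "driver version is insufficient" error, you can compare your driver against the toolkit minimums from NVIDIA's release notes. A small sketch (the minimum-version figures below are quoted from memory, so please verify them against the compatibility table in the install guide):

```python
# Minimum Linux driver required by each CUDA toolkit release
# (figures from NVIDIA release notes -- double-check before relying on them).
MIN_DRIVER = {"9.0": "384.81", "9.2": "396.26", "10.0": "410.48"}

def driver_ok(toolkit, driver):
    """True if `driver` meets the minimum for `toolkit` (numeric compare)."""
    parse = lambda v: tuple(int(x) for x in v.split("."))
    return parse(driver) >= parse(MIN_DRIVER[toolkit])

print(driver_ok("9.0", "384.130"))   # True: 384.130 is enough for CUDA 9.0
print(driver_ok("10.0", "384.130"))  # False: CUDA 10.0 needs 410.48 or later
```

So a 384.130 driver is fine for a CUDA 9.0 runtime, but anything built against CUDA 9.2 or 10.0 will fail with exactly this error.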

I think I installed CUDA correctly: I can use the C++ version of TensorRT and other GPU programs, as well as the GPU build of PyTorch. Maybe PyCUDA is the problem. Does it require a higher CUDA runtime version?
My machine is:
NVIDIA-SMI: 384.130
CUDA Version: 9.0
cuDNN: 7.301
GPU: GTX 1050 Ti
OS: Ubuntu 16.04 LTS

If you have the latest CUDA 10 driver and toolkit installed, I think the current distribution of PyCUDA is still linked against CUDA 9.x. If this is the problem, it is possible to rebuild PyCUDA from source. Otherwise, you could try sending a request to the PyCUDA maintainers… ?

PS: I’m also waiting, but for a Windows-friendly CUDA-10 wheel to be released!

I have installed the CUDA 9 driver and toolkit. I tried to build PyCUDA from source, but the problem persists.
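In case it helps, this is roughly how I would rebuild PyCUDA against a specific toolkit. The path is an assumption for a default install under /usr/local; `configure.py --cuda-root` is PyCUDA's documented way to point the build at a toolkit:

```shell
# Rebuild PyCUDA against the CUDA 9.0 toolkit (adjust CUDA_ROOT to your install).
CUDA_ROOT=/usr/local/cuda-9.0

pip uninstall -y pycuda                       # drop the prebuilt package first
git clone --recursive https://github.com/inducer/pycuda.git
cd pycuda
python configure.py --cuda-root="$CUDA_ROOT"  # writes siteconf.py
pip install .                                 # compiles against $CUDA_ROOT
```

If the rebuilt copy still fails, the mismatch is more likely between the driver and the runtime than inside PyCUDA itself.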