tf.Session(), got error "cuDevicePrimaryCtxRetain: CUDA_ERROR_NOT_SUPPORTED"

When a tf.Session() is created (Keras does this internally via get_session()), I get the following error:

Using TensorFlow backend.
Traceback (most recent call last):
  File "train.py", line 190, in <module>
    _main()
  File "train.py", line 33, in _main
    freeze_body=2, weights_path='model_data/yolo_weights.h5') # make sure you know what you freeze
  File "train.py", line 116, in create_model
    model_body = yolo_body(image_input, num_anchors//3, num_classes)
  File "/home/eliyart/yolo3/yolo3/model.py", line 72, in yolo_body
    darknet = Model(inputs, darknet_body(inputs))
  File "/home/eliyart/yolo3/yolo3/model.py", line 48, in darknet_body
    x = DarknetConv2D_BN_Leaky(32, (3,3))(x)
  File "/home/eliyart/yolo3/yolo3/utils.py", line 16, in <lambda>
    return reduce(lambda f, g: lambda *a, **kw: g(f(*a, **kw)), funcs)
  File "/home/eliyart/yolo3/yolo3/utils.py", line 16, in <lambda>
    return reduce(lambda f, g: lambda *a, **kw: g(f(*a, **kw)), funcs)
  File "/usr/local/lib/python3.6/dist-packages/keras/engine/base_layer.py", line 457, in __call__
    output = self.call(inputs, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/keras/layers/normalization.py", line 185, in call
    epsilon=self.epsilon)
  File "/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py", line 1858, in normalize_batch_in_training
    if not _has_nchw_support() and list(reduction_axes) == [0, 2, 3]:
  File "/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py", line 292, in _has_nchw_support
    gpus_available = len(_get_available_gpus()) > 0
  File "/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py", line 278, in _get_available_gpus
    _LOCAL_DEVICES = get_session().list_devices()
  File "/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py", line 186, in get_session
    _SESSION = tf.Session(config=config)
  File "/home/eliyart/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1511, in __init__
    super(Session, self).__init__(target, graph, config=config)
  File "/home/eliyart/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 634, in __init__
    self._session = tf_session.TF_NewSessionRef(self._graph._c_graph, opts)
tensorflow.python.framework.errors_impl.InternalError: failed initializing StreamExecutor for CUDA device ordinal 0: Internal: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_NOT_SUPPORTED: operation not supported
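
The failure does not look specific to the keras-yolo3 code: the traceback bottoms out in Keras' get_session() simply constructing a tf.Session(). A minimal sketch (not verified on this exact machine, assuming the same TensorFlow 1.x install) that exercises the same CUDA context initialization path:

    import tensorflow as tf

    # Constructing a session is enough to trigger StreamExecutor / CUDA
    # primary context initialization, which is where cuDevicePrimaryCtxRetain
    # fails in the traceback above.
    sess = tf.Session()
    print(sess.list_devices())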

Sat Nov 17 13:58:40 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.48                 Driver Version: 410.48                     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K80           Off  | 00000000:00:04.0 Off |                    0 |
| N/A   29C    P0    68W / 149W |      0MiB / 11441MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:01_CDT_2018
Cuda compilation tools, release 10.0, V10.0.130
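
For reference, the Keras call that crashes is just get_session().list_devices(). Listing devices directly from TensorFlow touches the same CUDA device initialization, so it is a quick way to check whether TensorFlow can see the GPU at all (a sketch, same assumptions as above):

    from tensorflow.python.client import device_lib

    # Enumerates local devices; with a broken CUDA context this is expected
    # to raise the same kind of InternalError instead of listing the K80.
    print(device_lib.list_local_devices())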

Any suggestions? Thanks.