Failure to set vGPU compute mode from Prohibited to Default

We are trying to run TensorFlow on a virtualized GPU (GRID vGPU on a Tesla T4) in an Ubuntu 18.04 virtual machine, but when we checked the availability of the vGPU, the following error appeared.

tf.test.is_gpu_available()
2019-11-07 20:54:08.422120: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-11-07 20:54:08.440825: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcuda.so.1
2019-11-07 20:54:08.504290: W tensorflow/compiler/xla/service/platform_util.cc:256] unable to create StreamExecutor for CUDA:0: failed initializing StreamExecutor for CUDA device ordinal 0: Internal: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_UNKNOWN: unknown error
2019-11-07 20:54:08.504434: F tensorflow/stream_executor/lib/statusor.cc:34] Attempting to fetch value instead of handling error Internal: no supported devices found for platform CUDA
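
For a more verbose check than the boolean above, TensorFlow's device list can be printed as well (device_lib is a TF 1.x internal module, so treat this one-liner as a debugging aid only):

python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"

Given the CUDA initialization failure above, this would be expected to list only the CPU device.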

Our guess was that this happened because the compute mode of the virtualized GPU was "Prohibited". We tried to change it back to Default but failed with "Setting compute mode to DEFAULT is not supported."

gpu@gpu-KVM:~$ nvidia-smi
Thu Nov 7 20:47:04 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.30       Driver Version: 430.30       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GRID T4-1B          On   | 00000000:00:09.0 Off |                  N/A |
| N/A   N/A    P8    N/A /  N/A |     80MiB /  1016MiB |      0%   Prohibited |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
gpu@gpu-KVM:~$ sudo nvidia-smi -i 0 -c 0
[sudo] password for gpu:
Setting compute mode to DEFAULT is not supported.
Treating as warning and moving on.
All done.
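
For reference, nvidia-smi's -c flag accepts symbolic names as well as numbers, and the current mode can be queried on its own:

# -c accepts 0/DEFAULT, 1/EXCLUSIVE_THREAD (deprecated), 2/PROHIBITED, 3/EXCLUSIVE_PROCESS
sudo nvidia-smi -i 0 -c DEFAULT
# query just the compute mode field
nvidia-smi -i 0 --query-gpu=compute_mode --format=csv,noheader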

Wrong vGPU type, T4-1B: B-series profiles don't support CUDA, so a Q-series profile is needed. See:
https://docs.nvidia.com/grid/latest/grid-vgpu-user-guide/index.html#cuda-open-cl-support-vgpu
https://docs.nvidia.com/grid/latest/grid-vgpu-user-guide/index.html#vgpu-types-tesla-t4
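
On a KVM host you can list which vGPU profiles the card exposes through the standard mdev sysfs layout (a minimal sketch; run it on the host, not inside the guest, and the paths assume the stock NVIDIA vGPU manager):

# each supported mdev (vGPU) type has a human-readable name file
for t in /sys/class/mdev_bus/*/mdev_supported_types/*; do
  echo "$t: $(cat "$t/name")"
done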

Thanks for your reply. Following your instructions, we were able to change the type from 1B to 1Q and set the compute mode from Prohibited to Default, but the vGPU still cannot be detected; the error is the same.

tf.test.is_gpu_available()
2019-11-07 20:54:08.422120: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-11-07 20:54:08.440825: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcuda.so.1
2019-11-07 20:54:08.504290: W tensorflow/compiler/xla/service/platform_util.cc:256] unable to create StreamExecutor for CUDA:0: failed initializing StreamExecutor for CUDA device ordinal 0: Internal: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_UNKNOWN: unknown error
2019-11-07 20:54:08.504434: F tensorflow/stream_executor/lib/statusor.cc:34] Attempting to fetch value instead of handling error Internal: no supported devices found for platform CUDA
nvidia-bug-report.log.gz (1.1 MB)

Please post the output of the deviceQuery demo that comes with the CUDA toolkit.
Please run nvidia-bug-report.sh as root and attach the resulting .gz file to your post. Hovering the mouse over an existing post of yours will reveal a paperclip icon.
https://devtalk.nvidia.com/default/topic/1043347/announcements/attaching-files-to-forum-topics-posts/

Thanks for your reply. We have attached the bug-report file to our previous post. We also followed the instructions and ran startx -- -logverbose 6, but the VM then crashed.
nvidia-bug-report.log.gz (1.05 MB)

The nvidia-uvm module is not loaded. Please run
sudo modprobe nvidia-uvm
and rerun your cuda application
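
If the module is missing again after a reboot, it can be loaded automatically via systemd's modules-load.d mechanism, e.g.:

echo nvidia-uvm | sudo tee /etc/modules-load.d/nvidia-uvm.conf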

Thanks for your reply. We followed your steps, but TensorFlow still shows the same traceback we had before. The current bug report is attached to the previous post.
nvidia-bug-report.log.gz (1.05 MB)

Please run and post the output of the deviceQuery demo from the CUDA toolkit.

We tried to run deviceQuery, but it seemed the CUDA toolkit was not installed (running nvcc gave "nvcc not found"), so we installed toolkit 10.1. We chose 10.1 rather than 10.2 because the "CUDA Toolkit and Compatible Driver Versions" chart says 10.2 requires driver version >= 440 (Release Notes :: CUDA Toolkit Documentation).
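
For reference, the installed driver version can be checked directly when choosing a toolkit:

nvidia-smi --query-gpu=driver_version --format=csv,noheader
# 430.30 here, which satisfies CUDA 10.1's minimum (>= 418.39) but not 10.2's (>= 440)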

After the installation, we ran deviceQuery:
gpu@gpu-KVM:~/NVIDIA_CUDA-10.1_Samples/bin/x86_64/linux/release$ ./deviceQuery
./deviceQuery Starting...

CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GRID T4-1Q"
CUDA Driver Version / Runtime Version 10.2 / 10.1
CUDA Capability Major/Minor version number: 7.5
Total amount of global memory: 1016 MBytes (1065353216 bytes)
(40) Multiprocessors, ( 64) CUDA Cores/MP: 2560 CUDA Cores
GPU Max Clock rate: 1590 MHz (1.59 GHz)
Memory Clock rate: 5001 Mhz
Memory Bus Width: 256-bit
L2 Cache Size: 4194304 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 1024
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 3 copy engine(s)
Run time limit on kernels: No
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device supports Compute Preemption: Yes
Supports Cooperative Kernel Launch: Yes
Supports MultiDevice Co-op Kernel Launch: Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 0 / 9
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.2, CUDA Runtime Version = 10.1, NumDevs = 1
Result = PASS

A newly generated bug report is included.

Output looks good.
If you didn't have the CUDA toolkit installed, how did you intend to run TensorFlow? Do you have some kind of bundle installed? Please see the requirements:
https://www.tensorflow.org/install/gpu
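
Once cuDNN and the rest of the requirements on that page are in place, the check from the first post can simply be re-run, e.g.:

python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"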

On a vGPU A40:

Setting compute mode to DEFAULT is not supported.
Unable to set the compute mode for GPU 00000000:02:01.0: Not Supported
Treating as warning and moving on.
All done.

I'm not able to change the compute mode to default; as shown above, it stays at Prohibited. Help from NVIDIA support or anyone else would be much appreciated.

The VM was created with the vGPU profile A40-48A, meaning it's for application-only usage. You'd need a VM created with a different profile (e.g. Q- or C-series) for compute workloads.
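
The assigned profile is visible from inside the VM; the suffix letter denotes the series (A = app streaming, B = business desktop, Q = workstation, C = compute), and in these vGPU releases only Q- and C-series profiles support CUDA:

nvidia-smi --query-gpu=name --format=csv,noheader
# e.g. "A40-48A": the trailing A marks an A-series (application) profile, which has no CUDA support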