CUDA compute capability requirement

Hi,

I tried nvcr.io/nvidia/tensorflow with multiple tags to run TensorFlow examples on GPU.
With the most recent one (19.01-py3), it said: ERROR: Detected NVIDIA Tesla K80 GPU, which is not supported by this container.

Then I tried the older versions 18.11, 18.10, and 18.05, since they ship an older TensorFlow (1.11) that requires a lower GPU compute capability.

However, they still showed the same message when I started the Docker container:
ERROR: Detected NVIDIA Tesla K80 GPU, which is not supported by this container

And when I ran the label_image.py example, it showed this message:
I tensorflow/core/common_runtime/gpu/gpu_device.cc:1394] Ignoring visible gpu device (device: 2, name: Tesla K80, pci bus id: 0000:86:00.0, compute capability: 3.7) with Cuda compute capability 3.7. The minimum required Cuda capability is 5.2.
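The check behind that log line boils down to comparing compute capability numbers. A minimal sketch of that comparison (the 5.2 minimum and the K80's 3.7 are taken from the error message above; `meets_minimum` is an illustrative helper, not TensorFlow's actual API):

```python
# Compare a GPU's CUDA compute capability against a container's minimum.
# Values below come from the error message: K80 = 3.7, required = 5.2.

def meets_minimum(device_cc: str, required_cc: str) -> bool:
    """Return True if the device compute capability >= the required one."""
    dev = tuple(int(x) for x in device_cc.split("."))
    req = tuple(int(x) for x in required_cc.split("."))
    return dev >= req  # tuple comparison: major first, then minor

print(meets_minimum("3.7", "5.2"))  # Tesla K80 vs. the 19.01 container -> False
print(meets_minimum("6.2", "5.2"))  # e.g. a compute capability 6.2 device -> True
```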

This is the output from nvidia-smi:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.48                 Driver Version: 410.48                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K80           Off  | 00000000:04:00.0 Off |                  Off |
| N/A   66C    P0    61W / 149W |      0MiB / 12206MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla K80           Off  | 00000000:05:00.0 Off |                  Off |
| N/A   44C    P0    74W / 149W |      0MiB / 12206MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Tesla K80           Off  | 00000000:86:00.0 Off |                  Off |
| N/A   36C    P0    60W / 149W |      0MiB / 12206MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  Tesla K80           Off  | 00000000:87:00.0 Off |                  Off |
| N/A   51C    P0    72W / 149W |      0MiB / 12206MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

With a Tesla K80, is there any way to use TensorFlow from the Docker images at https://ngc.nvidia.com/catalog/containers/nvidia:tensorflow/tags?

Thanks,

Any feedback?

This is expected behavior.

Generally, NGC containers are usable on various members of the Pascal, Volta, and Turing GPU families.

You can certainly use TF in a Docker image on a K80. However, it would require installing a TF version that supports the K80 in the image, or building TF yourself to meet your needs. NGC containers are not set up to support the K80.
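One possible way to do that (a sketch only, not a supported NGC image; the base image tag and the TF version are illustrative assumptions, though the stock tensorflow-gpu 1.x wheels were built for compute capability 3.5 and up, which covers the K80's 3.7):

```dockerfile
# Illustrative Dockerfile: build your own image with a TF release that still
# runs on compute capability 3.7 (K80). Versions here are assumptions; match
# the CUDA/cuDNN base image to whatever tensorflow-gpu release you pick.
FROM nvidia/cuda:9.0-cudnn7-runtime-ubuntu16.04

RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

# tensorflow-gpu 1.12 was built against CUDA 9.0 / cuDNN 7.
RUN pip3 install tensorflow-gpu==1.12.0
```

Alternatively, building TF from source lets you target compute capability 3.7 explicitly via the configure step.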

Hi, Robert.

Thank you for the reply.
How about the Jetson TX2, which has compute capability 6.2?
Does the Jetson TX2 support most of the recent versions of the NGC containers?

Thanks.