CUDA does not work on Ubuntu 14.04 with a GeForce 210 and a GeForce GTX 980 Ti.

Two cards are installed in the workstation: a GeForce 210 (with the monitor connected) and a GeForce GTX 980 Ti; the operating system is Ubuntu 14.04.

First, we install the NVIDIA driver for the GeForce 210 adapter:

sudo add-apt-repository ppa:ubuntu-x-swat/x-updates
sudo apt-get update && sudo apt-get install nvidia-340 nvidia-settings
sudo nvidia-xconfig
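
As a quick sanity check (our addition, not part of the original steps), it is worth confirming that the 340-series kernel module actually loaded and which version is active:

# Show the loaded NVIDIA kernel module and its version
cat /proc/driver/nvidia/version
lsmod | grep nvidia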

Then we modify the /etc/X11/xorg.conf file:

Section "Device"
Identifier "Device0"
Driver "nvidia"
VendorName "NVIDIA Corporation"
BoardName "GeForce 210"
BusID "PCI:02:00.0"
EndSection

The video system is working correctly.

Running lspci -k | grep -EA2 'VGA|3D' finds two cards:

02:00.0 GT218 [GeForce 210] (rev a2)
Device 3629
Kernel driver in use: nvidia

03:00.0 GM200 [GeForce GTX 980 Ti] (rev a1)
Device 3230

Next we install CUDA Toolkit 7.5 (deb local), following the Installation Guide for Linux exactly. The installation completes normally.
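
For reference, the deb (local) route boils down to roughly the following; the repository package filename is illustrative and depends on the exact file downloaded from NVIDIA:

# Register the local CUDA 7.5 repository (filename is illustrative)
sudo dpkg -i cuda-repo-ubuntu1404-7-5-local_7.5-18_amd64.deb
sudo apt-get update
# The cuda meta-package also pulls in the newer display driver shipped with
# CUDA 7.5, replacing the previously installed nvidia-340
sudo apt-get install cuda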

Without a reboot, everything is in order: the video system works correctly and deviceQuery (from the CUDA Samples) reports PASS.
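
For completeness, deviceQuery was built and run roughly as follows (the samples path is the documented default and may differ on other systems):

# Copy the CUDA samples to the home directory, then build and run deviceQuery
cuda-install-samples-7.5.sh ~
cd ~/NVIDIA_CUDA-7.5_Samples/1_Utilities/deviceQuery
make
./deviceQuery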

After the reboot, the video system no longer works correctly: the graphical login screen is displayed, but the password is not accepted. In a text-mode terminal we can log in, and deviceQuery still works properly.
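
A quick way to see which kernel module ends up servicing each card at this point (just a generic diagnostic, not output captured from our machine) is:

# Which NVIDIA/nouveau modules are loaded, which driver version is active,
# and what the X server logged about the NVIDIA devices
lsmod | grep -E 'nvidia|nouveau'
cat /proc/driver/nvidia/version
grep -i nvidia /var/log/Xorg.0.log | tail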

Now lspci -k | grep -EA2 'VGA|3D' shows different output:

02:00.0 GT218 [GeForce 210] (rev a2)
Device 3629

03:00.0 GM200 [GeForce GTX 980 Ti] (rev a1)
Device 3230
Kernel driver in use: nvidia

What have we done wrong?

What is the right way to do this?

Thanks in advance for your help.

The GeForce 210 is a compute capability 1.x device, and it is not compatible with the GPU driver installed by CUDA 7.5.

When X is restarted, the GeForce 210 will either not work at all, or it will revert to being serviced by some other driver. That other driver and the GeForce 210 are then affected by the OpenGL libraries installed by the driver associated with CUDA 7.5.
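
One way to check which GL stack X and your applications are actually resolving (a generic check, not something taken from your logs) is:

# A libGL provided by the CUDA 7.5 driver sitting alongside the 340 driver is
# a classic source of this kind of breakage
ldconfig -p | grep libGL
ls -l /usr/lib/x86_64-linux-gnu/libGL.so*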

Although you say “deviceQuery is working correctly” after the reboot, I believe that it will not detect or report on the GeForce 210 device.
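
An easy way to verify that (again, just a suggested check) is to list the GPUs the currently active driver reports:

# The active driver will only list the GPUs it actually supports
nvidia-smi -L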

If you want to use CUDA 7.5, the GeForce 210 is not compatible and should be removed. Alternatively, you can try to keep the GeForce 210 and have it serviced by another (non-NVIDIA) driver. This would be fairly complicated and I cannot give you instructions on how to do it. It is not possible to have two different versions of the NVIDIA driver resident and active at the same time.

Thank you for your reply txbob.

I’d like to keep the GeForce 210 in the system just to drive the monitor; running CUDA on it is not planned. Evidently the only remaining option is the one with two different drivers. Are there any similar examples (not necessarily for Ubuntu)?

What is the “official” recommended NVIDIA binary driver to use for CUDA / OpenCL on this older series of cards and on Linux 64-bit (Ubuntu 14.04 LTS)? (Specifically, I’m trying pyopencl and it says “No OpenCL” with the recommended set of packages)
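
For reference, a quick way to check whether an OpenCL ICD is registered at all (a generic check, nothing specific to my machine) is:

# The NVIDIA ICD should be registered here and point at libnvidia-opencl.so.1
ls /etc/OpenCL/vendors/
cat /etc/OpenCL/vendors/nvidia.icd
# pyopencl simply enumerates the platforms the ICD loader can see
python -c "import pyopencl; print(pyopencl.get_platforms())"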

According to the product page, the driver search page for Linux x86_64 currently takes me to the download page for v340.96.

I’ve seen multiple threads online mentioning problems that people are running into when getting CUDA working on the legacy GeForce series cards.

It would be great to put these old cards to good use, but I haven’t had much luck (or time) to get it working using the recommended driver.

As I mentioned, I’m running Ubuntu 14.04 LTS with a GeForce GT 240 with latest recommended drivers (340.96) from packages in Graphics Drivers Team PPA.

In case it helps, the specific packages & versions that I’m currently using are:

$ dpkg -l | grep -i nvidia | grep '^ii'
ii  bbswitch-dkms                                               0.7-2ubuntu1                                        amd64        Interface for toggling the power on nVidia Optimus video cards
ii  libcuda1-340                                                340.96-0ubuntu0.14.04.1                             amd64        NVIDIA CUDA runtime library
ii  nvidia-340                                                  340.96-0ubuntu0.14.04.1                             amd64        NVIDIA binary driver - version 340.96
ii  nvidia-340-uvm                                              340.96-0ubuntu0.14.04.1                             amd64        Transitional package for nvidia-340
ii  nvidia-opencl-icd-340                                       340.96-0ubuntu0.14.04.1                             amd64        NVIDIA OpenCL ICD
ii  nvidia-prime                                                0.6.2                                               amd64        Tools to enable NVIDIA's Prime
ii  nvidia-settings                                             361.28-0ubuntu0~gpu14.04.1                          amd64        Tool for configuring the NVIDIA graphics driver

NOTE: According to the GT 240 Product Page and CUDA Legacy GPUs Page this card supports the following:

NVIDIA CUDA™ Technology  ✓
OpenGL                   3.2
CUDA Cores               96
Compute Capability       1.2

If I recall correctly, the last version of CUDA that supported compute capability 1.x (sm_1x) devices was CUDA 6.5. So yes, CUDA is supported on the GT 240, just not recent versions of CUDA. Hardware vendors typically do not put much effort into updating product pages for obsolete products to reflect such new restrictions.
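
To illustrate (the toolkit path and source file name here are placeholders): a CUDA 6.5 nvcc still accepts a compute capability 1.2 target, albeit with a deprecation warning, while CUDA 7.x removed sm_1x support entirely:

# Works with CUDA 6.5 (deprecation warning); rejected by CUDA 7.0 and later
/usr/local/cuda-6.5/bin/nvcc -arch=sm_12 my_kernel.cu -o my_kernel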

Given the many functional and performance limitations of sm_1x hardware, I strongly suggest an upgrade to more recent hardware, otherwise the CUDA programming experience will be unnecessarily painful. Affordable modern GPUs will run rings around outdated sm_1x GPUs like the GT 240.

To the original poster:

As far as I’m aware, and as far as my own Google searching results have turned up… it appears that there is no known way to use 2 incompatible NVIDIA driver versions at the same time.

Initially my thought was that perhaps some containerization technology such as Docker might help to isolate your dependencies & runtime environment for each driver & card. However… because the NVIDIA driver is a kernel module, and the kernel sits above and is shared by all containers on a system… it’s simply not possible to run 2 incompatible kernel modules at the same time. For this, you’d need a hypervisor or some virtualization technology that acts as an abstraction layer between the kernel & hardware, so you could run a VM that attaches to one card, and your host that attaches to another.
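
If you do go down the VM route, a rough first check (only a starting point, not a recipe) is whether the platform's IOMMU is available, since PCI passthrough depends on it:

# VT-d / AMD-Vi must be enabled in the BIOS and the kernel for PCI passthrough
dmesg | grep -e DMAR -e IOMMU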

Now, as far as actually doing this… I’m not sure that I can be of any help… as it’s beyond my experience of anything I’ve tried yet. However, it’s probably doable but will take a long time to figure out, and definitely doable if you have enough time, resources, and probably money and people to throw at the problem.

This Stack Overflow Question may point you in the right direction for appropriate virtualization technologies.

To jcuzella and njuffa:

Thank you for your interesting ideas. We have studied this issue: CUDA 1.x & CUDA x.x, x > 1 :). Kernel development and support are not part of our plans, so we replaced the GeForce 210 with a GeForce GT 610. Everything works perfectly, right out of the box: the GT 610 handles the video, and the GTX 980 Ti handles the GPU computing. As a result, it turned out much cheaper and faster than the original configuration.
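
For anyone with a similar two-card setup, one thing that may help is pinning X to the display card by BusID, analogous to the xorg.conf snippet earlier in the thread (the BusID below is illustrative; take the real value from lspci):

Section "Device"
Identifier "Device0"
Driver "nvidia"
BoardName "GeForce GT 610"
BusID "PCI:2:0:0"
EndSection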