Ubuntu 22.04. CUDA on second, non-prime, video card. Is it possible?

Hello Experts!

I am a newbie. I just installed PyTorch with cudatoolkit on Ubuntu 22.04 on a Lenovo IdeaPad Gaming 3.

It has two video cards:
0000:00:02.0 VGA compatible controller [0300]: Intel Corporation Alder Lake-P GT1 [UHD Graphics] [8086:46a3] (rev 0c)
Subsystem: Lenovo Device [17aa:3af6]
Kernel driver in use: i915
Kernel modules: i915

0000:01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GA107M [GeForce RTX 3050 Mobile] [10de:25a2] (rev a1)
Subsystem: Lenovo GA107M [GeForce RTX 3050 Mobile] [17aa:3af6]
Kernel driver in use: nvidia
Kernel modules: nvidiafb, nouveau, nvidia_drm, nvidia

As a first step I am trying to explore CUDA using the test below:

import time
import torch

# Check that PyTorch can see the CUDA device at all.
print(torch.cuda.is_available())

device = torch.device("cuda")   # the RTX 3050

#matrix_size = 32 * 512
matrix_size = 32 * 256
#matrix_size = 32 * 128

x = torch.randn(matrix_size, matrix_size)
y = torch.randn(matrix_size, matrix_size)

print("******************* CPU SPEED *****************")
start = time.time()
result = torch.matmul(x, y)
print(time.time() - start)
print("verify device:", result.device)

x_gpu = x.to(device)
y_gpu = y.to(device)
torch.cuda.synchronize()

for i in range(3):
    print("******************* GPU SPEED *****************")
    start = time.time()
    result_gpu = torch.matmul(x_gpu, y_gpu)
    torch.cuda.synchronize()
    print(time.time() - start)
    print("verify device:", result_gpu.device)
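
(Side note: CUDA events are an alternative way to time just the GPU part; the short sketch below is only an optional refinement of the timing loop above, nothing else in this post depends on it.)

# Optional: time the GPU matmul with CUDA events instead of time.time().
# Event timestamps are recorded on the GPU stream itself, so they are not
# skewed by Python/CPU overhead.
start_evt = torch.cuda.Event(enable_timing=True)
end_evt = torch.cuda.Event(enable_timing=True)

start_evt.record()
result_gpu = torch.matmul(x_gpu, y_gpu)
end_evt.record()
torch.cuda.synchronize()                  # wait for both events to complete
print("GPU matmul:", start_evt.elapsed_time(end_evt), "ms")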

It works fine up to matrix size 32*256, giving about a 10x performance gain on the GPU:
******************* CPU SPEED *****************
2.9146814346313477
verify device: cpu
******************* GPU SPEED *****************
0.654292106628418
verify device: cuda:0
******************* GPU SPEED *****************
0.23547625541687012
verify device: cuda:0
******************* GPU SPEED *****************
0.23913264274597168
verify device: cuda:0

But with matrix size 32*512 I get an out-of-memory error:
RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 3.81 GiB total capacity; 768.00 MiB already allocated; 1.03 GiB free; 768.00 MiB reserved in total by PyTorch)

That's natural, I have reached the limit.
Now I am thinking about how I can use the whole NVIDIA card's memory for calculations.
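
To see where the memory actually goes (how much PyTorch itself holds versus what the driver reports as free, which includes whatever the desktop occupies), a quick check like the sketch below helps; note that torch.cuda.mem_get_info needs a reasonably recent PyTorch, so treat that call as an assumption for older installs.

import torch

# Memory held by PyTorch tensors and by its caching allocator on GPU 0.
print("allocated:", torch.cuda.memory_allocated(0) / 2**20, "MiB")
print("reserved: ", torch.cuda.memory_reserved(0) / 2**20, "MiB")

# Free vs. total device memory as the driver sees it; the gap between total
# and free includes what the display server and other processes are using.
free, total = torch.cuda.mem_get_info(0)
print("free/total:", free / 2**20, "/", total / 2**20, "MiB")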

Currently I am using NVIDIA as prime. The first thing that comes to my mind is to use the other card (intel) to drive my monitor and use the whole NVIDIA card for calculations.

I tried
sudo prime-select intel
then rebooted and tried to run the above code, but with no success:
torch.cuda.is_available()
returns False.

Does what I am trying to do make any sense?
Is it possible to use the Intel card for video and NVIDIA for calculations?

Appreciate any comment/advice.

It depends on the laptop design.
Some laptops are designed so that when the iGPU is serving the display, the dGPU is still active or can still be made active. Others are designed in such a way that when the iGPU is active, the dGPU cannot be. This is not purely a software issue; it has to do with hardware design as well. Furthermore, some laptops have BIOS settings which affect the options.

I probably won't be able to offer much further advice, but you may want to check the BIOS to see if there are any relevant settings that control dGPU status when active/inactive, and it may also be instructive to run your lspci command again after the prime-select intel, to see if the dGPU is still present and whether it still shows the nvidia driver in use.
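
If it helps, something like the sketch below (just an illustration wrapping lspci -k, nothing specific to your laptop) prints each GPU together with the kernel driver currently bound to it:

import subprocess

# List VGA/3D controllers and the "Kernel driver in use" line that
# lspci -k reports for each, to see whether the nvidia driver is still bound.
out = subprocess.run(["lspci", "-k"], capture_output=True, text=True).stdout
lines = out.splitlines()
for i, line in enumerate(lines):
    if "VGA compatible controller" in line or "3D controller" in line:
        print(line.strip())
        for detail in lines[i + 1:i + 4]:     # indented detail lines follow
            if "Kernel driver in use" in detail:
                print("   ", detail.strip())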

Even if you report all that info back here, I’m not sure I’ll have any further advice. I don’t have data to report or confirm on all laptops, or your specific laptop.

Thank you Robert for your comment.

I double-checked the BIOS settings. Yes, I found an option to select between Switchable Graphics and UMA Graphics.
By default it is in Switchable Graphics mode, which is what I used for my CUDA tests. UMA mode switches the NVIDIA card off at the hardware level.

In Switchable mode lspci shows both adapters, regardless of which one I have selected as prime with sudo prime-select (intel or nvidia).
In UMA mode lspci shows only intel, so the NVIDIA card is hard-disabled.

The question remains open:
in Switchable Graphics BIOS mode, when both graphics adapters are available to the system and I select intel as primary (so that GNOME uses it to display graphics), is it possible to use the full NVIDIA memory for CUDA?

For now, when I switch to intel as prime, torch says that CUDA is not available:

(base) leonid@leonid-IdeaPad-Gaming-3-16IAH7:~/Projects/Education/cuda$ python3
Python 3.7.16 (default, Jan 17 2023, 22:20:44)
[GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.

>>> import torch
>>> torch.cuda.is_available()
False
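
A quick way to tell whether this is a PyTorch issue or a driver issue is to check whether the NVIDIA kernel driver is loaded at all; the sketch below relies only on /proc/driver/nvidia/version, which the nvidia driver exposes while its module is loaded (prime-select intel normally blacklists that module, which would explain the False above).

import os
import torch

# None here would mean a CPU-only PyTorch build; a version string means
# CUDA support is compiled in and the problem is on the driver side.
print("PyTorch built with CUDA:", torch.version.cuda)

# This file exists only while the nvidia kernel module is loaded;
# prime-select intel normally blacklists the module, so it disappears.
print("nvidia driver loaded:", os.path.exists("/proc/driver/nvidia/version"))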

Looks like I found a solution by myself (with the help of Google :)

I have selected on-demand mode:

sudo prime-select on-demand

and created an /etc/X11/xorg.conf file with the content below:

Section "Device"
    Identifier "intel"
    Driver "intel"
    BusID "PCI:0:2:0"
EndSection

Section "Screen"
    Identifier "intel"
    Device "intel"
EndSection

Then I rebooted. This way I explicitly asked X11 to use the intel card, and on-demand mode does not blacklist the nvidia driver.
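
After the reboot, a quick sanity check like this (just a sketch) confirms CUDA is back while GNOME renders on the intel GPU:

import torch

# CUDA should be visible again even though the desktop runs on the intel GPU.
print("cuda available:", torch.cuda.is_available())
print("device:", torch.cuda.get_device_name(0))

# Trivial round-trip to confirm the card is actually usable for compute.
t = torch.ones(1024, 1024, device="cuda")
print("sum computed on", t.device, ":", t.sum().item())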

If anybody else decides to go this way, you will need to specify the BusID value in the xorg.conf file, which can be different on your machine.
You can find it in the output of

lspci | grep VGA

0000:00:02.0 VGA compatible controller: Intel Corporation Alder Lake-P GT1 [UHD Graphics] (rev 0c)
0000:01:00.0 VGA compatible controller: NVIDIA Corporation GA107M [GeForce RTX 3050 Mobile] (rev a1)
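
One caveat: lspci prints the bus/device/function numbers in hexadecimal, while the BusID line in xorg.conf expects decimal values, so on machines with higher bus numbers the ID has to be converted. A small helper like this (purely illustrative) converts the bus:device.function part of the lspci line:

# Convert an lspci "bus:device.function" slot (hexadecimal) into the
# decimal "PCI:bus:device:function" form that xorg.conf's BusID expects.
def lspci_to_xorg_busid(slot):
    bus, rest = slot.split(":")
    dev, func = rest.split(".")
    return "PCI:%d:%d:%d" % (int(bus, 16), int(dev, 16), int(func, 16))

print(lspci_to_xorg_busid("00:02.0"))   # -> PCI:0:2:0 (the intel iGPU above)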

So if I need to switch back to using NVIDIA to render my display, I will just rename xorg.conf to xorg.conf_intel
and reboot my laptop.

I am far from sure that this is the best way.
Appreciate any comment from experts!
