I installed CUDA Toolkit 12.8 for Ubuntu. I tried installing the NVIDIA driver via sudo apt-get install -y nvidia-open and it appears to work; no errors in the terminal.
But when I try installing the drivers via the run file in that same link above, it fails because I'm in a VM and don't have an NVIDIA GPU. This is confusing, because it means the apt command must not have actually worked either.
This raises the question: how am I supposed to execute these samples if I can't install the driver? I definitely need them.
I am able to build the CUDA Samples project. But when I try to execute ./simpleIPC, I get this error:
CUDA error at /home/j/repos/cuda-samples/Samples/0_Introduction/simpleIPC/simpleIPC.cu:196 code=35(cudaErrorInsufficientDriver) "cudaGetDeviceCount(&devCount)"
How should I go about fixing this issue? Related to this, when I run nvidia-smi, it throws this error:
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.
I have tried rebooting my VM; that didn't fix it. I need the drivers, but I'm not sure what the proper way to get them is.
If you don't have an NVIDIA GPU, you won't be able to run most CUDA codes, including those sample codes.
Relatedly, attempts to use a GPU driver will be unsuccessful. The driver depends on, and requires, an NVIDIA GPU for proper operation.
A package manager installs software packages; a successful install is not an indication that the software will actually work. It won't work properly without an NVIDIA GPU. The package manager is not designed to test for the presence of an NVIDIA GPU and throw an error if one is not found. The actual driver code will do that, however, as you have already discovered.
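One way to see this distinction for yourself: the package can be installed while the kernel module is not actually loaded. A quick sketch of a check (on any Linux system; on your VM it should report the module as not loaded):

```shell
# Check whether the nvidia kernel module is actually loaded.
# apt can install the driver package even when the module cannot
# load (e.g., no GPU present), so check the kernel directly.
if grep -q '^nvidia ' /proc/modules 2>/dev/null; then
    echo "nvidia kernel module loaded"
else
    echo "nvidia kernel module not loaded"
fi
```

nvidia-smi failing with "couldn't communicate with the NVIDIA driver" is consistent with the module not being loaded.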
You generally cannot successfully run CUDA codes without a CUDA-capable GPU.
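For reference, the error you hit is the very first runtime call in the sample. A minimal sketch (my own names, not from the samples) that reports the failure instead of aborting looks like this:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int devCount = 0;
    cudaError_t err = cudaGetDeviceCount(&devCount);
    if (err != cudaSuccess) {
        // Without a working driver this reports code 35
        // (cudaErrorInsufficientDriver), as in your simpleIPC run.
        std::printf("cudaGetDeviceCount failed: %s (code %d)\n",
                    cudaGetErrorString(err), (int)err);
        return 1;
    }
    std::printf("Found %d CUDA device(s)\n", devCount);
    return 0;
}
```

This compiles with nvcc on a machine without a GPU; only running it requires the driver.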
Then, does that mean I must have an NVIDIA GPU in order to develop applications for devices that use NVIDIA GPUs? My plan was to develop applications for NVIDIA GPU devices inside my VM and then port them over.
What if I use the NVIDIA Cross-Compile Docker containers + run the CUDA codes inside them? I am targeting Orin.
We have to define what we mean by "develop". You can write and compile CUDA/GPU codes (you've already pretty much proven that by compiling the CUDA sample codes). You cannot run or test those codes.
What you have installed already (the CUDA toolkit, minus the GPU driver) is sufficient to do ordinary basic CUDA writing and compiling.
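As a sketch of that compile-only workflow (assuming the toolkit installed to the default /usr/local/cuda location; the sm_87 value below is the compute capability I believe applies to Orin, so verify it for your target):

```shell
# Compiling CUDA code needs nvcc, not a GPU or a driver.
# The toolkit's bin directory is often not on PATH by default.
export PATH=/usr/local/cuda/bin:$PATH

if command -v nvcc >/dev/null 2>&1; then
    nvcc --version
    # Example compile targeting Orin-class hardware (verify sm_87):
    # nvcc -arch=sm_87 -o simpleIPC simpleIPC.cu
else
    echo "nvcc not found - check your CUDA toolkit PATH"
fi
```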
There is no container, VM, or other strategy to run CUDA codes without actual CUDA GPU hardware; at least none provided by NVIDIA. It doesn't matter whether you refer to Orin or to any other CUDA-capable GPU. If you want to explore simulator options, go ahead; I don't have any to recommend.