What are the minimal CUDA components to install on a fresh computer?

I have a program that uses the GPU and CUDA.
When I set up the GPU card on another computer and installed the drivers, the program didn't start, saying it couldn't find any GPU cards.
If I install the full CUDA toolkit on that system the program works, but what is the minimal package I have to install to make the program work?

Installing the CUDA toolkit everywhere seems strange. What if I have a game that I want to distribute? Will every user need to go to the NVIDIA site and install CUDA?

You normally only need to install a display driver recent enough to support the CUDA Toolkit version your application was developed with. The supported CUDA version is listed on the release notes tab of the driver download page.
The rest would need to be handled by the application, as described in the deployment chapter of the CUDA Best Practices Guide:
Best Practices Guide :: CUDA Toolkit Documentation
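
As a rough sketch of what such a startup check could look like (my own example, not taken from the guide), a runtime API application can compare the CUDA version the installed driver supports against the runtime version it was built with:

// Build with nvcc and link against cudart. Version values are encoded as
// 1000*major + 10*minor, e.g. 5050 for CUDA 5.5.
#include <cstdio>
#include <cuda_runtime_api.h>

int main()
{
    int driverVersion = 0, runtimeVersion = 0;

    cudaDriverGetVersion(&driverVersion);    // highest CUDA version the display driver supports
    cudaRuntimeGetVersion(&runtimeVersion);  // CUDA runtime version this app was built/shipped with

    printf("Driver supports CUDA %d.%d, app uses runtime %d.%d\n",
           driverVersion / 1000, (driverVersion % 100) / 10,
           runtimeVersion / 1000, (runtimeVersion % 100) / 10);

    if (driverVersion < runtimeVersion)
        printf("The display driver is too old for this runtime - please update it.\n");

    return 0;
}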

I installed the very latest driver from nvidia/downloads, but the application says it cannot find the CUDA driver. Is there a different set of drivers - one just for the graphics card and another for the card plus CUDA?

Did you read the section of the guide that was posted? Simply having a driver alone is not enough for a dynamically linked CUDA runtime API application; you also need the CUDA runtime DLL or shared library. If you statically link (in CUDA 5.5 or 6.0RC) against the static cudart, then you should be OK with just the driver. If your code is written to use only the driver API (which it probably isn't, based on this discussion), then you just need the driver. And I believe if you use any of the libraries such as NPP, CUBLAS, or CUSPARSE, you will need the appropriate DLLs or shared libraries for those as well. Any needed DLLs/shared libraries must be installed on the machine (which happens when you install the CUDA toolkit), or you need to bundle them with your app. Read the document that was linked - the entire section 15, especially section 15.4.
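
To illustrate the driver-API-only case, a minimal sketch like the one below (my own example, not from the linked document) depends at run time only on the display driver (nvcuda.dll on Windows, libcuda.so on Linux), so it runs with no toolkit components installed:

// Driver-API-only check: link against cuda.lib / -lcuda (the driver API library);
// at run time only the display driver is needed, not cudart or any toolkit DLLs.
#include <cstdio>
#include <cuda.h>

int main()
{
    CUresult res = cuInit(0);        // initialize the driver API
    if (res != CUDA_SUCCESS) {
        printf("cuInit failed with error %d\n", (int)res);
        return 1;
    }

    int count = 0;
    cuDeviceGetCount(&count);        // number of CUDA-capable GPUs visible to this session
    printf("Found %d CUDA device(s)\n", count);
    return 0;
}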

I did. There is something else going on.
All I run is the statically linked deviceQueryDrv.exe.
First it says “cuInit(0) returned 100”.
I built it with the latest CUDA SDK on another system, copied it onto this one, and it just shows the error.
Device Manager shows that the devices are there and enabled.
GeForce Experience shows that the devices are there and the driver is the latest (I also tried the previous driver just in case).
Strange.
Is there a probe app available that analyzes the system for any kind of errors?

I've had no trouble dropping deviceQueryDrv.exe onto completely different systems (even going from Win7 to Win2008R2) and it works just fine without the toolkit installed.

What happens when you run:

nvidia-smi -a

on the system where deviceQueryDrv is failing?

What driver did you install on that system?
What OS is on that system?
What GPU is on that system?
What CUDA toolkit did you use to build the deviceQueryDrv.exe app?
Are you by chance connecting to the failing computer over the RDP protocol, i.e. a Remote Desktop Connection?
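
Incidentally, "cuInit(0) returned 100" means CUDA_ERROR_NO_DEVICE. A small driver-API sketch like the one below (my own, and it assumes a CUDA 6.0-or-newer driver, which exposes cuGetErrorName/cuGetErrorString) prints the error name instead of the bare number, which makes this kind of failure easier to read:

// Translate a raw CUresult such as 100 into its symbolic name and description.
#include <cstdio>
#include <cuda.h>

int main()
{
    CUresult res = cuInit(0);
    if (res != CUDA_SUCCESS) {
        const char *name = NULL;
        const char *desc = NULL;
        cuGetErrorName(res, &name);     // e.g. "CUDA_ERROR_NO_DEVICE"
        cuGetErrorString(res, &desc);   // short human-readable description
        printf("cuInit failed: %d (%s: %s)\n", (int)res,
               name ? name : "?", desc ? desc : "?");
        return 1;
    }
    printf("cuInit succeeded\n");
    return 0;
}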

Oops…
I ran deviceQueryDrv over a Remote Desktop connection.
When I went to the system and logged in locally on a monitor, the app started working.
Is this a well-known issue?
I never saw it mentioned before.
Besides, is it a smart thing to do on NVIDIA's side?
If the devices are there, why can't the app access them just because I connect over Remote Desktop?

Sorry, when you connect via RDP, you are running in a special Windows service session that does not have access to WDDM devices. It's a Windows limitation. If you put the GPU into TCC mode (with nvidia-smi), then you can access the GPU via RDP, but you cannot use it as a display device, and the TCC option is not available for GeForce GPUs.

It’s documented in a variety of places such as:

Page Not Found | NVIDIA
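
If you want to check which driver model each GPU is currently using, a small driver-API sketch like this (my own example) queries the TCC attribute; switching the mode itself is done with nvidia-smi and, as noted, is not offered on GeForce parts:

// Report whether each visible GPU is running the TCC or the WDDM driver model.
// Run this from a local console session; over RDP, cuInit itself will fail if
// only WDDM devices are present.
#include <cstdio>
#include <cuda.h>

int main()
{
    if (cuInit(0) != CUDA_SUCCESS) {
        printf("cuInit failed - no CUDA device usable in this session\n");
        return 1;
    }

    int count = 0;
    cuDeviceGetCount(&count);
    for (int i = 0; i < count; ++i) {
        CUdevice dev;
        cuDeviceGet(&dev, i);

        char name[256] = {0};
        cuDeviceGetName(name, (int)sizeof(name), dev);

        int tcc = 0;
        cuDeviceGetAttribute(&tcc, CU_DEVICE_ATTRIBUTE_TCC_DRIVER, dev);
        printf("Device %d (%s): %s driver model\n", i, name, tcc ? "TCC" : "WDDM");
    }
    return 0;
}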

NVIDIA could detect that RDP is running and report a different error instead of returning CUDA_ERROR_NO_DEVICE.
The GoToMyPC service (instead of RDP) probably would not cause that problem.

Thank you!

Another possible candidate for remotely accessing PCs that still allows access to GPU resources is TeamViewer.

TeamViewer is great. Thanks!