I’m a career software engineer but a novice to CUDA/GPU programming. This post may belong in another topic category (its own?) under CUDA Developer Tools, but I don’t see a way to create one.
I found the Altimesh Essentials package, which comes tantalizingly close to what I want to do: interface with my GPU via CUDA from C#. The main gap I see is that the repository’s last commit was 2-3 years ago, and the package does not detect my CUDA 12.8.1 installation. I have not reached anyone on that project at the contact address. Has anyone had success with Altimesh Essentials on a recent CUDA stack? What other good options exist for calling CUDA from C#?
Thanks of course!
I don’t have anything to recommend, but this may be of interest. As indicated there: "I have no experience with them. None of them are directly provided by or supported by NVIDIA. … Some, perhaps most, may be moribund/deprecated."
Very useful-looking thread; thanks, Robert. Another 10 avenues to check for solutions! :D
ILGPU is the one I have been seeing most recently. It’s not exactly “calling CUDA from C#”; however, it allows you to access GPU acceleration from C#. managedCuda is the one for which I had seen the most consistent support over many years, but I don’t know if that is still true recently.
I’m reading the ILGPU docs and running the tutorials. (It plugged right into VS 2022 via a PackageReference element in the project file; very easy to follow.) Would you say “access GPU accel. from C#” has the effect of hiding CUDA calls behind the ILGPU object API? I guess that will work until I need some specific CUDA call. I’ve seen a couple of refs and will check managedCuda; I wonder if it coexists with ILGPU.
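For reference, the tutorial-style code I’m working from looks roughly like this (my own minimal sketch typed from the ILGPU docs; names like ScaleKernel are mine, and the exact API can vary by ILGPU version):

```csharp
using ILGPU;
using ILGPU.Runtime;

class Program
{
    // The "kernel" is ordinary C#; ILGPU JIT-compiles it for whichever accelerator is selected.
    static void ScaleKernel(Index1D i, ArrayView<int> data) => data[i] = i * 2;

    static void Main()
    {
        using var context = Context.CreateDefault();
        using var accelerator = context
            .GetPreferredDevice(preferCPU: false)
            .CreateAccelerator(context);

        // Compile the C# method for the selected device; no CUDA syntax appears anywhere.
        var kernel = accelerator
            .LoadAutoGroupedStreamKernel<Index1D, ArrayView<int>>(ScaleKernel);

        using var buffer = accelerator.Allocate1D<int>(1024);
        kernel((int)buffer.Length, buffer.View);
        accelerator.Synchronize();

        int[] result = buffer.GetAsArray1D();
    }
}
```

Nothing in there is CUDA-specific, which is what prompted my question above.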
You could also combine C# with C++ and call CUDA from the latter.
Yes. Again, I’m not an expert, but when I have looked at it, it seems that CUDA is either not there syntactically or is abstracted away. This makes sense to me, since ILGPU intends (it seems) to provide parallel acceleration for environments beyond just CUDA. When I look at an ILGPU example, it looks to me sort of like CUDA with a wrapper around it.
OTOH, managedCuda appears (to my untrained eye, anyway) to have a 1:1 correspondence to CUDA C++ and the CUDA runtime API. I can immediately see CUDA API constructs, and the kernel code there is basically just CUDA C++ kernel code.
I’m probably splitting hairs. To some degree they are obviously both wrappers of some sort.
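As a rough illustration of that correspondence (untested and typed from memory, so treat it as a sketch rather than working code), the managedCuda host side maps pretty directly onto the familiar API steps:

```csharp
using ManagedCuda;

// Each managedCuda construct maps closely onto a CUDA driver/runtime API call
var ctx = new CudaContext(0);                        // ~ cuCtxCreate on device 0
var d_data = new CudaDeviceVariable<float>(1024);    // ~ cudaMalloc / cuMemAlloc
float[] h_data = new float[1024];

d_data.CopyToDevice(h_data);                         // ~ cudaMemcpy, host -> device
d_data.CopyToHost(h_data);                           // ~ cudaMemcpy, device -> host

d_data.Dispose();                                    // ~ cudaFree / cuMemFree
ctx.Dispose();                                       // ~ cuCtxDestroy
```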
A thicker distinction than hairs! Both managedCuda and ILGPU read like wrappers, but ILGPU layers its own abstractions (which make sense to me) over CUDA and OpenCL (and thereby AMD and Intel GPUs)… impressive! MC, by contrast, looks more like an impedance-matched C#->CUDA passthrough library. +1 for your take.
I want to look at whether managedCuda will play alongside ILGPU; my project is new, so the risk is low. :) Though I want to be requirement-driven… no need to add MC if ILGPU covers my needs (of which I’m not fully aware yet!).
The MC GitHub page sets it out: “managedCuda is not a code converter, which means that no C# code will be translated to Cuda. Every cuda kernel that you want to use has to be written in CUDA-C and must be compiled to PTX or CUBIN format using the NVCC toolchain.” So to be able to invoke CUDA APIs, I need to write in two languages: C# for the main application logic and GPU work-unit dispatch, and CUDA-C (compiled to “PTX”, new to me) for the work-unit logic. Time will reveal the better approach.
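For my own notes, the host side of that two-language workflow would look something like the snippet below (an untested sketch; vectorAdd.ptx and vecAdd are placeholder names for a kernel I’d compile first with nvcc -ptx):

```csharp
using ManagedCuda;

// Assumes a __global__ kernel named "vecAdd" was compiled ahead of time, e.g.:
//   nvcc -ptx vectorAdd.cu -o vectorAdd.ptx
var ctx = new CudaContext(0);
CudaKernel vecAdd = ctx.LoadKernel("vectorAdd.ptx", "vecAdd");

const int n = 1 << 20;
var h_a = new float[n];
var h_b = new float[n];

// Device buffers, populated from the host arrays
var d_a = new CudaDeviceVariable<float>(n);
var d_b = new CudaDeviceVariable<float>(n);
var d_c = new CudaDeviceVariable<float>(n);
d_a.CopyToDevice(h_a);
d_b.CopyToDevice(h_b);

// Launch configuration, then dispatch the work unit
vecAdd.BlockDimensions = 256;
vecAdd.GridDimensions = (n + 255) / 256;
vecAdd.Run(d_a.DevicePointer, d_b.DevicePointer, d_c.DevicePointer, n);

var h_c = new float[n];
d_c.CopyToHost(h_c);
```

So the C# side stays pure dispatch, and all the per-element logic lives in the .cu file.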