Using a CUDA Plugin in Unity to Specify the GPU at Runtime on Machines with Multiple NVIDIA Cards

Dear NVIDIA Technical Support Team,

I am a Unity developer currently working on a project that requires CUDA acceleration for a Windows application. My plan is to run this application on machines with different models of NVIDIA GPUs and dynamically specify the GPU to be used at runtime.

While I am aware that NVIDIA provides a CUDA plugin, I have not been able to find comprehensive documentation or resources on how to integrate and use the CUDA plugin in Unity, specifically for dynamically specifying the GPU during runtime.

I have a few specific questions for which I am seeking clarification:

  1. How can I integrate and configure NVIDIA’s CUDA plugin within a Unity project?
  2. During runtime, what is the procedure for specifying the use of a particular NVIDIA GPU through the CUDA plugin?
  3. Are there any sample code snippets or documentation available that could assist me in better understanding and implementing the process of using CUDA in Unity?

I greatly appreciate your time and assistance. If there is any additional information or guidance you can provide, please feel free to let me know. Thank you!

Hi there @646577470 and welcome to the NVIDIA developer forums.

Sadly there is no such thing as an “NVIDIA CUDA plugin”. CUDA is a standalone programming API for implementing compute workloads. In a tool like Unity you would usually use compute shaders to do exactly the same kind of work.

If you really think you need to apply CUDA kernels to your project’s internal data, you would need to write your own library that wraps the CUDA code for your application. But I am sure others have tried similar things; a quick search turned up this thread on the Unity forums, for example: https://forum.unity.com/threads/how-to-use-cuda-in-unity.1396699/
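Purely to illustrate the wrapping approach, a native plugin could look something like the sketch below. The kernel, the exported name `ScaleBuffer`, and the DLL name are all made up for this example; on the Unity side a C# script would declare the entry point with `[DllImport("cuda_plugin")]` and pass in a float array.

```cpp
// cuda_plugin.cu -- hypothetical native wrapper, built with nvcc into a DLL
// that a Unity C# script could load via [DllImport("cuda_plugin")].
#include <cuda_runtime.h>

// Simple kernel: scale every element of a float buffer.
__global__ void scaleKernel(float* data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

// Exported C entry point so the symbol is visible to P/Invoke.
extern "C" __declspec(dllexport)
int ScaleBuffer(float* hostData, float factor, int n)
{
    float* devData = nullptr;
    if (cudaMalloc(&devData, n * sizeof(float)) != cudaSuccess)
        return -1;

    cudaMemcpy(devData, hostData, n * sizeof(float), cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    scaleKernel<<<blocks, threads>>>(devData, factor, n);

    cudaMemcpy(hostData, devData, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(devData);
    return cudaGetLastError() == cudaSuccess ? 0 : -1;
}
```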

Maybe that thread can answer your question.

If you need more CUDA help, make sure to drop by our dedicated CUDA forums here on this server!

Thanks!

Thank you for your response. Is it possible to dynamically specify the GPU at runtime for a Unity executable built without any special plugins, on machines with multiple graphics cards, using command-line parameters or a similar method?

Not that I am aware of. But the Unity forums might be a better source of information on that part.

CUDA apps can choose the GPU they want to use, but whether and how Unity can differentiate between GPUs, I don’t know.
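For the CUDA side of that, the standard runtime API is enough. Here is a minimal sketch; reading the device index from argv is just one possible way to pass it in:

```cpp
// Sketch: enumerating GPUs and picking one with the CUDA runtime API.
// The chosen index could come from a command-line argument or a config file.
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

int main(int argc, char** argv)
{
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("GPU %d: %s\n", i, prop.name);
    }

    // Select a device (taken from argv here, purely for illustration).
    int chosen = (argc > 1) ? atoi(argv[1]) : 0;
    if (chosen >= 0 && chosen < count)
        cudaSetDevice(chosen);  // subsequent CUDA calls on this thread use this GPU

    return 0;
}
```

Note that cudaSetDevice() only affects which GPU runs your CUDA work, not which GPU renders the Unity scene. The CUDA_VISIBLE_DEVICES environment variable is another way to restrict which devices a CUDA process can see.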