Executing CUDA kernels in Python

I’m looking to use CUDA to speed up simulation code in a Python environment. From my research, the ability to write CUDA code with Python-like syntax using CuPy or Numba’s CUDA support seems appealing, and I am currently coding in this manner. However, I still have some unresolved questions about three approaches:

  1. Writing code using Python-style expressions in a Python environment (e.g., CuPy, Numba’s cuda.jit)
  2. Executing kernels written in CUDA/C++ style in a Python environment (e.g., PyCUDA)
  3. Using ctypes in Python to call already-written CUDA/C++ *.dll or *.so files

Is there a significant difference in computation performance between these methods, and if so, what factors cause it?

I’ve tried looking for documentation, but beyond explanations of how to use each method, it’s hard to find information about their structural differences. Could you share any knowledge you have on this topic?

numba, pycuda, and ctypes with CUDA C++ are all doing roughly the same thing. There should be no difference at a high level, and for operations supported in each case, there shouldn’t be significant performance differences. There are some things you can do in CUDA C++ (which covers both the ctypes approach and the pycuda approach, since both use kernels written in CUDA C++) that currently can’t be done with the numba CUDA jit method.
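The ctypes path, for example, is just the standard shared-library calling pattern; nothing on the Python side is CUDA-specific. A minimal sketch, using the C math library as a stand-in for a CUDA-built .so (a real nvcc-built library exposing `extern "C"` launcher functions would be loaded and called the same way):

```python
import ctypes
import ctypes.util

# Load a shared library. For CUDA code this would instead be something
# like ctypes.CDLL("./libmykernels.so"), built with nvcc.
libm = ctypes.CDLL(ctypes.util.find_library("m"))  # stand-in: C math library

# Declare the C signature so ctypes marshals arguments correctly;
# you would do the same for each exported kernel-launcher function.
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

result = libm.sqrt(2.0)
print(result)  # ~1.41421356...
```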

cupy is a bit of a different animal. I wouldn’t attempt to do performance comparisons between cupy and CUDA C++. The purpose of cupy is to provide a best-case scenario for numpy-like functionality, but GPU accelerated. Certain operations can be done analogously in CUDA C++ and may have similar performance. There are things you can do in CUDA C++ that would be difficult to do the same way in cupy; in those cases there may be performance differences, because the coding/logic realization may be quite different. The overriding reason to use cupy is that you are familiar and comfortable with the numpy approach to problem solving, and prefer to stay with that approach. cupy can be very performant. But since cupy ultimately does everything it does at a lower level using CUDA C++ or an equivalent (e.g. PTX), it stands to reason that CUDA C++ is a superset of cupy functionality.
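One consequence of cupy mirroring the numpy API is that array code can often be written once and run on either backend (sometimes called the `xp` idiom). A sketch, with the cupy import guarded so it also runs on a machine without a GPU:

```python
import numpy as np

try:
    import cupy as cp  # GPU path, if cupy is installed
    xp = cp
except ImportError:
    xp = np  # CPU fallback

def rms(a):
    # Identical numpy-style code runs on CPU (numpy) or GPU (cupy).
    return xp.sqrt(xp.mean(a * a))

a = xp.asarray([3.0, 4.0])
value = float(rms(a))  # sqrt((9 + 16) / 2)
print(value)
```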

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.