Defining Class Functions as Host and Device in Python Using PyCUDA

Hi Team,
I have a question about PyCUDA: is there a way in PyCUDA to mark functions as both host and device functions?

Similar to __host__ __device__ in CUDA C/C++, where we can mark class member functions with these attributes so they run on both CPU and GPU. Do we have anything similar in PyCUDA?


class ABCD:
    def __init__(self):
        self.var1 = 0
        self.var2 = 0

    def method1(self):
        self.var1 = self.var1 + 10

For this class, I want to declare method1 as both host and device, because I need to run this method on the GPU using CUDA.
If it were a C++ class, we would have added the __host__ __device__ attributes to make it runnable on both GPU and CPU.

But in PyCUDA I couldn't find anything like this. Is there a way to declare these methods as host and device functions in PyCUDA?

Please help us resolve this issue.

pycuda doesn’t allow Python functions to be used on the GPU. You can’t run arbitrary user-defined Python code on the GPU.

pycuda requires that user-defined code to be run on the GPU be written in CUDA C++.
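To illustrate, here is a minimal sketch of the usual PyCUDA workflow: the GPU code is a CUDA C++ kernel passed as a source string, and the Python side only prepares numpy data and launches it. The kernel name double_them and the CPU fallback path are illustrative; the fallback just lets the sketch run on machines without pycuda or a GPU.

```python
import numpy as np

# GPU code in PyCUDA is CUDA C++ source text, not Python.
kernel_source = """
__global__ void double_them(float *a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        a[i] *= 2.0f;
}
"""

a = np.arange(8, dtype=np.float32)

try:
    import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
    import pycuda.driver as drv
    from pycuda.compiler import SourceModule

    mod = SourceModule(kernel_source)
    double_them = mod.get_function("double_them")
    # drv.InOut copies `a` to the device and back after the launch.
    double_them(drv.InOut(a), np.int32(a.size),
                block=(8, 1, 1), grid=(1, 1))
except Exception:
    # pycuda or a CUDA-capable GPU is unavailable: same work on the CPU.
    a *= 2.0

print(a)
```

Either path leaves a holding the doubled values; the point is that the only code running on the GPU is the CUDA C++ string, never a Python function.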

Hi Team,
Thanks for the input

Can we pass a list of class objects to a PyCUDA kernel?


If it is possible, how do we pass a list of class objects to a kernel in PyCUDA?

Please help us!

I doubt that it is possible and I don’t know how to do it. For pycuda, the primary data interoperability between the python side and the CUDA C++ side is via numpy arrays. Scalar POD values also work. As indicated in that documentation link, if you can figure out how to express python class data via the “Python buffer interface” then that may be another possibility. I don’t have a recipe for you. pycuda is not a NVIDIA product, and is not developed, maintained, or supported directly by NVIDIA.
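One way to approximate "an array of objects" with numpy, sketched below: a numpy structured dtype can mirror a C struct, so the objects' fields can be packed into a single array whose memory layout a CUDA C++ kernel could read as an array of structs. The field names mirror the ABCD example above and are illustrative, not a pycuda recipe.

```python
import numpy as np

# A structured dtype matching a hypothetical C struct:
#   struct ABCD { int var1; int var2; };
abcd_dtype = np.dtype([("var1", np.int32), ("var2", np.int32)])

# Pack the per-object field values into one contiguous array.
records = np.array([(0, 1), (10, 11), (20, 21)], dtype=abcd_dtype)

# This buffer is what would be copied to the device; on the C++ side a
# matching struct definition reads the same bytes back.
print(records["var1"])
print(records.itemsize)  # bytes per record: two 4-byte ints
```

Whether the C++ struct layout actually matches depends on alignment and padding on both sides, so this needs care; for simple same-sized fields like two int32s it lines up.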

Why not copy your class data into numpy arrays? That is the easy button.
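A sketch of that easy button: instead of passing Python objects, flatten their fields into plain numpy arrays (one array per field), run the kernel on those, and copy the results back into the objects. The class and field names mirror the ABCD example above; the in-place add stands in for what the CUDA C++ kernel would do on the GPU.

```python
import numpy as np

class ABCD:
    def __init__(self, var1=0, var2=0):
        self.var1 = var1
        self.var2 = var2

objects = [ABCD(i, i * 2) for i in range(4)]

# Struct-of-arrays layout: one numpy array per field, kernel-friendly.
var1 = np.array([o.var1 for o in objects], dtype=np.int32)
var2 = np.array([o.var2 for o in objects], dtype=np.int32)

# A CUDA C++ kernel would operate on var1/var2 directly; here the
# equivalent of method1 (var1 += 10) is applied per element on the CPU.
var1 += 10

# Copy the results back into the Python objects afterwards.
for o, v in zip(objects, var1):
    o.var1 = int(v)
```

The numpy arrays are the only thing that ever crosses the Python/GPU boundary; the class stays a host-side convenience.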

No, I don’t know how to do it via Cython, Numba, or any other Pythonic interface to CUDA.