Tutorial for parallel programming in Python on an NVIDIA GPU?


What is the easiest way to do parallel programming in Python using an NVIDIA GPU? Is there a tutorial for that? I have already started the NVIDIA labs on Qwiklab. My question is whether I can use anything inside a kernel, for example a classifier object from scikit-learn.

I have an image detection task that uses sliding windows to check whether each window contains my object. Can I use the GPU in Python to check each window in parallel, one window per thread?


I think one of the easiest ways to get started is Numba or NumbaPro from Continuum Analytics. There is also PyCUDA. There are numerous tutorials for each of these; just google them. Numba allows you to write kernels in Python (subject to various rules and limitations), whereas PyCUDA effectively requires you to write the kernels in ordinary CUDA C/C++.


Neither of these approaches will allow you to call scikit-learn methods directly from GPU kernel code, AFAIK.
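The usual workaround is to split the pipeline: do the data-parallel part (e.g. extracting per-window features) in a GPU kernel, copy the results back to the host, and run the scikit-learn classifier there on the whole batch at once. A rough NumPy-only sketch of that split is below; `ThresholdClassifier` is a hypothetical stand-in for a trained scikit-learn estimator, kept here only so the example is self-contained.

```python
import numpy as np

def window_features(image, win=8, stride=4):
    # This double loop is the part you would move into a numba/pycuda
    # kernel, one thread per window. Here it runs on the CPU.
    feats, positions = [], []
    h, w = image.shape
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            patch = image[y:y + win, x:x + win]
            feats.append([patch.mean(), patch.std()])
            positions.append((y, x))
    return np.asarray(feats), positions

class ThresholdClassifier:
    # Hypothetical stand-in for a fitted scikit-learn estimator;
    # it only mimics the predict() call shape.
    def predict(self, X):
        return (X[:, 0] > 0.5).astype(int)

image = np.random.rand(32, 32).astype(np.float32)
feats, positions = window_features(image)       # GPU-friendly stage
labels = ThresholdClassifier().predict(feats)   # host-side classifier stage
hits = [pos for pos, lab in zip(positions, labels) if lab == 1]
```

Batching all windows into one `predict` call also tends to be much faster than classifying windows one at a time, with or without a GPU.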

There is also a scikit-cuda project which may be of interest:


Thanks. Where can I find tutorials about what I can use inside a kernel?

Try this… Python tutorial