Python and CUDA

Hi,

It seems that Intel is providing full support for Python, including API support.

Is it possible for NVIDIA to provide API support for C++/Python, especially for new APIs like CUDA 8,
maybe by reusing PyCUDA?

It would improve efficiency by using the most recent APIs and increase performance on both sides.

Have you had a chance to try PyCUDA to see whether it fits your needs? Or maybe Numba (http://numba.pydata.org/) does what you want?
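
For reference, here is a minimal sketch of what the Numba route looks like (assuming numba is installed and a CUDA-capable GPU is present; the kernel and array sizes are only illustrative):

```python
import numpy as np
from numba import cuda

@cuda.jit
def add_kernel(x, y, out):
    i = cuda.grid(1)              # absolute thread index
    if i < x.size:
        out[i] = x[i] + y[i]

n = 1000000
x = np.arange(n, dtype=np.float32)
y = 2 * x
out = np.zeros_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
# Passing NumPy arrays lets Numba handle the host<->device copies in this sketch.
add_kernel[blocks, threads_per_block](x, y, out)
```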

Hi,

Numba's API is incomplete (almost unusable), and Numba Pro is better but it is a paid product.

PyCUDA provides some functionality, but it is missing items in the API such as cuDNN and the latest CUDA 8 graph API.
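
For context, a hedged sketch of what PyCUDA already covers today: compiling a raw CUDA C kernel at runtime and launching it on NumPy-backed GPU arrays (assumes pycuda and the CUDA toolkit are installed; the kernel is only illustrative):

```python
import numpy as np
import pycuda.autoinit                      # creates a context on the default device
import pycuda.gpuarray as gpuarray
from pycuda.compiler import SourceModule

# Compile a small CUDA C kernel at runtime.
mod = SourceModule("""
__global__ void scale(float *a, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) a[i] *= factor;
}
""")
scale = mod.get_function("scale")

a = gpuarray.to_gpu(np.arange(1024, dtype=np.float32))   # NumPy array -> GPU
scale(a, np.float32(2.0), np.int32(a.size),
      block=(256, 1, 1), grid=(4, 1))
print(a.get()[:5])                           # copy back and inspect
```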

For example, Intel is actively supporting Python through these APIs:

Intel® SDK for OpenCL™ Applications (a JIT compiler for Intel CPUs)

Intel® Distribution for Python* (includes the DAAL Python library, an accelerator for Intel CPUs)

So, if NVIDIA actively supported an official Python interface on top of the CUDA C API (maybe reusing PyCUDA and NumPy), it would standardize the ecosystem and make it easy to build new tools.
It would also benefit NVIDIA in return.
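
To make the idea concrete, here is a hedged sketch of what a very thin Python layer over the CUDA C runtime API could look like, using only ctypes; the library name is an assumption and varies by platform and CUDA version:

```python
import ctypes

# Assumption: Linux, with libcudart on the loader path (exact name/version may differ).
libcudart = ctypes.CDLL("libcudart.so")

count = ctypes.c_int()
err = libcudart.cudaGetDeviceCount(ctypes.byref(count))
if err != 0:                                 # 0 == cudaSuccess
    raise RuntimeError("cudaGetDeviceCount failed with error %d" % err)

version = ctypes.c_int()
libcudart.cudaRuntimeGetVersion(ctypes.byref(version))
print("CUDA runtime %d, %d device(s) found" % (version.value, count.value))
```

An officially supported interface would go much further (memory management, kernel launches, cuDNN, NumPy interoperability), but this is the kind of low-level plumbing it would standardize.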

Open-source projects could also support it, like Google's TensorFlow.

Intel Python uses MKL. You have to pay to use MKL.

I think it is unrealistic to expect support for CUDA 8 features in PyCUDA at this time, given that CUDA 8 hasn’t even been finalized yet (the final version is expected this month, from what I understand).