Hello everyone,
I'm trying to investigate the performance difference between running code on the GPU with CUDA and running it without. So I wrote the following program:
from numba import jit, cuda
import numpy as np
# to measure exec time
from timeit import default_timer as timer

# normal function to run on cpu
def func(a):
    for i in range(10000000):
        a[i] += 1

# function optimized to run on gpu
@jit(target="cuda")
def func2(a):
    for i in range(10000000):
        a[i] += 1

if __name__ == "__main__":
    n = 10000000
    a = np.ones(n, dtype=np.float64)
    b = np.ones(n, dtype=np.float32)

    start = timer()
    func(a)
    print("without GPU:", timer() - start)

    start = timer()
    func2(a)
    print("with GPU:", timer() - start)
func(a) seems to work properly, but func2(a), which is supposed to run Python with CUDA, doesn't work, and the program returns the following error:
without GPU: 7.720662347999678
/usr/local/lib/python3.6/dist-packages/numba/cuda/decorators.py:116: UserWarning: autojit is deprecated and will be removed in a future release. Use jit instead.
  warn('autojit is deprecated and will be removed in a future release. Use jit instead.')
Traceback (most recent call last):
  File "GPU vs CPU.py", line 26, in <module>
    func2(a)
  File "/usr/local/lib/python3.6/dist-packages/numba/cuda/dispatcher.py", line 42, in __call__
    return self.compiled(*args, **kws)
  File "/usr/local/lib/python3.6/dist-packages/numba/cuda/dispatcher.py", line 38, in compiled
    self._compiled = autojit(self.py_func, **self.targetoptions)
  File "/usr/local/lib/python3.6/dist-packages/numba/cuda/decorators.py", line 117, in autojit
    return jit(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/numba/cuda/decorators.py", line 56, in jit
    raise NotImplementedError("bounds checking is not supported for CUDA")
NotImplementedError: bounds checking is not supported for CUDA
I'm not able to interpret this error, so I would be grateful for any help.
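From the Numba documentation, I suspect @jit(target="cuda") is not the intended API and that I should instead write an explicit kernel with @cuda.jit plus a launch configuration. This is just my guess at what that would look like (the kernel name and the launch parameters below are my own choices), so please correct me if this is the wrong direction:

from numba import cuda
import numpy as np

@cuda.jit
def func2_kernel(a):
    # each GPU thread increments one element of the array
    i = cuda.grid(1)
    if i < a.size:
        a[i] += 1

n = 10000000
a = np.ones(n, dtype=np.float64)
# 256 threads per block is just a common default I picked
threads_per_block = 256
blocks_per_grid = (n + threads_per_block - 1) // threads_per_block
func2_kernel[blocks_per_grid, threads_per_block](a)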
Thanks in advance
Khaled