Greetings to the programming community,
As a beginner, I recently started studying the “Accelerated Computing with CUDA Python” course offered by NVIDIA. I have successfully installed all the necessary drivers from the NVIDIA website to configure my environment.
Currently, I am attempting to execute the following code in a Jupyter Notebook:
import numpy as np
from numba import vectorize

@vectorize(['int64(int64, int64)'], target='cuda')  # type signature and target are required for the GPU
def add_ufunc(x, y):
    return x + y

a = np.array([1, 2, 3, 4], dtype=np.int64)     # example inputs; my real arrays are defined in an earlier cell
b = np.array([10, 20, 30, 40], dtype=np.int64)
c = add_ufunc(a, b)
However, I'm encountering an error with the following traceback (cut off at the end):
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[7], line 10
6 @vectorize(['int64(int64, int64)'], target='cuda') # Type signature and target are required for the GPU
7 def add_ufunc(x, y):
8 return x + y
---> 10 c = add_ufunc(a,b)
File c:\Users\korob\anaconda3\lib\site-packages\numba\cuda\vectorizers.py:36, in CUDAUFuncDispatcher.__call__(self, *args, **kws)
25 def __call__(self, *args, **kws):
26 """
27 *args: numpy arrays or DeviceArrayBase (created by cuda.to_device).
28 Cannot mix the two types in one call.
(...)
34 the input arguments.
35 """
---> 36 return CUDAUFuncMechanism.call(self.functions, args, kws)
File c:\Users\korob\anaconda3\lib\site-packages\numba\np\ufunc\deviceufunc.py:250, in UFuncMec
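One detail I noticed in the traceback: the docstring for CUDAUFuncDispatcher.__call__ says the inputs can be either NumPy arrays or device arrays created with cuda.to_device, as long as the two kinds are not mixed in one call. In case the implicit host-to-device copy is part of the problem, here is a minimal variant I plan to try, moving the data to the GPU explicitly first (assuming a and b are the int64 arrays from above):

from numba import cuda

d_a = cuda.to_device(a)        # copy inputs to the GPU explicitly
d_b = cuda.to_device(b)

d_c = add_ufunc(d_a, d_b)      # call the ufunc with device arrays only
c = d_c.copy_to_host()         # copy the result back to the host
print(c)

I mention this only as something I intend to test; I don't know yet whether it avoids the error.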
I am working on Windows 11, and my computer is equipped with an NVIDIA GeForce GTX 1060 with Max-Q Design graphics card.
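In case version details matter for diagnosing this, here is a small snippet I can run to report my Numba/CUDA setup, and I'm happy to post its output:

import numba
from numba import cuda

print(numba.__version__)       # installed Numba version
print(cuda.is_available())     # whether Numba can see a usable CUDA GPU
cuda.detect()                  # prints the detected devices and driver support info

(Alternatively, running "numba -s" from the command line prints a full system report.)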
I would greatly appreciate any assistance, as I have already spent two days trying to resolve this issue. What could be causing this error, and what changes should I make to run this code successfully in a Jupyter Notebook?
Thank you in advance for any help or guidance you can provide.