Using the GPU and Numba in a data-thinning function

Hi All,

I am trying to implement the function below on the GPU, but I keep getting errors such as "sqrt is unknown" and type-casting problems. I would appreciate it if somebody could help me figure out what is wrong with this code.

@vectorize(['float32(float32, float32)'], target='gpu')
def thin_pts(pts, thr):
    # pts is a 2D array of shape (n, 2) and thr is a single value
    out_x = np.array()
    out_y = np.array()
    while 1:
        n = pts.shape[0]
        if n > 1:
            dist = sqrt((pts[0, 0] - pts[1:n, 0])**2 + (pts[0, 1] - pts[1:n, 1])**2)
            if np.all(dist > thr):
                out_x = np.append(out_x, pts[0, 0])
                out_y = np.append(out_y, pts[0, 1])
            pts = numpy.delete(pts, (0), axis=0)
        else:
            break
    return (out_x, out_y)
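
In case it helps, here is what I have worked out so far. As far as I understand, @vectorize compiles an elementwise ufunc, so the decorated function only ever receives scalars: it cannot take the whole (n, 2) array, and calls like np.append and np.delete are not available inside it (which would explain the type-casting errors). The bare sqrt is an undefined name, hence the "sqrt is unknown" message, and I believe current Numba spells the GPU target 'cuda' rather than 'gpu'. Below is a minimal, untested sketch of the direction I think this has to take: only the scalar distance computation lives in the GPU ufunc, while the thinning loop runs on the host. The pair_dist helper and its signature are my own naming, and I am assuming the original loop is meant to always drop point 0 after testing it, which makes it equivalent to testing each point against all the points after it.

import math

import numpy as np
from numba import vectorize

@vectorize(['float32(float32, float32, float32, float32)'], target='cuda')
def pair_dist(x0, y0, x1, y1):
    # Runs once per element on the GPU; math.sqrt works on scalars in
    # Numba, unlike a bare sqrt, which is an undefined name.
    return math.sqrt((x0 - x1)**2 + (y0 - y1)**2)

def thin_pts(pts, thr):
    # Keep point i only if it is farther than thr from every later point,
    # which matches the original loop under the reading described above.
    pts = np.asarray(pts, dtype=np.float32)
    xs = np.ascontiguousarray(pts[:, 0])
    ys = np.ascontiguousarray(pts[:, 1])
    keep = []
    for i in range(len(pts) - 1):
        # The scalars broadcast against the remaining coordinate slices,
        # and the distance evaluation itself runs on the device.
        dist = pair_dist(xs[i], ys[i], xs[i + 1:], ys[i + 1:])
        if np.all(dist > thr):
            keep.append(i)
    return xs[keep], ys[keep]

One thing I am not sure about is performance: every call to pair_dist copies its slices to the device, so for a large point set it might be better to compute the whole pairwise distance matrix in a single ufunc call and do the thinning from that.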