Kernel issues: “expression has no effect” and “expression must have arithmetic or enum type”

Hi,
I have a problem with my CUDA kernel (see below), which produces the following errors several times:

kernel.cu(): warning: expression has no effect
kernel.cu(): error: expression must have arithmetic or enum type

But I don’t understand where they come from (see “Error: must have arithmetic type” - CUDA Programming and Performance - NVIDIA Developer Forums).

Can someone help me? Thanks.

My Kernel:

mod = SourceModule("""
  __global__ void convolve_wavelet(float *histogram, float **morlet_re, float **morlet_im, float **out_re, float **out_im, float **out, int const histogram_len, int const period_len)
  {
      #include <cmath>
      const int index = threadIdx.x + blockIdx.x * blockDim.x;

      if (index <= (histogram_len-1)*(period_len-1))
      {
          out_re[int(index/histogram_len),int(index%(histogram_len))]=0;
          out_im[int(index/histogram_len),int(index%(histogram_len))]=0;

          for(int k=0;k<histogram_len;k++)
          {
              out_re[int(index/histogram_len),int(index%(histogram_len))]+=histogram[k]*morlet_re[int(index/histogram_len),k-int(index%(histogram_len))];
              out_im[int(index/histogram_len),int(index%(histogram_len))]+=histogram[k]*morlet_im[int(index/histogram_len),k-int(index%(histogram_len))];
          }

          out[int(index/histogram_len),int(index%(histogram_len))]=sqrt(out_re[int(index/histogram_len),int(index%(histogram_len))]*out_re[int(index/histogram_len),int(index%(histogram_len))]+out_im[int(index/histogram_len),int(index%(histogram_len))]*out_im[int(index/histogram_len),int(index%(histogram_len))]);
      }
  }
  """)

To start, I’m not sure why you’re doing an #include inside the kernel body; get rid of that. The other post said the error can come from using a float as an index into an array, which in fact it seems like you are doing. You might want to review how to properly cast in C:

https://www.cs.tut.fi/~jkorpela/round.html

Basically you should be doing:

out_re[(int)(index/histogram_len),(int)(index%(histogram_len))]=0;

Note the parentheses around the cast type, (int), which you do not have in your code.

Thank you for your answer… it was a silly question on my part (I thought int(a_float) worked…), but it works now!
But I still have 2 more small issues:

  1. How can I include functions such as pow and sqrt if I can't include cmath? (I had the same problem with complex.h...)
  2. When I ran a test without pow or sqrt, I saw that index was limited by my block definition
    func(..., block=(250,1,1))


    which is logical, but I wonder how to make the kernel continue to increment index until it reaches (histogram_len-1)*(period_len-1) inside the kernel?

Thank you for your time.

  1. Perhaps in whatever programming language you’re using that’s the only way to specify includes; I do not know. Those are typically done at the beginning of your .cu CUDA source file, but perhaps that’s the way to do it with what you’re using (see the note on built-in device math functions at the end of this reply).

  2. As far as CUDA questions go regarding kernel launches, there are multiple tutorials online that explain how to manage launches of threads/blocks/grids. Here is a simple example of a 2D kernel launch:
    https://code.google.com/p/stanford-cs193g-sp2010/wiki/TutorialMultidimensionalKernelLaunch

It seems you only have a 1D kernel so far, which can be equated to a simple vector addition example:
https://www.olcf.ornl.gov/tutorials/cuda-vector-addition/
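If you want the kernel itself to keep walking the index space regardless of how many threads you launch, the usual pattern is a grid-stride loop. This is only a minimal sketch of the pattern; the name process_all and the placeholder body are invented for illustration, not your actual convolution:

__global__ void process_all(float *out, int histogram_len, int period_len)
{
    // The original kernel wants every index from 0 up to
    // (histogram_len - 1) * (period_len - 1) inclusive.
    int total = (histogram_len - 1) * (period_len - 1) + 1;

    // Total number of threads in the whole launch.
    int stride = blockDim.x * gridDim.x;

    // Grid-stride loop: each thread handles index, index + stride, index + 2*stride, ...
    // so the whole range is covered no matter how many blocks were launched.
    for (int index = threadIdx.x + blockIdx.x * blockDim.x; index < total; index += stride)
    {
        out[index] = 0.0f;   // placeholder for the real per-element work
    }
}

The other option is simply to launch enough blocks from the host so there is one thread per element; with PyCUDA that should just mean passing a grid argument next to block, e.g. something like grid=((total + 249) // 250, 1) together with block=(250, 1, 1). I haven’t tried that against your exact setup, so treat it as a starting point.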
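On your point 1: as far as I know you don’t need cmath at all in device code. nvcc (which PyCUDA invokes for you) makes the standard math functions such as sqrtf, powf, sqrt and pow available to kernels without any include. A minimal sketch; the kernel name and arguments here are made up for illustration:

__global__ void magnitude(const float *re, const float *im, float *out, int n)
{
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    if (i < n)
    {
        // sqrtf is a built-in device function; no #include is required.
        out[i] = sqrtf(re[i] * re[i] + im[i] * im[i]);
    }
}

For complex arithmetic, CUDA ships its own header (cuComplex.h); if you do include it, that has to happen at file scope in the SourceModule string, i.e. before the __global__ function, not inside its body.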

I don’t think this is valid C/C++ syntax:

out_re[(int)(index/histogram_len),(int)(index%(histogram_len))]=0;

C/C++ doesn’t separate array subscripts with commas, if that is what the intent is.

Good point, totally missed that. Not sure how that’s even compiling for jprog.
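Actually, thinking about it, the comma inside the brackets is parsed as C’s comma operator, so it does compile; it just evaluates the left operand and throws it away, which is exactly where the “expression has no effect” warning in the original post comes from. If the intent is 2D indexing, it has to be spelled out explicitly. A rough sketch, not jprog’s actual kernel, and assuming the output arrives as a flat buffer (which is what PyCUDA usually passes from a numpy array):

__global__ void zero_out(float *out_re, int histogram_len, int period_len)
{
    int index = threadIdx.x + blockIdx.x * blockDim.x;
    if (index <= (histogram_len - 1) * (period_len - 1))
    {
        int row = index / histogram_len;   // integer division of two ints needs no cast
        int col = index % histogram_len;

        // Flat buffer indexed by hand; with a genuine float** you would write out_re[row][col].
        out_re[row * histogram_len + col] = 0.0f;
    }
}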