NEW USER - In VS 2013 I continually get IntelliSense errors on CUDA

I am using CUDA 7.0, in VS 2013.

I get an error (“IntelliSense: expected an expression”) on this line every time I compile:
// Launch a kernel on the GPU with one thread for each element.
addKernel<<<1, size>>>(dev_c, dev_a, dev_b);

The error marker is always under the third <.

It compiles and executes properly, however.

Am I missing something in the VS setup?

Try googling “cuda red underline”.

No help, other than comments from people saying to simply ignore it: “nature of the beast” and “NVIDIA has chosen not to fix this for many builds.”

The code compiles and runs; I can hover over a CUDA keyword and get definitions, and I can start typing a CUDA keyword and get suggestions.

I tried the fixes offered for VS 2010/2012 (I couldn’t find any that really explained it for VS 2013 and CUDA 7). Applying those fixes made it worse: I still got the same error, but I also started getting lots of warnings about conflicting files, files included from other projects, and resulting unstable builds.

I backed out all the fixes and am back to square one. (Also, usertype.dat does not exist anywhere on my computer - note that this was also reported by someone else with similar problems, and no resolution was found.)

So for now, I am simply going to ignore this error unless someone posts a true fix for this ‘issue’.

Thank you anyway for the suggestion.

As far as I know you can’t fix the IntelliSense errors at the <<<,
but you can fix other things:

#ifdef __INTELLISENSE__
// Give the IntelliSense parser stand-ins for CUDA-only symbols.
// nvcc never sees these, since it does not define __INTELLISENSE__.
#define __launch_bounds__(a,b)
void __syncthreads(void);
void __threadfence(void);
#endif

Thanks KlausT. I haven’t had any problems with them yet, but I am just starting.

I should also note that the IntelliSense error did not show up until AFTER I loaded the latest Nsight update yesterday. (And I tried reloading everything - SDKs, etc. - after that, just to make sure.)

Well, after I posted the “no problems” statement above, it seems that __syncthreads also doesn’t work, along with a bunch of other CUDA functions - enough that I can’t compile a number of the SAMPLES.

I uninstalled my entire VS 2013 installation, a leftover VS 2010, VS 2013 Community, and all the NVIDIA files.

I just reinstalled everything, starting with VS 2013 only (no 2010), then the Windows SDK, followed by the NVIDIA toolkit/CUDA 7 (and the update posted 5/31/15).

Going to do the Google thing on CUDA and IntelliSense again, and I will try to walk through that set of fixes.

This is frustrating.

Very interesting. After searching Google (some folks just say to “compile them anyway; sometimes IntelliSense will just start working”) and another reboot, without doing anything else, IntelliSense now seems to be working in all the samples. AFTER I had given up, I tried a “Rebuild all” just to see what it would do - and lo and behold, all the samples compiled.

No errors, but there were 559 warnings, almost all about “macro redefined” or conversion losses (e.g., float to int).


My confusion level is climbing… I just ran the Mandelbrot sample (and some others), and while it “successfully compiles” and runs, it also reports 20 errors, all dealing with IntelliSense and CUDA - many for __syncthreads, threadIdx, atomicAdd, dim3, and others.

Adding the short code block from KlausT fixed __syncthreads, but everything else stayed the same. I tried adding the others to the code block, but no joy.

What is even stranger is that it is not consistent (for example, threadIdx.x is accepted in some places in the code and rejected a few lines later).


I am going to create a new project from scratch, hand-enter a sample from one of the teaching books I have (CUDA by Example), ignore all the IntelliSense errors, and see if it will work.