I have been working on this for a few hours with no luck and as such decided to seek out help here.
This is the error I am running into on my atomicAdd call:
IntelliSense: identifier “atomicAdd” is undefined
Could this be simply missing a header or am I missing much more?
Both of these values are floats by the way.
I am using Visual Studio 2010.
I am quite new to this and really don’t know how to go about fixing it.
Any and all help is appreciated.
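For context, here is a minimal sketch of the kind of call in question (the actual code isn't shown in the post, so the kernel and names below are assumptions, not the poster's code):

```cuda
// Minimal sketch of atomicAdd on floats; names are illustrative.
// The float overload atomicAdd(float*, float) only exists when
// compiling for compute capability 2.0 (sm_20) or later.
__global__ void accumulate(float *sum, const float *vals, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        atomicAdd(sum, vals[i]);  // the call IntelliSense flags as undefined
}
```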
A two-minute search turned up http://stackoverflow.com/questions/5994859/some-issue-with-atomic-add-in-cuda-kernel-operation
Specify at least compute capability 1.1 to enable atomic operations. The flag on Linux is -arch=sm_11 (other possibilities are sm_13, sm_20 and sm_21). I don't know the Windows equivalent, but check the atomic add example in the SDK to see the exact compile flags.
If the architecture is not specified, the compiler assumes the lowest one automatically.
More here: http://forums.nvidia.com/index.php?showtopic=40698
So I went into the project properties in VS2010 and changed that, but it still doesn't work.
Also, it exits with code -1.
Did you try the SDK example for atomic add (simpleAtomicIntrinsics)? If it runs, check the project settings for it.
Atomic addition of floats is only supported with compute capability 2.0 and later. You need to pass the -arch=sm_20 option to nvcc.
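A sketch of the command line (the file and output names are placeholders, not from the thread):

```shell
# Compile for compute capability 2.0 so atomicAdd(float*, float) is available.
nvcc -arch=sm_20 kernel.cu -o app
```

In Visual Studio, the equivalent setting should be under the project's CUDA build properties (the device code generation option, e.g. compute_20,sm_20), rather than a plain compiler switch.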