On recent GPUs (Kepler and newer) using recent versions of CUDA (8.x, 9.x, 10.x), the language available within a single CUDA thread is expected to comply with a particular ISO C++ standard, apart from certain enumerated limitations.
The fact that signed integer overflow results in undefined behavior can be (and is) exploited by the compiler for certain optimizations, so from a performance perspective it is advisable to use ‘int’ for all integer data unless there is a good reason to choose some other integer type.