Integer arithmetic overflow

Is it defined how integer arithmetic overflows?

For instance, is it guaranteed that adding or multiplying two large unsigned ints will “gracefully” overflow, i.e. wrap around modulo 2^32?

I imagine this is somewhat hardware specific, so I’ll take GeForce 8800 as the example platform.

Let’s not take GeForce 8800.

On recent GPUs (Kepler and newer) using recent versions of CUDA (8.x, 9.x, 10.x), the language available within a single CUDA thread is expected to be compliant with a particular ISO C++ standard, apart from the enumerated limitations:

[url]https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#c-cplusplus-language-support[/url]

The behavior should comply with that standard.

In C and C++, AFAIK, unsigned integer overflow wraps modulo 2^N, and signed integer overflow is UB.

This. To wit (from the C++ standard): “Unsigned integers, declared unsigned, shall obey the laws of arithmetic modulo 2^n where n is the number of bits in the value representation of that particular size of integer.”
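
As a concrete illustration of that wrap-around rule, here is a minimal CUDA sketch (kernel name and values are purely illustrative, not from the original posts):

[code]
#include <cstdio>
#include <climits>

// Illustrative kernel: unsigned addition and multiplication wrap modulo 2^32.
__global__ void overflow_demo()
{
    unsigned int a = UINT_MAX;   // 4294967295
    unsigned int b = 2u;
    unsigned int sum  = a + b;   // 4294967297 mod 2^32 = 1
    unsigned int prod = a * b;   // 8589934590 mod 2^32 = 4294967294
    printf("sum  = %u\n", sum);
    printf("prod = %u\n", prod);
}

int main()
{
    overflow_demo<<<1, 1>>>();
    cudaDeviceSynchronize();
    return 0;
}
[/code]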

The fact that signed integer overflow results in undefined behavior can be (and is) exploited by the compiler for certain optimizations, so from a performance perspective it is advisable to use ‘int’ for all integer data unless there is a good reason to choose some other integer type.
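
To sketch the kind of code where this matters (hypothetical kernels, not from the thread): with an ‘int’ induction variable the compiler may assume the index never wraps and can strength-reduce the 64-bit address arithmetic, whereas with ‘unsigned int’ it has to preserve modulo-2^32 wrapping.

[code]
// Hypothetical kernels for illustration: identical grid-stride loops except for the index type.
__global__ void scale_signed(float *out, const float *in, float s, int n)
{
    // signed 'i' cannot legally overflow, so 'out + i' / 'in + i' can be
    // strength-reduced to simple 64-bit pointer increments
    for (int i = threadIdx.x; i < n; i += (int)blockDim.x)
        out[i] = s * in[i];
}

__global__ void scale_unsigned(float *out, const float *in, float s, unsigned int n)
{
    // unsigned 'i' must wrap modulo 2^32, which can force the compiler to
    // recompute the 64-bit address from the 32-bit index on each iteration
    for (unsigned int i = threadIdx.x; i < n; i += blockDim.x)
        out[i] = s * in[i];
}
[/code]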