Unexpected result with 64-bit integer left-shift

The following code produces an unexpected (and, I would say, incorrect) result when a 31-bit left shift is applied to a 64-bit integer: the sign of the result flips, though not consistently across contexts. Am I missing some subtle peculiarity here?

#include <stdio.h>
#include <stdint.h>

__global__ void gpu(int32_t w, int32_t u) {
        printf("gpu:\n");
        int64_t d = w;
        printf("<< 30: %lli (%llx)\n", (long long)(d << 30), (unsigned long long)(d << 30));
        printf("<< 31: %lli (%llx)\n", (long long)(d << 31), (unsigned long long)(d << 31));
        printf("<< 32: %lli (%llx)\n", (long long)(d << 32), (unsigned long long)(d << 32));

        if ((d << 31) < 0)
                printf("<< 31: negative\n");
        else
                printf("<< 31: positive\n");

        int32_t v = (d << 31) / u;
        printf("v: %i\n", v);
        if (v < 0)
                printf("div: negative\n");
        else
                printf("div: positive\n");
}

void cpu(int32_t w, int32_t u) {
        printf("cpu:\n");
        int64_t d = w;
        printf("<< 30: %lli (%llx)\n", (long long)(d << 30), (unsigned long long)(d << 30));
        printf("<< 31: %lli (%llx)\n", (long long)(d << 31), (unsigned long long)(d << 31));
        printf("<< 32: %lli (%llx)\n", (long long)(d << 32), (unsigned long long)(d << 32));

        if ((d << 31) < 0)
                printf("<< 31: negative\n");
        else
                printf("<< 31: positive\n");

        int32_t v = (d << 31) / u;
        printf("v: %i\n", v);
        if (v < 0)
                printf("div: negative\n");
        else
                printf("div: positive\n");
}

int main() {
        int32_t w = 165963120;
        int32_t u = 581830501;

        cpu(w, u);
        printf("\n");
        gpu<<<1,1>>>(w, u);

        cudaDeviceSynchronize();

        return 0;
}

Output:

cpu:
<< 30: 178201543185530880 (279195c00000000)
<< 31: 356403086371061760 (4f232b800000000)
<< 32: 712806172742123520 (9e4657000000000)
<< 31: positive
v: 612554834
div: positive

gpu:
<< 30: 178201543185530880 (279195c00000000)
<< 31: -356403086371061760 (fb0dcd4800000000)
<< 32: 712806172742123520 (9e4657000000000)
<< 31: positive
v: -612554834
div: negative

Version:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Feb_14_21:12:58_PST_2021
Cuda compilation tools, release 11.2, V11.2.152
Build cuda_11.2.r11.2/compiler.29618528_0
Device 0: "GeForce GT 710"
  CUDA Driver Version / Runtime Version          11.2 / 11.5
  CUDA Capability Major/Minor version number:    3.5

Hi, a conversion from int64_t to int32_t is implementation-defined when the value doesn't fit, because the target type is signed.

It doesn't look like the type cast is the problem, though: the value comes out fine after the cast. It's the bit shift that makes it flip to negative.

Ok, you are right, it should be positive. I ran the code on my PC and I don't see the bug; the last two lines are:
v: 612554834
div: positive

Versions:

Device: NVIDIA Corporation GA104 [GeForce RTX 3060]

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Mon_Oct_11_21:27:02_PDT_2021
Cuda compilation tools, release 11.4, V11.4.152
Build cuda_11.4.r11.4/compiler.30521435_0

$ gcc --version
gcc (Debian 10.2.1-6) 10.2.1 20210110

Ok thanks, perhaps this is specific to certain compute capability versions then.
