Char1 int1 float1 -- why scalar built-in vector types?

why would I want int1 vs int in my code?

I think this question has come up before.

One possible reason is to write code that can flexibly use vector types of varying size. The first element of the vector type can always be referred to as .x


This matches my recollection: It was done to achieve full orthogonality in the vector type system, which creates maximum flexibility when generating type names via token pasting or when accessing structure members (as Robert Crovella already pointed out).

Contrived example:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>  // memset

#define STRINGIFY2(a) #a
#define STRINGIFY(a) STRINGIFY2(a)
#define PASTE2(a,b) a##b
#define PASTE(a,b) PASTE2(a,b)

#define T      int 
#define WIDTH  4     // try 1,2,3,4

typedef PASTE(T,WIDTH) my_vec_type;

T horizontal_sum (my_vec_type a)
{
    T sum = a.x;
#if WIDTH >= 2
    sum += a.y;
#if WIDTH >= 3
    sum += a.z;
#if WIDTH == 4
    sum += a.w;
#endif // WIDTH == 4
#endif // WIDTH >= 3
#endif // WIDTH >= 2
    return sum;
}

int main (void)
{
    my_vec_type a;
    T sum;
    memset (&a, 0x01, sizeof a);
    sum = horizontal_sum (a);
    printf ("horizontal sum is: %08x\n", sum);
    return EXIT_SUCCESS;
}


Hello, it actually matters to people who handle generalized vector operations in terms of algebra. You often end up padding the extra components with unit elements; nonetheless, those “void” operations are still performed.


Yes, like applying a scaling factor. Something to keep in mind: these days a lot of people use GPUs not for the G in the name but for their computational capabilities; they don’t care much about pixels and frame buffers. So other needs and concerns are coming into prominence.
