Copying elements from a C-style array of std::array<float, 3> to another in a for_each leads to a misaligned address error

Hi,

I have some simple code that copies the contents of an array of std::array<float, 3> to another in a parallel for_each loop.

#include <execution>
#include <algorithm>
#include <array>
#include <vector>

using real_t = float;
using Vector3D = std::array<real_t, 3>;

int main() {
    Vector3D* v1 = new Vector3D[10];
    Vector3D* v2 = new Vector3D[10];

    std::vector<size_t> indices{0, 1, 2};

    // copy the contents of v1 to v2 for the indices in indices
    std::for_each(std::execution::par_unseq, indices.begin(), indices.end(), [=](size_t i) {
        v2[i] = v1[i];
    });
}

It compiles, but I get the following error when I run the code (I am using nvc++ 22.11):

terminate called after throwing an instance of 'thrust::system::system_error'
what(): for_each: failed to synchronize: cudaErrorMisalignedAddress: misaligned address
Aborted (core dumped)

Note that the code runs fine if I use a std::array of double (std::array<double, 3>). The only way I found to make it work with floats is to copy through a temporary variable instead of assigning directly from v1 to v2:

#include <execution>
#include <algorithm>
#include <array>
#include <vector>

using real_t = float;
using Vector3D = std::array<real_t, 3>;

int main() {
    Vector3D* v1 = new Vector3D[10];
    Vector3D* v2 = new Vector3D[10];

    std::vector<size_t> indices{0, 1, 2};

    // copy the contents of v1 to v2 for the indices in indices
    std::for_each(std::execution::par_unseq, indices.begin(), indices.end(), [=](size_t i) {
        const auto element = v1[i];
        v2[i] = element;
    });
}

Is there something I have misunderstood?

Regards,

Raf

Hi Raf,

"Is there something I have misunderstood?"

No, I believe the program should be fine. It looks like an alignment issue with how the arrays are addressed on the device. Another workaround would be to pad the array to an even number of elements (in this case, 4).
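For example, a minimal sketch of that padding workaround applied to your reproducer might look like the following; the fourth component is simply unused padding, and this is only a sketch of the idea, not a confirmed fix:

#include <execution>
#include <algorithm>
#include <array>
#include <vector>

using real_t = float;
// Pad to 4 components (16 bytes per element); the 4th component is unused.
using Vector3D = std::array<real_t, 4>;

int main() {
    Vector3D* v1 = new Vector3D[10];
    Vector3D* v2 = new Vector3D[10];

    std::vector<size_t> indices{0, 1, 2};

    // direct element copy, as in the original reproducer
    std::for_each(std::execution::par_unseq, indices.begin(), indices.end(), [=](size_t i) {
        v2[i] = v1[i];
    });
}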

I’ve added an issue report, TPR #32901, and sent it to engineering for investigation.

-Mat

Hi Raf,

Engineering just let me know that TPR #32901 was fixed in our 23.11 release. Apologies for the delayed notification.

-Mat