I think I may have overthought this. My original assumption was that, since ARM processors are bi-endian, I could switch one between little- and big-endian mode at runtime. However, after doing some more research (courtesy of the gcc man page), I found that there are compiler flags that tell gcc to generate code for an ARM processor running in big- or little-endian mode (quoted below):
-mlittle-endian
    Generate code for a processor running in little-endian mode. This is the default for all standard configurations.

-mbig-endian
    Generate code for a processor running in big-endian mode; the default is to compile code for a little-endian processor.

-mwords-little-endian
    This option only applies when generating code for big-endian processors. Generate code for a little-endian word order but a big-endian byte order. That is, a byte order of the form ‘32107654’. Note: this option should only be used if you require compatibility with code for big-endian ARM processors generated by versions of the compiler prior to 2.8. This option is now deprecated.
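To sanity-check which endianness a given build actually uses, a tiny host-side test like the following (my own sketch, not from the man page) prints the in-memory byte layout of a known 32-bit value. A little-endian build prints 04 03 02 01; a big-endian build prints 01 02 03 04:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t value = 0x01020304;              /* known multi-byte pattern */
        unsigned char *bytes = (unsigned char *)&value;

        /* Print each byte in memory order to reveal the endianness. */
        for (int i = 0; i < 4; i++)
            printf("%02x ", bytes[i]);
        printf("\n");
        return 0;
    }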
Can anyone confirm my new assumption that compiling my CUDA program with -mbig-endian will let it interpret big-endian data files correctly, and that the GPU will understand the big-endian data?
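For context, if the flag doesn't do what I hope, the fallback I have in mind is swapping bytes on the host before copying the buffer to the device. A minimal sketch, assuming a little-endian host and a file of raw 32-bit big-endian integers (data.bin and bswap32 are my own placeholder names):

    #include <stdio.h>
    #include <stdint.h>

    /* Placeholder helper: reverse the byte order of one 32-bit word. */
    static uint32_t bswap32(uint32_t v)
    {
        return (v >> 24) | ((v >> 8) & 0x0000ff00u) |
               ((v << 8) & 0x00ff0000u) | (v << 24);
    }

    int main(void)
    {
        /* Assumption: data.bin holds raw 32-bit big-endian integers. */
        FILE *f = fopen("data.bin", "rb");
        if (!f) return 1;

        uint32_t buf[256];
        size_t n = fread(buf, sizeof(uint32_t), 256, f);
        fclose(f);

        /* Swap each word into host (little-endian) order; this buffer
           would then be handed to cudaMemcpy as-is. */
        for (size_t i = 0; i < n; i++)
            buf[i] = bswap32(buf[i]);

        printf("read %zu words\n", n);
        return 0;
    }

I'd rather avoid this extra pass over the data if -mbig-endian already handles it, hence the question.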