Hey guys. I’m writing a binary data translator, and I’m starting to deal with the issue of big- vs. little-endian data. I understand byte-swapping itself just fine; that’s the easy part. But there’s one little issue I’m getting confused on: whether endian detection should be implemented as a preprocessor step or as a runtime check.
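For reference, the runtime version I have in mind is the classic “inspect the first byte of a known integer” trick. This is just a minimal sketch in standard C++ (the function name is my own, nothing special about it):

```cpp
#include <cstdint>
#include <cstring>

// Runtime check: store a known multi-byte value, then look at the byte
// that sits at the lowest address.
bool is_little_endian()
{
    std::uint32_t value = 1;
    unsigned char first_byte;
    std::memcpy(&first_byte, &value, 1); // read the lowest-addressed byte
    return first_byte == 1;              // 1 in the low byte => little endian
}
```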
The basic question I have is this: is the endianness of data in a program determined by the machine running the compiled code, by the compiler that produced it, or by both?
And if this is something decided at compile time, are there any common predefined macros in C++ to identify the endianness, or is this the sort of little macro you’d have to define yourself, based on your knowledge of your compiler and its settings? I’ve looked around for such a macro, but can’t seem to find any standard predefined ones. Yet I have seen at least one example of endianness being checked as a preprocessor step, so I’m just looking for the proper way to set all of this up. If I have a definite way of knowing my endianness, then my job is really quite easy.
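The preprocessor-style check I saw looked roughly like the sketch below. As far as I can tell these macros are compiler extensions (GCC and Clang predefine them), not part of the C++ standard, so I assume you’d need a fallback for other compilers:

```cpp
// Compile-time check, assuming a GCC/Clang-style compiler that predefines
// __BYTE_ORDER__ and the __ORDER_*_ENDIAN__ constants (compiler extensions,
// not standard C++). MY_LITTLE_ENDIAN is just my own placeholder name.
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
    #define MY_LITTLE_ENDIAN 1
#elif defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
    #define MY_LITTLE_ENDIAN 0
#else
    #error "Unknown byte order -- define MY_LITTLE_ENDIAN manually"
#endif
```

I gather that newer compilers also expose this through `std::endian` in the `<bit>` header (C++20), where you can test `std::endian::native == std::endian::little` as a constant expression, but I don’t know if that’s an option for everyone.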
Thanks