When I first heard about endianness, I freaked out a bit.
I wondered: why not settle on one rule, such as big endian? It is the way the majority of people write numbers, so it would be the easiest for us to understand. I thought other endianness rules existed because some people just wanted to swim against the stream. I was quite wrong. I was also worried that this would mess with my number definitions, bit-shifting operations, and so on.
Naturally, I put on my tinfoil hat and started doing my homework and a few experiments!
The first thing I discovered was that all my machines use little endian. I did this using this field. As you may see, the page is pretty plain, only mentioning what the property represents. A good few months later (yesterday), when I looked up this page, the first paragraph in the Examples section struck me like a stray falling chunk of ice. All Windows systems run on little endian?! What is this sorcery?
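If you want to check your own machine outside of .NET, here is a minimal sketch in Python (not the property I used, but it answers the same question):

```python
import struct
import sys

# The standard library reports the host byte order directly:
print(sys.byteorder)  # 'little' on typical x86/x64 machines

# Or check manually: pack a 16-bit integer using the native order ('=')
# and look at which byte comes first in memory.
native = struct.pack('=H', 0x0102)
print('little endian' if native == b'\x02\x01' else 'big endian')
```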
So I had to look that up. It turns out the statement is almost true.
One detail it misses is that Windows 8 also runs on ARM chips, which can be big endian.
Couple that with the fact that .NET runs on Windows on ARM, on Windows Phone and on Xbox, and that Mono, one of my target platforms, runs on Linux, which in turn runs on a variety of architectures with various endianness rules, and it becomes clear that I must consider endianness in my code.
A bit of experimenting removed one of my fears, the one regarding bit-shifting: that “left-shift” and “right-shift” imply a dependence on the order of bits/bytes.
Well, they don’t. You can simply imagine the bits and bytes in big endian order when using bit shifts. Shifting to the left means shifting towards more significant positions (and, in a circular shift, the topmost bits wrap around into the least significant positions), and shifting to the right means shifting towards less significant positions (and, in a circular shift, the bottommost bits wrap around into the most significant positions).
This seems obvious when assuming everything is in big endian, as we are used to from the numbers we represent in natural language. The knowledge of little endian managed to confuse me about this at first, but thankfully I figured it out before I did anything silly. I hope this bit of knowledge comes in handy for someone else.
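To convince myself, I checked that shift results are defined on the value of a number, not on its in-memory byte layout; a small Python sketch (assuming a 32-bit unsigned value):

```python
import struct

x = 0x12345678

# Shifts act on the mathematical value, so the results are the same
# on any host, little or big endian:
assert x >> 8 == 0x00123456                  # towards less significant positions
assert (x << 8) & 0xFFFFFFFF == 0x34567800   # towards more significant (32-bit wrap)

# Only the *memory layout* of x differs between the two conventions:
assert struct.pack('<I', x) == b'\x78\x56\x34\x12'  # little endian
assert struct.pack('>I', x) == b'\x12\x34\x56\x78'  # big endian
```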
As for number definitions, the compiler takes care of them, no matter the type and size: 0x00FF0000 is written in the usual big endian notation (bytes 00 FF 00 00), and on a little endian target the compiler will automatically store it in memory as 00 00 FF 00.
The reason I state this, which may seem obvious to some, is that thinking of number definitions, bit-shifting, bit masking and endianness all at once can create a conglomerate of confusion.
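A short Python illustration of the same point, where the struct module plays the role of the compiler's storage decision (assuming a 32-bit unsigned value):

```python
import struct

n = 0x00FF0000  # written in the usual big endian notation

# The literal denotes a value; only the storage order differs per target:
assert struct.pack('>I', n) == b'\x00\xff\x00\x00'  # big endian layout
assert struct.pack('<I', n) == b'\x00\x00\xff\x00'  # little endian layout

# Masks and shifts work on the value, so they need no endianness fixups:
assert (n >> 16) & 0xFF == 0xFF
```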
So, why would endianness even matter to software if this is the case? Well, transmission.
For example, convention says that networked numbers should be big endian (a.k.a. “network byte order”).
This is so all nodes of a network interpret the sequence of bytes correctly.
Similarly, any network protocol designer should settle on an endianness, and every implementer should be careful to respect it.
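In Python, for instance, the struct module marks network byte order with '!', and the classic BSD-socket helpers do the same job; a quick sketch:

```python
import socket
import struct

port = 8080  # 0x1F90

# '!' pins the format to network byte order (big endian):
wire = struct.pack('!H', port)
assert wire == b'\x1f\x90'  # most significant byte travels first
assert struct.unpack('!H', wire)[0] == port

# htons/ntohs are the traditional way to do the same conversion:
assert socket.ntohs(socket.htons(port)) == port
```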
I failed to do it in vProto… here, here and here. I’ll fix this soon.
I will go for little endian to maintain backwards compatibility.
Another case for transmission is files. When a file has any chance of being moved or copied to another computer, its format had better establish a number endianness.
Or, pray that all your numbers are palindromes in base 256.
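Python's struct module makes this easy to get right when defining a file format; below is a minimal sketch with a made-up header (the field names, file name and magic value are my own invention):

```python
import struct

MAGIC = 0x76505230  # hypothetical magic number

# '<' pins every number in the header to little endian,
# so the file reads back identically on any machine.
header = struct.pack('<IHH', MAGIC, 1, 0)  # magic, version major, version minor

with open('example.bin', 'wb') as f:
    f.write(header)

with open('example.bin', 'rb') as f:
    magic, major, minor = struct.unpack('<IHH', f.read(8))

assert (magic, major, minor) == (MAGIC, 1, 0)
```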
And finally, the one thing you actually don’t have to worry about is bit endianness (the order of bits inside a byte/word). The hardware already takes care of it: whatever bit order the wire or the silicon uses internally, software always sees whole bytes with the bits in their expected positions, and I haven’t found a single issue due to bit endianness in my research.
After finding this bit of information, I felt safe to remove my tinfoil hat.
“Following Intel convention, word data always is stored with the most-significant byte in the higher memory location (see figure 2-13).” @ page 2-10 @ iAPX 86, iAPX 88 user’s manual
“[…] including ARM-based systems from partners NVIDIA Corp., Qualcomm Inc. and Texas Instruments Inc.” @ first paragraph @ Microsoft Announces Support of System on a Chip Architectures From Intel, AMD, and ARM for Next Version of Windows
“The processor can treat words of data in memory as being stored in either: / • Byte-invariant big-endian format / • Little-endian format.” @ 3.3 @ Cortex-R4 and Cortex-R4F Technical Reference Manual
“[…] express numbers in decimal and to picture data in «big-endian» order” @ 2nd paragraph of page 3 @ RFC 1700 – Assigned Numbers
So, why am I posting about this?
Mainly because a worrying number of my friends have no idea what endianness is or how it will affect their code, which they hope will be cross-platform and which often involves networking and/or saving files.
I’m just trying to save some people from a few headaches.