If you’re comfortable with hex, you have no business in this post. If, on the other hand, a mask like 0xFFA8 forces you – like me – to translate it on paper into 11111111 10101000, then by all means, read on.

Being the hex-challenged (hexicapped?) that I am, I badly need to somehow express 0x7F800000 as (0111 1111 1000 0000 0000 0000 0000 0000). With some nerve, I’m also not willing to sacrifice *any* run-time performance to get there. The good news is, this can be done. The better news is, it involves some neat preprocessor and metaprogramming tricks!

C++ Template Metaprogramming gives an excellent starting point:

template <unsigned long N>
struct binary
{
    static unsigned const value =
        binary<N/10>::value << 1  // prepend higher bits
        | N%10;                   // to lowest bit
};

template <>                       // specialization
struct binary<0>                  // terminates recursion
{
    static unsigned const value = 0;
};

// usage:
unsigned const five = binary<101>::value;
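As a quick sanity check – restating the template so the snippet stands alone – here are a couple of known values. (The static_assert is C++11 and used purely for verification; the trick itself needs nothing past C++98.)

```cpp
// The book's recursion, restated verbatim so this snippet is self-contained.
template <unsigned long N>
struct binary
{
    static unsigned const value =
        binary<N/10>::value << 1  // prepend higher bits
        | N%10;                   // to lowest bit
};

template <>                       // specialization
struct binary<0>                  // terminates recursion
{
    static unsigned const value = 0;
};

// Compile-time sanity checks:
static_assert(binary<101>::value == 5,        "101b == 5");
static_assert(binary<11111111>::value == 255, "11111111b == 255");
```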

This trick of recursive template instantiation is already considered standard metaprogramming, and will not be covered here. Sadly, an attempt to apply it directly to real-life constants –

unsigned const mask = binary<01111111100000000000000000000000>::value;

fails miserably. A decimal literal like 1111111100000000000000000000000 goes *way* above anything the tokenizer can interpret as an integer. In fact, the largest such literal is 18446744073709551615ULL – the maximum of an unsigned 64-bit integer – which is still some 11 orders of magnitude too low.

Ok then, that one is easy: let’s break the input into 4 arguments:

unsigned const mask = binary4<01111111, 10000000, 00000000, 00000000>::value;

We can apply the template recursion on each argument separately, and eventually shift the four results and push them into a single DWORD. Why oh why then, do we still get ridiculous failures??

Alas, here the real fun begins: a leading zero is interpreted as an octal prefix! An argument like 01111111 is read as the decimal 299593 – and the template recursion breaks entirely.

What if we create a separate template recursion, specifically for octal numbers? Doesn’t seem so hard – just replace the 10s with 8s (N%10 with N%8, N/10 with N/8) in the code. But wait… how can we know whether an argument was originally interpreted from an octal literal (01111111) or a decimal one (1111111)?
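For reference, the octal-flavored recursion itself really is a tiny change – a sketch (octal_binary is my name for it, not the book’s):

```cpp
// Same digit-peeling recursion, but for arguments the compiler parsed
// as octal: each "binary" digit now occupies one octal digit.
template <unsigned long N>
struct octal_binary
{
    static unsigned const value =
        octal_binary<N/8>::value << 1  // prepend higher bits
        | N%8;                         // to lowest bit
};

template <>
struct octal_binary<0>                 // terminates recursion
{
    static unsigned const value = 0;
};

// 01111111 is parsed as octal – i.e., as decimal 299593 – yet the octal
// recursion still recovers the intended 1111111b == 127:
static_assert(octal_binary<01111111>::value == 127, "seven ones");
```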

Pushing forward – let’s take the less-than-elegant assumption that each argument is exactly 8 binary digits. The highest number to be interpreted as octal in this setting is 01111111, which is 299593 in decimal. We can just compare: any argument equal to or below 299593 is subject to an octal template recursion, and any argument above – to a decimal one.

Ummm, wait: we need to perform that comparison, *and branch on its result*, at compile time.

This is in fact possible, and the given reference is an excellent source for such tricks (hints: grok boost::mpl::bool_, and trade specialization for overloading). Anyway, for me personally, this is also where the fun stops. A much dumber approach is in order, and luckily – one exists. We can pre-process the input to pad a 1 to its left, and subtract 100000000 in the code.
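For the curious, here is one way to sketch that compile-time branch without leaving vanilla templates: a base-parameterized recursion plus a constant-expression selector. All names here are mine, and this is just a sketch of the idea – not the book’s mpl-based solution. The 299593 threshold relies on the 8-digit assumption above.

```cpp
// Digit-peeling recursion, parameterized on the base of the digits.
template <unsigned long N, unsigned Base>
struct binary_base
{
    static unsigned const value =
        binary_base<N/Base, Base>::value << 1 | N%Base;
};

template <unsigned Base>
struct binary_base<0, Base>            // terminates recursion
{
    static unsigned const value = 0;
};

// Any 8-"binary"-digit argument parsed as octal is at most
// 01111111 == 299593; any parsed as decimal is at least 10000000.
// Branch on that at compile time:
template <unsigned long N>
struct binary_branch
{
    static unsigned const value =
        binary_base<N, (N <= 299593 ? 8 : 10)>::value;
};

static_assert(binary_branch<01111111>::value == 127, "octal path");
static_assert(binary_branch<11111111>::value == 255, "decimal path");
```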

#define PAD1(N) (1##N)
#define BINARY32(N1, N2, N3, N4) \
    binary4<PAD1(N1), PAD1(N2), PAD1(N3), PAD1(N4)>::value

template <DWORD N>
struct binary_Unpad
{   // for now assume input is 1 + 8 binary digits
    static const DWORD Unpadded = N - 100000000,
                       value    = binary<Unpadded>::value;  // back to classic recursion
};

template <DWORD N1, DWORD N2, DWORD N3, DWORD N4>
struct binary4
{
    static const DWORD val1 = binary_Unpad<N1>::value,
                       val2 = binary_Unpad<N2>::value,
                       val3 = binary_Unpad<N3>::value,
                       val4 = binary_Unpad<N4>::value,
                       value = ( (val1 << 24) | (val2 << 16) | (val3 << 8) | (val4) );
};
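To see why the padding defuses the octal trap, observe what PAD1 does to a problematic argument (checked here with C++11 static_assert, purely for illustration):

```cpp
#define PAD1(N) (1##N)

// Pasting a leading 1 turns the would-be octal literal 01111111 into
// the decimal literal 101111111 – the zero is no longer leading:
static_assert(PAD1(01111111) == 101111111, "parsed as decimal now");

// ...and subtracting 100000000 recovers the original digits
// (the leading zero drops out numerically, which is harmless):
static_assert(PAD1(01111111) - 100000000 == 1111111, "digits restored");
```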

It is also possible to assert on the input digits being 0/1, and on there being exactly 8 such digits. Personally, at this point I was very happy to just be able to write:

#define FLOAT_EXP_MASK BINARY32(01111111, 10000000, 00000000, 00000000)
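And indeed, the mask comes out right. Here is the whole machinery restated in full so it can be checked standalone – DWORD is spelled out as a plain typedef, and the static_assert is again C++11, used only for verification:

```cpp
typedef unsigned long DWORD;  // stand-in for the Windows typedef

template <DWORD N>
struct binary
{
    static const DWORD value = (binary<N/10>::value << 1) | (N%10);
};
template <>
struct binary<0> { static const DWORD value = 0; };

template <DWORD N>
struct binary_Unpad
{   // assume input is 1 + 8 binary digits
    static const DWORD Unpadded = N - 100000000,
                       value    = binary<Unpadded>::value;
};

template <DWORD N1, DWORD N2, DWORD N3, DWORD N4>
struct binary4
{
    static const DWORD val1 = binary_Unpad<N1>::value,
                       val2 = binary_Unpad<N2>::value,
                       val3 = binary_Unpad<N3>::value,
                       val4 = binary_Unpad<N4>::value,
                       value = (val1 << 24) | (val2 << 16) | (val3 << 8) | (val4);
};

#define PAD1(N) (1##N)
#define BINARY32(N1, N2, N3, N4) \
    binary4<PAD1(N1), PAD1(N2), PAD1(N3), PAD1(N4)>::value

#define FLOAT_EXP_MASK BINARY32(01111111, 10000000, 00000000, 00000000)

// The exponent mask of an IEEE-754 single – exactly the 0x7F800000 we wanted:
static_assert(FLOAT_EXP_MASK == 0x7F800000, "exponent mask");
```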

Some day I’ll write about *viewing* binary as binary – more autoexp stuff coming up there.