Why did IEEE 754 decide to allocate the field bits in the order that it did [closed]

There is already a prior question dealing with why certain bit-widths were chosen (which I find somewhat insufficient, but that's another topic); what strikes me as unusual is not the widths themselves but how the bits are distributed. If we divide a 32-bit single-precision floating-point number into bytes, we see that the exponent is split across bytes 3 and 4, with one bit in byte 3 and seven in byte 4. There are various "homebrew" floating-point libraries online targeting chips without FPUs that consolidate the exponent bits into a single byte; one such example is Zeda's Z80 FP routines library.

Is there a concrete reason why the sign bit "pushes out" one exponent bit into a byte of the mantissa? Is this perhaps just irrelevant, and am I focusing too much on byte-level alignment? Wouldn't putting the sign bit with the mantissa make more sense, and if not, why?
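For concreteness, here is a minimal C sketch of the layout I mean. It extracts the three fields of a binary32 value and dumps its bytes; the byte dump assumes a little-endian host, and the example value -1.5f is just an arbitrary choice to make the split visible:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float f = -1.5f;                 /* sign = 1, biased exponent = 0x7F, mantissa = 0x400000 */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);  /* reinterpret the float's bit pattern */

    uint32_t sign     = bits >> 31;           /* 1 bit            */
    uint32_t exponent = (bits >> 23) & 0xFF;  /* 8 bits, bias 127 */
    uint32_t mantissa = bits & 0x7FFFFF;      /* 23 fraction bits */

    printf("bits     = 0x%08X\n", bits);
    printf("sign     = %u\n", sign);
    printf("exponent = 0x%02X (biased)\n", exponent);
    printf("mantissa = 0x%06X\n", mantissa);

    /* The most significant byte holds the sign plus the top 7 exponent bits;
       the exponent's low bit shares the next byte with the top 7 mantissa bits. */
    unsigned char b[4];
    memcpy(b, &f, sizeof b);
    printf("bytes (lowest address first): %02X %02X %02X %02X\n",
           b[0], b[1], b[2], b[3]);
    return 0;
}
```

For -1.5f this prints bytes 00 00 C0 BF: the 0xBF byte is sign + seven exponent bits, and the 0xC0 byte is the remaining exponent bit followed by the top of the mantissa, which is exactly the split the question is about.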

Feb 28, 2025 - 22:24