
Why Does Numpy's Float128 Only Have 63 Bits Mantissa?

I'm sure this is a daft question but I'm genuinely puzzled:

    >>> import numpy as np
    >>> f1, f2, f64 = map(np.float128, (1, 2, -64))
    >>> f1 +

Solution 1:

Reading the docs:

np.longdouble is padded to the system default; np.float96 and np.float128 are provided for users who want specific padding. In spite of the names, np.float96 and np.float128 provide only as much precision as np.longdouble, that is, 80 bits on most x86 machines and 64 bits in standard Windows builds.
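
A quick way to see what np.longdouble actually provides on a given machine is np.finfo. A minimal probe; the outputs shown are what a typical x86-64 Linux build reports (80-bit extended precision stored in 16 bytes), so treat them as an assumption for your platform:

    >>> import numpy as np
    >>> info = np.finfo(np.longdouble)       # the same type np.float128 aliases here
    >>> info.nmant                           # fraction bits of the significand
    63
    >>> info.machep                          # machine epsilon is 2**machep
    -63
    >>> np.dtype(np.longdouble).itemsize     # bytes of storage, including padding
    16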

So it appears it isn't going to actually use all those bits. As for the missing two bits (15 exponent + 63 mantissa = 78, assuming the 80-bit x87 extended format on x86, which is what I have as well): one is the sign bit, and the other is the significand's integer bit. Unlike IEEE 754 double, the x87 format stores the leading bit of the significand explicitly, so its 64-bit significand carries only 63 fraction bits, and those 63 bits are what NumPy counts as the mantissa. That gives 1 + 15 + 64 = 80.
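
To make that concrete, here is a sketch of the bit accounting together with a rounding check, again assuming an x86 build where longdouble is the 80-bit extended format. With 63 fraction bits, 2**-63 is one ulp of 1.0 and is representable next to it, while 1 + 2**-64 falls exactly halfway between 1.0 and its neighbor and rounds back to 1.0 under ties-to-even:

    >>> import numpy as np
    >>> # 1 sign + 15 exponent + 64-bit significand (explicit integer bit
    >>> # + 63 fraction bits) = 80 bits
    >>> 1 + 15 + 1 + 63
    80
    >>> one = np.float128(1)
    >>> one + np.float128(2)**-63 == one    # one ulp: representable, so not equal
    False
    >>> one + np.float128(2)**-64 == one    # halfway case: rounds back to 1.0
    True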
