Classical issues: Imprecise computing (part 1)
Today I'm starting a new series of articles called Classical Issues. In it, I'll address, one after the other, classical issues I've encountered over my years of software engineering.
This first article aims to demystify imprecise computing and give some best practices for enterprise applications. By enterprise application, we mean an application working with things like money, prices and quantities. It will be in two parts. This one explains the root of the problem. The second one will show how to handle it in Java and .Net.
Bill is developing software that handles commission payments. He needs to add $1.20 to each transaction. He codes a method doing just that, along with its unit test.
@Test
public void testAddCommission() {
    double actual = addCommission(1000000.1);
    assertEquals(1000001.3, actual, 0);
}

public static double addCommission(double nominal) {
    return nominal + 1.2f;
}
java.lang.AssertionError: expected:<1000001.3> but was:<1000001.3000000477>
"Darn! It's not working!".
What's going on?
Floating points vs Decimals
Floating point numbers were introduced in computers for performance reasons (and only for that). They became ubiquitous with the arrival of the Intel 80486 and its floating point unit (FPU). They make it possible to perform a multiplication or division on a decimal number in a single CPU cycle. They are the float, double and quad we find in our code (their respective precisions depend on the language).
With decimals (which are not floating points), it's slower: more or less as many operations as you used to do at school on paper. Luckily, computers are much faster than you at doing this, and they make fewer mistakes. By decimals we mean the decimal (in .Net) and the BigDecimal (in Java).
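To make this concrete, here is a small sketch in Java (using java.math.BigDecimal, which the examples below also use): the multiplication behaves exactly like the one you would do on paper.

BigDecimal a = new BigDecimal("0.1");
// 0.1 * 0.1 is computed digit by digit, like at school.
System.out.println(a.multiply(a)); // prints 0.01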
If they're slower, why use them, you'll ask? Because they don't use the same internal representation.
In both cases, there is a significand (also called mantissa) and an exponent (also called scale). However, the significand of a decimal is an integer, while the significand of a floating point is a number between 0 and 1 in base 2. It is really important to understand this, because it changes everything. For a floating point, the digits to the right of the point represent powers of 1/2, since we are in base 2. For instance, 0.1 in base 2 is equal to 0.5 in base 10.
Another example: 7.5. Written as a decimal, it is 75E-1: a significand of 75 and an exponent of -1. As a floating point, the significand is a fraction written in base 2 and the exponent scales it by a power of 2: 7.5 is 0.1111 in base 2 ((1/2)^1 + (1/2)^2 + (1/2)^3 + (1/2)^4) multiplied by 2^3. So it's a sum of powers of 1/2 instead of powers of 1/10. It's this representation that allows faster computing. As we know, computers love binary numbers.
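We can actually peek at this representation. A small sketch using the standard BigDecimal API (unscaledValue and scale):

BigDecimal d = new BigDecimal("7.5");
System.out.println(d.unscaledValue());   // prints 75 (the integer significand)
System.out.println(d.scale());           // prints 1  (7.5 = 75 * 10^-1)
System.out.println(new BigDecimal(7.5)); // prints 7.5: this particular value is also exact in base 2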
It's a base problem
Problems occur when a number can be represented perfectly in base 10 but can't be in base 2. The usual example is 0.1. Its value in base 2 is periodic (0.000110011001100...), so we can't represent it exactly. The IEEE 754 standard, which governs floating point computation, then tries to give the best representation possible and to handle the rounding. But the lost precision stays lost.
It is really important to understand that we are facing a representation issue, not a precision one. We know how to handle precision issues: you just increase the precision and you're good to go. A double is too short? Just switch to a quad. But our 0.1 still can't be represented exactly. Also note that this is language independent. All languages handle floating points the same way and use the floating point unit for computing.
The 0.1 example in more detail:
float f = 0.1f;
System.out.println(f); // prints 0.1
BigDecimal d = new BigDecimal(f);
System.out.println(d); // prints 0.100000001490116119384765625
We could think the conversion to BigDecimal destroyed the number. It didn't. To make things a bit more complicated, it's the float printout that is misleading. The real float value is the one shown by the BigDecimal. But the printing algorithm for a floating point only prints enough digits for two adjacent values to be printed differently. So, in fact, the printing algorithm is trimming the real value.
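We can see that trimming at work. A small sketch, using Math.nextUp to get the adjacent float just above 0.1f:

float f = 0.1f;
System.out.println(f);                 // prints 0.1 (just enough digits)
System.out.println(Math.nextUp(f));    // prints 0.10000001 (the adjacent float value)
System.out.println(new BigDecimal(f)); // prints 0.100000001490116119384765625 (the real value)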
Conclusion
As we have briefly seen above, the decimal is different. You still have a significand and an exponent, but the significand is an integer. Because all integers can be represented exactly in base 2, the significand is perfectly represented and does not need any rounding. This is essential in an application working with amounts, for instance.
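A classic illustration of what this means for amounts (a small sketch; the BigDecimal is built from a String rather than from a double, so the representation problem is not reintroduced):

double total = 0.0;
for (int i = 0; i < 10; i++) {
    total += 0.1; // ten payments of 0.1
}
System.out.println(total); // prints 0.9999999999999999

BigDecimal exactTotal = BigDecimal.ZERO;
for (int i = 0; i < 10; i++) {
    exactTotal = exactTotal.add(new BigDecimal("0.1"));
}
System.out.println(exactTotal); // prints 1.0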
And by the way, our elders knew that. No Cobol developer would dare use a floating point. Sadly, this knowledge was lost along the way. One striking example is Java, which didn't even have BigDecimal at first.
This mistake was fixed and, as a bonus, the BigDecimal is not restricted in length (so no precision issue is possible). On the other hand, the decimal of .Net is 128 bits. But that's large enough to prevent precision issues for the usual amounts manipulated in enterprise applications.
First rule: Never use a floating point in an enterprise application. Ever! Not even for literals. Add a Checkstyle rule making sure of that.
There are obviously exceptions to this rule (performance), but they are extremely rare.
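To give an idea of where this leads (part 2 goes into the details and the remaining pitfalls), here is a minimal sketch of Bill's method rewritten with BigDecimal; the change of signature is an assumption made purely for illustration:

public static BigDecimal addCommission(BigDecimal nominal) {
    // The commission is built from a String, so it is exactly 1.2.
    return nominal.add(new BigDecimal("1.2"));
}

// addCommission(new BigDecimal("1000000.1")) gives exactly 1000001.3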
As mentioned earlier, the second part of this article will detail the specificities of BigDecimal and decimal. Indeed, even if they prevent representation issues, some other errors are waiting for you in the shadows. See you there.