# Why 0.1 + 0.2 ≠ 0.3

If you’ve ever encountered a situation in Python (or other programming languages) where adding `0.1` and `0.2` doesn’t yield `0.3` but instead gives something like `0.30000000000000004`, you’re not alone. This discrepancy is a common occurrence in floating-point arithmetic, due to how numbers are represented internally in binary. Let’s explore why this happens and discuss practical solutions to handle these tiny inaccuracies.

## Why Does 0.1 + 0.2 Equal 0.30000000000000004?

### 1. Floating-Point Representation in Computers

Computers represent numbers in binary (base-2) rather than decimal (base-10). Certain decimal fractions, like `0.1` and `0.2`, cannot be represented exactly in binary, similar to how `1/3` has a repeating decimal representation in base-10. In binary, `0.1` and `0.2` become repeating fractions:

- `0.1` in binary is `0.0001100110011...` (repeating)
- `0.2` in binary is `0.001100110011...` (repeating)

Since these values cannot be represented exactly, Python (and most other programming languages) store an approximation. This approximation leads to tiny errors in arithmetic operations.
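You can see the stored approximation directly with the standard library: `float.hex()` prints the exact binary value Python keeps for `0.1`, and passing a float to `decimal.Decimal` expands that same stored value in base-10.

```python
from decimal import Decimal

# The exact base-2 value stored for 0.1, written in hexadecimal:
print((0.1).hex())   # Output: 0x1.999999999999ap-4

# The same stored value expanded in decimal; note it is not exactly 0.1:
print(Decimal(0.1))  # Output: 0.1000000000000000055511151231257827021181583404541015625
```

The long decimal expansion is the nearest 64-bit binary value to `0.1`; every arithmetic operation works with this approximation, not with `0.1` itself.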

### 2. IEEE 754 Standard for Floating-Point Numbers

Python uses the IEEE 754 standard for floating-point arithmetic, which defines how real numbers are stored and manipulated. Under this standard:

- Floats are stored with 64 bits of precision.
- This allows for high precision but comes at the cost of occasionally producing slight inaccuracies in certain calculations, such as `0.1 + 0.2`.

To see this in action, let’s try it in Python:

```python
a = 0.1
b = 0.2
print(a + b)  # Output: 0.30000000000000004
```

Here, the result is close to `0.3`, but not exact. This is due to the small inaccuracies from rounding the binary representations of `0.1` and `0.2` to fit within the limitations of 64-bit floating-point numbers.
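As a quick check using only the standard library, you can confirm that direct equality really does fail, and inspect the raw 64-bit pattern a float occupies by reinterpreting its bytes as an integer:

```python
import struct

# Direct equality fails because both sides are binary approximations.
print(0.1 + 0.2 == 0.3)  # Output: False

# Reinterpret the 8 bytes of the double 0.1 as an unsigned integer
# to see the IEEE 754 bit pattern it is stored as.
bits = struct.unpack('>Q', struct.pack('>d', 0.1))[0]
print(hex(bits))  # Output: 0x3fb999999999999a
```

The `False` result is the practical consequence: never compare floats for exact equality after arithmetic.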

## How to Handle Floating-Point Errors

Although this floating-point behavior is natural, it can be problematic when you need precise calculations. Fortunately, Python provides a few solutions.

### Solution 1: Using `round()` to Limit Decimal Places

You can round the result to the desired number of decimal places, which is often sufficient if you’re dealing with approximate values in everyday applications.

```python
a = 0.1
b = 0.2
result = round(a + b, 1)
print(result)  # Output: 0.3
```

This approach works well when you only need to control the precision for display purposes or simple arithmetic.
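If the goal is only cleaner display, string formatting achieves the same effect without creating a new rounded value (a minimal sketch):

```python
a = 0.1
b = 0.2
# Format to one decimal place for display; the value of a + b is unchanged.
print(f"{a + b:.1f}")  # Output: 0.3
```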

### Solution 2: Using the `decimal` Module for Exact Decimal Arithmetic

Python’s `decimal` module offers the ability to perform exact decimal arithmetic. By using `decimal.Decimal`, you can avoid the limitations of binary floating-point representation.

```python
from decimal import Decimal

a = Decimal('0.1')
b = Decimal('0.2')
result = a + b
print(result)  # Output: 0.3
```

Since `decimal.Decimal` stores numbers in decimal form, it represents `0.1` and `0.2` exactly, yielding an exact result of `0.3`.
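One caveat worth noting: construct `Decimal` from a string, not a float. `Decimal(0.1)` inherits the binary approximation of `0.1`, while `Decimal('0.1')` is exact (a short sketch):

```python
from decimal import Decimal

# String constructor: exact decimal values, so the sum is exactly 0.3.
print(Decimal('0.1') + Decimal('0.2'))  # Output: 0.3

# Float constructor: the binary approximation sneaks back in.
print(Decimal(0.1) + Decimal(0.2) == Decimal('0.3'))  # Output: False
```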

#### Advantages of `decimal.Decimal`

- Exact representation of decimal numbers.
- Avoids common floating-point issues, making it ideal for financial and other precision-sensitive calculations.

#### Tradeoffs of Using `decimal.Decimal`

- **Performance:** Decimal arithmetic is generally slower than floating-point arithmetic.
- **Memory Usage:** `Decimal` values consume more memory than floats.

### Solution 3: Using `math.isclose()` for Comparisons

When you’re comparing floating-point results, it’s often better to check if two numbers are approximately equal rather than exactly equal. Python’s `math.isclose()` function makes this easy by allowing you to set a tolerance level for comparisons.

```python
import math

a = 0.1
b = 0.2
result = a + b
print(math.isclose(result, 0.3))  # Output: True
```

`math.isclose()` checks if `result` is close enough to `0.3` within a small tolerance. This method is especially helpful when working with measurements or scientific data where slight deviations are acceptable.

#### Syntax and Parameters of `math.isclose()`

**Syntax:** `math.isclose(a, b, rel_tol=1e-09, abs_tol=0.0)`

**Parameters:**

- `a` and `b`: Numbers to be compared.
- `rel_tol`: Relative tolerance; default is `1e-09`.
- `abs_tol`: Absolute tolerance, useful when comparing numbers close to zero.
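The `abs_tol` parameter matters because a relative tolerance scaled by zero is zero, so comparisons against `0.0` always fail unless an absolute tolerance is supplied (a short sketch):

```python
import math

# With only the default rel_tol, anything compared to 0.0 fails the check...
print(math.isclose(1e-10, 0.0))                # Output: False

# ...so supply an absolute tolerance when comparing values near zero.
print(math.isclose(1e-10, 0.0, abs_tol=1e-9))  # Output: True
```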

### Solution 4: Using `fractions.Fraction` for Rational Numbers

If you need to work with exact fractions (like `1/3`), the `fractions` module allows you to store numbers as rational fractions, which avoids rounding errors entirely.

```python
from fractions import Fraction

a = Fraction(1, 10)  # Equivalent to 0.1
b = Fraction(2, 10)  # Equivalent to 0.2
result = a + b
print(result)         # Output: 3/10
print(float(result))  # Output: 0.3
```

`fractions.Fraction` represents numbers as exact fractions, which avoids all rounding issues. It’s particularly useful in scenarios involving exact mathematical computations.
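As with `decimal.Decimal`, construction matters: building a `Fraction` from a float captures the binary approximation exactly, while a string (or `limit_denominator()`) recovers the intended rational (a small sketch):

```python
from fractions import Fraction

# From a float: the exact (huge) fraction behind the approximation of 0.1.
print(Fraction(0.1))

# limit_denominator() snaps it back to the nearest simple rational.
print(Fraction(0.1).limit_denominator())  # Output: 1/10

# From a string: parsed exactly, like Decimal('0.1').
print(Fraction('0.1'))                    # Output: 1/10
```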

## When Should You Worry About Floating-Point Precision?

For many applications, small floating-point inaccuracies won’t cause problems. However, there are certain scenarios where precision matters more:

- **Financial Calculations:** In financial applications, even tiny rounding errors can add up to significant discrepancies. Use the `decimal` module for such cases.
- **Scientific and Statistical Analysis:** When working with very small or very large numbers, floating-point precision can affect results. Consider using `math.isclose()` for comparisons.
- **Graphics and Game Development:** Minor inaccuracies can affect pixel-perfect rendering or physics calculations. In such cases, small tolerances in comparisons are often used.

## Summary

Floating-point arithmetic in Python sometimes produces results like `0.1 + 0.2 = 0.30000000000000004` because of how computers represent decimal numbers in binary. While this is expected behavior, there are several ways to manage it:

- **Use `round()`** for simple rounding of results.
- **Use `decimal.Decimal`** for exact decimal arithmetic, especially in financial applications.
- **Use `math.isclose()`** for approximate comparisons.
- **Use `fractions.Fraction`** for exact rational arithmetic.

Understanding floating-point arithmetic quirks and learning these techniques will help you write more accurate and reliable code. For most applications, these minor errors don’t have much impact, but knowing how to handle them effectively can make a big difference when precision truly matters.

For a deeper discussion of this behavior, see the official Python tutorial section “Floating Point Arithmetic: Issues and Limitations” in the Python documentation.
