A variable of the type decimal can hold decimal numbers.

The numbers are floating point. This means that the fraction (mantissa) and exponent are separate.

The numbers can be negative and positive.

## Sizes

A decimal variable can hold every whole value between minus and plus 1 quadrillion (±10^15). Above 1 quadrillion, and below one, it can hold 15 decimal digits and a two-digit exponent.

- Fraction (Mantissa): 15 decimal digits and sign
- Exponent: 2 decimal digits and sign
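As a sketch of these limits, Python's `decimal` module (used here purely for illustration; it is not the decimal type described in this document) can be configured to 15 significant digits:

```python
from decimal import Decimal, getcontext

# Python's decimal module configured to 15 significant digits,
# analogous to the 15-digit mantissa described above (illustration only).
getcontext().prec = 15

big = Decimal(10) ** 15   # 10^15, one quadrillion

print(big - 1)            # 999999999999999 -- every whole value up to 10^15 is exact
print(big + 1)            # 1.00000000000000E+15 -- the +1 is lost above 10^15
```

Every integer up to 10^15 fits in the 15-digit mantissa exactly; above that, consecutive whole values start to share the same representation.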

## Values

This table defines the minimum and maximum values:

Value Type | Value |
---|---|
max | `+0.999 999 999 999 999*10^+99` |
min | `-0.999 999 999 999 999*10^+99` |
min positive | `+0.000 000 000 000 001*10^-99` |
max negative | `-0.000 000 000 000 001*10^-99` |
smallest change* | `0.000 000 000 000 001*10^sxx` |
zero | `0` |

\* `sxx` is the current exponent: one sign and two digits. For example, if the current variable value is `0.1*10^+10`, then the smallest change is `0.000 000 000 000 001*10^+10`.
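The smallest-change rule can be sketched with Python's `decimal` module at 15 significant digits (illustration only, not the decimal type described here):

```python
from decimal import Decimal, getcontext

# Illustration with Python's decimal module at 15 significant digits.
getcontext().prec = 15

x = Decimal("0.1") * Decimal(10) ** 10   # 0.1*10^+10 = 1 000 000 000
step = Decimal("1E-5")                   # 0.000 000 000 000 001*10^+10

print(x + step)        # 1000000000.00001 -- the smallest change is representable
print(x + step / 10)   # 1000000000.00000 -- anything smaller is lost
```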

## Note on special values

Question | Answer |
---|---|
Can be Not a Number (`NaN`)? | No |
Can be Infinity? | No |
Can be Negative Infinity? | No |
Can be Undefined or Null? | No |
Can be negative zero? | No |

In summary, a decimal variable can only hold numbers.

## Precision

All computations must be done using a certain precision. However, when using a variable of decimal type, do not depend on the actual precision being used. In other words, use a decimal as if it has high enough precision to do what you want.

If the precision needed is higher than the bounds of the decimal type, then it is not suited for the computations.

The ProgsBase system may be able to detect dependencies on the precision by

- running the tests with different precisions
- using program analysis.

### Example: Depending on the precision

If the actual calculations on the decimal variable type are done using double precision IEEE Standard for Floating-Point Arithmetic (IEEE 754), then the following calculation would result in `y = true`, but will result in `y = false` if using 100 decimal digit precision.

```
x = 1 / 10
x = x * 10
x = x - 1
x = x * 1000000000
x = x * 100000000
x = round(x)
y = x == 6
```
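The same kind of precision dependence can be seen in a short Python sketch contrasting binary double arithmetic with decimal arithmetic (illustration only, not the calculation above):

```python
from decimal import Decimal

# Binary double precision: 0.1 cannot be represented exactly,
# and the representation error accumulates over ten additions.
double_sum = sum([0.1] * 10)
print(double_sum == 1.0)                 # False

# Decimal arithmetic: 0.1 is exact, so ten of them sum to exactly 1.
decimal_sum = sum([Decimal("0.1")] * 10)
print(decimal_sum == 1)                  # True
```

Code that compares `double_sum` to `1.0` directly depends on the precision and representation; the epsilon comparison below avoids that.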

### Example: Not depending on the precision

The following calculation works as long as the precision of the calculations used is at least 15 decimal digits floating point with a three decimal digit exponent. It will always result in `y = true` as long as the requirements of the decimal variable type are met.

```
x = 1 / 10
epsilon = 0.00001
y = |x - 0.1| < epsilon
```

This is known as an *epsilon comparison*.
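The epsilon comparison above translates directly into Python's `decimal` module (illustration only, with the context set to 15 significant digits):

```python
from decimal import Decimal, getcontext

getcontext().prec = 15

x = Decimal(1) / Decimal(10)
epsilon = Decimal("0.00001")
y = abs(x - Decimal("0.1")) < epsilon   # epsilon comparison instead of ==
print(y)                                # True
```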

### How to work with limited precision

An important note about finite precision floating point numbers (binary and decimal alike) is that they are an approximation of the actual value you are storing. Hence, they must always be rounded before being considered. For example, when storing `34.65 / 10` in a double, you are actually storing `3.464999999999999857891452847979962825775146484375`, which must first be rounded to 15 decimal digits before it can be considered, i.e. it must be rounded to `3.46500000000000`. Hence, if we now round to 2 decimal digits, to get a currency amount, we get the correct `3.47`.

The same problem happens with decimal floating point: `3.465/27*27 = 3.464999999999991`, which if rounded directly to two digits gives `3.46`. The correct way is to round once to get the actual number, `3.465000000`, and then again to get the rounded number, `3.47`.
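The double case can be sketched in Python (illustration only; Python's `decimal` module is used to inspect the exact stored binary value, and half-up rounding is an assumption for the currency step):

```python
from decimal import Decimal, ROUND_HALF_UP

x = 34.65 / 10     # a binary double; its exact stored value is slightly below 3.465
print(Decimal(x))  # prints the exact stored binary value

# Wrong: rounding the stored value directly to 2 digits
print(Decimal(x).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))  # 3.46

# Right: round to the 15-digit actual value first, then to 2 digits
actual = Decimal(x).quantize(Decimal("1.00000000000000"))
print(actual)                                                        # 3.46500000000000
print(actual.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))      # 3.47
```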

The following is an elaboration with examples. The parts of the process are:

- Decide on a precision
- Enter the value and do calculations
- Stored value
- Get the actual value using the precision
- Round the value

#### Decide on a precision

First, decide on a precision that the final calculated value will have. This depends on

- the precision of the underlying hardware, which in this system is 15 digits.
- the number of calculations you are doing and their type.

For example, we decide on 10 decimal digits.

The field of mathematics with the theory for this is called the *Calculus of Errors*.

For more information, see arithmetic expressions.

#### Enter the value and do calculations

This is the value you give to the program. Almost exclusively, numbers are written in decimal, even when they are stored in binary.

For example:

- Example 1:
`34.65 / 10`

- Example 2:
`3.465 / 27 * 27`

#### Stored value

The value stored, e.g. in memory or on disk.

For example:

- Example 1 (53 binary digits):
`3.464999999999999857891452847979962825775146484375`

- Example 2 (15 decimal digits):
`3.464999999999991`

#### Actual value

After calculations have been done on the stored value, we want to read out the value to use it. The first stage is to calculate the actual value. This means that we round to the precision we have determined is sufficient. For the example we are following, we determined that 10 digits was sufficient.

The actual value is stored in the same number system as it was entered, in this example decimal.

- Example 1:
`3.465000000`

- Example 2:
`3.465000000`

#### Rounded value

After the actual value has been determined, we can do further calculations if necessary. In this example, we are calculating with money, so we want to round so we get two digits after the decimal point.

- Example 1:
`3.47`

- Example 2:
`3.47`
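The whole process for Example 2 can be sketched in Python (illustration only; the intermediate stored digits differ slightly from the ones above because Python's `decimal` module rounds intermediate results to 15 significant digits, and half-up rounding is an assumption for the currency step):

```python
from decimal import Decimal, getcontext, ROUND_HALF_UP

getcontext().prec = 15        # decide on the hardware precision: 15 digits

# Enter the value and do calculations (Example 2)
x = Decimal("3.465") / 27 * 27

# Stored value: slightly below 3.465, because 3.465/27 was rounded
print(x)                      # 3.46499999999999

# Actual value: round to the chosen precision of 10 digits
actual = x.quantize(Decimal("1.000000000"))
print(actual)                 # 3.465000000

# Rounded value: round to 2 digits for a currency amount
print(actual.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))   # 3.47
```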