Sisense supports floating-point numbers (IEEE 754 standard) and allows you to perform an arithmetic calculation on these numbers.
Floating-point numbers suffer from a loss of precision when represented with a fixed number of bits (e.g., 32-bit or 64-bit). This is because there are infinitely many real numbers, even within a small range like 0.0 to 0.1. Irrational numbers, such as π (pi) or √2 (the square root of 2), and non-terminating rational numbers (like 0.333...), must be approximated, so the results of calculations performed by the computer's Central Processing Unit (CPU) must be rounded.
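For example, even the seemingly simple decimal value 0.1 has no exact binary representation, so the nearest representable value is stored instead (illustrated here in Python, whose `float` is a 64-bit double):

```python
# 0.1 cannot be represented exactly in binary floating point;
# printing extra digits reveals the stored approximation.
print(f"{0.1:.20f}")  # 0.10000000000000000555
```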
It is important to note that due to this loss in precision, you should almost never directly compare two floating-point numbers. A better way to do it is to compare numbers with some precision epsilon.
if (a == b) – problematic!
if (Abs(a – b) < epsilon) – correct!
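A minimal version of this comparison can be sketched in Python (the `nearly_equal` name and the tolerance value are illustrative, not part of Sisense):

```python
EPSILON = 1e-9  # tolerance; choose a value appropriate for your data's scale

def nearly_equal(a: float, b: float, epsilon: float = EPSILON) -> bool:
    """Compare two floats within an absolute tolerance instead of using ==."""
    return abs(a - b) < epsilon

print(0.1 + 0.2 == 0.3)              # False: direct comparison fails
print(nearly_equal(0.1 + 0.2, 0.3))  # True: epsilon comparison succeeds
```

Python's standard library also provides `math.isclose`, which uses a relative tolerance by default and is often a better fit when the magnitudes of the compared values vary widely.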
Floating-point numbers lose precision even when you are working with seemingly harmless numbers such as 0.2 or 76.5. Be extra careful when performing a large number of floating-point operations over the same data, as errors can build up quickly. If you are getting unexpected results and suspect rounding errors, try a different approach to minimize them. To do that, please consider the following:
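One such error-minimizing approach is compensated summation, which carries the rounding error of each addition forward instead of discarding it. A sketch of the Kahan–Neumaier variant in Python (the function names and the sample data are illustrative):

```python
def naive_sum(values):
    """Straightforward accumulation: rounding error is thrown away each step."""
    total = 0.0
    for v in values:
        total += v
    return total

def compensated_sum(values):
    """Kahan-Neumaier summation: track the rounding error of each addition."""
    total = 0.0
    compensation = 0.0  # accumulates the low-order bits lost to rounding
    for v in values:
        t = total + v
        if abs(total) >= abs(v):
            compensation += (total - t) + v  # low-order bits of v were lost
        else:
            compensation += (v - t) + total  # low-order bits of total were lost
        total = t
    return total + compensation

data = [1.0, 1e100, 1.0, -1e100]  # true sum is 2.0
print(naive_sum(data))        # 0.0 -- both 1.0s are absorbed and lost
print(compensated_sum(data))  # 2.0
```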
In the world of floating-point arithmetic, multiplication is not associative:
a * (b * c) is not always equal to (a * b) * c.
Additional measures should be taken if you are working with extremely large values, extremely small values, or values close to zero: in case of overflow or underflow, those values will be transformed into +Infinity, -Infinity, or 0. The numeric limits for single-precision floating-point numbers are approximately 1.175494e-38 to 3.402823e+38.
Sisense production deployments that include more than one CPU use parallel calculation algorithms to speed up calculations, spreading the arithmetic operations across several CPUs. Because each CPU rounds its intermediate results independently, and the order in which partial results are combined can vary, the total arithmetic result can differ from run to run. In other words, on a single CPU the result is an approximation but is the same every time, while the same calculation on a multi-CPU system may produce slightly different results each run. This is normal behavior and should be addressed by using an epsilon when comparing results from different runs.
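The underlying cause is that floating-point addition, like multiplication, is not associative: a reduction that combines partial sums in a different order can land on a different rounded result. A Python sketch of the effect and the epsilon-based fix:

```python
import math

# Two reduction orders over the same data, as two CPUs might produce:
left_to_right = (0.1 + 0.2) + 0.3
right_to_left = 0.1 + (0.2 + 0.3)

print(left_to_right)   # 0.6000000000000001
print(right_to_left)   # 0.6

# Bitwise equality fails, but an epsilon comparison treats them as equal:
print(left_to_right == right_to_left)              # False
print(math.isclose(left_to_right, right_to_left))  # True
```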