I have a handler that loads two separate floating-point numbers into two separate variables. The actual code snippet of interest is:
Code:
put fixtureLength / boardLength into x --286.65 / 11.025
put fixtureLength div boardLength into y -- 286.65 div 11.025
The division gives 26 in x.
The div gives 25 in y.
May I assume there are tiny differences somewhere around the last decimal place that make these two different? I tore my hair out looking for a bug in my code, until I happened to try the division in place of the div, just for grins, and now all is well. Except my understanding of how computers actually work.
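For anyone who lands on this thread later: the usual explanation is that neither 286.65 nor 11.025 is exactly representable in binary floating point, so the stored quotient comes out a few parts in 10^16 below 26. An operator like div that truncates toward zero then yields 25, while a display that rounds shows 26. Here is a small sketch in Python, which uses the same IEEE-754 doubles (that LiveCode also computes with doubles is my assumption):

```python
import math

# Neither operand is exactly representable in binary:
# 286.65 is stored slightly low and 11.025 slightly high,
# so the quotient of the stored values lands just under 26.
q = 286.65 / 11.025

print(q)              # just below 26 (25.999999999999996 on doubles)
print(math.floor(q))  # 25 -- truncation, the div-style result
print(round(q, 6))    # 26.0 -- what a rounded display shows
```

So both results are "right": the truncating operator and the rounded display are simply looking at the same slightly-too-small number in two different ways.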
Craig Newman