In the example below, I divide an integer by a literal and put the result into a float. The float does not show the result as a decimal. Should it?
No. I suspect you’ll see what you expect when you change the literal 2000 to 2000.0.
This is one of the more subtle points of programming in OptoScript. The compiler sees a numerator that's an integer and a denominator that's an integer, and gives you an integer for the result (even though you're assigning it to a float).
Thanks for the insight.
This has bitten me in debug so many times that I now, out of habit, put '.0' on the end of just about every number I use.
(Unless it's a flag or a count, i.e., I KNOW it's going to be an int from now until the end of time.)