I've had calculations not come out as I expected in the past -- it happens more often than I'd like to admit. The trick I use to sort it out: break the formula down into smaller pieces for debugging. Since your literals didn't originally include a decimal point, but you were dealing mostly with floats, I figured that might be the problem, so I confirmed it by putting just that part into both an Int32 and a Float32. The OptoScript compiler's rules for when/how it does conversions behind the scenes aren't always what I've expected or assumed initially.
In general, it is a good habit to try to keep the data types consistent throughout, as you suggest.
Anyway, for literals just adding a .0 at the end is all you want to do for this situation. I hope I didn’t mislead you with that Int32ToFloatBits command (which, BTW, is new in 9.2 so you won’t see it in 9.1). It’s not exactly a type-casting command like those you might be used to in other programming languages.
For example (and I wasn’t going to get this detailed, but now I’m on a roll!) if you had the following OptoScript:
(1) fFakeData = 1/2;
(2) fFakeData = 1.0/2.0;
(3) fFakeData = Int32ToFloatBits(1)/Int32ToFloatBits(2);
The first line (1) would yield 0 (this is what you ran into), because the division happens on the two integers, and the integer result of 0 gets converted to a float AFTER the division.
The second line (2) yields the 0.5 result you were after in your example.
And (3) will get you… (guesses, anyone?) a divide by zero error in your queue and a QNAN in your float. Not what you expected? I had to think about that a bit and use this IEEE float converter: http://www.h-schmidt.net/FloatConverter/IEEE754.html
to figure out that 0x00000001 / 0x00000002 as floats works out to be: 1.4E-45 / 2.8E-45. In other words, your denominator is a really small value, smaller than what's supported by our 32-bit floats. (For more info, see this Using Float Tech note.)
Clear as mud? Anyone? Should we make a little animated video to show how these types all work?