Totalizing 101: On-Time totalizer pump example

While automating the control of a set of pumps, a customer wanted to keep track of how long each pump had run so far, then use that information to decide which pump to turn on next: the least-run pump.

The perfect place for one of our many “intelligent I/O” features: the On-Time totalizer! (This comes in both digital and analog flavors, FYI.)

Once a point is configured with that feature, it’ll keep track of the amount of time that point is on… up to a maximum of 4.97 days, when the memory map value for that point rolls over. Neat-o!

This particular customer had a PAC controller in the mix too, and wanted some sample logic for using this feature in his strategy. My solution involves three parts, which happen in a loop.

[INDENT]Part 1: Get & Restart the on-time totalizer for each pump we’re monitoring, and add those values to running totals kept in a persistent float table.

Part 2: Loop through the elements in that persistent float table and find the lowest.

Part 3: Copy the index of that lowest table element into a value called nNextPumpToUse, then delay for a while before starting over… (a sketch of this loop follows)[/INDENT]
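
For those who’d like to see the shape of that loop outside of PAC Control, here’s a rough Python sketch (illustration only; the real strategy uses action blocks and OptoScript, and get_restart_on_time_totalizer is just a hypothetical stand-in for the Get & Restart On-Time Totalizer command):

```python
import time

NUM_PUMPS = 4

def get_restart_on_time_totalizer(pump_index):
    # Hypothetical stand-in: in the real strategy this reads the seconds the
    # point has been on since the last call, then clears the totalizer.
    return 0.0  # placeholder so the sketch runs

running_totals = [0.0] * NUM_PUMPS   # the "persistent float table"
nNextPumpToUse = 0

while True:
    # Part 1: fold each pump's totalizer value into its running total
    for i in range(NUM_PUMPS):
        running_totals[i] += get_restart_on_time_totalizer(i)

    # Part 2: find the index of the lowest running total
    lowest = 0
    for i in range(1, NUM_PUMPS):
        if running_totals[i] < running_totals[lowest]:
            lowest = i

    # Part 3: publish the winner and delay before looping
    nNextPumpToUse = lowest
    time.sleep(1)  # every looping chart needs a delay (see question 2 below)
```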

Note that in the sample pictured and attached below, I used regular (mostly rectangular) Action Blocks, but I also included some (hexagonal yellow) OptoScript blocks for Parts 1 & 2, for those of you who’d like to see how the equivalent OptoScript code looks.


Note: since I’m doing a Get & Restart On-Time Totalizer and adding those values to cumulative totals in a persistent float table, I don’t have to worry about losing the values in the event of a power cycle. I also won’t have to worry about those values rolling over after 4.97 days of on-time for each pump.

Also note that if I maximize my point inspect window there in the debugger, I can see the X- and I-Vals for the (not-yet-cleared-since-last-time) On-Time totalizer values in real time.

Okay, extra bonus points (and prizes!) for the first to post correct answers to these questions:

  1. What else does this customer need to do to his strategy to make sure it keeps running if/when the power blips?
  2. How long should the delay in the loop be, and why is it important for all your looping charts to have a delay?
  3. Now that those times (in seconds) are safely accumulating in that persistent float table, how long until we have to worry about roll-over (or loss of resolution) there, and how might we make that a longer time?

Do share & totally happy totalizing!

-OptoMary

p.s. Here’s the strategy…
9.4BasicPumpOnTimeMonitor.Archive.D10212015.T122515.zip (5.74 KB)

  1. To make sure it works through power failures: be sure to start this chart from the ‘Powerup’ chart, and also be sure the AutoRun flag is set on the controller. Also make sure your accumulators are configured as persistent variables.

  2. How long should the delay be? The exact length isn’t critical: if the loop runs terribly fast, it’ll just pick up a zero time (or something incredibly small) for each pass on the running pumps. As for why every looping chart should have a delay, I’m not sure it’s strictly required, but a delay gives other (possibly more important) things more processor cycles, so they can operate or react faster. If you’re counting seconds of runtime, there’s no need for this loop to run more than once per second.

  3. Though the manual doesn’t explicitly specify the range of a float (page 216 of form 1700, the PAC Control User’s Guide), it does say they’re IEEE single-precision, which gives a max range of about +/- 3.4e38, a Quite Large Number in anyone’s reckoning. The math on numbers this huge gets weird on calculators, but best I can figure, if you totalize seconds, that’s 10790283070806014188970529154990 years (roughly 1.08e31), which will probably do. If for some reason your pump lasts longer than that, you could check for rollover and carry over into a second table.
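
For what it’s worth, here’s that back-of-the-envelope check in Python (my arithmetic, assuming a 365.25-day year):

```python
# Roughly how many years of totalized seconds fit in a single-precision float?
FLOAT32_MAX = 3.4e38                   # approximate IEEE 754 single max
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31,557,600 seconds
print(FLOAT32_MAX / SECONDS_PER_YEAR)  # ~1.08e31 years, same ballpark as above
```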

However, I should note that before I did all this, I’d try to talk my customer out of this bad practice. If he ‘runtime balances’ his pumps, he may delay his inevitable overhaul for a year or three, but when it does come, he’s going to have four worn-out pumps instead of one or two. His maintenance budget will take a huge hit in a single year, rather than spacing overhauls out at, say, one a year or two every two years. Also, with four worn pumps, you don’t really have a reliable standby if one fails due to wear; in this case, they’re all about to go. Best practice is to use your primaries, test-run your standbys regularly, then rotate standbys to primaries when it’s time to overhaul the most worn pump. :slight_smile:

Wow, excellent points, Todd! I’m not 100% sure why he needed to know which pump had been run the least (made some assumptions there, and you know what THAT can lead to). But I’ll make sure he gets this valuable advice.

My question on floats was a bit tricky, and as you mentioned, things “get weird” when you start to try to understand them in too much detail or push their limits. That’s why we came up with [U][B]this handy tech note on floats[/B][/U] which I hope will help clarify where one might run into rounding errors, etc. (I came up with a different timeframe and method for avoiding those rounding errors sooner rather than later.) Hoping others will chime in too! In the meantime, I’ve emailed you for your address info so I can send you some fun OptoStuff…

Thanks for the great reply!!

The largest consecutive integer in an IEEE single-precision float is 16777216 (16,777,220 according to the tech note, interestingly), so you lose one-second resolution at that point, which arrives after ~194 days of accumulated runtime. Converting to minutes (good for over 31 years) or even hours (19 centuries, and the Opto stuff is still under warranty, right?) prior to adding to the persistent float table would work much better.

but…

The delay for this should be long, due to the [B]inability to add a tiny-tiny float to a big-big float[/B]. The tech note goes into this quite well.
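
To make both of those effects concrete, here’s a little Python demonstration that rounds values through single precision with struct (illustration only; nothing here is PAC Control code):

```python
import struct

def f32(x):
    """Round a Python float to IEEE 754 single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Largest consecutive integer: 2**24 is exact, but one more second is lost.
print(f32(16777216.0 + 1.0) == 16777216.0)  # True: 16,777,217 rounds away

# That's about 194 days of one-second resolution:
print(16777216 / 86400.0)                   # ~194.2 days

# Tiny-tiny plus big-big: a 0.1 s sample vanishes into a large total.
total = f32(20e6)                           # ~231 days' worth of seconds
print(f32(total + 0.1) == total)            # True: the 0.1 s is simply dropped
```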

In this application it probably isn’t that important if you lose an hour of runtime from the accumulated data due to power failure, reboot from powerup, or an IO unit/module failure (gasp!), so getting and clearing the accumulators could be done on an hourly or daily basis.

If tracking precise runtime over long periods is important, then floats may be the wrong choice, and integers, for lack of another sensible option, would be better.

Ah! I shall check that “largest consecutive integer” in the tech note. Thanks for mentioning it. I’m also thinking of adding QNaNs to that tech note, since those often trip people up: if you get one in the mix while doing something like this cumulative function (or an average), you can wipe out all your data with one forgotten sanity check!
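
For anyone who hasn’t been bitten by that one yet, here’s a minimal Python sketch of the failure mode (sample values made up):

```python
import math

samples = [12.5, 7.0, float('nan'), 3.25]  # one QNaN hiding in the data

# Without a sanity check, the NaN poisons the whole accumulator:
total = 0.0
for s in samples:
    total += s
print(total)        # nan: the cumulative data is gone

# With the forgotten sanity check restored:
total = 0.0
for s in samples:
    if not math.isnan(s):
        total += s
print(total)        # 22.75
```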

I’m glad you mentioned [B]choice of data type[/B] here, philip. At the brain level, this value is actually stored in the mem map as an [I]un[/I]signed 32-bit int in units of 1/10th of a millisecond (that’s where the 4.97 days comes from).

If one were to read from the mem map and accumulate the total in an [B]int32[/B], that would give us:
2,147,483,647 max int32, counting our 0.1-millisecond ticks
/ 10,000 = 214,748.3647 seconds
/ 60 = 3,579.139… minutes
/ 60 = 59.65 hours
/ 24 = 2.4855… or about [B]1/2 of the promised 4.97 days[/B], because here we’re using a SIGNED int (and therefore toss out about 1/2 of our values, the negative ones). Not helpful, but at least we’re seeing where that 4.97 days came from.

Instead we could just use an [B]INT64[/B], which would get us:
9,223,372,036,854,775,807 max int64, counting our 0.1-millisecond ticks
/ 10,000 = 922,337,203,685,477.5807 seconds
/ 60 = 15,372,286,728,091.29… minutes
/ 60 = 256,204,778,801.52… hours
/ 24 = 10,675,199,116.73… days
/ 365.25 = approx. [B]29,227,102 years[/B], which should about cover it.
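
If you’d like to double-check that arithmetic, here’s the whole derivation in a few lines of Python (0.1 ms ticks, 365.25-day year):

```python
TICK = 0.0001  # one 0.1-millisecond tick, in seconds

print(2**32 * TICK / 86400)                 # unsigned 32-bit: ~4.97 days
print((2**31 - 1) * TICK / 86400)           # signed int32: ~2.49 days
print((2**63 - 1) * TICK / 86400 / 365.25)  # int64: ~29,227,102 years
```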

If we did the read & clear every 4 days, with a max of 0.1 milliseconds of error each time, I think we’d still be [B]< 0.1 seconds of drift per decade[/B]! Whoo hoo! I’m guessing that would be accurate enough, with no need to worry about floats. Although I [I]do[/I] like the root beer kind. Oops, sorry, corny float joke!
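
Here’s the same kind of sanity check for that drift figure (again, just my arithmetic):

```python
# One read-&-clear every 4 days, each losing at most 0.1 ms:
reads_per_decade = 10 * 365.25 / 4      # ~913 cycles in ten years
print(reads_per_decade * 0.0001)        # ~0.091 seconds worst case, < 0.1 s
```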

Anyway, my point here: it’s nice to have PAC Control do some mem map and float math for you, but sometimes, if you need better resolution and/or accuracy, it’s good to have the option of getting “closer to the metal” and accessing the values in the mem map directly.