Latch states?

What is the benefit of using latch states over just reading the point? Looking at the User Guide (bad cookie detection), I can’t see any benefit over just reading the detector directly. What is a real-world example of the usefulness of latch states? It seems to require an extra step in programming (reset, get/clear, etc.).

Thanks,
Nick

Latches come in handy for capturing high-speed events locally at the rack brain, which monitors the specific D/I. The controller can then get/clear the latch at a slower rate. A good example of how we use them here is monitoring multiple voltages at our main switchgear buses to detect power brownouts or short outages that cause our motors to trip. Using the high-speed latches and get/clear instructions, our plant recovers automatically, which is very helpful when summer Houston thunderstorms disrupt our power grid!
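To make that get/clear pattern concrete, here is a rough Python sketch (not OptoScript; the BrainIO class and its method names are made-up stand-ins for the brain’s on-latch feature and the controller’s get/clear read):

[CODE]
class BrainIO:
    """Toy model of an I/O brain that latches any ON transition on a digital input."""
    def __init__(self):
        self.state = False        # live input state
        self.on_latch = False     # set the instant the input goes ON, and stays set

    def input_goes_on(self):
        self.state = True
        self.on_latch = True      # captured at the brain, at full I/O speed

    def input_goes_off(self):
        self.state = False        # the latch stays set even though the event is over

    def get_and_clear_on_latch(self):
        latched, self.on_latch = self.on_latch, False
        return latched

brain = BrainIO()

# A brief voltage dip comes and goes between two controller scans:
brain.input_goes_on()
brain.input_goes_off()

# The strategy's chart, looping much more slowly, still catches it:
if brain.get_and_clear_on_latch():
    print("Brownout detected - run the auto-recovery logic")
[/CODE]

The point is that the latch is set by the brain the instant the input turns on, so the controller’s slower loop can still see that the event happened.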

Thanks for the example, PilotMan!

So Nick, I’d say in general folks like to use latches if they’re worried about missing something. For example, if you have a button attached to an input and a human presses it very quickly. Your looping logic (which should almost ALWAYS have a delay in it) may read that input just before/after the button press, missing it.

Instead of checking the current state over and over in a tight loop, you can let the brain handle the constant checking (using the latch feature) and have the strategy logic, at a more leisurely rate, see if the state has latched since the last time we looked…
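A rough illustration with made-up numbers: if the chart samples the input every 100 mSec and the press lasts only 20 mSec, the press can fall entirely between two scans.

[CODE]
# Hypothetical numbers: the chart reads the input every 100 ms,
# but the button press lasts only 20 ms.
press_start, press_end = 1.250, 1.270              # seconds
scan_times = [t / 10 for t in range(0, 30)]        # 0.0, 0.1, 0.2, ... 2.9

seen_by_polling = any(press_start <= t <= press_end for t in scan_times)
print("Polling saw the press:", seen_by_polling)   # False - it fell between two scans
[/CODE]

With the brain latching the input, the chart only has to ask “has it been ON at any point since I last cleared the latch?”, which is true no matter when the scan happens relative to the press.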

Make sense?

Why should looping logic almost ALWAYS have a delay? If the loop is long enough, doesn’t that create its own delay? What is an ideal time to go through a loop?

Nickvnlr,
Check out the throughput doc;
http://www.opto22.com/site/documents/doc_drilldown.aspx?aid=3965

It shows how adding a delay to any chart frees that chart from the round robin and thus allows all the other charts and the host task to run more often.
Even a delay of 1 msec will cause the chart to drop out for one pass. (But honestly, you could (should) delay longer than that in most applications.)

The ideal delay time is determined by your application. An E-stop chart will have a smaller delay than, say, HVAC (air-con).
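If it helps, here is a rough Python sketch of the round-robin idea (assuming one 1 msec slice per running chart plus one for the host task, as in the throughput doc discussion; the numbers are illustrative, not measured):

[CODE]
def round_trip_ms(running_charts, host_task=1, slice_ms=1):
    # Time between two consecutive turns for any one chart still in the round robin.
    return (running_charts + host_task) * slice_ms

print(round_trip_ms(6))   # 7 ms: all 6 charts looping with no delays
print(round_trip_ms(5))   # 6 ms: one chart is sitting in a delay, so it takes no slices
print(round_trip_ms(3))   # 4 ms: three charts happen to be in delays at the same time
[/CODE]

While a chart sits in a delay it takes no slices at all, which is why the remaining charts come back around sooner.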

I have seen that before. I kind of understand what’s being said, but at the same time, if every chart running gets a 1 mSec slice of processor time/round, what difference does it make if I add a delay?

Does the delayed chart get a suspended status or does it still get polled (eating up processor time) to check the status/count-down of the delay timer?

If a strategy has 9 charts running (10 mSec round-robin) what difference would a 5 mSec delay make if it takes 10 mSec to get back around to that chart? Would it need to be an 11+ mSec delay to make any difference (forcing it to skip 1+ rounds)?

In my particular situation I don’t think the standard rules/best practices necessarily apply:
I am only running 6 charts simultaneously (so I’m looking at a theoretical 7 mSec round-robin, OK); for the most part they have been designed pretty efficiently (no small loops looking for a status change; they all run straight through and then restart the loop); and they need pretty precise timing to start/stop things in the appropriate position.

I’m not trying to be difficult or obstinate. I am just trying to understand the situation better so I can make better code.

Thinking of it in terms of absolute time is confusing for me sometimes, so I usually think of it in relative time. If you have 6 looping charts (+ a host task) and there are no delays, each chart gets only 1/7th of the processor time (14%). If I have some charts that have more critical timing requirements, by putting delays in the other charts I can increase the percent of processor time allocated to the critical ones, making them more responsive.

There is a limit, of course, to the minimum cycle time a chart can achieve, even if it has 100% of the processor (it depends somewhat on coding technique).

I use timers to embed a chart cycle-time monitoring variable in each of my looping charts, to keep an eye on the balance between them.
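For what it is worth, here is a hedged Python sketch of that monitoring idea (in a real strategy this would be a timer or timestamp variable updated at the top of the chart’s loop; the 5 mSec delay is just an example):

[CODE]
import time

last_pass = time.monotonic()
chart_cycle_ms = 0.0               # the monitoring variable you watch while debugging

while True:                        # the chart's main loop
    now = time.monotonic()
    chart_cycle_ms = (now - last_pass) * 1000.0   # time since the previous pass
    last_pass = now

    # ... the chart's real work would go here ...

    time.sleep(0.005)              # the loop's delay (5 mSec here, purely illustrative)
[/CODE]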

Nickvnlr,

Anytime any chart hits a delay command, that chart is [I]pulled out[/I] of the round robin for the duration of that delay.
Thus there is one less time slice in the round robin, and the whole cycle is shorter for the charts still in it.
They will have their commands executed faster.
With 6 charts, when one hits a delay, it is as if there are now only 5 charts for the duration of that delay in the 6th chart.
Those 5 charts are going to step through their instructions faster because they each get a turn at the CPU more often.

I like how sensij worded it: think of it as relative time rather than absolute time. The charts can all hit their delay commands at different times, so they drop in and out of the round robin at different times; now and then perhaps 2 or 3 of them might all be in a delay, and then the round robin has only half the number of charts to share the time slices among.

Please keep asking questions till it is clear. I love that you are working on making better code, and OptoMary and I are always looking for ways to make this clear and actionable.

OK so here is my dilemma:

I only have 6 charts because I am trying to keep my chart count down (increasing dedicated CPU time per chart, yes?) and to hedge space for future (planned) expansion. So I have combined somewhat independent functions into single charts. They are related, but each needs to run regardless of what is happening elsewhere. There are some functions that could handle 1+ Sec delays, but the next function section of the chart needs to continue running. Would I be better off splitting them into multiple charts, with some having delays, or leaving it as is?

For example:

[attached chart screenshot]

The four different sections of this chart need to run at (basically) the same time. Independently they could probably handle a short delay, but together in the same chart, if I put a delay into one section, another section might miss an input. If I split this into 4 charts and give each a 2 mSec delay, have I gained anything, since that would increase the total number of time slices and decrease the dedicated CPU time per chart from 14% to 9%?

I have 2 other charts like this, so you can see my dilemma, since I am limited to 16 charts on my current hardware. Honestly, looking over my strategy, I could probably combine a few more charts. Would this be good or bad?

Without looking more closely at it, I’d say that by moving the functions that can tolerate running less often into a different chart (with a longer delay), you will get better average response time on the functions you want to run more often. That is true especially if the 1+ sec delay functions have complicated stuff in them (string manipulation, communication).

Here is another way to look at it. Let’s say you have five charts, each with 5 functions, and each function takes the same amount of time to execute. With no delays (and ignoring the host task), you could expect all your functions to each get 1/25th of the processor.

Now, move 5 of those functions (one from each chart) into a new slow chart, looping once per second. Most of the time, your critical functions now get 1/20th of the processor, reducing their cycle time by 20%. Once per second, they only get 1/25th, no worse than they did before, even with the extra chart running.
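A quick back-of-the-envelope check of those numbers (equal-cost functions, host task ignored):

[CODE]
functions_per_chart = 5
charts = 5

slots_before = charts * functions_per_chart          # 25 equal slots per round
slots_after = charts * (functions_per_chart - 1)     # 20 slots while the slow chart delays

print(1 / slots_before)   # 0.04 -> each function gets 1/25th of the processor
print(1 / slots_after)    # 0.05 -> 1/20th while the slow chart sits in its delay
print(100 * (slots_before - slots_after) // slots_before)   # 20 (% shorter cycle time)
[/CODE]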

Edit:
One other minor point to add… if the overall loop timing isn’t too important, but you absolutely want Functions A, B, and C within a chart to execute sequentially as fast as possible, you could add a 1 ms delay before function A. That would force the chart to give up its time slice, so that when it gets its turn next, it has the whole 1 ms to get through A, B, and C. Without the delay there, the amount of time slice left when beginning A would be random, and it could run out before getting all the way through C, introducing a relatively long and maybe unpredictable delay into the critical function sequence.
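A conceptual sketch of that trick (Python, not OptoScript; yield_remaining_slice() is a made-up stand-in for the 1 ms delay placed before Function A):

[CODE]
def run_critical_sequence(yield_remaining_slice, func_a, func_b, func_c):
    yield_remaining_slice()   # hand back the unpredictable remainder of the current slice
    func_a()                  # ...so A, B and C begin on a full, fresh slice and have
    func_b()                  # the best chance of completing back-to-back without the
    func_c()                  # chart being preempted mid-sequence

# purely illustrative usage:
run_critical_sequence(lambda: None,
                      lambda: print("A"), lambda: print("B"), lambda: print("C"))
[/CODE]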