Minimizing communications with I/O


#1

In writing a strategy I have always set a flag whenever I turn on an output. Then, in a decision block (before turning it on again), I check whether that flag is set, and if it is, I do not turn that output on again. I have done this to minimize communications with the I/O in order to speed up the running of the program. Is this an exercise in futility? It adds extra blocks and variables to the program. Am I being counter-productive with this practice? The chart would be simpler if I just kept hammering the I/O with on and off commands instead of adding all of these extra decision blocks and corresponding variables.
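To make the trade-off concrete, here is a minimal Python sketch of the two chart styles being compared. This is a toy model, not PAC Control code: `write_output`, `turn_on_guarded`, and `turn_on_hammer` are made-up names standing in for the chart's output command and decision blocks.

```python
# Hypothetical model of the two chart styles. `write_output` stands in
# for the controller-to-I/O command; it is NOT a real Opto 22 API.

io_writes = 0  # count how many commands actually reach the I/O

def write_output(state):
    global io_writes
    io_writes += 1  # each call models one packet to the I/O unit

# Style 1: guard every "turn on" with a flag so we never re-send.
flag_on = False
def turn_on_guarded():
    global flag_on
    if not flag_on:          # the extra decision block in the chart
        write_output(True)
        flag_on = True

# Style 2: just hammer the I/O every pass through the chart.
def turn_on_hammer():
    write_output(True)

for _ in range(5):
    turn_on_guarded()   # only the first call actually writes
for _ in range(5):
    turn_on_hammer()    # every call writes

print(io_writes)  # 1 guarded write + 5 hammered writes = 6
```

The guard cuts the write count, but at the cost of one extra flag and one extra decision per output, which is exactly the complexity being questioned.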


#2

Short answer to your questions: probably.

But I’d like to know why you feel the need to "speed up the running of the program."
Also, I’m guessing your controller and I/O are far apart (e.g. not an -R1 or -R2)? Do you have to pay for every packet that goes across the network from the controller to the I/O?

I’d like to hear more about the big picture, but I suspect you're barking up the wrong tree, and possibly hunting when you could relax and fish, if you’ll forgive the tormented metaphor here…


#3

This is more of a philosophical question regarding the designing of charts to control Opto 22 equipment, as opposed to trying to solve an existing problem. I am trying to establish my standard approach to writing strategies for Opto 22 controllers. Where I work we have competing opinions about this from more experienced programmers, and I am trying to decide what approach I am going to take. So I figured I would get a recommendation straight from Opto 22.

In this case I am using an R1 on the same rack as the I/O. Let me rephrase my question. Is there a compelling reason I should put in the necessary programming to avoid commanding an output (that is already on) to turn on?


#4

I try hard to use best practices to keep my programs efficient. I have an R1 + rack like you, and I started with the extra blocks and decisions for all of the outputs as you described. More recently, I have been drifting toward just telling the output to turn on and turn off, regardless of whether it is already on or off. I am not seeing a difference.

However, in most cases I still maintain a variable that follows the state of the output. That is useful when I wish to know the status of the outputs for other reasons, and frequently a single i32 can be used, with each bit corresponding to an output. The special case of all outputs off (i32 = 0) can be checked in a single conditional instead of polling each output, or the state of a particular combination of outputs can be checked with an appropriate bitmask tested against the i32.
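The bit-per-output idea above can be sketched in a few lines of Python. The variable and function names here are illustrative, not from any real strategy:

```python
# Sketch: track 32 output states in one 32-bit integer,
# one bit per output (bit 0 = output 0, bit 1 = output 1, ...).

output_states = 0  # i32 mirror of the outputs; 0 means "all off"

def set_output(bit, on):
    """Update the mirror whenever an output is commanded on or off."""
    global output_states
    if on:
        output_states |= (1 << bit)    # set the bit
    else:
        output_states &= ~(1 << bit)   # clear the bit

set_output(3, True)
set_output(7, True)

# Single test for "all outputs off" instead of polling each point:
all_off = (output_states == 0)

# Check a particular combination (outputs 3 and 7 both on) with a mask:
mask = (1 << 3) | (1 << 7)
combo_on = (output_states & mask) == mask

print(all_off, combo_on)  # False True
```

The same mask test works for any combination of outputs, which is the appeal of keeping the mirror in one integer.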


#5

From a software engineering perspective, you don’t want to make your code (strategy) more complicated than it needs to be. Time spent writing the project, debugging it, and maintaining it in the future becomes more costly when we try to optimize for the sake of optimization.

So I would read and write directly to the I/O unit every single time, UNLESS there was a compelling reason not to. On an R1, I can’t think of any compelling reason when dealing with its own local I/O.

Take a look at the Wikipedia entry on optimization, and read down to the section “When to Optimize.”

The key phrase here is on premature optimization:

“Premature optimization” is a phrase used to describe a situation where a programmer lets performance considerations affect the design of a piece of code. This can result in a design that is not as clean as it could have been, or code that is incorrect, because the code is complicated by the optimization and the programmer is distracted by optimizing.


#6

Interesting question. I understand the logic behind it and look forward to hearing an official response from Opto, but I can tell you this… the controller can handle it. We have designed sites with 10+ EB2s and a single S1 with large databases, with no extra consideration for whether the outputs are already on/off, and they run great. We just “hammer” away. Of course it’s added traffic to the network, but our networks have had no issue with it. My guess is that there may be some benefit to doing it the way you are now, but that those benefits do not come close to justifying the amount of work it takes to implement.


#7

Thanks, Philip/sensij/Nate, great answers!

That’s one of the fun things about OptoStuff, lots of ways to solve problems depending on your specific needs.

A few more thoughts… I’d make sure you’re doing a couple of things (which wouldn’t apply so much for an R1, but also wouldn’t hurt):

  • Hopefully you're already doing this: Make sure you have some kind of I/O unit re-enable logic in place in case your I/O is somewhere else. Here's the Opto 22 PSG-recommended [sample to get you going](http://www.opto22.com/site/downloads/dl_drilldown.aspx?aid=3604).
  • Leverage the intelligence in the I/O unit. Here's a white paper on the topic of [Intelligent Remote I/O](http://www.opto22.com/documents/2119_Intelligent_Remote_IO_white_paper.pdf). For example, as described in [this post](http://www.opto22.com/community/showthread.php?t=373), instead of doing a "Turn On," waiting a little, and later doing a "Turn Off"... use Start On-Pulse.

Those suggestions go more toward reliability (e.g. in the event of communication loss between the controller and the remote I/O), vs. speed or the related, broader, and more complicated topic of throughput/performance.
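A toy Python model can show why an I/O-resident pulse is more robust than controller-side timing. `Brain`, `start_on_pulse`, and `tick` are stand-ins invented for this sketch, not the real brain firmware or API; the point is only that with a pulse, one message commits the whole on-then-off sequence:

```python
import time

# Toy model: controller-timed pulse vs. an I/O-resident "on-pulse".

class Brain:
    """Stand-in for the I/O unit; keeps running even if comms drop."""
    def __init__(self):
        self.output = False
        self._off_at = None
    def turn_on(self):
        self.output = True
    def turn_off(self):
        self.output = False
    def start_on_pulse(self, seconds):
        # One message: the brain itself guarantees the output comes back off.
        self.output = True
        self._off_at = time.monotonic() + seconds
    def tick(self):
        # The brain's own scan loop, independent of the controller.
        if self._off_at is not None and time.monotonic() >= self._off_at:
            self.output = False
            self._off_at = None

brain = Brain()

# Controller-timed pulse: two messages. If comms die between them,
# the output is left stuck on.
brain.turn_on()
comms_lost = True
if not comms_lost:
    brain.turn_off()
stuck = brain.output            # True: output never turned off

# I/O-resident pulse: one message; the brain finishes the job itself.
brain.start_on_pulse(0.01)
time.sleep(0.02)
brain.tick()
print(stuck, brain.output)      # True False
```

The failure mode on the controller-timed side is exactly the communication-loss scenario the reliability suggestions are aimed at.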

FYI, there are more ways to push logic down into the I/O (e.g. using Events/Reactions), but those are neither simple nor easy to maintain, though they are an option if you’re on the bleeding edge of what your hardware can handle. Ditto options like using an R1 or R2 instead of an EB1 or EB2 and then turning off the control engine in that R1/R2 so it’s a super-duper EB1 or EB2.

Often, though, there are better/simpler ways of improving speed/throughput depending on where the bottleneck might be. For example, a certain green submarine originally had some higher-density modules in the system, which are not as fast as the equivalent 4- or 2-channel analog input modules. So spec your system carefully and look at those “Data Freshness” times when selecting modules.

One last example: in another part of that submarine system, they had a bunch of serial (mostly Modbus) devices routed through serial I/O modules instead of the built-in serial port(s) on the controller. Moving the most time-sensitive communications to that on-board serial was super easy to do, just a re-config of the comm handle.

In any case, that’s why my short answer is often: “it depends” and why I’ll ask for the big picture view. Good question, keep 'em coming!

-OptoMary


#8

Great answers!! Thank you all for helping me resolve this issue going forward.


#9

I have been using the same method for all my I/O, which involves creating one chart to handle reading and writing all of the I/O using MOVE NUMERIC TABLE TO I/O UNIT EX and MOVE I/O UNIT TO NUMERIC TABLE EX. This chart runs with only a 10 ms delay, so all of my I/O values are always up to date. Of course, all of the I/O points are assigned to variables, so the other charts that control them write only to the variables and not to the I/O. This has improved overall controller loop times on my larger strategies by over 50%.
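The dedicated I/O chart described above can be sketched in Python. Everything here is a stand-in: `read_io_table` and `write_io_table` model the two table-move commands named in the post, and the thread plays the role of the chart running alongside the control logic.

```python
import threading
import time

# Sketch of the "one I/O chart" pattern: a single loop mirrors all
# inputs into variables and all commanded outputs back to the I/O,
# so the rest of the strategy touches only variables.

physical_inputs  = [0.0] * 16   # pretend field hardware
physical_outputs = [0] * 16

input_mirror  = [0.0] * 16      # variables the other charts read
output_mirror = [0] * 16        # variables the other charts write

def read_io_table():
    # Stand-in for MOVE I/O UNIT TO NUMERIC TABLE EX: one block read.
    input_mirror[:] = physical_inputs

def write_io_table():
    # Stand-in for MOVE NUMERIC TABLE TO I/O UNIT EX: one block write.
    physical_outputs[:] = output_mirror

def io_chart(stop):
    while not stop.is_set():
        read_io_table()
        write_io_table()
        time.sleep(0.010)       # ~10 ms loop, as in the post

stop = threading.Event()
t = threading.Thread(target=io_chart, args=(stop,))
t.start()

physical_inputs[0] = 3.14       # a field value changes...
output_mirror[5] = 1            # ...and control logic sets an output
time.sleep(0.05)                # let the I/O chart run a few passes
stop.set()
t.join()

print(input_mirror[0], physical_outputs[5])  # 3.14 1
```

Two table moves per pass replace dozens of individual point reads and writes, which is where the loop-time savings come from.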


#10

I try to keep charts as simple as possible and just hammer away at the I/O.

I keep all Opto stuff on its own subnet, with the exception of the groov and the controller that I need to communicate with, but they still communicate with each other on the Opto 22 subnet.

Other than network traffic, is there any physical issue with continuing to turn on I/O that is already on? Will the modules wear out more quickly?


#11

The controller's internal speed is so much greater than the time it takes to reach the I/O that if you need speed, writing to the I/O in block reads is by far the most efficient approach. On the other hand, the absolute simplest way of writing code in Opto is direct comms to the I/O, and if the program is not large and does not require < 100 ms loop times, then this is acceptable. A former Opto employee claims he always programs the R1 this way and does not see a problem. My take on this is that if your program turns out to be bigger and required to be faster than you thought, changing it is a PIA. Therefore I generally stick with the same method of block reads/writes.

As far as checking your writes goes, I always assume that if I previously turned an output on, it is on, so you can rely on the state of the internal I/O variable to check status for internal condition commands. This saves considerable time. You can also (although it makes the code more complex) create a separate set of like-named variables just for reading the state of the outputs, versus the vars used to write them. This lets you check the actual status, but I consider it completely unnecessary.

I asked Opto early on whether using the extra Ethernet port was faster and was told essentially no. I suspect this is because there is only one CPU, so the two Ethernet stacks each get only one time slice. Also, most people do not appreciate how fast 100 Mbps is. Since the I/O is transmitted via UDP, the time required to send a small packet is in the low microseconds, and each bit on the wire takes only 10 nanoseconds. Using both ports for the sake of speed is not necessary unless there is a lot of traffic on the network you are using. Make sure the network is clean first.
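The arithmetic behind those wire-time numbers is easy to check. This quick Python calculation assumes a minimum-size 64-byte Ethernet frame for a small UDP packet and ignores preamble and inter-frame gap:

```python
# Back-of-envelope wire time on 100 Mbps (Fast) Ethernet.

link_bps = 100_000_000          # 100 Mbps = 10^8 bits per second
bit_time_ns = 1e9 / link_bps    # nanoseconds per bit

packet_bits = 64 * 8            # minimum Ethernet frame, 64 bytes
packet_time_us = packet_bits * bit_time_ns / 1000

print(bit_time_ns, packet_time_us)  # 10.0 ns/bit, 5.12 us/packet
```

So a small command packet occupies the wire for only a few microseconds, consistent with the claim that one port is rarely the bottleneck on a clean network.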

Whether or not to create a separate chart for I/O depends on your strategy. My personal opinion is that if that chart is running fast enough, it may not matter; otherwise it can affect timing. Also, it is one more chart running all the time at high slice rates, using up more CPU time than including that logic in your main chart would. Chart switching takes time, so I use one chart if possible, with the I/O read at the top of the loop and the write at the bottom. This is the most efficient approach and guarantees sequential operation, making it much easier to troubleshoot. Remember, this is a single CPU, so everything happens in a single thread in terms of CPU time.
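The read-at-top, write-at-bottom structure described above looks like this in a short Python sketch. `block_read` and `block_write` are illustrative stand-ins for the table moves, and the threshold logic is made up just to have something in the middle:

```python
# Sketch of the single-chart pattern: block-read inputs at the top of
# the loop, run the control logic on that snapshot, block-write outputs
# at the bottom. Sequential, so each pass is easy to reason about.

inputs  = [0.0] * 8
outputs = [0] * 8

def block_read():
    return list(inputs)          # stand-in for one I/O table read

def block_write(cmds):
    outputs[:] = cmds            # stand-in for one I/O table write

def main_loop_once():
    snap = block_read()              # 1. read everything once
    cmds = [1 if v > 2.0 else 0      # 2. logic works on the snapshot,
            for v in snap]           #    never on live I/O mid-pass
    block_write(cmds)                # 3. write everything once

inputs[2] = 3.5
main_loop_once()
print(outputs)  # [0, 0, 1, 0, 0, 0, 0, 0]
```

Because the logic only ever sees the snapshot taken at the top of the pass, the whole pass behaves deterministically, which is what makes troubleshooting easier.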

I do advocate the use of block reads and direct writes. This simplifies the code, since the number of output writes needed is much smaller than the number of input reads. You can also read the status of the outputs with every read cycle, so there is no hit on CPU time, and you can check the last status of an output internally. Unless the strategy is large and requires speed, this is a good method.

Using an i32 for status flags is very efficient and does provide a means of checking the status of all 32 I/O points at one time; however, I find that I do that very seldom, so I find like-named variables (reflecting the I/O) much easier to use, and an i32 will only hold one 32-point module, which complicates things. If you are not using the Excel method of generating the rack-table-to-variable script (and back), you should check it out. It makes it easy to create a perfect set of script commands to load/unload the read/write tables for the rack; once you have completed the spreadsheet, you can paste the scripts for all 1000 I/O vars into a block and get a compile with no spelling errors.
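The spreadsheet trick above is essentially simple code generation. A minimal Python sketch of the idea, where the point names, the table name `io_read_table`, and the emitted syntax are all illustrative rather than real PAC Control output:

```python
# Sketch of the "Excel method": given a column of point names from a
# spreadsheet, generate the script lines that copy a rack read table
# into like-named variables. Names and syntax here are made up.

point_names = ["Pump_Run", "Tank_High", "Valve_Open"]

lines = [f"{name} = io_read_table[{i}];"
         for i, name in enumerate(point_names)]

print("\n".join(lines))
# Pump_Run = io_read_table[0];
# Tank_High = io_read_table[1];
# Valve_Open = io_read_table[2];
```

Because every line is generated from the same column of names, a strategy with hundreds of points compiles with no hand-typed spelling errors, which is the whole point of the method.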

One last thing: if you have done your homework and cannot get enough speed, you can use SoftPAC, which I suspect runs at greater than 10 times the speed of the S1. At that point the bottleneck becomes the EB1/EB2 and the speed of the I/O modules.