SoftPAC Speed as Related to Windows

I just ran into a problem where I have SoftPAC running for my programming and testing, using a simulation chart. For a number of weeks it was running at what I thought was far too slow for SoftPAC. It was also running at exactly the same speed (312 ms +/- 0.1 ms) regardless of what I did to it. For instance, I cut out 640 temperature alarm checks, and there was no difference in the chart speed, none. I also know how fast SoftPAC is (roughly 100 times faster than an S1) and have seen it run a massive quantity of code along with 64 charts while still running in the single-digit milliseconds. Of course, all of this is subject to the Windows operating system (unfortunately). I ran this through support but have not (yet) gotten the answer I wanted.

As for the problem above: magically, the exact same program is now running at 41 ms…
We (all of us who apply this to the real world) need to know the specific things that affect the speed of SoftPAC when setting up a PC to run it.

Paragon (not all of you will recognize this name; it came from NEMATRON) handled this by creating another OS that took control of half of the PC's CPU resources and removed the Windows OS completely from access at the hardware level.

My first stab at this is that you MUST MUST MUST use delays in EVERY chart.
I know, I know, super obvious, and I know you have enough experience to know this, Barrett, BUT in SoftPAC on Windows this is beyond critical. Every single loop or path through EVERY chart MUST have at least one delay.
Without one, the chart will try to give 'er all she's got, Captain, and you take a pretty nasty speed hit.
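The effect can be sketched outside of PAC Control entirely. This is plain Python, not OptoScript, and both loop functions are invented for illustration; it just shows how a loop with no delay burns its entire time slice, while even a 1 ms sleep hands control back to the OS:

```python
import time

def busy_loop(duration_s: float) -> float:
    """Spin without yielding; returns CPU time consumed."""
    start_cpu = time.process_time()
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        pass  # no delay: this loop eats every cycle the OS will give it
    return time.process_time() - start_cpu

def polite_loop(duration_s: float, delay_s: float = 0.001) -> float:
    """Same wall time, but yield with a short sleep each pass."""
    start_cpu = time.process_time()
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        time.sleep(delay_s)  # the 1 ms delay hands the slice back to the OS
    return time.process_time() - start_cpu

if __name__ == "__main__":
    print(f"busy loop CPU time:   {busy_loop(0.2):.3f} s")
    print(f"polite loop CPU time: {polite_loop(0.2):.3f} s")
```

On a typical machine the busy loop consumes nearly its full wall time in CPU, while the sleeping loop consumes almost none; that is the same reason a SoftPAC chart without a delay starves everything else on the box.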

Well, I thought I did, and yes, I have at least a 1 ms delay in each chart. Actually, two charts have a 1 s delay, and the other two share the same adjustable delay, which I have set anywhere from 1 to 15 ms. The strategy runs at exactly 312 ms regardless of whether that delay is 1 or 13 ms; above that, the time starts to increase as expected. In fact, the other chart that uses the same delay runs at the same speed even though it has much less to do. Then yesterday it magically dropped to 41 ms and stayed like that until this morning (I left it running all night), and now it is back to 312 ms… I put the PC into performance mode in Control Panel > System > Performance settings, and the PC is running like lightning, but the chart is still running at 312 ms. Very weird.

I sent this in to Randy (#106373), and he saw some of the same things, but not all.

Obviously, SoftPAC is dependent on the OS here, but I have to believe there is some way to set things up or otherwise prevent the OS from parceling out resources to the point where it affects normal operational speed. I also think that, considering the resources available, SoftPAC could be the means for a whole new set of super-fast I/O and possibly servo control capabilities that could match the best servo systems out there. For any of that to happen, there needs to be a means of restricting the OS.

I find this interesting as well. I have a "Test" control engine running SoftPAC on a Windows 10 PC. I also have a sample PAC Display running on the same PC, and there is an R1 processor connected as well. The display program only communicates with the SoftPAC control engine; SoftPAC sends values to the R1 via Scratchpad.
What I have been experimenting with is what affects the speed of the whole system. Right now the loop time on the SoftPAC control engine (when you inspect it) is < 0.500 ms (meaning < 500 microseconds). Since I have a 1 ms delay in both charts that are running at the moment, how can the loop time be under 500 microseconds? Or am I misunderstanding something about how these items communicate?

The com loop time is the PING time plus the host task response time.

So that means you always have network delay showing up in that PAC Term inspect com loop time number. No matter what charts are running or what they are doing, there is the distance between the PC running PAC Term and the controller.
For example, my buddy in Australia: when I inspect his controller with PAC Term, it is always ~200 ms.
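As a trivial Python sketch (the helper name and the split of the numbers are my assumptions, not measured values), the quoted figure is just the sum of the two parts:

```python
def com_loop_time_ms(ping_ms: float, host_task_ms: float) -> float:
    """Com loop time as described above: the network round trip (PING)
    plus the time the controller's host task takes to respond."""
    return ping_ms + host_task_ms

# Hypothetical split of the ~200 ms Australia number:
# a ~195 ms round trip plus a lightly loaded host task.
print(com_loop_time_ms(195.0, 5.0))  # → 200.0
```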

If your charts have delays, if you only have a few charts, and if you are not doing a lot with the host task (comm handles, PAC Display, OPC, groov, etc.), then you may well see a com loop time less than the delays in any chart.

To see your best com loop time (ONLY do this if it is safe to do so), stop your charts. There is a button right there in the PAC Term inspect box. What this does is give PAC Term roughly 100% of the host task, so you will see your network delay and not much else (PAC Display and groov aside).

The fun part for you as a programmer is if you have, say, a 200 ms com loop time, and 20 ms when you stop the charts.
If you see this, you know your network PING times are good, but something in your strategy is bogging things down.
Remember, adding delays speeds things up.
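That subtraction can be written down as a tiny diagnostic. This is a Python sketch; the function name, and the assumption that the stopped-charts reading is a pure network baseline, are mine:

```python
def strategy_load_ms(loaded_ms: float, stopped_ms: float) -> float:
    """Estimate how much of the com loop time the strategy is adding.

    stopped_ms: com loop time with all charts stopped (~ network delay).
    loaded_ms:  com loop time with the strategy running.
    """
    return max(loaded_ms - stopped_ms, 0.0)

# Ben's example: 200 ms running, 20 ms with charts stopped,
# so roughly 180 ms of that is the strategy hogging the host task.
print(strategy_load_ms(200.0, 20.0))  # → 180.0
```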

Ben, from an efficiency standpoint, would it be correct to say that more frequent, smaller delays are better than a single large delay? For example, in a chart that loops back to the begin block at its end, would it be more efficient to have 20 delays of 10 ms peppered strategically throughout the chart versus a 200 ms delay in the begin block and 1 ms delays at the end of each block in the chart?

It depends on what that chart, and to some extent your other charts, are doing.

The whole purpose of the delay is to free up the chart's time slice so that either the other charts get a bigger slice or the host task gets a bigger slice.
If the host task is not heavily loaded, then adding one delay, or lots of little ones, will not make much of a difference.

But, to try and answer your question: put delays where they make sense. If you are doing a lot of I/O reads and writes, add them around that section. If you are doing a lot of comm handle work, add them in that area.

Keep in mind that a comm operation like getting I/O is effectively a chart delay, i.e., a chart suspend while it waits for a response. I think the reason Ben says to add delays there is to guarantee that other charts get a chance to access the comm stack.
Remember, the main thing regarding speed in PAC is making sure you are accessing only internal variables in the main body of your logic. If you're writing large strategies, limit how often you get I/O. The internal processing speed is very fast (considering the age of the platform), and by making sure you are not going out and getting I/O throughout your logic, you will soon discover that a surprisingly large amount of that logic gets processed in one time slice. Generally, I only put in a delay to keep the chart from running faster than the process needs (although you should put in at least a 1 ms delay). Actually, I usually limit the loop speed to either the process needs or the operator interface needs, so that when an operator pushes a button on the screen, he gets a response quickly enough that he doesn't try to push it again. I like to think this needs to be a turnaround of less than a second. So if I limit the chart to 250 ms and the process doesn't require anything more, then I'll add a delay to achieve that.
Also, if you don't go nuts adding charts, you don't need to give up as many time slices in the main logic. I stick to the theory that less is more; fewer charts also mean easier troubleshooting (in most cases).
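That pacing idea can be sketched in Python (not OptoScript; `paced_loop` and its parameters are invented for illustration). Each pass does its work, then sleeps off whatever is left of the target period, with a 1 ms floor so the loop always yields:

```python
import time

def paced_loop(period_s: float, do_work, passes: int) -> list:
    """Run do_work once per period, sleeping away the leftover time.

    Mirrors capping a chart at, say, 250 ms: the delay each pass is
    the period minus the work time, but never less than 1 ms so the
    chart always gives up its time slice.
    """
    cycle_times = []
    for _ in range(passes):
        start = time.monotonic()
        do_work()
        worked = time.monotonic() - start
        time.sleep(max(period_s - worked, 0.001))  # 1 ms minimum yield
        cycle_times.append(time.monotonic() - start)
    return cycle_times

if __name__ == "__main__":
    # Light work paced at 50 ms: every cycle lands at roughly 50 ms.
    times = paced_loop(0.05, lambda: sum(range(10_000)), passes=5)
    print([round(t, 3) for t in times])
```

Because the sleep shrinks as the work grows, the cycle time stays near the target period whether the work is light or heavy, which is exactly the steady operator-facing response time described above.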