Alternatives to Persistent Variables

Hello all… My company has been using Opto products for a very long time, and we’re in the process of updating old LCM-4 controllers to Groov EPIC controllers. One thing I’m struggling with is persistent variables.

On the LCM-4 controllers, using OptoControl, persistent variables didn't exist. We use recipe files, configured for upload and download in OptoDisplay, to save variable values across downloads. There are benefits and drawbacks to this method. The biggest benefit is that the recipe file can retain variable values in situations where PAC Control loses persistent variable values: clearing controller RAM, a failed/cancelled download, changing table lengths, etc. The drawback, really one huge drawback, is that it relies on OptoDisplay Runtime. OptoDisplay Runtime is incredibly slow to process recipe files, regularly corrupts the files, and at times crashes and therefore doesn't save or restore data at all.

I like the concept of persistent variables in PAC Control, but there are drawbacks, some of which I already mentioned. If I change a table length, the persistent data is gone. Some of the things that cause persistent data to be lost should be quite rare, but other events (like changing table lengths, downloads being cancelled or failing) are not so rare, and I don’t want to lose data in those cases. Perhaps there’s an argument to be made that my programming needs to be better so that those things don’t happen, but that’s another discussion.

My production people are on my case about developing a strategy to make sure that persistent data is as foolproof as possible, which is why I'm here. For those of you who have been using PAC Control longer and have had access to the better tools that come with Groov EPIC controllers (Node-RED, MQTT, etc.), does anybody have a good strategy they could share with me?

One other reason I'm interested in this topic is that we're considering other HMI options (Groov View), so I'd really love not to depend on PAC Display for recipe data if possible.

There is the Opto Tag Preserve utility. I'm not sure if it will work after changing a table length; I've never tested that scenario. It would be great if this were built into PAC Control … losing persistent variables on a cancelled/failed download is extremely annoying.

One thing I like to do for all persistent variables (since you can't set a default) is to keep a persistent flag in the strategy that is checked on powerup: if it is 0, I initialize ALL persistent variables to sensible defaults and then set the flag to 1.

For those constantly changing values, you could set up your strategy to save the persistent variables to the PAC's file system using a file comm handle and then download them for backup. Then have the PAC check for this file when the strategy starts.

There are a number of ways this could be handled; looking forward to seeing what others do here.

Nothing is foolproof. If you start there, it's then a matter of working out how to make the backups livable.

The way I manage my backups and thus variables is to use Opto Tag Preserve.
Since I have a Windows PC in the mix (yuck, but a necessary evil that comes with its own set of issues), I run Node-RED on it, and since Opto Tag Preserve is a Windows-only app, it all sort of works together.
Use the exec node to run it.

You can put a button in OptoDisplay/PAC Display/groov View, or even just use a timer, to trigger a backup.
By putting it in Dropbox I get another layer of resilience.
The backup files are date/time stamped, so you can quickly see when each was captured.
I also do as Philip said and have a persistent variable in the powerup chart; if it is cleared, don't run the strategy until a restore takes place.
Taking snapshots of a live system is tricky, as a variable you want to keep a record of might change 100 milliseconds after you take the backup, but nothing is foolproof…

@beno: Thanks for your thoughts. I’ve been looking into OptoTagPreserve, and it’s definitely in the running. Here’s what I’ve come up with as an experiment over the past few days:

My company's Opto coding standards dictate prefixing every variable name with its type, separated by an underscore. So variables that begin with i_… are integers, f_… are floats, s_… are strings, and so on. I wanted to come up with a variation where simply creating a variable matching a specific naming convention would cause that variable to be saved across downloads. For now, I settled on prefixing the type with x. So variables named xi_…, xf_…, etc. should be saved. As a bonus, when I see these variables in my code, the prefix alone tells me it's a persistent variable.
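To make the convention concrete, the check boils down to a simple prefix test, something like this (the regex is just my approximation of the convention, and the variable names are made up for illustration):

// Rough "should this variable be persisted?" test based on my naming convention.
const isPersisted = (name) => /^x[a-z]+_/i.test(name);

isPersisted("xf_TankSetpoint");   // true  -> float, saved across downloads
isPersisted("xit_BatchCounts");   // true  -> table, saved
isPersisted("f_TankLevel");       // false -> ordinary float, not saved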

I set up the following flow in Node-RED on the EPIC:

The top half checks a status variable to determine if I’m past my post-download routine. If I’m past that routine, I make an API call to get all integers, floats, strings, integer tables, float tables, and string tables. I iterate through the keys of the returned objects, determining the variable name prefix for each variable. If the variable prefix matches my pattern, I save the variable value in a JSON file in the EPIC’s filesystem:

{
    "xf_Test": 654324.6
}

It ends up creating a significant number of files, but they’re tiny and it’s easy to figure out what’s in each file. The filename matches the variable name: xf_Test.json.
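For anyone curious, the function node that does the filtering and fan-out is roughly the sketch below. It assumes the upstream http request nodes hand over an array of single-key objects (variable name as the key, current value as the value), and the file path is just an example location, not a recommendation:

// Node-RED function node (save side): keep only the x-prefixed variables and
// emit one message per variable for a downstream write-file node.
// Assumption: msg.payload is an array of single-key objects, e.g.
// [ { "xf_Test": 654324.6 }, { "i_Scratch": 12 } ]
const outgoing = [];

for (const entry of msg.payload) {
    for (const [name, value] of Object.entries(entry)) {
        if (!/^x[a-z]+_/i.test(name)) continue;   // not flagged persistent, skip it

        outgoing.push({
            filename: `/home/dev/persist/${name}.json`,   // illustrative path
            payload: JSON.stringify({ [name]: value }, null, 4)
        });
    }
}

return [outgoing];   // array-of-arrays -> multiple messages out the single output

The write-file node that follows is left with its filename field blank so it picks up msg.filename from each message.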

The bottom part of the flow looks at the same status variable, but becomes active if I haven't yet completed my post-download routine. In that case, it issues PAC Read commands to get lists of variable names, loops through the resulting object keys to find variables that should be restored, reads the JSON files, and PAC Writes the values back to the controller before setting the finished flag.
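Here is a minimal sketch of the restore-side function node, assuming an upstream read-file node supplies msg.filename and the raw file contents in msg.payload; the type mapping and the message properties are my own conventions, matched to how my downstream write nodes are configured, not anything built in:

// Node-RED function node (restore side): one saved JSON file in, one write request out.
const saved = JSON.parse(msg.payload);          // e.g. { "xf_Test": 654324.6 }
const name = Object.keys(saved)[0];

// Crude type hint derived from my naming convention; the downstream node uses it
// to pick which variable endpoint to write to. Table types would extend this map.
const typeByPrefix = { xi: "int32s", xf: "floats", xs: "strings" };
const prefix = name.split("_")[0].toLowerCase();

msg.varName = name;
msg.varType = typeByPrefix[prefix] || "unknown";
msg.payload = { value: saved[name] };           // body shape my write node expects
return msg;

Once the last file has gone through, that branch sets the finished flag back in the strategy, as described above.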

In addition to the Node-RED flow, I have a cron job running on the EPIC to rsync the entire directory of JSON files to a remote file server. The directory on the remote file server contains a Git repository that is automatically committed and pushed every 5 minutes. The remote file server is RAID 6 and part of a real-time high-availability cluster, so data is pretty safe once it's there. That file server is backed up locally and to the cloud every night, in addition to the data published to GitHub.

My biggest question is how this solution will scale. It works great on a fairly small strategy, but I’m sure it’ll require some tweaks for a larger strategy.


I think it would scale just fine other than the large number of files it creates. I’m sure with a bit more work you can get the flow to save it all to a single file.

I like this.

@philip: I could definitely do a single file, but I see benefits to both options. With individual files, I can easily check GitHub to get an approximate timestamp of the last change to a variable's value. I know I could do that for a single line in a larger file, too, but it's a bit of extra work.