My company recently purchased several groov RIO modules for a customer project and so far the experience has been really great. I do a lot of PLC and DCS work, and these modules are a great addition to the standard Allen-Bradley, Emerson, Schneider, etc. offerings, depending on the customer application. We end up leaning on Node-RED to do some calculations and pass data up to a data aggregation system, and it got me wondering: what options are there to manage Node-RED instances across multiple groov RIO modules? Instead of logging into each module and then going to the editor, is it possible to centralize that?

We have tried a single Node-RED instance on the server, using the Opto22 nodes to pull data off each module, and that works well, but I'm concerned about how it will hold up as we scale the system. We'd like the multi-user support and multithreaded processing that are harder to get with a single server-based instance.
I ran into FlowForge, the development platform for Node-RED, which seems perfect for our application (central on-prem server, multiple users, multiple Node-RED instances, etc.). Can the groov RIO modules run the FlowForge agent on each module, with a centralized FlowForge application on the server? Is there a better way to do this using another product that Opto22 offers? We're looking to build this system in a way that it can scale to multiple physical locations but still have a centralized configuration location, so any suggestions are appreciated. I'm not very familiar with the other Opto22 product offerings at the moment either.
As you already know, we are huge fans of Node-RED.
As such we have been following along very closely with all things FlowForge. While we have nothing to announce yet, as you point out, it's a natural fit for both products.
Thanks Beno, I appreciate that Opto22 makes Node-RED so easy to work with. Am I right to assume, then, that the recommended method of working with multiple (30+) RIO modules is to host Node-RED on each one and then connect to each instance individually?
Right now we have 4 RIO modules talking to a central instance on a server and polling the IO every 0.5 seconds or so. I'm a little worried about how that setup will hold up as we scale across buildings. I'm thinking of pushing the frequent IO polling and basic calculations down to each RIO.
We're also looking at groov Server for Windows as an option to tie browser-based HMI screens (groov View) into Node-RED as well, so I think there will always be an instance on the server for those tie-ins and larger calculations.
Your concerns are valid, and it's just a little too soon to say too much about FlowForge on groov hardware.
For now, I’d be looking at using MQTT on each RIO to publish data to a broker.
This way there is no poll/response, and both the RIOs and the main server stay very lightly loaded.
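To make that concrete, here's a rough sketch of what one RIO publishing a point to a central broker looks like. On an actual RIO you'd just enable the built-in MQTT service or drop in an mqtt-out node rather than writing code; the broker address and topic layout below are made up for illustration.

```typescript
// Hypothetical sketch: one RIO publishing a point value to a central broker.
// Broker hostname, client ID, and topic structure are invented for this example.
import mqtt from "mqtt";

const client = mqtt.connect("mqtt://broker.example.local:1883", {
  clientId: "rio-bldg1-01", // one client per RIO
});

client.on("connect", () => {
  // Publish retained at QoS 1 so a late subscriber (e.g. the gs4w box)
  // immediately gets the last known value instead of waiting for a change.
  const payload = JSON.stringify({ value: 72.4, ts: Date.now() });
  client.publish("site1/bldg1/rio01/ai0", payload, { retain: true, qos: 1 });
});
```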
groov Server for Windows could then get the data one of two ways: either configure each RIO as an I/O unit in gs4w and read directly from the RIOs, or run Node-RED on the same Windows PC as gs4w, have it subscribe to the topics on the broker, and put the tags into the groov Data Store.
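For the second option, the Node-RED instance next to gs4w would normally just use an mqtt-in node feeding the groov Data Store nodes. A sketch of the same collector logic outside Node-RED, with the same assumed topic names as above (the actual Data Store write is not shown):

```typescript
// Hypothetical sketch: central collector subscribing to every RIO point on the broker.
import mqtt from "mqtt";

const client = mqtt.connect("mqtt://broker.example.local:1883", {
  clientId: "gs4w-collector",
});

client.on("connect", () => {
  // Wildcard subscription picks up every point from every RIO under this site.
  client.subscribe("site1/#", { qos: 1 });
});

client.on("message", (topic, payload) => {
  const { value, ts } = JSON.parse(payload.toString());
  // In Node-RED this is where you'd map topic -> Data Store tag and write it
  // with the groov nodes; here we just log the update.
  console.log(`${topic} = ${value} @ ${new Date(ts).toISOString()}`);
});
```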
EDIT: To clarify why MQTT. Currently, with the server polling every half second, if a digital point is off, each time you poll it, it's still off. That's a lot of network traffic and CPU load on everything, every half second, just to find out that the point is still off.
Using the MQTT data service on the RIO, point state is only published on change.
Same with analog values: set a deadband for them. By default it's zero, so change it. If you can be honest about just how much analog resolution you really need, you can cut down your traffic quite a lot.
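The RIO's MQTT data service handles this for you, but as a rough illustration of why report-by-exception with a deadband cuts traffic so much compared to half-second polling, the logic amounts to something like this (deadband value and sample numbers are made up):

```typescript
// Hypothetical sketch of report-by-exception with an analog deadband.
// Only changes larger than the deadband ever hit the network.
const DEADBAND = 0.5; // engineering units; pick what you honestly need

let lastPublished: number | undefined;

function onNewSample(value: number, publish: (v: number) => void): void {
  // First sample, or a change bigger than the deadband -> publish; otherwise stay quiet.
  if (lastPublished === undefined || Math.abs(value - lastPublished) >= DEADBAND) {
    lastPublished = value;
    publish(value);
  }
}

// Example: publishes 20.0 (first sample), 20.7, and 19.9; the small moves
// in between never go on the wire.
[20.0, 20.2, 20.4, 20.7, 20.7, 19.9].forEach((v) =>
  onNewSample(v, (published) => console.log("publish", published))
);
```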
(Note, when I say ‘MQTT’ I can just as easily say Sparkplug. In fact, if you can go end-to-end Sparkplug, it's even better, but when you throw in gs4w and other non-Sparkplug apps, I'm thinking vanilla MQTT might be smoother in the short to medium term.)
Sounds like I need to get a trial version of gs4w and test some things out. I definitely want to push as much of the fast polling off onto the RIO modules as possible, and the groov palette makes that a pretty painless move. Thank you very much for all the help!