groov/Epic Multiple Connection Performance

I’m planning a new application in which a groov Box or EPIC will collect data from multiple (a dozen or more) devices to display in groov View and store in a SQL database. Most of these devices will not be Opto, but some may be. I plan on using Node-RED to query the non-Opto devices via TCP messaging and the Opto devices (SNAP PAC) via the RESTful API on a regular basis. The flow will then parse the data into a local data store for groov View and write the data to a SQL database. Has anyone here had experience connecting a single groov Box or EPIC to more than a dozen different devices? I’m wondering about performance loss as I add more devices.
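For what it's worth, here is a minimal sketch of how the polling fan-out might look inside a single Node-RED function node. The device list, tag names, and the SNAP PAC REST endpoint path are all hypothetical placeholders, so verify the path against Opto's PAC REST API documentation before relying on it:

```javascript
// Hypothetical device list; in Node-RED this would likely live in flow or global context.
const devices = [
  { name: "pac1", type: "opto", host: "10.0.0.11", tag: "TankLevel" },
  { name: "plc2", type: "tcp", host: "10.0.0.12", port: 502 }
];

// Build one outgoing message per device. Opto (SNAP PAC) devices get a REST
// request; the others just carry host/port for a downstream TCP request node.
function buildPollMessages(devices) {
  return devices.map(d => {
    if (d.type === "opto") {
      return {
        topic: d.name,
        method: "GET",
        // Endpoint path is an assumption -- check the SNAP PAC REST API reference.
        url: `https://${d.host}/api/v1/device/strategy/vars/floats/${d.tag}`
      };
    }
    return { topic: d.name, host: d.host, port: d.port };
  });
}

// In a function node this would end with `return [buildPollMessages(devices)];`
// so each message is emitted separately on the same output.
const msgs = buildPollMessages(devices);
```

Triggering this from an inject node on a timer would give you the regular polling interval, and the fan-out itself is cheap; the performance question is really about how many concurrent TCP/HTTP requests the box handles downstream.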

Hi Joey, Connecting to multiple devices from groov has not been an issue. Where I am having issues is writing to the SQL database from Node-RED. When you have more than one write to a database, you will probably get “connection closed” errors. I have posted about this a couple of times but to no avail. So before you dive in too deep, you may want to mock up the Node-RED SQL writes and see how they work in your application. You can try adding time delays to the writes so they don’t all hit at once but instead a few seconds apart, or possibly setting up each table in a different database, which seems ridiculous but might work.

Did you have any success creating a separate SQL configuration node for each flow? That should open up a separate connection, but I haven’t tried it myself. Never heard back from you on the old thread. ConnectionError: Connection is closed - #7 by Norm_Freeman1

More info here: How to Insert Table Values Into MS SQL and MySQL as Separate Records - #9 by philip

I did add different connections and it did not help. I have been trying to juggle it with different time delays. I don’t think adding different connections makes them “different” enough for SQL Server to treat them as unique. When you have flows with multiple SQL nodes, and multiple flows, it gets hard to manage. I haven’t had time to dig deeper into the MSSQL node itself on GitHub to see if I can tweak something, although this could possibly be a server issue too. So I love it when the planets align and the time delays are in the right “orbits”. When that happens, it is more reliable than Opto Data Link (ODL). But because I have had issues with both, for critical operations I run both, so that we can get reliable data, with the MSSQL node writing to a different database than the ODL links.

Thank you, that helps.

How about queuing the writes to a local data store table or array (e.g. comma-delimited strings) and then having a separate flow that flushes the queue by writing each record to the database? That way there is only a single connection.
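A minimal sketch of that queue-and-flush idea in plain JavaScript (in Node-RED the queue would sit in global context via `global.get`/`global.set`, and the flushing flow would run off an inject-node timer; all names here are hypothetical):

```javascript
// Simulated global context; in a real flow use global.get("sqlQueue") / global.set(...).
const store = { sqlQueue: [] };

// Producer side: each polling flow appends one comma-delimited record.
function enqueueRecord(store, device, value) {
  const record = `${device},${value},${new Date().toISOString()}`;
  store.sqlQueue.push(record);
}

// Consumer side: a single flushing flow drains the queue and hands each record
// to the one-and-only SQL write path (e.g. wired to a single MSSQL node).
function flushQueue(store, writeFn) {
  let written = 0;
  while (store.sqlQueue.length > 0) {
    writeFn(store.sqlQueue.shift());
    written++;
  }
  return written;
}
```

Because only the flushing flow ever touches the SQL node, the writes are serialized through one connection no matter how many polling flows feed the queue.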

In our case that would not solve the issue because we have multiple controllers writing multiple tables within the same db at different time stamp intervals. But it might work in your case.

It looks like there hasn’t been much activity on the node-red-contrib-mssql node. There are plenty of forks of it; one is published and available in the palette: node-red-contrib-mssql-plus. It looks like it supports connection pooling, so maybe you can give that a try and see if it works for you.

@Joey1 - sorry to hijack your thread

That’s alright. Pretty relevant stuff. I used the MSSQL node in an unrelated project and it worked. I included instructions for the customer on setting it up, but they installed a different node which used connection pooling. The issue there was that the connection remained open after writing the data, so when it went to write again, it threw an error reporting that the connection was already open. I assume the node was trying to reopen the connection every time even though it was already open. I didn’t know how to resolve that, but after discovering they were using the wrong node, I pointed them toward the MSSQL node (which I had tested) and that worked. I believe that node doesn’t use pooling: it makes a connection, does its business, then closes the connection every time.

Just to post an update: I managed this by having my flows write transactions to global arrays, one array per flow. A single SQL flow reads the arrays and creates stored procedure calls. These are split into multiple messages and fed to a single MSSQL node. That seems to work. I don’t know if I could have called the SQL node directly from each flow; given the connection issues others have mentioned, I didn’t bother. One other thing: the regular node-red-contrib-mssql node didn’t work. I ended up using node-red-contrib-mssql-plus-box instead. This is on a physical groov AR1 box.
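The arrays-to-stored-procedure step above might look roughly like this in the single SQL flow's function node. The array names, procedure names, and parameters are hypothetical placeholders for whatever your flows actually queue (and real code should use parameterized queries rather than string interpolation):

```javascript
// Simulated global context: one transaction array per producing flow,
// as described above. In Node-RED these would come from global.get(...).
const globalCtx = {
  flowA: [{ tank: 1, level: 72.5 }],
  flowB: [{ pump: 3, state: 1 }]
};

// Drain each flow's array and turn every queued transaction into one
// stored-procedure call message for the single MSSQL node.
function buildProcCalls(ctx) {
  const msgs = [];
  for (const rec of ctx.flowA.splice(0)) {
    // NOTE: interpolation shown for brevity only -- prefer parameterized queries.
    msgs.push({ payload: `EXEC usp_InsertTankLevel @tank=${rec.tank}, @level=${rec.level}` });
  }
  for (const rec of ctx.flowB.splice(0)) {
    msgs.push({ payload: `EXEC usp_InsertPumpState @pump=${rec.pump}, @state=${rec.state}` });
  }
  return msgs; // in a function node: return [msgs] to emit one msg per call
}
```

`splice(0)` both returns and empties each array, so records can't be written twice; since everything funnels through one MSSQL node, there's still only a single connection in play.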