MQTT Connection and Latency Issues

I’m trying to do a proof of concept on an Ignition architecture that uses Ignition Edge and MQTT on AR1s to send machine data to a cloud instance of Ignition. The cloud Ignition gets the data, but it quickly (1–2 s) goes ‘stale’, and a large amount of data is missing from the tag historian.

My customer has a number of remote machines running R1s to gather operational data. In the strategy, I have a counter that increments by one every 4 seconds. The machines all have AR1 groov boxes attached and a cell modem that allows access to this data. The customer wants to merge all of it into a single interface, which I think Ignition makes a really strong case for. With a couple of AR1s I have in my office and an EPIC, I’ve set up Ignition Edge and MQTT Transmission to send out the data. On my computer, I have an Ignition 8.1 gateway set up with MQTT Distributor and MQTT Engine. All the tags imported fine, but I’m getting stale data and very slow update rates.

  • On the AR1 Edge designers, the counter ticks up accurately, so I don’t think the R1 OPC-UA connection is the issue.
  • I’ve used both a Mosquitto broker and the MQTT Distributor broker.
  • I started with an Ignition gateway running in the cloud and switched to doing everything on my local site to rule out the internet connection.
  • One AR1 has significantly longer connection outages despite being on the same network.
  • The EPIC’s Ignition Edge has similar issues to the others.

Any thoughts? Could there be compatibility issues between Edge 7.9 and the 8.1 gateway?


A little more insight - I updated the firmware on one of my AR1s. Upon reconfiguring things for Edge, I started getting immediate and reliable messages. However, after about an hour I noticed the data had slowed to updating once every 8 s or so. Looking at the Edge diagnostics, the memory usage steadily creeps up to around 100 MB before dropping back to 60 MB. I’m not sure if this is typical behaviour. At a glance, the timing of the memory release doesn’t line up with the death/birth messages published over MQTT.

I have been thinking about this one a bit… One thing I am not sure about… You hopefully have done this test and can quickly answer…

When you subscribe to the remote site AR1 topics using MQTT FX, do you see the data updating at the correct rate?
In other words, I am wondering if the lag is on the remote site publish end, or the head office subscribe end?
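For anyone following along, a quick way to answer that question without MQTT FX is a small subscriber that logs inter-arrival times. Below is a minimal sketch in pure Python; the broker hookup (via the paho-mqtt client, with a placeholder Sparkplug topic) is shown only in comments, so the measuring logic itself is self-contained:

```python
import time

class RateMonitor:
    """Tracks message arrival times and reports the average interval."""
    def __init__(self):
        self.timestamps = []

    def record(self, t=None):
        # In a real subscriber you would call this from the paho-mqtt
        # on_message callback, e.g.:
        #   client.subscribe("spBv1.0/mygroup/DDATA/mynode/#")  # placeholder topic
        #   def on_message(client, userdata, msg): monitor.record()
        self.timestamps.append(time.monotonic() if t is None else t)

    def average_interval(self):
        """Mean seconds between messages; None until 2+ messages seen."""
        if len(self.timestamps) < 2:
            return None
        deltas = [b - a for a, b in zip(self.timestamps, self.timestamps[1:])]
        return sum(deltas) / len(deltas)

# Synthetic example: a counter publishing every 4 s should report ~4.0.
# If the subscriber sees ~8.0 here, the lag is on the publish end.
monitor = RateMonitor()
for t in [0.0, 4.0, 8.0, 12.0]:
    monitor.record(t)
print(monitor.average_interval())  # 4.0
```

If this reports the correct interval at the broker while Ignition still shows stale tags, the problem is on the subscribe/Engine side instead.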

Regarding the haphazard sawtooth of the Java garbage collection graph, we see the same thing here, so I don’t think it’s related, but I won’t rule it out just yet.

So in short, we weren’t getting updates at the right rate when two AR1s were connected to the broker.

One of the things we noticed was that if we only had one of the AR1s plugged in, it published just fine. As soon as we put the other one on the network, the reliability issues started, almost as if they were competing with each other. I changed the Client ID in the Advanced settings for the Edge devices, and that didn’t alleviate it. I think we’ll see if putting one on a separate network fixes it, but I’m still surprised that having two on the same network caused these issues.

Ah, Yes, I have seen that exact thing happen here at Opto…
The Client ID can be the same; what HAS to be unique is the combination of Group ID and Edge Node ID.
After that, you can do what you like, but that pair has to be unique for every publishing client.
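To make the rule concrete, here’s a small sketch that checks a set of publishers for duplicate Group ID / Edge Node ID pairs, since Sparkplug identifies an Edge Node by that combination. The client names and IDs below are made up for illustration:

```python
def find_node_conflicts(publishers):
    """Return (group_id, edge_node_id) pairs claimed by more than one client.

    Two publishers sharing the same pair fight over one Edge Node
    session, producing repeated birth/death cycles and stale data.
    """
    seen = {}
    conflicts = set()
    for client, group_id, edge_node_id in publishers:
        key = (group_id, edge_node_id)
        if key in seen and seen[key] != client:
            conflicts.add(key)
        seen[key] = client
    return conflicts

# Two AR1s sharing an Edge Node ID: conflict.
bad = [("ar1_a", "groov", "customer"), ("ar1_b", "groov", "customer")]
print(find_node_conflicts(bad))   # {('groov', 'customer')}

# Same Group ID but distinct Edge Node IDs: fine.
ok = [("ar1_a", "customer", "unit_1"), ("ar1_b", "customer", "unit_2")]
print(find_node_conflicts(ok))    # set()
```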


As you guessed, I was using a shared Edge Node ID between our two AR1s. I had set up our folders as GroupId/EdgeNodeId/DeviceId, assuming the DeviceId would be the part I varied from system to system. groov/customer/unit_1 and groov/customer/unit_2 created a conflict between the two transmissions; customer/unit_1 and customer/unit_2 (no DeviceId) did not.

It looks like we can have a shared Group ID for the customer, so long as the Edge Node IDs are different. I’ve got to tinker around a bit more, but it seemed to work fine that way.

From the Transmission Docs:
