Fastest data rate to Ignition

I have a pressure test application that requires a real-time, high-resolution trend graph and a trend reporting feature fed from a pressure transmitter.

For proof of concept, I have an EPIC set up to generate tag data every 20 ms.

I have a trial version of Ignition 8.1 on a computer near the EPIC.

On Ignition, I tried polling data via the Opto 22 driver; the fastest it can poll is 300 ms.

Using Node-RED, I send MQTT data to MQTT Distributor and on to MQTT Engine; the fastest it can receive data is also 300 ms.

Any other way to stream data into Ignition?

Can anyone share a tool or script for Ignition to write data to a file as it comes in, at a low level (bypassing other input data processing)?

Thank you.

What analog input module are you using?
What is the data freshness specification on that module?

Hi Ben,

I didn’t look at those specs. Last time I checked, the Opto 22 module can go down to a 10 ms refresh rate.

I believe the bottleneck is in the EPIC-to-Ignition communication.

But isn’t 300 ms inter-system throughput still relatively slow for current input module technology?

I found one obvious solution, though I do not know if it is robust:
program Node-RED to write data to a database, and configure Ignition to read it simultaneously.

Can a database handle that kind of punishment?
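One way to sanity-check the database side of that idea is to time a batch of inserts at the 20 ms sample rate. A minimal sketch using Python’s built-in sqlite3 as a stand-in (the table and column names are just placeholders, not anything Node-RED or Ignition requires):

```python
import sqlite3
import time

# In-memory SQLite stand-in for whatever database Node-RED would write to.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (ts REAL, tag TEXT, value REAL)")

n = 1000  # one thousand 20 ms samples, roughly 20 seconds of data
start = time.perf_counter()
with conn:  # one transaction; committing every row would be far slower
    conn.executemany(
        "INSERT INTO samples VALUES (?, ?, ?)",
        ((start + i * 0.020, "pressure", 0.0) for i in range(n)),
    )
elapsed = time.perf_counter() - start

rows = conn.execute("SELECT COUNT(*) FROM samples").fetchone()[0]
print(f"inserted {rows} rows in {elapsed:.3f} s")
```

At 20 ms that is only 50 inserts per second, which most databases can absorb easily if the writes are batched per transaction; a commit-per-row pattern is the usual thing that falls over.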

Make sure you are adjusting the Data Scan Rate setting on the device page on the gateway.

The Opto driver ignores the scan rate of the tag group that is set up in Ignition Designer.

I don’t use anything faster than 1000 ms, so I’m not sure if it will go any faster than what you are seeing. Also, every public tag is read, so the more tags you have, the slower it may go.

Another option would be to use the Modbus TCP driver in Ignition. That will honor the tag group scan interval, and you can set it to read just the tags you need at a faster rate.


The scan rate on the Opto 22 driver can only go down to 250 ms.

I am looking for a solution similar to DMA (direct memory access), if you will, where external software (like Ignition) can access PAC variables natively. Otherwise a 10 ms analog input refresh rate is useless for Ignition if the inter-system throughput is 250 ms.
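To put numbers on that mismatch: if the module updates every 20 ms but the driver polls every 250 ms, most samples never reach Ignition at all. A quick back-of-the-envelope check:

```python
update_ms = 20   # analog input / tag refresh rate on the EPIC
poll_ms = 250    # fastest scan rate seen from the Opto driver

# Updates that occur between two consecutive polls; all but one are lost.
updates_per_poll = poll_ms / update_ms      # 12.5
captured_fraction = update_ms / poll_ms     # fraction of samples Ignition sees

print(f"{updates_per_poll} updates per poll; "
      f"only {captured_fraction:.0%} of samples reach Ignition")
```

So at these rates Ignition sees roughly one sample in twelve, which is why the 10 ms module spec does not help on its own.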

Probably Modbus can do better.
Sometimes I give up on Modbus; I keep forgetting how its naming convention works.
Please, anyone, remind me how to set up a Modbus TCP server via PAC Control.
I gave up on using the Modbus server in Node-RED.

For now I am sticking with Node-RED writing to a database.
If PAC Control could natively write to a database, it would be more efficient.

There is a built-in Modbus TCP server for the IO on the EPIC; you don’t need to do anything in PAC Control unless you want access to variables.

In groov Manage you can go to the IO, IO Tools, MMP Calculator, select the data type, module, and channel, and it will give you the Modbus Unit ID (slave) and address to use:

So to read an analog point on module 1, channel 3, you would set up an Ignition tag with the OPC address [Gateway device name]21.HRF2145, where the gateway device name is what you called the Modbus TCP device in the gateway and the register is one higher than the address (unless you turned on the zero-based addressing check box in the advanced options on the gateway device, in which case you could use 2144).
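That off-by-one arithmetic is easy to get wrong, so here is a small helper that builds the Ignition OPC item path from the numbers the MMP Calculator reports. The `HRF` (holding register float) prefix and the +1 offset follow the example above; the function name and the `"EPIC"` device name are made up for illustration:

```python
def ignition_modbus_path(device: str, unit_id: int, mmp_address: int,
                         zero_based: bool = False) -> str:
    """Build an Ignition OPC item path for a float holding register.

    mmp_address is the Modbus address reported by the groov Manage
    MMP Calculator; Ignition's default one-based addressing needs +1.
    """
    register = mmp_address if zero_based else mmp_address + 1
    return f"[{device}]{unit_id}.HRF{register}"

# Module 1, channel 3 from the example: Unit ID 21, address 2144.
print(ignition_modbus_path("EPIC", 21, 2144))                   # [EPIC]21.HRF2145
print(ignition_modbus_path("EPIC", 21, 2144, zero_based=True))  # [EPIC]21.HRF2144
```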

Again, not sure how fast this will respond, so you will need to test it. Let us know what you find.

This sounds a lot like a job for our MMP library with Python (or C++, etc.).
It’s fast and can easily connect directly to a database.

If it must go through Ignition, then you may also be able to set up Opto 22 MMP streaming from the PR1 (see the OptoMMP protocol guide) to the Ignition gateway and use IA’s UDP driver to parse the data. This would take a bit of work to make happen, though, and Ignition may not be able to process the data at the speeds you are wanting.

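For a feel of what the receiving side involves, here is a sketch of unpacking one streamed UDP packet. This is illustrative only: the real streaming packet layout (header size, byte order, which memory area is streamed) is defined in the OptoMMP protocol guide, so the 16-byte header and big-endian floats below are assumptions, not the actual format:

```python
import struct

HEADER_BYTES = 16  # ASSUMPTION; check the OptoMMP protocol guide for the real header

def parse_stream_packet(packet: bytes) -> list:
    """Strip the assumed header and unpack the payload as big-endian 32-bit floats."""
    payload = packet[HEADER_BYTES:]
    count = len(payload) // 4
    return list(struct.unpack(f">{count}f", payload[:count * 4]))

# Synthetic packet: fake header plus three channel values.
demo = bytes(HEADER_BYTES) + struct.pack(">3f", 1.5, -2.25, 100.0)
values = parse_stream_packet(demo)
print(values)

# A real receiver would bind a socket and loop:
#   import socket
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.bind(("", 5000))   # port configured for streaming (assumption)
#   while True:
#       data, _ = sock.recvfrom(2048)
#       values = parse_stream_packet(data)
```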

So this is the case? It ignores tag group scan rates?
Does it ignore the tag groups altogether?
I gathered that tag groups were the way to create a smaller, more specific set of tags to be scanned, versus having the whole list of tags scanned. So does it ignore this aspect too, or does it only scan tags that are in tag groups?
That begs another question: if the scan rate is ignored in tag groups, can one create more than one OPC UA instance of the Opto driver and create “tag” groups that way?

Yes, the tag group setting is not used by the Cirrus Link driver; we found this out the hard way with large mobile data bills.

If I were looking to do this, I would run a network trace and see what is happening on the wire. It could be that all available tags are refreshed on every interval, so adding another device may just double the traffic and load. I’m not sure how Cirrus Link wrote the driver: maybe they do the sane thing and request only “subscribed” tags, or perhaps they grab every public tag at the refresh interval. I would not be surprised if it is the latter. I only make public the tags that are needed.


I have to say, it sounds like the driver is “half-A%^%”. Gee, if you’re using AB, you get the proper driver, and if you’re using Opto, you have to work around it.
When I bought into this driver, I assumed I was getting the same thing the other vendors were getting. Considering that you have to pay for this driver while the AB driver is free, it ought to be doing something beyond what the AB driver does, not way less.
The tag groups are a major, integral part of setting up Ignition.

Hey @Beno and @bensonh, do you know what it would take for Cirrus Link to update their driver to honor the Ignition tag groups? @Barrett has a legitimate gripe on this one, and I’ve been hit by it as well, not knowing that the driver only goes by the “Data Scan Rate” on the gateway. I’ve spent a lot of time crafting tag groups only for them to be completely ignored, and then my customer was hit with a large bandwidth bill as a result.

In my case we changed the scan rate to 10 seconds, and that has been acceptable in lowering the bandwidth costs, but that isn’t really great performance, as you can imagine.

A normal Ignition driver will not scan a tag unless it is needed by an open HMI screen or a historian request, which greatly lowers the network traffic and the load on the controllers. It would be nice if the Ignition groov EPIC / SNAP PAC driver worked the same way.

I understand that this isn’t a driver built by Opto22, but I’m sure you have some influence with Cirrus Link to “finish” the driver to support all the Ignition features.

Request noted.
If we need more information, we will get in touch.

Philip, I heard through the grapevine that Opto is working on an OPC UA server. Not sure if it is for the EPIC or an update to PAC Project Pro (not the Edge version) that will allow use of the OPC UA Bridge or OPC UA Client in Ignition. If so, that sounds like the answer to our problems… is that true, Beno?

Yes, it’s true. More to say next week, assuming all the automated tests go smoothly over the weekend.


Yippeeee, I will be needing that ASAP, as I am due to do a startup near the end of June.


Phil, apparently there is some confusion regarding this driver. After this post, I contacted Cirrus Link by email and received a call back. The guy I spoke to said that Cirrus had not made the driver to access or interact with tag groups, as you have claimed.
Now I understand that Bryce designed the Opto side with scan classes, which I assume translate to tag groups on the Ignition side.
Will be interested to see how this shakes out.
Also, there is some confusion on the cost of connecting Opto to Ignition with MQTT. I know there is a cheap or even free broker out there, but as I understand it, you still need a driver in Ignition to talk to the broker. Based on my research, it looked as if the Ignition Cirrus Link driver for MQTT was on the order of 3-4k: not a small sum, and certainly a deal breaker if you did not put it in the project. Is this what you found to be the case?

I just set up a test, and I am able to create different scan classes and have the tag respond accordingly.
This is a single data point from PAC Control, with random data generated, running in four different tag groups.

The one big piece I found is that you have to set the Data Mode to Polled for it to recognize the poll rate in the tag group.

If you use Subscribed, it seems to use the data scan rate defined when you set up the device.

Not 100% sure of your architecture, Barrett; it would be super helpful to have that clear.
That said, I think you have a few sites with a PAC R1 at each, and next you are adding an EPIC with Ignition at each site. Not sure why you need Ignition at each site?

If you don’t actually need Ignition at each site, what I suggest is to run the PAC R1 strategy on the EPIC and turn the R1 into remote I/O from the EPIC.
Then you can use the native groov Manage Sparkplug B support to publish the PAC Control tags to a broker.

At the head office, sure, you will need Cirrus Link MQTT Transmission, but it’s only 2k (half of what you quoted in your post just now).