G4EB2 Network Performance

The G4EB2 supports network daisy-chaining. How many of these boards, installed on PB32HQ racks, can be chained together using EtherNet/IP and implicit messaging before performance is lost? I'm sure this is RPI-dependent, so let's say a 10 ms interval.

Douglas,

Network performance is often difficult to estimate because it depends on things like how many devices are on the network, how much traffic there is, what media is being used, and so on. There are, though, some aspects of your question I can comment on.

  • Yes, the G4EB2 supports Ethernet daisy-chaining, but EtherNet/IP “may not”. My experience with A-B is a bit out of date, but EtherNet/IP implicit messaging uses “multicasting”, which at the time I worked with it required managed switches that could turn on “IGMP snooping”. That filtered the multicasts (otherwise the network would eventually bog down). I believe at some point this filtering became an option in the programming software (e.g., RSLogix), so managed switches with IGMP snooping turned on were no longer required. Even so, I'm not sure how it would or would not affect daisy-chaining; maybe that is something you could look into.

  • A-B processors have a specified limit on how many TCP/IP connections they can support, and an EtherNet/IP-speaking device has a limit on how many multicast (implicit-message) connections it can support. Since this is part of the EtherNet/IP protocol, the G4EB2 has the same limit (16 multicast connections). There is a worksheet for both of these in the Opto 22 “IO4AB Users Guide”, starting on page 115, that you may find helpful (https://documents.opto22.com/1909_IO4AB_Users_Guide.pdf).

  • Yes, RPI will affect network performance, along with the number of PLCs, the amount of I/O, etc. FYI, though it is not a published spec, and it depends on the application, my experience with customers using Opto I/O with A-B PLCs (doing implicit messaging) was that something more like 100 ms and above worked best. My recollection is that some were OK down to 50 ms or a bit lower, but I would stay higher if you can.
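A rough budget check along the lines of the IO4AB worksheet can be sketched as follows. This is a minimal sketch, not Opto 22's official worksheet: the 16-connection limit comes from the post above, and the packets-per-second figure assumes the common EtherNet/IP convention of two packets (one produce, one consume) per RPI, i.e. 2000 / RPI_ms packets per second per connection.

```python
# Sketch of an EtherNet/IP implicit-messaging budget check (assumptions noted above).

G4EB2_MAX_IMPLICIT_CONNECTIONS = 16  # multicast-connection limit cited in the post

def packets_per_second(rpi_ms: float, connections: int = 1) -> float:
    """Bidirectional packet rate: 2 packets per RPI per implicit connection."""
    return (2000.0 / rpi_ms) * connections

# Example: 16 brains, each on one connection at a 100 ms RPI
total = packets_per_second(100, connections=16)
print(total)  # 320.0 packets/sec aggregate at the PLC port
```

At a 10 ms RPI the same 16 connections would be ten times that, which is why the RPI you choose dominates the sizing exercise.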

I realize I didn’t answer your question specifically, but I hope I have pointed you in the right direction on a few things to look into.

Hi Arun,

The application will use unicast, which is a UDP-type message directed at the target IP address. I have one G4EB2, and in the network branch I have put I/O both before and after it; the PLC was able to connect to all of it in either configuration. If I am replacing B4 boards with G4EB2 boards, I need some idea of how many nodes I can daisy-chain.

I looked at the doc you mentioned. You can calculate the packet rate per second, but what rate can these cards reliably handle? If I daisy-chain 16 cards, each with an RPI of 100 ms, that is a packet rate of (2000 / 100) * 16 = 320, but that only applies to the first card, the one closest to the PLC, because it handles all the traffic. Doesn't the packet rate drop by (2000 / 100) for each card in this example as you move down the branch to the last card?
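The per-link traffic pattern described above can be tallied in a few lines. This is a hypothetical sketch, assuming unicast implicit messaging (so each daisy-chain link only carries traffic for the cards downstream of it) and two packets per RPI per card:

```python
def link_loads(num_cards: int, rpi_ms: float) -> list[float]:
    """Packets/sec crossing each daisy-chain link.

    Link 0 is PLC-to-first-card; with unicast, link i carries the traffic
    for every card at position i and beyond.
    """
    per_card = 2000.0 / rpi_ms  # 2 packets (request + response) per RPI
    return [per_card * (num_cards - i) for i in range(num_cards)]

loads = link_loads(16, rpi_ms=100)
print(loads[0], loads[-1])  # 320.0 on the first link, 20.0 on the last
```

This matches the reasoning in the question: the first card's upstream port sees the full 320 packets/sec, and each hop down the branch sheds one card's worth (20 packets/sec at a 100 ms RPI).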

An RPI of 100 ms is slow. What is the fastest RPI these devices can handle? In this application some I/O can be that slow, but other I/O will need to run at 20 ms or so. Can these boards handle that?

Sorry for the late reply. I’m going to ping another person here to see if they can help address your questions.

@douglas.charlton : Unicast messaging came out after we released our implementation, so we are not sure about message consumption. The PLC should have statistics on messages available and consumed, so maybe you could do some testing.

On RPI, I realize 100 ms is not ideal for you, but I did confirm that 100 ms would indeed be reliable for a G4EB2.

My take on this is that the two ports on the G4EB2 form a two-port switch, so the local speed at which the card replies to polls should not depend on how many cards you daisy-chain. The two-port switch will simply push the data through to the next card at the 100-megabit network speed.
I get that you're trying to save money by not using switches, but my preference is not to daisy-chain in general. If you're doing a half dozen cards, it's probably not an issue, but beyond that, at some point you will probably start seeing Ethernet congestion, which will of course cascade. Also, without switches, any cable or port problem (e.g., a port knocked out by a surge) will take down the entire system beyond that point. If you are considering a large number of these, use a main 24-port switch and go direct. If the project is big enough, multiple 48-port switches can still go direct.
Switches are low-cost these days, unless you are required to use Cisco, in which case you'll need to rob a bank. The MikroTik CRS354-48G-4S+2Q+RM is a 48-port gigabit copper switch at $549 list.