Opto 22 SNAP PAC power supply and UPS recommendations

We’ve had some issues recently with brown-outs and power cuts and were wondering whether Opto 22 has a recommended solution for providing UPS power to SNAP PAC controllers and brains. We have multiple Opto systems, a typical setup comprising a SNAP-PAC-S1 controller and four SNAP-PAC-EB2 brains on 16-module racks. We’ve also been having issues with voltage drift and with powering multiple racks from a single 3rd-party power supply, and (after reading the 1271 tech note) are considering switching to a separate Opto 22 power supply on each rack.

Would a 24 V DC UPS like this one SOLA offers be suitable to power the controller and the brains (with SNAP-PS5-24DC power supplies), assuming the amperage requirements of the UPS and the brains are met?
http://www.emersonindustrial.com/en-US/documentcenter/EGSElectricalGroup/products_documents/control_power_solutions/power_protection_conditioning/uninterruptible_power_systems/sdu_dc_din_rail_ups/SDU-DC-DIN-Rail-UPS.pdf

Do you know if there are significant advantages to using a 24 V DC UPS versus a 120 V AC UPS?

Am I also right in understanding that we should not be using the same 24 V power supply to power both the rack I/O points and the controller, and would be better off with separate 24 V power supplies for each purpose?

Thanks for your assistance.

Hi Callum,

Welcome to the forums. That’s a nice first post you have got there…

As you state, Opto only really talks about the power supplies (in doc 1271). The reason for this is that we just want to be clear about the quality and quantity (generally 5.1 V measured after the fuse) of the power to our racks. What you use to feed your power supply is up to you…

That said, I had to put UPS’s on our Opto gear at the hospital where I used to work and so can offer my thoughts for you to mull over (and hopefully draw out some comments from some of the many forum lurkers we seem to have ;-).

We started out with small cheap UPS’s in each Opto cabinet, a lot like the sort of thing you can buy at, say, BestBuy or Fry’s. They were OK for a start, but they quickly began to give trouble. Mostly the issue was that they tended to overcharge their batteries, so the batteries got hot, swelled up, and before long would only run for 10 seconds (if you were lucky) when the power went out.
Changing out the batteries required you to pretty much take the whole thing apart (as in drill out the pop rivets the thing was built with) just to get at the very swollen batteries.
After about 2 years of this, we moved up the food chain, spent a bit more money, and got some better (i.e. bigger) UPS’s with bigger batteries in them. Without naming brands (it was a well-known brand), these turned out to be only a little better, and for totally different reasons; they were just not the jump in reliability we were expecting from the jump in price. The main problem, we found, was that they had 4 batteries: two series pairs wired in parallel. They ran at 24 VDC (the small cheap ones ran at 12 V), but used 4 batteries to get the amp-hours (i.e., run time) up.
The problem with charging batteries in parallel is that they do not all charge evenly. It only took about a year and a half before we were back to the 10-second run times we had in the first place.
We ended up ‘solving’ this problem ourselves by extending heavy-gauge wire out of the UPS, removing the 4 internal batteries, and putting two large batteries in an external case which sat on top of the UPS in the cabinet.
This became a rather reliable and solid configuration. Not very pretty, but the bigger batteries handled the overcharge better and did not get hot and swell. They charged in series at 24 volts, gave a pretty good run time (~30+ minutes), and for the most part did not give a lot of trouble.

All that aside, we still got some glitches when the power failed… After only a few minutes’ reading about the problem on the web, we quickly came to see that the UPS’s we were using were single conversion. That is to say, there was a relay that switched the load (the Opto gear) from the mains to the inverter when the mains failed.
So if there was any garbage on the mains, it went straight through to the Opto power supply (in our case, just like yours, a 3rd-party supply). That supply did whatever ‘filtering’ it did in the course of doing its job… i.e., not much. Its job is to take AC mains and turn it into regulated 5.1 VDC, and nothing more.
After a little more reading we came to understand that a double-conversion UPS was our best bet for settling down all this mains-borne pain…

So, then we went all out and put in one large UPS that ran on 110 VDC from a single string of batteries. We then did home-run mains wiring from every Opto cabinet to the central UPS.
Seemingly overnight, a good whack of our Opto glitches disappeared. (All that said, we also at the same time ran all our IT gear off the same UPS. It turns out that the IT switches and routers were giving our system a lot of grief, but that’s a whole different thread).

In regard to the power supplies running the racks: yes, as you point out, I am a HUGE believer in [B]one power supply per rack[/B] / controller.
I have seen surges in the control strategy (i.e., turning on all the analog or digital outputs at the same time) cause brownouts on the power going to the rack.
Thus I always spec’d the power supply to be 150% over what the rack needed.
The PAC S controllers each got their own PSU as well. Since they have no I/O, you don’t have to worry about anything surging on them.
The other advantage is that if you lose a supply, you only ever lose one rack, not the whole cabinet.

Regarding drift, we always seemed to have a 0 to 10 V input channel spare on most of our racks, so we always wired the 5 V from the rack into that channel. We then set up a chart that had a ‘within limits’ condition block looking at that channel. If it got below 4.9 V or above 5.2 V it would send us an email.
[I]It was a simple thing to do, but it became a really important tool in keeping on top of the system health.[/I]
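In PAC Control this lived in a chart as a ‘within limits’ condition block; purely for illustration, the threshold logic is roughly equivalent to this Python sketch (the function and the limit names are mine, not PAC Control’s):

```python
# Hypothetical sketch of the 'within limits' rack-voltage check described
# above. The real version was a PAC Control chart condition block watching
# a spare 0-10 V analog input wired to the rack's 5 V rail.

LOW_LIMIT = 4.9   # volts: alert below this
HIGH_LIMIT = 5.2  # volts: alert above this

def check_rack_voltage(volts, low=LOW_LIMIT, high=HIGH_LIMIT):
    """Return an alert message if the 5 V rail is out of limits, else None."""
    if volts < low:
        return f"Rack 5 V rail LOW: {volts:.2f} V (limit {low} V)"
    if volts > high:
        return f"Rack 5 V rail HIGH: {volts:.2f} V (limit {high} V)"
    return None  # in limits: no email needed
```

In the real strategy, a non-None result triggered the email send; the value of doing it this way is that the limits live in one place and can be tightened as the installation ages.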
We found the only real drift we had was over time, as in months/years. It was as if the capacitors in the PSU dried out and it very slowly drifted off voltage. Each PSU had a trimpot on it, so when we got the email it was a simple matter of grabbing your voltmeter, heading to the rack, and tweaking it back. (For a ~1 min video on exactly how to measure the rack voltage, check out this video here.)
We only had to do about 1-2 racks a year, and once a PSU had been tweaked after that first year or so, it seemed to stay fine.

Long answer, but from my experience, power is really critical to both Opto and IT gear, and both of those are critical to your process, whatever it might be.

Anyone else got any power stories or experience you want to share?
Disagree with me on any of these points?

Power on!

Ben, I assume you have shares in the 0-10 VDC module, but I still live in hope that one day Opto22 includes a voltage measurement on the brain’s processor PCB that would let us see this in software via the MMP. It’s not that difficult, and I’m sure there is someone at Opto22 who knows a thing or two about electronics and voltage measurement!

The Opto22 Product Support equivalent of the IT response “Have you tried switching it off and on again?” is “Have you measured the 5 VDC supply voltage?”


But not so obvious to many in this day and age is that the measurement should be taken at the SNAP rack board terminals (the green connector), and not at the output of the power supply, which is probably half a volt away, depending on the thickness of the wire you use. Also, the controller should be powered up with all modules installed, to make sure all the electrons are sweating. Many customers assume Ohm’s Law to now be completely obsolete, and it’s surprising that what they consider to be 5.1 VDC +/- 0.1 V actually turns out to be somewhere around 4.6 VDC, with predictably unpredictable results.

If you are connecting various racks to a single power supply, make sure they are connected in parallel (star) and not daisy-chained from one to another. Maybe not so obvious: these 5 VDC supply cables should all be the same length, even if distance-wise it is not necessary. That way all of them will have the same voltage drop, allowing the power supply to be adjusted once for all the racks attached.
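A quick Ohm’s-law sketch of why equal lengths matter. The per-metre resistance figure for 16 AWG copper is an assumption for illustration (roughly 13.2 mΩ/m per conductor); check your actual cable spec:

```python
# Back-of-the-envelope check of cable voltage drop for star-wired racks.
# Assumption: ~13.2 milliohms per metre for one 16 AWG copper conductor.

R_PER_M = 0.0132  # ohms per metre, single conductor (illustrative value)

def cable_drop(current_a, length_m, r_per_m=R_PER_M):
    """Voltage drop across the supply AND return conductors of one run."""
    return current_a * (2 * length_m * r_per_m)  # current flows out and back

# Two racks, each drawing 4 A, star-wired with equal 3 m runs:
drop_rack_1 = cable_drop(4.0, 3.0)
drop_rack_2 = cable_drop(4.0, 3.0)
# Equal lengths -> equal drops, so one supply-voltage adjustment fits both.
# A daisy-chained second rack would see its own drop PLUS the first rack's.
```

Even at these modest numbers the drop is a few hundred millivolts, which is exactly the scale that matters on a 5.1 V rail.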

And finally, do be careful how you explain this to the big electrical installation contractor, who may have difficulty understanding it. I once spotted the problem in a customer’s installation and told the guy that all the rack power cabling had to be rewired, this time using the same cable length for every run, to which his answer was “Are you taking the p*ss or what?”

Excerpt from my forthcoming book “The Opto Chronicles”…


Don’t forget the basics. Ohm’s law is the most obvious thing, but I have seen one case after another where techs and engineers dismiss wiring as a non-issue. I typically use 16 AWG minimum as a rule on all my 5 volt stuff, because we’re not talking about a 1 volt drop being a problem; it’s more like 100 mV or 50 mV that’s a problem. Keep in mind that if you have a 100 mV drop across your wiring, the PS only sees the value at its own terminals, not what’s on the other side of the rack fuse. The less drop (and I mean 10 mV or so), the more accurately the PS will regulate. Let’s also not forget that the CPU has current fluctuations all over the place, so if the PS wiring is daisy-chained the problem gets worse and worse.
Another issue is that nobody bothers to look at the specs on power supplies anymore. The assumption is that they’re all about the same. The big problem I see is that the ripple specs are often terrible. If the Opto requirement is 5.1 volts +/- 1%, that is roughly +/- 50 mV, and therefore about 5.05 to 5.15 V. However, the ripple spec is oftentimes 100 mV to 200 mV. Opto’s PS is a hybrid, which makes it an especially good choice: linear supplies are so much cleaner, so Opto combined a switcher with a linear output stage. In addition, there is no adjustment on the Opto supplies, as the output is tracked, eliminating one more source of problems.
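Putting the two points above together, here is a rough budget check (the function and its parameters are mine, purely illustrative): a 5.1 V ±1% target leaves only about ±50 mV of headroom, which a 100-200 mV ripple spec exhausts on its own, before any wiring drop is counted.

```python
# Rough tolerance-budget check: does peak ripple plus wiring drop fit
# inside the supply's tolerance band? All names are illustrative.

def within_budget(nominal_v, tol_pct, ripple_vpp, wiring_drop_v):
    """True if half the peak-to-peak ripple plus the wiring drop fits
    inside the one-sided tolerance allowance."""
    budget = nominal_v * tol_pct / 100.0        # one-sided allowance, volts
    worst_case = ripple_vpp / 2.0 + wiring_drop_v
    return worst_case <= budget

print(within_budget(5.1, 1.0, 0.200, 0.0))    # 200 mV ripple alone: False
print(within_budget(5.1, 1.0, 0.050, 0.010))  # clean supply, 10 mV drop: True
```

The point of the arithmetic: a noisy supply fails the budget with zero wiring drop, while a clean one passes even with a small drop included.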

Some years ago, we built a temperature measurement system to characterize the cooking temperature profiles of different types of cocoa beans (and probably do more with them in the future). The system used eight RTD modules and one digital output module.

The client complained about random weird temperature readings. We changed everything: the controller, the modules, we formatted the PC, etc. We measured the voltage at the power supply terminals (it wasn’t an Opto 22 supply) and it seemed fine: 5.1 volts. But we made a big mistake (we didn’t figure it out at the time): we didn’t measure the voltage at the rack. We measured the resistance of the wire between the power supply and the rack terminals with an ohmmeter (with the power supply turned off, of course), and it seemed OK. Finally, after much head scratching, we measured the voltage right next to the fuse. Surprise: 4.8 volts!! Just a simple piece of wire that didn’t visually seem to be broken was dropping about 0.3 volts!

Changed the wire, and voilà! The system worked flawlessly, and there were no more complaints from the client.

From our experience, we noticed that analog inputs (maybe outputs too) are the first and most affected by low-voltage conditions (NaNs, readings of 32000, errors, etc.).