Unix time is the number of seconds that have passed since the Unix epoch: midnight UTC on January 1st, 1970 (counting leap years, but not leap seconds). Many software systems rely on the unix time standard as a consistent and predictable way of calculating, storing, and sharing the date and time.
For example, databases often use unix time to keep track of when entries are inserted. It’s also part of the Sparkplug B specification for MQTT, is used heavily in Node-RED, and appears in thousands of other applications all over the world.
So how is this value calculated? And how can it be put into a PAC Control tag without reading it from some other piece of software?
To find the formula for this, I went to the Open Group Publications Catalog and pulled it from their description of “Seconds Since the Epoch” (4.16).
From there it was a simple matter of creating some PAC Control variables and putting this code in an OptoScript block in a looped PAC Control chart. (You will have to loop this code if you want nUnixTime to update; a one-second delay is perfect for this.)
// Grab a bunch of data
nYear = GetYear() - 1900;
nDays = GetJulianDay() - 1;
nHours = GetHours();
nMinutes = GetMinutes();
nSeconds = GetSeconds();

// Calculate unix timestamp (seconds elapsed since Jan 1st, 1970)
nUnixTime = nSeconds + nMinutes * 60 + nHours * 3600 + nDays * 86400
          + (nYear - 70) * 31536000 + ((nYear - 69) / 4) * 86400
          - ((nYear - 1) / 100) * 86400 + ((nYear + 299) / 400) * 86400;
Note for advanced users: before comparing unix timestamps from two different sources, check their resolution. This post focuses on calculating the timestamp in seconds, so it will have 10 digits. However, it’s also possible to count milliseconds (13 digits) or microseconds (16 digits). You can tell which resolution you have from the length of the integer.
Also, since 1 second = 1000 milliseconds and 1 millisecond = 1000 microseconds, you can multiply by 1000 to increase the number of digits and divide by 1000 to decrease them. The timestamp stays accurate either way; just keep in mind that dividing truncates any sub-second detail, so only the precision of the units changes.
If you have any questions about how this works, or end up using it in your own application, please post in the thread below, and as always, happy coding!