After a couple of decades in instrument design and field use (geophysics, astrophysics, other) it's perhaps worth pointing out for the umpteenth time that NO "scientific" or other time series acquisition should ever "account for leap seconds" | use UTC etc.
If anything at all requires a "delta-T" (elapsed time between events), then a true delta-T is needed, not some correction to an earth based notion of noon.
The correct way (with multiple instruments) is with calibration runs to establish parameters and frequent synchronisation signals or events (at the very least at the start and end of acquisition runs), and for each isolated cluster of instruments to have an epoch counting clock (a sufficiently fine resolution incremental counter of time units elapsed since X, for varying X).
"Raw GPS Sat time" - the super raw satellite frame uses a monotonically increasing count of elapsed seconds of satellite time that resets on a weekly basis.
Satellite time being a moving frame in a reduced gravity locale ... a time frame in which atomic clocks do not run as they do at ground level and at a nominal 1G.
This, of course, varies for each satellite and is reconciled by a ground station which transmits a correction frame for position and time back to each satellite to pass on to handheld receivers.
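To make the "weekly reset" concrete, here's a minimal sketch (Python, illustrative names) of how a receiver folds the broadcast week number and the weekly-resetting time-of-week count back into a continuous count of seconds since the GPS epoch:

```python
# Sketch: GPS week number + time-of-week (TOW) -> continuous seconds
# since the GPS epoch (1980-01-06T00:00:00 UTC). No leap seconds here;
# the scale simply counts elapsed seconds.

SECONDS_PER_WEEK = 7 * 24 * 3600  # 604800; TOW resets to zero each week

def gps_week_tow_to_seconds(week: int, tow: float) -> float:
    """Total elapsed seconds since the GPS epoch."""
    return week * SECONDS_PER_WEEK + tow
```

Once the week count is folded back in, the weekly reset is invisible and you have exactly the kind of plain monotonic delta-T scale described above.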
> NO "scientific" or other time series acquisition should ever "account for leap seconds" | use UTC etc.
Great idea in theory - just using TAI is a very pure solution.
But if you want to use any major programming language or spreadsheet you'll find the standard libraries support Unix timestamps, UTC, and Local time - but not TAI.
And unless you've got very reliable equipment, you're probably used to occasional blips in your data, so a 1 second blip is probably tolerable.
So I can understand why, outside of astronomy, many projects end up ignoring leap seconds.
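For what it's worth, the workaround isn't much code. A hedged sketch in Python (the table below is a truncated illustration; a real deployment would ship the full, maintained leap second list):

```python
import datetime

# (effective UTC date, TAI - UTC in seconds) -- recent entries only,
# for illustration; the real table goes back to 1972.
LEAP_TABLE = [
    (datetime.datetime(2012, 7, 1, tzinfo=datetime.timezone.utc), 35),
    (datetime.datetime(2015, 7, 1, tzinfo=datetime.timezone.utc), 36),
    (datetime.datetime(2017, 1, 1, tzinfo=datetime.timezone.utc), 37),
]

def tai_minus_utc(t: datetime.datetime) -> int:
    """TAI-UTC offset in effect at UTC time t (valid for t after 2012-07-01)."""
    offset = LEAP_TABLE[0][1]
    for effective, off in LEAP_TABLE:
        if t >= effective:
            offset = off
    return offset

# TAI has led UTC by 37 s since 2017-01-01:
assert tai_minus_utc(datetime.datetime(2022, 1, 1,
                     tzinfo=datetime.timezone.utc)) == 37
```

The catch, of course, is that someone has to keep the table current, which is precisely the maintenance burden that makes projects ignore the problem.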
Engineering, aeronautics, physics, embedded boards, etc. have been using "real elapsed time" w/out recourse to UTC | time servers | TAI for a good forty years in practice already.
> And unless you've got very reliable equipment, you're probably used to occasional blips in your data, so a 1 second blip is probably tolerable.
Leap seconds can go in either direction; so far they've consistently jumped one way (and caused no issue in any of the 24/7/365 data acquisition projects I've worked on since the mid 1980s, as we don't tie ourselves to time servers for real engineering with actual elapsed time intervals).
I suspect you'll find software that doesn't account for time running backwards when using a delta-T for something critical won't just be forgiven for having a "blip".
This is a real dilemma when doing scientific data acquisition for years rather than short runs.
NTP uses the UTC time standard but there are GPS time sources that can distribute other time standards over NTP and PTP. What breaks when you do this? Is it a worthy tradeoff?
Note that this is a concern when coordinating timestamps generated by devices that support PTP and devices that only support NTP (as many low data rate devices do).
This is a very poor design decision in retrospect; NTP should have always used TAI. But it's too late to change it. Unfortunately, not enough people know about this problem, or even about the existence of TAI.
I'm not sure how this would solve the problem in my lifetime. PLC manufacturers are not known for fast adoption. Why would they ever update a working NTP implementation?
I am currently working on a "multiple instrument" acquisition setup with a GPS PPS and a standard reference clock distributed to multiple machines. It is exactly as you said: each instrument has a free-running incremental counter that is reset on every PPS.
If I want to do long delta-t calculations, what is the correct strategy to correlate these acquisition runs to wall-clock time?
Do I record the start time as given by the (PTP disciplined) system clock via clock_gettime(), then schedule an acquisition to start at the next PPS/sync? How does one ensure that the recorded "start time" is actually the correct time?
If for example I would like to start acquiring at t=0, at t=-1 I would record the current UTC time (call it system_time), schedule an acquisition to start on the next sync/PPS, and then record the start time as system_time + 1. It seems to me that this would fail if a leap second were inserted at t=1.
Is it perhaps better to do my calculations in CLOCK_TAI, add one second, and then convert back to UTC?
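To illustrate the second option for myself, here's what I mean by "do the arithmetic in a leap-free scale" (Python sketch; tai_minus_utc is a hypothetical lookup into a leap second table, not a real library call):

```python
def tai_minus_utc(utc_unix: float) -> int:
    # Illustrative two-entry table: 37 s since 2017-01-01 (Unix 1483228800),
    # 36 s before that. A real implementation would use the full table.
    return 37 if utc_unix >= 1483228800 else 36

def start_time_tai(utc_now: float, wait: float = 1.0) -> float:
    """Scheduled acquisition start on a TAI-like (leap-free) scale.

    Convert the recorded UTC time to TAI first, then add the wait to
    the next PPS there. A leap second inserted during the wait cannot
    shift the result, because the TAI scale never steps.
    """
    return utc_now + tai_minus_utc(utc_now) + wait
```

Converting back to UTC for display would then use the offset in force at the *end* of the wait, which is where the leap second naturally shows up.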
GPS time, as in the actual timestamps used in the protocol, is an absolute scale like TAI. However, most GPS devices that you use will convert this to UTC using a leap seconds table. Just like literally everything else on the planet. Most of the time when people say "GPS time" what they mean is UTC using a GPS reference. However in physics and astronomy you do find people using GPS absolute scale times directly.
There is no problem here. GPS works exactly the way it should. Leap seconds are annoying but they're a civil convention, not a technical limitation. Personally I'm all for abolishing them. By the time the drift actually matters to everyday lives we'll likely have technological infrastructure so different that the idea doesn't even make sense.
> Personally I'm all for abolishing them. By the time the drift actually matters to everyday lives we'll likely have technological infrastructure so different the idea doesn't even make sense.
The concern about drift reminds me of all those programs written in the 1990s which meticulously ensured century years weren't counted as leap years, but didn't bother with the 400 year exception to that exception. Which meant they'd have worked perfectly well for hundreds of years... except in the year 2000 which was near at hand. Hilarious.
I agree we should abolish leap seconds. Have civil time (UTC) track TAI.
Most of the pre-Y2K programs I saw were dumber: their leap year test was simply "year mod 4 = 0". So they were going to falsely report a century year as a leap year BUT were saved by the fact that the next century number, 2000, was divisible by 400 and thus genuinely a leap year. Sort of "two dumbass decisions make a right".
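For reference, the full Gregorian rule is one line, and each of the pre-Y2K programs above dropped a different clause of it:

```python
def is_leap_year(year: int) -> bool:
    """Full Gregorian rule: divisible by 4, except century years,
    except century years divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

assert is_leap_year(2000)      # the 400-year exception saved both bugs
assert not is_leap_year(1900)  # the century exception
assert is_leap_year(1996)      # the ordinary case
```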
I have to add that GPS messages include the TAI-UTC offset. That means that the user of even an old, non-internet-connected GPS receiver that hasn't had any software updates in a decade will still read out the exact UTC time with cumulative leap seconds correctly applied.
Well, they would, if they knew which epoch they were in, which isn't transmitted by the satellites, and on many receivers, isn't something an end user can easily change.
I have many fine examples of early to mid 1990s GPS receivers which can, eventually, get the full almanac and determine the geographical coordinates, but can't correctly figure out what day it is.
> Well, they would, if they knew which epoch they were in, which isn't transmitted by the satellites
Realistically that is only a problem for devices not yet supporting the 13 bit week field, or if we still use GPS in 150 years; even then, heuristics could solve this problem unless devices exist that, without software updates, will live for longer than ~100 years.
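The heuristic is simple enough to sketch. Assuming the legacy 10-bit week counter and some rough notion of "now" (e.g. a firmware build date), a receiver can pick the right 1024-week epoch without any extra broadcast data (illustrative Python, hypothetical names):

```python
GPS_EPOCH_UNIX = 315964800   # 1980-01-06T00:00:00 UTC as a Unix time
SECONDS_PER_WEEK = 604800
ROLLOVER = 1024              # legacy 10-bit week field wraps here

def resolve_week(broadcast_week: int, approx_unix_now: float) -> int:
    """Full GPS week number closest to an approximate current time."""
    weeks_now = (approx_unix_now - GPS_EPOCH_UNIX) / SECONDS_PER_WEEK
    n = round((weeks_now - broadcast_week) / ROLLOVER)
    return broadcast_week + n * ROLLOVER
```

As long as the guess for "now" is within ~512 weeks (about ten years) of the truth, the answer is exact; something along these lines is what the week-confused 1990s receivers mentioned above are missing.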
I think with some of my Garmin units you can connect them via serial and use an old Windows 98 application to sync the current PC time, which it will use as the basis for figuring out what week it's in. I haven't gotten the application working though.
The phrase "does not account for leap seconds" is highly misleading. Sure, the internal GPS time does not include leap seconds, but the GPS system also broadcasts the leap second offset for conversion to UTC. Also, before a leap second adjustment is made, it generates leap-second announcements.
<flame>IMHO, in the perfect world, this is how the leap second should have been handled in all computer systems. The international atomic time (TAI), without leap second, should be used for internal timekeeping in computers. The UTC leap second adjustment should be handled as an external offset, similar to timezone data.</flame>
Galileo and BeiDou behave similarly to GPS in this respect. Oddly enough, all three time bases have an origin that is specified in UTC, but model elapsed seconds since that epoch. So GPS time does include the 9 leap seconds which occurred prior to its epoch. Galileo time's epoch is aligned to the GPS week, so it shares a leap second offset with GPS, but it's still specified in terms of UTC (1999-08-21T23:47). BeiDou's epoch is specified as 2006-01-01T00:00 UTC, so it includes another 14 leap seconds with respect to GPS. So the three systems do not quite model the platonic ideal, but fortunately the offsets are all constants and it's trivial for the receiver to use such an ideal in practice.
GLONASS time is UTC + three hours (i.e., Moscow civil time) and does have leap seconds. To figure out the leap second offset between GLONASS time and GPS time, the GLONASS ICD actually tells you to get that info from the GPS broadcast messages, although they do have an alert system for upcoming leap second updates. One more reason to dislike working with GLONASS.
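The constant offsets fall straight out of the TAI-UTC value in force at each system's epoch. A quick sanity check (the TAI-UTC constants come from the published leap second history):

```python
TAI_MINUS_UTC_1980 = 19   # at the GPS epoch (Galileo shares this offset)
TAI_MINUS_UTC_2006 = 33   # at the BeiDou epoch

# BeiDou time trails GPS time by the leap seconds between the two epochs:
GPST_MINUS_BDT = TAI_MINUS_UTC_2006 - TAI_MINUS_UTC_1980
assert GPST_MINUS_BDT == 14
```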
Author of the web site (leapsecond.com) here if you have any questions. I don't know how the title of the HN post was chosen. The actual title of the web page is "GPS, UTC, and TAI Clocks".
The page is a javascript animation of "GPS system time", UTC, and TAI showing how they all tick together but are offset from each other by an integer number of seconds. It's a fixed integer in the case of TAI and GPS and a variable integer in the case of UTC.
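The arithmetic behind the animation is tiny. As a sketch (the 37 s value is the TAI-UTC offset in force since 2017 and changes with each leap second):

```python
TAI_MINUS_GPS = 19   # fixed by design since the GPS epoch
TAI_MINUS_UTC = 37   # variable; this value has held since 2017-01-01

def gps_minus_utc() -> int:
    """The offset a receiver applies to convert GPS system time to UTC."""
    return TAI_MINUS_UTC - TAI_MINUS_GPS

assert gps_minus_utc() == 18  # the current GPS - UTC offset
```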
Leap seconds are an abomination. Anybody who needs astronomical time is not using UTC anyway. If necessary, we could have a leap minute in a century or two.
Google's 24-hour smear makes leap seconds, bad enough already, ever so much worse.
We could fix the problem just by never announcing another one.
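For readers unfamiliar with the smear: instead of stepping the clock, the extra second is spread over a 24-hour window, so smeared clocks never repeat or skip a timestamp but disagree with plain UTC throughout the window. A sketch of the linear version (Google's published scheme is linear like this):

```python
SMEAR_WINDOW = 86400.0  # seconds: noon-to-noon around the leap

def smeared_fraction(t: float) -> float:
    """Fraction of the leap second applied t seconds into the window."""
    return min(max(t / SMEAR_WINDOW, 0.0), 1.0)

assert smeared_fraction(0.0) == 0.0       # window start: no adjustment yet
assert smeared_fraction(43200.0) == 0.5   # midpoint: half the second applied
assert smeared_fraction(86400.0) == 1.0   # window end: full second absorbed
```

The objection above is that a smeared clock is, by construction, off from true UTC for the whole day rather than for one second.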
This title--"GPS does not account for leap seconds"--makes it sound like "GPS would not handle leap seconds", and has already confused another commenter, but GPS is just trying to provide a universal time reference and the concept of leap seconds can be (and is) then applied locally as a conversion from GPS to whatever your local time measurement / display is (which, for all GPS cares, is the Swatch .beat). So like, sure: it doesn't itself track leap seconds, but "account for" is too heavy as the overall usage of GPS certainly does and the design isn't somehow unable to support the concept.
And not only that, GPS broadcasts, for free, the difference between GPS time (no leap seconds), and UTC (with leap seconds).
They needn't have done that - GPS would work just fine without transmitting that information globally on a very low data rate carrier (50 bits per second), where every bit sent degrades the service (longer position lock times for everyone).
No, the right solution is to ditch leap seconds altogether, as proposed by the US and most of the civilized world, and resisted only by sticks-in-the-mud China and the UK.
Just to be clear: the proposal to ditch leap seconds introduces leap hours and is basically just a way to postpone the problem for 1000 years and for a future society to deal with.
A great thing about the leap hour proposal is that, by that time, the current form of leap seconds would be already close to insufficient [1] and thus a reform would be required anyway.
[1] The first leap hour, which would happen when DUTC is around 30 minutes, will happen around the late third millennium (https://www.ucolick.org/~sla/leapsecs/dutc.html), and by that time we will regularly have 6 leap seconds per year. Steve Allen is a sort of proponent of leap seconds and not much of leap hours, but to me this projected table is the best argument for leap hours.
Hard to argue with postponing the problem for 1000 years. Even if any of the existing technical standards and solutions are still in use then, which is doubtful, they'll have plenty of time to decide whether to implement the leap hour or not.
In 2022, there are not many international matters that the US, EU and China can agree upon at the same time. Getting rid of this stupid leap second disaster is one of those rare ones: the US, EU and China have all agreed that the leap second should be eliminated.
The whole leap second disaster is just beyond imagination - inserting a full second into the system during business hours in Asia when some of the world's largest exchanges are in trading session! When there are hundreds of millions time sensitive devices manufactured by tens of thousands different vendors at vastly different skill levels!
When compared with this leap second invention, the Y2K problem looks harmless.
Wait until you hear about the leap years! They insert a whole day, often right in the middle of the work week!
Too bad there weren't any computers around at the time or software developers might have convinced Julius Caesar what a disaster and source of bugs that will be for centuries to come. He might have dropped the whole idea.
> Wait until you hear about the leap years! They insert a whole day, often right in the middle of the work week!
That is pretty fine tuned given they only insert a full day. ;)
In the Chinese lunar calendar, and probably other calendars as well, they insert a full month known as the leap month. Yes, you heard me right: 13 months in such leap years.
> Too bad there weren't any computers around at the time or software developers might have convinced Julius Caesar what a disaster and source of bugs that will be for centuries to come. He might have dropped the whole idea.
Indeed. Such 2000-year-old garbage is just not very compatible with a modern way of life in which lots of things change. From memory, in a few years the definition of the second will also be reviewed by the international community; the current definition, based on some funny behavior of the caesium atom, is no longer the best. UTC is another drama that deserves more care.
One major difference: a leap day is not inserted by stepping monotonic clocks by 24*60*60 seconds. But leap seconds are handled exactly this way in many time sources.
I think the concept itself is fine, but software developers screwed up implementing it when designing unix and NTP time, or how operating systems handle hardware clocks.
Now there are unix and NTP timestamps that don't refer to a unique time point, because the clocks were rewound by a second at the point when a leap second occurred. Somehow nobody thought that it would be a good idea for unix and NTP times to be rewound by a whole day after a leap day occurs.
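To make the ambiguity concrete: across the 2016-12-31 leap second, two real (SI/TAI) seconds elapse while the unix timestamp advances by only one, so a unix value at the boundary names two distinct physical instants:

```python
import datetime

UTC = datetime.timezone.utc
t_before = datetime.datetime(2016, 12, 31, 23, 59, 59, tzinfo=UTC)
t_after = datetime.datetime(2017, 1, 1, 0, 0, 0, tzinfo=UTC)

# datetime, like unix time, pretends the leap second never happened:
unix_delta = (t_after - t_before).total_seconds()
# but 23:59:60 was inserted between these two instants, so physically:
real_delta = unix_delta + 1

assert unix_delta == 1.0
assert real_delta == 2.0
```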
At least Caesar had a plan for managing the change. Gregory sure didn't and Alaska left Russia on Saturday, 7 October 1867 but didn't join the US until Friday, 18 October 1867. Lawless anarchy for 11 days because of meddling with the calendar!
Edited the title to add '(by design)', and even 'account for' was poorly phrased.
My intent wasn't to be misleading; it was a function of my surprise that I didn't know that. UTC plays a big part in my professional life, and it never occurred to me that GPS wasn't the same, even though it is obvious in retrospect. As many commenters said, what are they going to do, update the satellites?
It was just interesting to me that two highly precise systems don't overlap.
That is misleading. Leap seconds are considered by GPS, but the primary time source does not include them. However, a receiver can correct for leap seconds without additional information if it listens for the leap second announcements coming in on the stream. They are announced every ~12 minutes and months in advance.
> Other time sources based on atomic clocks have this property too.
Of all GNSS systems, GLONASS is the one that is intrinsically connected to leap seconds as it syncs up to Moscow time (UTC+3), instead of basing on TAI, TAI+3 or another monotonic timer. This causes problems (https://eos-gnss.com/knowledge-base/articles/technical-bulle...).
So, if you have a GPS device hooked to, say, a Mikrotik Router (which I have done previously) and use that GPS for NTP time, is it 18 seconds off, or is there magic? Or, say, the same for a Linux box using GPS for NTP?
It's quite possible to set up an NTP server that uses GPS time as UTC, and it will be however many seconds off and cause all sorts of fun[1]. It shouldn't be easy to do that by accident, though: as others said, the adjustment is broadcast and a reasonable receiver should apply it out of the box.
[1] A lot of programmers like to order events by system time. This is dangerous normally, but NTP helps sweep most of the danger under the rug. If some of your NTP servers serve UTC and some show GPS time, you're almost guaranteed to have very messy system time across your fleet and anything that expects orderly time is out of luck.
> So, if you have a GPS device hooked to, say, a Mikrotik Router (which I have done previously) and use that GPS for NTP time, is it 18 seconds off, or is there magic?
The GPS signal pre-announces upcoming leap seconds a few months in advance. Devices can then use that information to correctly adjust the difference in time.