Results 1 to 15 of 15
-
2020-09-06, 11:39 PM (ISO 8601)
- Join Date
- Feb 2016
Why do they still use such a small number of bits to store the time on computers?
I recently found out that two decades ago, when they fixed the Y2K bug, they apparently only added enough extra bits to the clock to take us to 2036. Why is that? I understand why the original Y2K thing came about; it was a holdover from early 1980s computers that had to cut corners to conserve memory because they were only about two steps above the Antikythera mechanism. By the late 1990s, however, memory was far less scarce, so why only bump it up by such a meager amount?
I've calculated that just twelve bytes should be enough to count down the microseconds until the last star in the universe burns out, with room to spare:
log2(1000000*60*60*24*365.25*120000000000000)≈92
They could have bumped it out to eight bytes and nobody would ever have to increase it again, so why didn't they?
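For what it's worth, the arithmetic holds up; here's a quick sketch in Python using the constants from the post (the 1.2e14 figure being the poster's stellar-era estimate):

```python
import math

us_per_year = 1_000_000 * 60 * 60 * 24 * 365.25   # microseconds in a Julian year
stellar_era_years = 120_000_000_000_000           # ~1.2e14 years until the last stars burn out

bits_needed = math.log2(us_per_year * stellar_era_years)
print(bits_needed)  # ~91.6, so 92 bits; twelve bytes (96 bits) leave room to spare
```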
"If you want to understand biology don't think about vibrant throbbing gels and oozes, think about information technology" -Richard Dawkins
-
2020-09-06, 11:51 PM (ISO 8601)
- Join Date
- May 2009
Re: Why do they still use such a small number of bits to store the time on computers?
Because the Y2K fixes generally dealt with different representations of time than the one that has the Year 2038 problem. Y2K was about representing years with two decimal digits; 2038 is about representing time as a count of seconds. Storing time as a signed 32-bit integer goes back a long way, at least to POSIX in 1988, and that was probably just standardizing existing practice.
-
2020-09-07, 01:43 AM (ISO 8601)
- Join Date
- Feb 2007
- Location
- Manchester, UK
- Gender
Re: Why do they still use such a small number of bits to store the time on computers?
I think you may have a misunderstanding there. You might be thinking of the Year 2038 problem: Unix-based operating systems represent time as "Unix time", the number of seconds that have elapsed since January 1st 1970 (the Unix epoch). On a computer that stores this value as a 32-bit signed integer, it will overflow early in 2038, causing potential issues. Most modern Unix-based systems store the value in a 64-bit integer to avoid this. It has nothing to do with Y2K; it's a separate issue that has been known about for a while.
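A minimal illustration of the rollover, using Python's datetime rather than the affected C time_t itself (the arithmetic is the same):

```python
from datetime import datetime, timedelta, timezone

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)

# the largest and smallest values a signed 32-bit seconds counter can hold
latest = epoch + timedelta(seconds=2**31 - 1)
earliest = epoch + timedelta(seconds=-(2**31))

print(latest)    # 2038-01-19 03:14:07+00:00, the moment of the overflow
print(earliest)  # 1901-12-13 20:45:52+00:00, where the wrapped value lands
```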
-
2020-09-07, 04:50 AM (ISO 8601)
- Join Date
- Nov 2013
Re: Why do they still use such a small number of bits to store the time on computers?
Yeah, the Y2K problem wasn't exactly caused by using too few bits (well, not directly; not writing the full year is a way to save bits, of course). It was that some date formats looked like the way humans write dates, with separate day, month and year fields, but kept only the last two digits of the year, like 98. For timestamps (the 2038 problem), more bits should indeed solve most issues, but the deadline is years away and there are lots of legacy systems, so there will probably be some scrambling shortly beforehand to update old systems.
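The two-digit ambiguity is easy to demonstrate. Python's strptime, for example, resolves "%y" with a pivot rule (69-99 become 19xx, 00-68 become 20xx), which is itself just a guess the library makes on your behalf:

```python
from datetime import datetime

# "%y" carries only the last two digits of the year, so the century must be guessed:
y99 = datetime.strptime("31/12/99", "%d/%m/%y").year
y00 = datetime.strptime("01/01/00", "%d/%m/%y").year
print(y99)  # 1999
print(y00)  # 2000
```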
-
2020-09-07, 05:25 AM (ISO 8601)
- Join Date
- Jan 2019
- Location
- Melbourne, Australia
- Gender
Re: Why do they still use such a small number of bits to store the time on computers?
The 2036 referred to may be Network Time Protocol timestamps. The Network Time Protocol (NTP) has a related overflow issue, which manifests in 2036. There are solutions; future versions of NTP may extend the time representation to 128 bits.
Wikipedia: The 64-bit timestamps used by NTP consist of a 32-bit part for seconds and a 32-bit part for the fractional second, giving NTP a time scale that rolls over every 2^32 seconds (136 years) and a theoretical resolution of 2^-32 seconds (233 picoseconds). NTP uses an epoch of 1 January 1900. The first rollover occurs in 2036.
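Those figures can be checked directly; a sketch assuming only the epoch and field widths quoted above:

```python
from datetime import datetime, timedelta, timezone

ntp_epoch = datetime(1900, 1, 1, tzinfo=timezone.utc)

first_rollover = ntp_epoch + timedelta(seconds=2**32)  # the 32-bit seconds field wraps
resolution_ps = 2**-32 * 1e12                          # smallest step of the fraction field

print(first_rollover)  # 2036-02-07 06:28:16+00:00
print(resolution_ps)   # ~232.8 picoseconds
```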
-
2020-09-07, 11:20 AM (ISO 8601)
- Join Date
- Nov 2006
- Location
- England. Ish.
- Gender
Re: Why do they still use such a small number of bits to store the time on computers?
It also depends very much on the operating system and the software/programming language you are using.
I cut my teeth on the Cyber 170/720 (at university) and VAX/VMS when I started working. I can't remember what the Cyber used, but the VAX used quadwords (64 bits, IIRC) and went up to 31,086 AD.
The default date format used 4-digit years so we had very few Y2K issues in the code we wrote using native formats, but as soon as we used something like Oracle, we got locked into the much less precise Oracle datatypes.
MacOS beats VMS's range by a couple of orders of magnitude, but also has much coarser precision.
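The VMS figure checks out if you assume the documented format, a signed 64-bit count of 100-nanosecond ticks since the Smithsonian base date of 17 November 1858 (that epoch and tick size come from public VMS documentation, not from the post itself):

```python
# assumption: VMS system time is a signed 64-bit count of 100 ns ticks since 1858-11-17
ticks_per_second = 10_000_000              # 100 ns per tick
max_seconds = (2**63 - 1) / ticks_per_second
max_years = max_seconds / (365.25 * 24 * 60 * 60)

print(1858 + max_years)  # ~31085, matching the poster's ~31,086 AD
```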
-
2020-09-08, 02:54 PM (ISO 8601)
- Join Date
- Aug 2011
Re: Why do they still use such a small number of bits to store the time on computers?
It's worth noting that while memory and storage are much cheaper than ever, they're still non-trivial on legacy and embedded systems. You want to track time with as little information as you can get away with.
-
2020-09-11, 02:10 PM (ISO 8601)
- Join Date
- Sep 2013
Re: Why do they still use such a small number of bits to store the time on computers?
Yes, that, but the main issue is that a lot of systems out there have been in place for decades, may well have these issues with time, and run some fairly critical services (in banking, for example). The time needed to thoroughly check a proposed replacement, to make sure there are no issues that could cause problems further down the line (both in the new system itself and in any systems you're not going to replace that interface with it), is itself non-trivial.
-
2020-09-11, 02:41 PM (ISO 8601)
- Join Date
- Nov 2007
- Location
- Indianapolis
- Gender
Re: Why do they still use such a small number of bits to store the time on computers?
Lots of things where the original developers were well aware of the potential issue but figured "there's no way they'll still be using this system in 30/40/50 years, right?" Well, it turns out yes, they will, even to the point of emulating the old software (and inheriting its problems) on new hardware, if that means they don't have to spend time updating things that already work mostly OK.
-
2020-09-14, 02:11 PM (ISO 8601)
- Join Date
- Sep 2013
Re: Why do they still use such a small number of bits to store the time on computers?
To an extent, but at the time we're talking about, the hardware components were insanely expensive, even assuming you could get someone to make a chip with enough processing power or provide sufficient memory (although issues like operating system retirements affecting long-lived devices such as hospital MRI scanners are still present today). So the developers had to write within the limitations of the hardware, and if they could save 2 bytes by excluding the century on a date, that was a significant saving.
As for emulation to allow old software to run on new hardware: you can run into problems where known faults in the old hardware had workarounds in the software, but because the hardware the emulator runs on doesn't have those faults (or has faults of its own), the workarounds can go off in completely unexpected directions.
-
2020-09-14, 03:36 PM (ISO 8601)
- Join Date
- Mar 2012
- Location
- UK
- Gender
Re: Why do they still use such a small number of bits to store the time on computers?
Another thing to remember is that the developers may have been writing code for something completely different, and then someone else realised it could be used for the long-lived application.
That, or they don't even know what they are writing for as they are just writing a function they have been given a specification for.
Back when I was at University I was taught that the British Computer Society advised that all programmers should seek to be fully informed on what they are writing code for - and it was pointed out that this is practically impossible.
The linked example was to do with programming ethics rather than choice of clock size, but to give an idea:
Consider you are writing the control system for an unmanned petrol (gasoline) pump - should you allow people to exceed their credit card limits?
Spoiler: why the obvious answer may not be correct
Does it change your answer if you know that the pump will be used at an unmanned filling station in Canada, 20 miles from the nearest shelter?
The problem is that unconsidered uses of the product can completely change the assumed requirements that define it.
For reference, this was in 1990 or '91.
-
2020-09-15, 10:24 AM (ISO 8601)
- Join Date
- Aug 2011
Re: Why do they still use such a small number of bits to store the time on computers?
Reminds me of that guy who deleted his npm package (left-pad) in protest and accidentally broke half the internet... because it turned out everyone was using a package that used a package that used a package that used his package, which contained an extremely basic string-padding function.
-
2020-09-15, 11:22 AM (ISO 8601)
- Join Date
- May 2009
-
2020-09-15, 11:31 AM (ISO 8601)
- Join Date
- Jul 2004
- Location
- Freiburg, germany
- Gender
Re: Why do they still use such a small number of bits to store the time on computers?
-
2020-09-15, 01:45 PM (ISO 8601)
- Join Date
- Aug 2011