Talk:Year 2038 problem/Archive 1


Year 2038 problem

I noticed that, unlike the year 2000 problem, no real page existed for the year 2038 problem. Instead it was redirected to Unix time. This is odd, because the problem is not related to Unix, but to C/C++. Also, a lot of people seem to think that changing to 64-bit systems will solve the problem. The article shows that this is not true. The article I have written is adapted from an earlier one I did for a forum. MadIce

C/C++

The problem is not related to C/C++. It's related to Unix (or POSIX) time, which is used in most (not only Unix-like) operating systems. The same problem exists with assembly, Java, and other languages. —Preceding unsigned comment added by 83.24.45.84 (talkcontribs) 7:42, 8 May 2005 (UTC)

I'm sorry. I have to disagree with you. The time_t type is defined in the run-time library of C/C++, which will become an integral part of every program or program module that statically or dynamically links to it. MadIce 23:08, 8 May 2005 (UTC)
Of course I am aware of the relation to Unix time. However, this doesn't make much difference for the effects of the year 2038 problem. MadIce 23:15, 8 May 2005 (UTC)
In "The year 2038 problem can also affect other programming languages" I wrote about the effects of the presence of the problem in other languages. This is not due to Unix time. It's due to the fact that the time counter has been defined as a 32-bit variable. The affected languages are created in C/C++ and just use its run-time library. The creators probably never gave it a second thought. MadIce 23:31, 8 May 2005 (UTC)
This is rather a late reply, but for the benefit of future readers of the talk page I'd like to point out that neither Batkins nor MadIce was really correct (below).
Batkins wrote that the 2038 problem has nothing to do with C/C++, which is certainly not true: the influence of C is the only reason the problem affects non-Unix platforms at all. MadIce wrote that any operating system written in C/C++ will be affected by the problem, which is silly: no operating system uses the time_t typedef unless its system-call specification requires it.
I believe the following accurately represents the situation:
* time_t is an official OS time format on Unix-like systems, so software which wants to run on "the Unix platform" or "the Posix platform" can't avoid dealing with time_t.
* The C standard does not say anything about the format of time_t, and Posix only seems to say that it counts seconds. Neither mentions an epoch of January 1, 1970. Portable code should not officially depend on these things.
* But historically lots of Unix systems defined time_t in a certain way, and a lot of C software depends on this. Therefore, no matter what the system-call interface changes to, there's too much legacy code to risk changing time_t at this late stage. Even lengthening time_t to 64 bits is not completely safe, but it's considered "safe enough" and better than the alternative.
* The problem is Unix/Posix specific in the sense that other platforms have their own time-related calls, and software which uses those calls exclusively will not be subject to the year 2038 problem at all.
Maybe the main article should incorporate some of this. -- BenRG 20:59, 26 August 2005 (UTC)
I don't think it has quite as much to do with C as you state. The ANSI C standard requires time_t, but there is nothing to stop a libc implementer from making time_t 64-bits. In fact, most 64 bit Unix platforms do this. Win32's native time representation uses the same 1970 epoch concept, but splits it into two dwords, effectively making it 64-bit. One could easily write a CRT for Windows that typedef'd time_t to "long long" (or whatever your compiler's 64-bit type may be) and write POSIX applications that do not have this problem. -- Andyluciano 22:40, 27 August 2005 (UTC)

Deletions

This article was pure FUD. I deleted 90% of it because whoever wrote it was clearly a moron. MadIce's delusions notwithstanding, the 2038 bug has NOTHING TO DO WITH C/C++. The problem is specific to 32-bit UNIX machines, and (contrary to the rubbish originally on the page), 64-bit processors will solve this problem.

The UNIX time() function returns a number of type time_t. The number represents the number of seconds elapsed since midnight on January 1, 1970 (the start of the UNIX epoch). On 32-bit machines, time_t is a 32-bit value. In 2038, when the number of seconds since the epoch is greater than a 32-bit word can hold, it will roll over to 0.

64-bit computers do not have this issue, because their words are 64 bits long. On 64-bit computers, time_t can hold 64 bits. Run this program on a 64-bit computer if you disagree:

#include <stdio.h>
#include <time.h>

int main(void) {
    printf("%zu\n", sizeof(time_t));  /* sizeof yields a size_t, so use %zu, not %d */
    return 0;
}

It will return 8 on a 64-bit computer (I just tried it with gcc 3 on my Athlon64). On 32-bit machines it will return 4. 64-bit computing will resolve the issue with nothing more than a recompile. A 64-bit word can hold a long enough time to keep things safe for many, many millennia.

Please, the next time you post an article, get a clue about what you're talking about. Batkins 20:31, 12 May 2005 (UTC)

hmm. well. this doesn't look so good:
struct ext2_inode {
	__u16	i_mode;		/* File mode */
	__u16	i_uid;		/* Owner Uid */
	__u32	i_size;		/* Size in bytes */
	__u32	i_atime;	/* Access time */
	__u32	i_ctime;	/* Creation time */
	__u32	i_mtime;	/* Modification time */
	__u32	i_dtime;	/* Deletion Time */
...
64-bit computing won't solve that, will it? -- 66.43.110.49 01:06, 24 May 2006 (UTC)

Deleting the references

This article was pure FUD. I deleted 90% of it because whoever wrote it was clearly a moron.

Hi, Batkins. Nice to meet you too. How old are you?

MadIce's delusions notwithstanding, the 2038 bug has NOTHING TO DO WITH C/C++.

Any idea where the time_t typedef originates from? BTW: Writing in caps does not make you correct.

The problem is specific to 32-bit UNIX machines, and (contrary to the rubbish originally on the page), [...]

Why? Because you say so? Again... The size of time_t is defined in the C/C++ run-time library. And you may have noticed that these languages are used on other operating systems besides UNIX. ;)

[...] 64-bit processors _will_ solve this problem.

Using 64-bit hardware will solve the problem provided (and this is very likely) the C/C++ run-time system is ported to 64 bits too. However, a 64-bit system which allows running 32-bit legacy software does not fix that software automatically. So, just using 64-bit hardware does not fix the problem. Of course newly developed 64-bit software will include the new run-time library (which includes a 64-bit time_t typedef) and will therefore be immune.

The UNIX time() function returns a number of type time_t.

IIRC your time() function is written in C. And in C time_t has been defined as a "long int". The problem could have been solved from the beginning if integer types had a fixed size in C. It was thought C should run on various hardware platforms, and limiting the sizes of the fundamental types may have limited the use of C. For example, characters are not guaranteed to be 8-bit. They may even contain, depending on the hardware platform, 7 bits, 9 bits or another width. Another example: "long int" has a size equal to or larger than that of an int. How handy is that for a time counter?

The number represents the number of seconds elapsed since midnight on January 1, 1970 (the start of the UNIX epoch). On 32-bit machines, time_t is a 32-bit value. In 2038, when the number of seconds since the epoch is greater than a 32-bit word can hold, it will roll over to 0.

Partially correct. It will roll over to 0 only if the RTL explicitly does so. If it does not, then (as an example) on a system using 32-bit two's complement integers it will wrap to -2147483648.

64-bit computers do not have this issue, because their words are 64 bits long. On 64-bit computers, time_t can hold 64 bits. Run this program on a 64-bit computer if you disagree:

#include <stdio.h>
#include <time.h>

int main(void) {
    printf("%zu\n", sizeof(time_t));  /* sizeof yields a size_t, so use %zu, not %d */
    return 0;
}

It will return 8 on a 64-bit computer (I just tried it with gcc 3 on my Athlon64). On 32-bit machines it will return 4. 64-bit computing will resolve the issue with nothing more than a recompile. A 64-bit word can hold a long enough time to keep things safe for many, many millennia.

I highlight two sentences from the above:

  1. It will return 8 on a 64-bit computer
  2. On 32-bit machines it will return 4

It's stunning that even when you know the above results you cannot see any of the consequences for a potential year 2038 problem for 32-bit software and the data depending on it.

I have never stated that 64-bit computers have the year 2038 problem. I have stated that it is a misconception to think that using 64-bit hardware automagically fixes the problem. The problem is fixed on those systems when the library is ported to 64-bit as well. That is highly likely. Again... It will not fix anything for 32-bit legacy software which may be allowed to run on some 64-bit operating systems.

Please, the next time you post an article, get a clue about what you're talking about.

;)

I'll be polite to you and try to explain it one more time.

The problem originated in POSIX/UNIX, but has spread to other systems, due to the programming languages C and C++, in which the counter of type time_t has been defined (in time.h) as a long int in the run-time library. Whether you like it or not, values declared as 32-bit time_t variables are an integral part of any software which includes that library, because the run-time library is statically or dynamically linked to the software.

There are a few misconceptions (which you also seem to be susceptible to):

  1. Many think the year 2038 problem is POSIX/UNIX related only. Any software and operating system written in the programming language C/C++ which defines the time_t type as a 32-bit long int will have that problem. That includes 32-bit versions of the Mac OS, Windows and Linux/UNIX.
  2. There is no easy fix for 32-bit software since there is no central solution for the problem. The software which contains the 32-bit time_t declaration must be recompiled using a run-time library which defines time_t as a 64-bit value.
  3. Computer (script) languages with libraries and/or interpreters written in 32-bit C/C++ also suffer from the problem.
  4. The problem can also exist in data in which a time/date stamp is defined as a 32-bit time_t. Of course it is considered bad practice, but that doesn't mean it doesn't exist.
  5. Using 64-bit hardware does not automatically fix the problem. For example: 64-bit hardware that allows running 32-bit legacy software will still have the problem.
  6. Hardware using embedded software which also uses the 32-bit time_t declaration has the problem too.

Is there a solution? Yes. The time_t should be defined as a "long long" and not as a "long int" AND the software should be recompiled. That of course will only be possible when the source code is available.

BTW: Your sample code above is a good example of the problem. On a truly portable system (meaning time_t would have been defined correctly from the start) it would have produced the same output no matter what platform or run-time system. As you might have noticed, this is not the case. Hence the problem.

I'll be deleting the references since you a) haven't read them and b) didn't use them.

Have a nice day. MadIce 09:42, 25 May 2005 (UTC)

The problem is NOT specific to C/C++. First of all, the C/C++ standards do not specify the internal representation of the time_t type. (AFAIR it does not even have to be an integer!) Therefore, specific IMPLEMENTATION(s) of the language and standard library can be guilty, but not the language itself. The same can be said of any other language - depends on what representation for date/time was used in the implementation.
Any software and operating written in the programming language C/C++ (...) will have that problem.
LOL! Man, you really killed me with that one. Who said you have to use time_t and related libc functions when writing in C? Even better, how do you want to write an OS using the standard library? Usually it's the standard library that is built on top of the OS, but I may be missing some new design philosophy on this one ;). Jokes aside, you could easily write software in C using OS APIs for date/time manipulation and be completely safe. If those APIs are safe, of course. Which is usually the case for most non-Unix systems. - Anonymous 04:27, 26 Mar 2006 (UTC)
If you think Unices have horrible APIs, try reading MSDN. squell 12:05, 26 March 2006 (UTC)
I didn't say that Unices have bad APIs. I only said that most modern operating systems provide a date/time API that can handle a date span much larger than the 32-bit integer time_t representation could. Only on (32-bit) Unices 32-bit time_t is the standard date format, and therefore Unices are most affected by the y2038 problem. And I know the Win32 API, so you don't have to tell me ;). - Anonymous 18:17, 28 Mar 2006 (UTC)
Ok, in that case, you obviously had a different definition of 'safe' in mind ;) squell 20:39, 29 March 2006 (UTC)
I meant 'safe' regarding year 2038. If I used 'safe' in the usual meaning, then putting this word together with 'Win32' in the same sentence would be an oxymoron ;) - Anonymous 00:34, 30 Mar 2006 (UTC)

Redundancy

Is it just me, or do we have a lot of redundancy in this article? A rewrite is in order, methinks. Ambush Commander 19:34, Jun 25, 2005 (UTC)

I second the idea of a rewrite. I am going to try to take care of it in the coming days... --jonasaurus 21:46, 12 July 2005 (UTC)

1901?

Times beyond this moment will be represented internally as a negative number, which may cause some programs to fail, since they will see these times not as being in 2038 but rather in 1901.

Should that be 1970? I thought that the rollover would go back to the initial system condition. Captainmax 06:20, 18 July 2005 (UTC) (not an expert!)

Nope, definitely 1901, because the variable is signed, and it will go back 2^31 seconds (~68 years) when it overflows (IIRC overflow is the term) (adding 1 to a 32-bit unsigned int which is already 2^32-1 will overflow to 0). However I think Windows uses an unsigned int, so it will go to 1970. See also Integer (computer science) --2mcm 01:09, 20 July 2005 (UTC)
This depends on the implementation. If the library strips the high bit, it will roll back to 1970. If it doesn't, it might revert to 1901. I can also imagine an implementation which would produce negative hours after 2038. If time_t is treated as an unsigned, the "year 2038 problem" should not happen at all until 2106. Note that C and POSIX themselves only say that time_t should be an arithmetic type. Doesn't even have to be an integer type. squell 16:38, 26 August 2005 (UTC)

Year 292,471,208,678 problem

I changed it to the year 292,271,021,075 problem. I believe the person who wrote the year 292,471,208,678 problem used 365 days as the length of a year, and did not start at the Unix epoch, but rather at year 1.

2 ^ 63 / 60 / 60 / 24 / 365 = 292,471,208,677.54

Using 365.25 as the length of the average year, and subtracting 1970 for the Unix epoch, I reached the result of 292,271,021,075. This doesn't include leap seconds, and is probably slightly inaccurate in other ways, but it's closer than the previous value.

2 ^ 63 / 60 / 60 / 24 / 365.25 - 1970 = 292,271,021,075.31

Or I may have made some stupid mistake, and the previous value might be right, and mine might be very far off. But I don't think that's the case.

Unix time doesn't count leap seconds (well it does ... twice) so you are correct --2mcm 22:29, 25 July 2005 (UTC)
Actually, the correct computation would be
2 ^ 63 / 60 / 60 / 24 / 365.25 + 1970 = 292,271,025,015.31
But that's still incorrect; the actual number of days in a tropical year is in flux, but can be pegged at approximately 365.24235; realistically, of course, over the course of the billions of years involved in this calculation, any number is an approximation at best. At any rate...
2 ^ 63 / 60 / 60 / 24 / 365.24235 + 1970 = 292277146631.07406
Or, January 28th, 292,277,146,631 at a little before 1:12 A.M.
But, unfortunately, that doesn't actually correspond to our/UNIX's/POSIX's/time_t's calendaring system. So, let's try again.
The current calendar represents leap years as occurring 97 out of every 400 years. Here goes!
2 ^ 63 / 60 / 60 / 24 / 365.2425 + 1970 = 292277026596.92771
And now, as a date: December 4th, 292,277,026,596 at shortly after 8:08 P.M. (Since the year 292,277,026,596 is a leap year, the 339th day (*cough*) "occurs a day earlier".)
For those keeping score at home, that's approximately 22 times the current age of the universe. Jouster 21:05, 28 July 2005 (UTC)
Whatever the actual year is, I have to say that's a hilarious sentence. Not seen as a pressing problem, indeed. Hee! ekedolphin 01:04, August 10, 2005 (UTC)
not to mention the fact that the earth probably won't be around for that long, so the exact number of years is meaningless anyway as there will be no way to count them :P --Sysys 04:24, September 8, 2005 (UTC)
Why is it that most attention on this article goes into the last brief remark about the year 2982374927348932 problem? I think those who object that humour has no place in the article are missing the point that, in that case, the entire year 238748923742 issue should be removed, because it is an absurdist statement otherwise. Better to use wit to drive a valid point home than to create an absurdopaedia, no? squell 18:44, 14 February 2006 (UTC)
One paragraph does not an obsession make. Now, on the /talk/ page, sure. But the article focuses on 2038. Of course, if you feel it's too focused on the secondary problem, by all means, create a Year 292,077,026,596 page and link this one to it. --Jouster 20:01, 8 March 2006 (UTC)
You misunderstand me. Look at the history of this article: most recent edits go to that paragraph, which was perfectly fine. Yet tidbits get added to it, someone else removes it entirely, then someone tries to make it read like a serious mention, it gets reduced again, et cetera. The amount of energy people are spending on it is ridiculous. squell 21:35, 8 March 2006 (UTC)
To make this concrete: I think the article would be much better like this. Brevity is the soul of wit, after all. squell 23:48, 8 March 2006 (UTC)
That's how it used to be, and I think it was a good edit. – Mipadi 00:17, 9 March 2006 (UTC)
I restored it to the simpler version and added a 'see also' link to Ultimate fate of the universe--agr 03:55, 9 March 2006 (UTC)
"This problem is not, however, widely regarded as a pressing issue." - whoever wrote that: keep it in!! I laughed my ass off and a bit of black humour here and there adds to my love for Wikipedia =) Endymi0n 23:09, 27 March 2006 (UTC)
Hear, hear! — squell 01:38, 28 March 2006 (UTC)
I'd just like to extend my rofls to whoever wrote that last line :) Lovok 14:17, 21 July 2006 (UTC)
Lol!! =) Zanter 22:18, 10 August 2006

Commas in the Year 292,277,026,596 Problem

Okay, I make two cases against the argument "Years aren't formatted with commas." First of all, technically, they are when they're really big (see History of the World). Second of all, for all practical purposes, it is a better idea to leave the commas in, so the reader knows it's roughly 300 billion years in the future, and not 292277026596 (which is harder to decipher). — Ambush Commander(Talk) 01:47, August 15, 2005 (UTC)

One could format it in the non-American-centric way, using spaces. Maybe that would be a good compromise?
Generally, Wikipedia allows lots of leeway in choosing between British and American usage, but according to the manual of style, we should be using commas in numbers. — Ambush Commander(Talk) 01:17, September 6, 2005 (UTC)
Commas in numbers for digit separation aren't American-centric. They're English-native-centric. The vast majority of native English speakers, whether from the US, UK, Australia, India, NZ, etc, use commas (or spaces sometimes) to separate numbers, and periods for the decimal point. Non-English-native Europeans generally use commas for the decimal point and periods for separation (or spaces). For Asians and Arabs, it's somewhat more variable. In the case of those that were colonised, they generally follow whoever colonised them (last). For example, the English-native 'standard' is used in Malay Nil Einne 04:31, 29 December 2005 (UTC)
You can't round 292,277,026,596 to 300 billion, if that's what you're proposing, when the exact year is central and necessary to the article. Philc TECI 11:33, 24 June 2006 (UTC)

As of 2005

I love the fact that this article says "the earth is not expected to last beyond 5 billion years more (as of the year 2005)." I've made a note to myself to come back in the year 1,000,002,005 and change this to "4 billion". --OpenToppedBus - Talk to the driver 13:25, 6 December 2005 (UTC)

Hmm, I just removed that reference because I felt it didn't really add to the 'gag' which was already there. Feel free to put it back though (somewhere between now and the year 1,000,002,005). squell 17:27, 6 December 2005 (UTC)

Rounding up over 7 billion years

Rounding 292,471,208,678 up to '300 billion' as if 7 billion, 429 million years were insignificant is just wrong. (Bjorn Tipling 07:09, 28 December 2005 (UTC))

I didn't do it, but it is fairly insignificant. It doesn't matter if it is 7 billion years or 700 billion years if the percentage is small. The percentage error is only around ~2.5%, so it isn't really a significant error. However, generally speaking, when you round things, it's wise to keep all digits significant IMHO. So it would be better to round it to 292 billion rather than 300 billion. Nil Einne 04:35, 29 December 2005 (UTC)
Yeah that's much better. I'd still like to see anyone who'd say 2 billion years are insignificant hold their breath that long. (Bjorn Tipling 05:26, 29 December 2005 (UTC))
Well, all of us will do it, eventually ;) —Preceding unsigned comment added by 150.254.143.180 (talkcontribs) 00:33, 26 Mar 2006 (UTC)
So we can change your DNA around 2.5%, and it won't matter, because it's not really a significant error? Neat. Let me know how life as a chimp is. ;-) krikkert (Talk) 10:17, 26 February 2007 (UTC)

Extending the problem beyond the year 292,471,208,678

The 128-bit solution does extend the problem date to December 31, 17,014,118,346,046,923,173,168,730,371,588,410, hence being much farther away than the problem with the 64-bit solution. ftp://ftp.rfc-editor.org/in-notes/rfc2550.txt I've added that to the article. Voortle 01:31, 1 August 2006 (UTC)

Maybe we should talk about this. Why isn't a 64 bit solution enough? ~a (usertalkcontribs) 01:33, 1 August 2006 (UTC)
Fyi, RFC 2550 (no link) works. ~a (usertalkcontribs) 01:38, 1 August 2006 (UTC)
The 64-bit solution is enough for right now, as we're so very far from the year 292,471,208,678. The 64-bit solution would be able to fix the year 2038 problem. I just thought that the more powerful solution was worth mentioning in the article. Voortle 01:42, 1 August 2006 (UTC)
Why does WP need to talk about the 128 bit solution? In the year 292,471,208,678 we'll have gone through millions of generations and we'll have switched solar systems either hundreds of times or thousands of times (see Life cycle of a star#Maturity). Also, why did you remove "is meant to be humourous and" from the comment in the article? ~a (usertalkcontribs) 01:51, 1 August 2006 (UTC)
One may similarly ask, why does Wikipedia need to talk about the year 10,000 problem? By the year 10,000, we'll likely have colonized many other planets and moons. Likewise, by the year 17,014,118,346,046,923,173,168,730,371,588,410 we might have gone through several universes[1]. The thing is, we often include things in Wikipedia that seem irrelevant. That's because an encyclopedia is supposed to contain certain facts and pictures, no matter how irrelevant they may seem. The year 10,000 problem, for example, seems irrelevant to many people, but we still include it in Wikipedia. Voortle 02:13, 1 August 2006 (UTC)
I thought we had resolved this issue when you removed the text here. Apparently we had not resolved the issue, because you added it back in a few hours ago. "Individual scheduled ... future events should only be included if the event is notable" (from WP:NOT). This event is very unnotable. Can someone else please weigh in here, because I'm almost sure I'm not the only one that thinks this text does not belong. ~a (usertalkcontribs) 16:49, 3 August 2006 (UTC)
I also refer back to the origins of this page: "It should be immediately obvious why the year 298237492374 issue is not a pressing issue." The year 17,014,118,346,046,923,173,168,730,371,588,410 does not belong. Right? ~a (usertalkcontribs) 17:00, 3 August 2006 (UTC)
(weighs in) The 64-bit solution is widely proposed, so mentioning its wrap date is appropriate. Adding the 128-bit date is totally speculative, and, absent a cite, original research. Generally humor is disapproved of in Wikipedia. I think the mild "not considered a problem" could be a reasonable exception, but the vast amount of edit churning it seems to generate may prove otherwise. --agr 17:32, 3 August 2006 (UTC)
It's not original research to mention the 128-bit figure of RFC 2550; it's the question whether adding it makes the article better. Is it a relevant solution? No; counting seconds using a 128-bit counter is technically inferior. Does it help illustrate why large counters postpone the wraparound indefinitely into the future? I don't think so either. I'd guess that this is the consensus of the editors on this article. Perhaps agr is right that this article should really explain why the 292,471,208,678 problem is not a pressing issue. Sadly. — squell 21:41, 3 August 2006 (UTC)

Y2K38 compliant

Will we be seeing things at the store that say they are Y2K38 compliant as that year approaches, similarly to how we saw things that said they were Y2K compliant near the end of 1999? Voortle 15:51, 3 August 2006 (UTC)

Y2038 compliant more likely, but yes, that will undoubtedly happen.--agr 04:08, 11 August 2006 (UTC)
Yes, there will undoubtedly be some hysteria; how much is hard to predict. Plugwash 17:43, 11 August 2006 (UTC)
Hopefully, there will be little hysteria and lots of work behind the curtains. Zuiram 04:17, 30 October 2006 (UTC)
Indeed, while the Y2K issue was overhyped there were serious problems that were generally solved through countless manhours of heroic programmers trying to remember COBOL. Let's hope enough remember C in 2038. 149.167.217.20 10:32, 23 January 2007 (UTC)
I found the media attention after January 1, 2000 very interesting. The message seemed to be there were no disasters, planes didn't crash from the sky; and this proves what a huge waste of money and what hype the Y2K thing was. A very odd argument, like saying a zero incidence of disease shows we wasted all the money spent on vaccination. Notinasnaid 12:09, 26 January 2007 (UTC)
Except that even those who didn't bother to make their systems "Y2K compliant" had no problems. Y2K was a joke and that was obvious to many of us well in advance. 71.203.209.0 10:16, 24 February 2007 (UTC)
Are you trolling, or is this your actual belief? Jouster 17:08, 24 February 2007 (UTC)
In spite of all of my efforts as a software developer and firm knowledge of the Y2K issues, I had co-workers who wrote software in 1999 that was not Y2K compliant, and it got shipped to customers as well. In the case of this particular software, it created the date "January 1, 19100", which was written into some log files, causing some interesting problems. While this was a minor subsystem that didn't necessarily "crash", the consequences of ignoring the Y2K issue had significant economic importance.
Frankly, I consider this Y2K38 bug to be a much more significant software engineering problem, as the solutions are not nearly as simple as the fixes which were done for Y2K compliance, even though the solutions may use some of the same Y2K strategies, like a window offset variable that would "reset" the clock to, say, January 1st, 2000 instead of 1970. That would still require 64-bit calculations for actual date conversions, but it wouldn't force a file format change. It is also a "hack", like most of the Y2K fixes, and such hacks may all still creep in. It is just that not all of the Y2K fixes will fail on the same day, fortunately. --Robert Horning 21:04, 8 March 2007 (UTC)