The Millennium Saga … Continued
I read with great interest (actually, panic!) Maggie Macary's article on Year 2000 compliance. The panic came from the phrase “exception to the exception,” relating to whether or not 2000 is a leap year. After some research and testing, I came across two articles from the Royal Greenwich Observatory that set the whole thing straight. The rule Macary referred to was a purported rule that years ending in “00” are usually not leap years, while the year 2000 is. In fact, the purported rule is no rule at all. The actual rule is that every fourth year is a leap year (counting from the appropriate point of origin), but century years (those ending in “00”) are leap years only if evenly divisible by 400.
For those interested in marking calendar dates when they should be retired from the computer business, mark your calendar for Jan. 19, 2038, at 3:14 a.m. UTC. According to my calculations (and this might not be entirely accurate), that's when time, stored in the very popular UNIX time format, will have its high-order bit turned on for the first time. Thus any program with time incorrectly coded as a signed integer will fail date comparisons.
This will be a MUCH MORE DIFFICULT PROBLEM TO DETECT. Rather than being a problem confined to application programming, language compilers can get this wrong as well. The bulk of most integer arithmetic is performed on 16-bit rather than 32-bit values, and compiler bugs can fester for years without being caught. If the year 2038 seems far enough off for you not to worry about the problem, then you'll understand why we're in the current mess!
Donald S. Berkowitz, Berkeley, Calif.