Ever since the development of the electronic computer, we have shot up the development curve. Instead of a single, large computer tied to a teletype machine for input and output, we now use interdependent systems, connected over networks, capable of sending, receiving, and processing data from around the world. In essence, the mainframe computer serves as the brain for these complex networks, with PCs acting as gateways between the mainframe and the embedded systems generating data. While microprocessors lie at the heart of the computers that handle most of the communications and control functions, microcontrollers dominate the embedded applications.
Open And Closed Systems
Microprocessor-driven computers fall into the category of open systems, while microcontrollers pervade the low-end, closed systems. An open system is loosely defined as a system that can be programmed by the end user, with full access to the hardware. The best example of an open system is the personal computer. The supplier can't control the compliance of applications developed by the end user. However, the liability of the supplier is greatly diminished when it provides a fully year-2000-compliant platform along with the hardware specifications and a year-2000 (Y2K) impact statement.
In fact, the open PC-AT architecture has become a major force in the embedded-systems arena. PC motherboards are used in applications ranging from ATMs to monitoring and control systems in electric power plants. Systems range in size from the ruggedized rack-mount, highly visible industrial chassis to the single-board computers in small black boxes. The flexibility of open systems makes them attractive for embedded-systems applications.
The closed system denies the end user direct access to the hardware. The manufacturer installs the operating system (OS) or hardware interface, and end-user applications can only access the underlying hardware through the OS. The system is supplied with the OS interface specifications only. Information covering direct access to the hardware is not provided.
Y2K Failure Points
All of the layers of a computer system must process date-related information before, during, and after the century date change. If any part of a system fails to provide, recognize, or process dates into the next century, complete system failure may occur.
When viewed from a data-processing perspective, applications programs occupy the top level of the system hierarchy, while the OS lies between the program and the hardware.
The applications program handles the manual and automatic database entries, and manipulates the data. Date entry should require a four-digit year with valid date checking. Invalid dates such as February 30, 1999, must not be allowed. If the two-digit-year method is used, the application program should use a windowing technique to expand the year to four digits prior to entry into a database. Database sorts will fail if the stored dates are not compliant. An applications program requiring the current date will either access the system clock through the OS or bypass the OS and go directly to the hardware clock.
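A minimal sketch of the valid-date check described above, including the leap-year case for February. The function names are illustrative, not part of any particular OS:

```c
#include <assert.h>

/* Leap-year test per the Gregorian rules (see the leap-year
   discussion later in this article). */
int is_leap(int year)
{
    if (year % 400 == 0) return 1;
    if (year % 100 == 0) return 0;
    return (year % 4 == 0);
}

/* Returns 1 if the date is valid, 0 otherwise. The 1900-2099
   range limit is an illustrative assumption. */
int valid_date(int year, int month, int day)
{
    static const int days_in_month[] =
        { 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 };
    int max_day;

    if (year < 1900 || year > 2099) return 0;
    if (month < 1 || month > 12) return 0;
    max_day = days_in_month[month - 1];
    if (month == 2 && is_leap(year)) max_day = 29;
    return (day >= 1 && day <= max_day);
}
```

With a check like this in place, an entry such as February 30, 1999, is rejected at the point of entry rather than corrupting the database.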
Most OSs will maintain a system clock. The OS, of course, gets the time for the system clock from the Real-Time Clock (RTC). Some OSs do not maintain a system clock, and depend on direct access to the RTC for the date and time information.
Some closed systems do not have a true OS. The applications program handles all of the OS functions. This type of program consists of a compiled, higher-level language intermixed with assembly language. The assembly language routines take the place of the compiler run-time routines that make OS calls to access the hardware.
The Software Clock
The system clock maintained by the OS is actually a software clock. Software clocks exist in a number of systems, even those with a hardware RTC. The software clock is synchronized to a time tick derived from a crystal oscillator, which provides a periodic interrupt to the processor. The clock is usually maintained by the OS in the form of a binary counter, incremented by the processor with each time-tick interrupt.
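In its simplest form, the software clock is nothing more than a counter bumped by the time-tick interrupt-service routine. This sketch assumes a 10-ms tick; the names TICKS_PER_SEC, clock_tick_isr, and elapsed_seconds are illustrative:

```c
#define TICKS_PER_SEC 100UL        /* assumed 10-ms time tick */

/* Binary counter incremented on every time-tick interrupt.
   Declared volatile because it is modified at interrupt level. */
volatile unsigned long tick_count = 0;

/* Called from the time-tick interrupt. */
void clock_tick_isr(void)
{
    tick_count++;
}

/* Whole seconds elapsed since the counter was initialized. */
unsigned long elapsed_seconds(void)
{
    return tick_count / TICKS_PER_SEC;
}
```

Dividing the raw tick count by the tick rate yields seconds, while the remainder provides the sub-second resolution discussed next.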
The system clock usually provides the OS higher time resolution than the 1-s resolution provided by most hardware RTCs. The resolution is normally a fraction of a second, determined by the period of the time-tick interrupt. The count in the system clock usually represents the passage of time since the beginning of a reference year. The reference year, or epoch, is usually the year that a particular system or OS was developed, and is different for each system.
Battery-backed CMOS RTCs did not exist in the early days of computing. The current time was entered manually every time the system was powered up. Modern systems have a battery-backed RTC built in to remove the requirement for manual time entry.
The current time, whether entered manually or read from the RTC when the system is powered up, has to be converted to the correct format needed to initialize the system clock. The conversion process involves subtracting the epoch from the current date. The difference is the elapsed time since the epoch, which is then converted to seconds. The elapsed time is used to initialize the software clock, which will count from that value at the period of the time tick.
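The conversion might look something like the following sketch. The epoch year of 1980 and the function names are assumptions for illustration; a real system would use its own epoch and would typically avoid the year-by-year loop:

```c
#define EPOCH_YEAR   1980L
#define SECS_PER_DAY 86400L

int is_leap_year(long year)
{
    if (year % 400 == 0) return 1;
    if (year % 100 == 0) return 0;
    return (year % 4 == 0);
}

/* Days from Jan. 1 of EPOCH_YEAR to Jan. 1 of 'year'. */
long days_to_year(long year)
{
    long y, days = 0;
    for (y = EPOCH_YEAR; y < year; y++)
        days += is_leap_year(y) ? 366 : 365;
    return days;
}

/* Elapsed seconds since the epoch, used to seed the software
   clock. 'yday' is the zero-based day of the current year. */
long clock_seed(long year, int yday, int hh, int mm, int ss)
{
    return (days_to_year(year) + yday) * SECS_PER_DAY
         + hh * 3600L + mm * 60L + ss;
}
```

Note that the leap-year handling must remain correct across 2000; a routine that treats every year divisible by 100 as a non-leap year would drop a day from every date after February 29, 2000.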
A common source of error with the system clock is the failure to service the time-tick interrupt every time it occurs. The operating system will normally service interrupts from multiple sources as part of its housekeeping functions. Critical service routines will leave the interrupts disabled until they are finished. If the interrupts remain disabled for too long, two or more time-tick interrupts can occur in that window. Only one of them will be serviced under these conditions; the rest are lost, and the system clock falls behind. All critical interrupt-service routines that cannot be interrupted must therefore complete their service within one time tick.
Because the system clock is not kept in a human-readable format, conversion problems can arise for the OS and, in some cases, applications programs. Date and time entry to the system is in a human-friendly format, as is the date and time output. Three conversion routines can therefore be a source of errors: the routines that convert from the RTC format to the system format, from human-readable format to system format, and from system format back to human-readable format. All three have to be written and tested for correct date conversion before, during, and after the century date change.
Leaping With The RTC
The typical RTC has all of the counting circuitry needed to provide seconds, minutes, hours, days of the week, dates of the month, months, and two-digit years. Only two of the clock registers will be examined here. The day-of-the-week counter has a counting range of 1 to 7. The RTC does not have enough processing power to calculate the day of the week from the information entered into the date-of-the-month, month, and year counters. The day of the week has to be calculated externally, and entered into the RTC when the other registers are loaded. The two-digit year counter has a counting range of 00 to 99; it will count from 00 to 99, then roll over to 00.
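One well-known way to perform that external day-of-week calculation is Sakamoto's method, sketched below for Gregorian dates with a full four-digit year. The routine returns 0 for Sunday through 6 for Saturday; add 1 to map the result onto the RTC's 1-to-7 counting range:

```c
/* Sakamoto's day-of-week method: 0 = Sunday ... 6 = Saturday.
   'y' is the full four-digit year, 'm' is 1-12, 'd' is 1-31. */
int day_of_week(int y, int m, int d)
{
    static const int t[] = { 0, 3, 2, 5, 0, 3, 5, 1, 4, 6, 2, 4 };
    if (m < 3)
        y -= 1;    /* treat Jan. and Feb. as part of the prior year */
    return (y + y / 4 - y / 100 + y / 400 + t[m - 1] + d) % 7;
}
```

The result is computed at date-entry time and written into the RTC's day-of-the-week register along with the other counters.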
Basically, RTCs fall into two categories: Y2K capable and Y2K compliant. The limitation of the Y2K-capable RTC is that it only has a two-digit year counter. It will provide the correct time of day, date of the month, month, and two-digit year with proper leap-year compensation up to 2099. Software intervention is required to determine whether the first two digits of the full year are 19 or 20.
Close examination of the leap-year rules shows that using the rule of four is all you need to provide correct compensation to 2099. The leap-year rules, as shown below, are that every year evenly divisible by four is a leap year, except if the year is evenly divisible by 100—unless it is also evenly divisible by 400, in which case it is a leap year. The year 2000 is evenly divisible by four and 400, which makes it a leap year. The years 1900 and 2100 are evenly divisible by four and 100, which prevents them from being leap years.
leapyear = NO;
if ((year mod 400) == 0)
    leapyear = YES;
else if ((year mod 100) == 0)
    leapyear = NO;
else if ((year mod 4) == 0)
    leapyear = YES;
The Y2K-compliant RTC expands the year counter from two digits to a true, four-digit year counter. No software intervention is required to calculate the century, because the entire four-digit year is represented. Table 1 shows the architectural differences between a simple, Y2K-capable clock and a Y2K-compliant clock. In many instances, the newer Y2K-compliant clocks can directly replace a Y2K-capable clock, with a minor software change to utilize the century counter (Table 2A and Table 2B). This minimizes the interruption to the manufacturing process because all that's required for a given product is a simple software change, and a bill of materials alteration.
Y2K RTC Does Windows
The only challenge posed when working with a Y2K-capable RTC is the century calculation. But, there are several ways to calculate the century value in software. One common method is a windowing technique that uses a pivot year.
If the pivot year is 1990, the application program or OS will first read the two-digit year from the RTC. If the two-digit year is greater than or equal to the pivot year, the value "19" should be prefixed to the two-digit year from the RTC. This will cover the years 1990 to 1999. If the two-digit year from the RTC is less than the pivot year, the value "20" should be prefixed to the two-digit year from the RTC. This will cover the years 2000 to 2089.
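The pivot-year window described above reduces to a single comparison. The 1990 pivot follows the example in the text; the function name is illustrative:

```c
#define PIVOT 90   /* two-digit pivot year: 1990 */

/* Expand a two-digit year read from the RTC (0-99) to a full
   four-digit year using the pivot-year window. */
int expand_year(int two_digit_year)
{
    if (two_digit_year >= PIVOT)
        return 1900 + two_digit_year;   /* covers 1990-1999 */
    else
        return 2000 + two_digit_year;   /* covers 2000-2089 */
}
```

The trade-off is the width of the window: with a 1990 pivot, the technique works only for dates from 1990 through 2089.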
A variation of the windowing technique uses 1 byte of the non-volatile RAM built into the RTC to store the century. The PC-AT architecture uses location 32h of the DS12887 for century storage, while the PS/2 architecture uses location 37h. One byte is reserved in the memory space to store the century value. The application program or OS reads the two-digit year counter and the century location to assemble a full four-digit year, and compares it to the entire four-digit pivot year. If the four-digit year from the RTC is less than the four-digit pivot year, the century has rolled over, and the value 20 should be written to the century location. This covers the period from the pivot year to 2099.
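A sketch of the stored-century variant follows. The nvram[] array simulates the RTC register file for illustration; the offsets follow the PC-AT convention (two-digit year at 09h, century byte at 32h in the DS12887), and the 1990 pivot matches the earlier example:

```c
#define YEAR_REG    0x09   /* two-digit year counter */
#define CENTURY_REG 0x32   /* PC-AT century byte in RTC NV-RAM */
#define PIVOT_YEAR  1990

/* Simulated RTC register file; a real system would use the
   RTC's index/data ports instead. */
unsigned char nvram[64];

/* Assemble the full four-digit year, updating the stored
   century if the clock has rolled over past the pivot. */
int read_full_year(void)
{
    int year = nvram[CENTURY_REG] * 100 + nvram[YEAR_REG];
    if (year < PIVOT_YEAR) {           /* rollover: 19xx -> 20xx */
        nvram[CENTURY_REG] = 20;
        year = 2000 + nvram[YEAR_REG];
    }
    return year;
}
```

Because the century byte is corrected as a side effect of the read, the window check only has to fire once after the rollover; subsequent reads find the stored century already up to date.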
The windowing techniques work fine in closed systems; in open systems, however, they pose some risk. The most successful open systems are supported by OSs, compilers, and assemblers from many different vendors, and there is no guarantee that all of them window the year the same way.
The compiler run-time library is another source of date-handling errors. Compilers that have routines to get and set the time and date are normally designed for a specific hardware platform, where the location of the raw system clock and the RTC are well defined. The compiler library routines take on the responsibility of converting the system-clock data to a form required by the compiler. The compiler library routines have to be verified for correctness, and possibly replaced with your own routines.
Most compilers allow assembly-language routines to be mixed in with the compiler code, using pseudo operands to show the start and finish of the assembly code. Some allow assembly code routines to be linked into the object during the linking process. This allows the programmer to replace run-time library routines with in-house routines. Assembly language routines allow direct access to all of the I/O ports and memory in the system, so that they can be manipulated by the application program. In-house-developed code that accesses the date and time registers in the RTC or the system clock must be tested to ensure there are no conversion errors, and that they handle the date properly through the century date change.
In a nutshell, developing systems expected to function properly before, during, and after the century date change requires thorough testing of the hardware platform and the platform-development tools. Year-2000 failures can occur in the hardware, operating system, compiler, date-conversion routines, and application programs. Everything needs to be checked for correct date handling.