Moore’s Law: The Challenge And The Solution For Test
Most test engineering challenges can be traced back to Moore’s Law: the observation by Intel co-founder Gordon Moore that the number of components in integrated circuits doubles approximately every two years. This law has become a proxy for the tremendous increases in performance and reductions in cost not just for semiconductor devices, but for all electronics products.
The implications of Moore’s Law, an exponential increase in the performance and functionality of devices, have put tremendous pressure on test, which must keep pace with new capabilities without driving up the overall cost of designing and producing a device. On the other hand, Moore’s Law has also driven incredibly powerful processing capabilities for improving the speed and precision of test systems, provided they are architected in a way that inherently benefits from these processing improvements.
Moore’s Law: The Challenge
As semiconductor density increases, so does the functionality of electronic devices. Today’s semiconductor chips and electronic systems are adding new capabilities at a frenetic rate. Think of the phone in your pocket. It likely has multiple radios (GSM, Wi-Fi, Bluetooth, and others) and can take pictures, play audio, and perform myriad other tasks in addition to making a phone call. While the cell phone is the obvious example of this trend, it is by no means the only device seeing an increase in capability due to Moore’s Law. Computers, consumer electronics, automotive electronics, and even military technology are all experiencing the same phenomenon.
To add all of this functionality, design engineers have turned to higher levels of abstraction in designing semiconductors and electronic systems. Increasingly, they can reuse existing intellectual property, or IP, as the building blocks of a new design. This abstraction allows them to design at a system level and get new products to market with new features faster than ever before. And this abstraction increasingly takes the form of embedded software that defines many of the capabilities of these devices.
Moore’s Law For Test
Keeping up with this pace is a tall order for a test engineer. As consumers, we expect equivalent or even greater levels of quality with each generation of these products. To maintain this quality, new designs must be fully characterized before release, and each device must be functionally tested before it is shipped. With the same instrumentation and the same test techniques, test time (and by extension, test cost) can be expected to scale at the same rate as the new features. This is rarely acceptable, however, as Moore’s Law also predicts all of this additional functionality at the same or even lower cost to the consumer.
While Moore’s Law presents these daunting challenges to test engineers, it also provides the path to the solution. Test systems, like the devices they are built to test, are increasingly defined in software. And unlike its rack-and-stack predecessor, a well-architected software-defined test system can ride the performance curve of Moore’s Law to keep up with ever-increasing test needs. Two processing technologies are critical to keeping up: multicore processors and FPGAs.
Modern processors are taking the multicore path to continue doubling their transistor counts roughly every two years. The resulting performance gains, however, no longer come from higher clock rates but from the number of parallel operations that can be run on separate cores. This concurrent processing architecture requires a significantly different software paradigm.
Traditional text-based programming languages do a poor job of representing concurrency, and programs written in them are difficult to scale beyond a few cores. Higher-level abstractions, such as graphical dataflow, are required to properly represent concurrency and thus maximize total system performance.
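To make the idea concrete, here is a minimal sketch, in Python with a placeholder RMS measurement and simulated channel data (none of it from the article), of farming independent per-channel analyses out to separate cores:

```python
# Minimal sketch: spreading independent channel measurements across CPU cores.
# The RMS-style analysis and the simulated data are placeholders for illustration.
from multiprocessing import Pool

import numpy as np

def analyze_channel(samples: np.ndarray) -> float:
    """Placeholder per-channel measurement: RMS of the acquired waveform."""
    return float(np.sqrt(np.mean(samples ** 2)))

if __name__ == "__main__":
    # Simulated acquisitions for eight independent channels.
    channels = [np.random.randn(1_000_000) for _ in range(8)]

    # Each channel is analyzed on a separate core; results return in channel order.
    with Pool() as pool:
        results = pool.map(analyze_channel, channels)
    print(results)
```

A graphical dataflow language expresses the same independence visually, so the runtime can schedule the parallel branches across cores without the programmer managing threads explicitly.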
Programmable silicon, FPGAs in particular, is now coming to the forefront for high-performance test applications. An FPGA is inherently parallel, deterministic, and reliable, and it can perform functions close to the I/O pin with performance that a traditional processor cannot match.
For example, in an RF acquisition system, functions such as digital downconversion, filtering, and time-to-frequency conversion can all be performed in an FPGA at the rate of the incoming data. The downconverted frequency-domain data can then be passed to a multicore processor for further analysis to complete the measurement operation.
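As a rough host-side sketch of that chain (written in Python with illustrative sample rate, center frequency, and filter parameters; on the instrument itself these stages would run in FPGA fabric at the acquisition rate), the three steps might look like this:

```python
# Host-side sketch of the signal chain described above: digital downconversion,
# low-pass filtering, and time-to-frequency conversion. All numbers are illustrative.
import numpy as np
from scipy.signal import firwin, lfilter

fs = 100e6          # ADC sample rate (assumed)
f_center = 20e6     # band of interest to downconvert (assumed)
n = 2 ** 16
t = np.arange(n) / fs
rf = np.cos(2 * np.pi * (f_center + 1e6) * t)   # stand-in for acquired RF samples

# 1. Digital downconversion: mix the band of interest to baseband.
baseband = rf * np.exp(-2j * np.pi * f_center * t)

# 2. Filtering: remove the mixing image and out-of-band energy.
taps = firwin(129, 5e6, fs=fs)
filtered = lfilter(taps, 1.0, baseband)

# 3. Time-to-frequency conversion for further analysis on the host processor.
spectrum = np.fft.fftshift(np.fft.fft(filtered * np.hanning(n)))
```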
In combination, FPGAs and multicore processors provide a high-performance architecture for a software-defined test system.
The Role Of IP
Scaling a test system’s performance with Moore’s Law is part of the solution, but that alone may not enable test engineers to keep up with their colleagues in design. Since design is increasingly done at a higher level of abstraction using existing IP, test systems must also be able to leverage the same IP.
One example is RFID, or radio-frequency identification. To test an RFID tag, you may want to emulate an RFID reader. An emulator of an RFID reader, then, is a test system that can run many of the same IP blocks as an actual reader, though it will likely need more flexibility for reconfiguration and embedded measurements.
Test engineers must be able to use these IP blocks in their test systems to keep up with the added capabilities of the devices they test. Again, an FPGA-based architecture is ideal because many embedded algorithms can run on the FPGA in real time.
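For illustration only, here is a sketch of the kind of small IP block that might be shared between a reader design and its tester: a bit-serial CRC-16 routine with assumed CRC-CCITT-style parameters (the article does not name any particular block, and the frame bytes below are stand-ins):

```python
# Hypothetical shared IP: a bit-serial CRC-16 routine. The polynomial and initial
# value (CRC-16/CCITT style) are assumptions for illustration, not from the article.
def crc16(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """MSB-first, bit-serial CRC-16; the kind of logic that maps naturally to FPGA fabric."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# The same routine that frames commands in the reader design can validate
# tag responses captured by the test system.
reply = bytes.fromhex("3000")          # stand-in for a captured reply payload
print(hex(crc16(reply)))
```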
Reusing IP from design on a test system is part of the ultimate goal of concurrent design and test. In the “V diagram” that often represents this process, each phase of design has a corresponding verification, or test, phase (see the figure). In this way, a design team can work its way “down the V” from the highest-level modeling and design to lower-level implementation, testing at each stage.
Concurrent design and test, built on the technologies driven by Moore’s Law, will enable test engineers to keep up with the rapid increase in functionality of the devices they test.