Keynoter describes ubiquitous computing from data center to pacemaker
San Francisco, CA. This year represents a critical juncture for the semiconductor industry, as organizations look to 3-D IC packaging, extreme ultraviolet (EUV) lithography, and 450-mm wafers in their efforts to keep Moore's Law on track. That's according to Karen Savala, president of SEMI Americas, who kicked off Semicon West this morning by introducing keynote speaker Shekhar Y. Borkar, an Intel fellow and director of extreme-scale technologies.
Borkar described himself not as a process technologist but as a “happy user” of the results of technologies such as the ones Savala enumerated. “I will exploit [those technologies] to make computing ubiquitous,” he said. However, he added, many challenges remain with respect to compute capabilities, memory, interconnect, and reliability and resiliency.
Borkar traced the evolution of computing from gigaflops performance in the 80s, through teraflops performance in the 90s and on to petaflops performance in the last decade. By the end of this decade, he said, we will see exascale computing.
In general, Borkar said, one decade's server performance becomes the next decade's client performance and the subsequent decade's handheld performance. He noted that making this happen requires system-level performance improvements that outpace transistor performance improvements, achieved primarily through system-design innovations such as parallelism. Speaking as a member of the system-design community, he said, “We get greedy” and want more performance than what process technologists deliver.
Improving energy efficiency will be critical moving forward, Borkar said. Energy per transistor is falling, he said, but with each eight orders of magnitude improvement in transistor efficiency, the number of transistors increases by six orders of magnitude. Transistor performance alone won't completely address energy-efficiency requirements, so design enhancements will also be needed. He described one experiment involving near-threshold voltage (NTV) operation, which showed that energy efficiency peaks when devices are operated near their threshold voltage.
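For readers who want a feel for why efficiency peaks near threshold, the toy model below sketches the usual trade-off: dynamic energy per operation falls with the square of the supply voltage, while leakage energy per operation grows as circuits slow down close to the threshold voltage. The constants and the alpha-power delay model here are illustrative assumptions, not figures from Borkar's experiment.

```python
# Toy model (illustrative only, not Borkar's data): energy per operation
# versus supply voltage. Dynamic energy scales as C*V^2; leakage energy per
# operation grows near Vth because each operation takes longer to complete.
import numpy as np

VTH   = 0.3        # threshold voltage (V), assumed
C     = 1e-15      # switched capacitance per operation (F), assumed
ILEAK = 1e-9       # leakage current (A), assumed
K     = 1e-9       # delay fitting constant, assumed
ALPHA = 1.5        # velocity-saturation exponent, assumed

V       = np.linspace(VTH + 0.02, 1.0, 200)   # supply voltages to evaluate
delay   = K * V / (V - VTH) ** ALPHA          # alpha-power delay model
e_dyn   = C * V ** 2                          # dynamic energy per operation
e_leak  = ILEAK * V * delay                   # leakage energy per operation
e_total = e_dyn + e_leak

v_opt = V[np.argmin(e_total)]
print(f"Energy/op is minimized near V = {v_opt:.2f} V (Vth = {VTH} V)")
```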
Borkar said that energy efficiency is inversely related to flexibility: flexible microprocessors are less efficient than more specialized DSPs, which in turn are less efficient than application-specific devices. Efficiency also differs among memory technologies, with SRAMs offering the best efficiency, followed by DRAMs, NAND and PCM devices, and disks. He did note that with DRAMs, only a portion of the data read is actually used, and he suggested an architectural change involving 3-D integration to improve efficiency.
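A rough calculation illustrates the DRAM point: a read typically activates an entire row of the memory array, yet only a single cache line of that row reaches the processor. The row-buffer and cache-line sizes below are common values assumed for illustration, not numbers from the keynote.

```python
# Illustrative arithmetic for the DRAM over-fetch problem: how much of an
# activated row is actually consumed by a typical read.
ROW_BUFFER_BYTES = 8 * 1024   # row activated across a rank (assumed 8 KB)
CACHE_LINE_BYTES = 64         # data actually delivered to the CPU

utilization = CACHE_LINE_BYTES / ROW_BUFFER_BYTES
print(f"Useful fraction of an activated DRAM row: {utilization:.2%}")  # ~0.78%
```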
The movement of data also affects energy efficiency. An intrachip data-communication operation might require less than 1 pJ to move data 1 cm. Moving that same data from chip to chip on a board might require 2 or 3 pJ, while board-to-board transfers might require 4 or 5 pJ. And moving the data within a cabinet might cost tens of picojoules. “Keep compute elements close,” he advised, adding that it's important to investigate nontraditional low-loss interconnects, including top-of-package interconnects, low-loss flex connectors, and low-loss twinax interconnects.
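Scaled up to a realistic payload, those per-hop figures add up quickly. The sketch below applies the rough picojoule numbers from the talk to a 1-MB transfer; treating them as energy per bit, and the payload size itself, are assumptions made for illustration.

```python
# Back-of-envelope sketch: energy to move the same payload across the
# interconnect levels Borkar cited, using the article's rough pJ figures
# interpreted as energy per bit (an assumption).
PJ_PER_BIT = {
    "on-chip (1 cm)":   1.0,   # "less than 1 pJ"
    "chip-to-chip":     2.5,   # "2 or 3 pJ"
    "board-to-board":   4.5,   # "4 or 5 pJ"
    "within a cabinet": 20.0,  # "tens of picojoules"
}

PAYLOAD_BITS = 1 * 1024 * 1024 * 8  # 1-MB payload (assumed)

for hop, pj in PJ_PER_BIT.items():
    microjoules = pj * PAYLOAD_BITS / 1e6
    print(f"{hop:18s} ~{microjoules:8.1f} µJ per MB moved")
```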
Without sufficient care, he said, highly parallel designs can represent the road to unreliability. To avoid that, designers will have to focus on error detection and isolation as well as reconfigurability.
Borkar said he is often asked why he focuses on local computing when everything is moving to the cloud. It all comes down to energy vs. distance, he said: the energy cost of moving data over Ethernet, Wi-Fi, or 4G networks is much higher than that of moving data within a chip or across a board. Consequently, he sees a future of ubiquitous computing extending from the data center to the pacemaker. “We can make it happen, together,” he concluded.