I remember the day it arrived. All of our design engineers gathered around the lab bench waiting for our technician to unpack the box. As it was slowly lifted from the protective cardboard packing and set on the bench, we all looked on in amazement—5 Mbytes in a single hard drive that could fit in your hand! It was a Control Data Corporation ST-506-compatible hard-disk drive that weighed 4.5 lb and consumed around 40 W.
Just about everyone made some statement that was the equivalent of “Who could ever use that much storage?” The drive didn’t even have an onboard disk controller; it needed a separate circuit card loaded with electronics just to let a computer store and retrieve the data! This was 1983, and 64 kbytes of RAM in your computer was a bunch.
In those days, we all did our designs on vellum paper with a mechanical pencil. It was not uncommon for me to have 15 to 20 “E” size pages of schematics all hand-drawn that were copied (using an ammonia process) onto blueprint paper for our peer design reviews.
I was a 23-year-old engineer then, and I remember being hacked to death by my older colleagues. I would get questions like, “Why did you use this octal latch here?” or “Is this extra NAND gate really necessary?” These were the days when an octal buffer would cost you $0.80 (about $1.72 today) in volume, so every transistor, gate, buffer, and latch was questioned.
My old boss and mentor always told me, “I can hire anyone off the street that can design something that will work in the lab… I pay you an engineer’s salary because I expect you to design something that we can mass-produce. We are going to build thousands of them and I expect all of them to work!”
I never forgot those words. It was a very interesting time. If you wanted to learn how to build a charge pump from transistors, a precision instrumentation amplifier from discrete JFETs, and op amps or a digital pipeline graphics engine from ROMs, latches, and gates, it was the time to be alive.
Sticks and Stones? Almost...
It was the day of the PAL or programmable array logic, which was the predecessor to the modern complex programmable logic device (CPLD). These little devices allowed us to build state machines and consolidate logic, even though they were quite expensive at the time.
We used primitive tools like PALASM (PAL Assembler) and other simple PC-based tools to construct the fuse maps. The maps were loaded into programmers that would literally blow out a titanium-tungsten fuse inside the PAL device. (You only got one shot.) In the early days, it was not uncommon for these “fuses” to actually grow back due to incomplete programming and high signal currents. That would always keep me guessing whether I had designed my logic correctly. Oh, those were the days!
All of this hardware required power, and at that time, the LM78xx (also known as the LM340) family was our standard component for power-supply design. We would use a transformer to step down the ac line voltage to approximately 8 V, rectify it with a full bridge, and then use a linear regulator to produce the 5 V needed for the logic. We also needed ±12 V as well as –5 V, which were generated off split windings of the transformer.
These voltages were also controlled by linear regulators (both positive and negative versions). The –5 and +12 V in addition to the +5 V were required for the ultraviolet (UV) erasable EPROMs that were used for state machines or held processor code (see the figure). Such power supplies were extremely inefficient. Switching regulators existed, but for most engineers, they were far too complex to use in a design.
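The inefficiency is easy to see with a back-of-the-envelope calculation: a linear regulator drops the entire excess input voltage across its pass transistor at the full load current. Using the roughly 8-V rectified input and 5-V logic rail described above, and an assumed, purely illustrative 1-A load:

```python
# Back-of-the-envelope linear-regulator numbers for the ~8-V rectified
# input and 5-V logic rail described above. The 1-A load current is an
# assumed, illustrative figure, not from the article.
v_in = 8.0      # volts at the regulator input (after the bridge rectifier)
v_out = 5.0     # regulated logic supply
i_load = 1.0    # assumed load current, amps

p_out = v_out * i_load                   # power delivered to the logic: 5.0 W
p_dissipated = (v_in - v_out) * i_load   # burned off as heat in the pass device: 3.0 W
efficiency = p_out / (v_in * i_load)     # best case, ignoring quiescent current

print(f"Output power:          {p_out:.1f} W")
print(f"Heat in the regulator: {p_dissipated:.1f} W")
print(f"Best-case efficiency:  {efficiency:.1%}")
```

In other words, even before counting transformer and rectifier losses, more than a third of the input power went straight into heat.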
Most engineers today would wonder why we needed such extreme voltages for a digital IC. Remember, this was the early 1980s, and the state-of-the-art NMOS processes used to manufacture the 2708 and 2716 EPROMs had geometries on the order of 2 µm (no, not 0.2 µm, 2 µm). That represents gate geometries more than 40 times larger than a modern 45-nm process.
Today, we have problems keeping charge carriers out of the conduction channel (keeping the transistors off). But back then, you had to help them out by biasing the channel near conduction. These EPROMs contained a quartz window for erasing the programming with UV light. If you accidentally plugged one into a socket backwards, you created a light-emitting PROM (LEP), which glowed orange through the window for about a second while the gold bond wires were incinerated—really fun times!
By the mid-1980s, Daisy Systems and Mentor Graphics workstations started to appear—at an amazing cost by today’s standards. Only very large corporations could afford these tools, and the PCs of the time were really too underpowered to handle large schematic-capture or printed-circuit-board (PCB) layout tasks. Many were still done by hand. I watched technicians cut Rubylith film with a razor knife to create the patterns used to fabricate PCBs long before CAD tools were used to do the same. That’s a lost art.
As the 1980s and 1990s progressed, we saw advances in computers and computer tools. In 1993, I remember my close friend calling me up to boast about his brand-new, 33-MHz Intel 80386-based computer and its 320-Mbyte hard drive. (I think he hocked his car to buy it.) I couldn’t wait to play with the system. It was so much more powerful than the 80286-based machine that I was using at the time. His new 386 machine had real memory management and a companion 80387 floating point unit (FPU), and it could run some serious software. I was extremely jealous!
So the digital devices and tools were getting better, but what about the rest of the analog system components? Remember the LM78xx/LM340 linear regulators I mentioned earlier? Well, they were still around (and still are), but they had company. A new breed of power-supply components was finding its way into engineers’ hands. These were the integrated switching regulators such as the LM2575.
They not only included the switching controller, they also integrated the power transistor on the same monolithic device. A high-efficiency power supply could now be built with very few external components (only an inductor, a capacitor, and a Schottky diode). Engineers could finally build power supplies with far better efficiency than the older linear regulators allowed. They could also do magical things like invert voltages to make the negative rails required by line drivers or other analog sections.
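To a first approximation, a step-down switcher like the LM2575 transfers power through the inductor rather than burning off the input-output headroom, so the ideal numbers look very different from a linear regulator doing the same job. A minimal sketch, with all voltages and load current assumed purely for illustration:

```python
# Idealized buck (step-down) switcher versus a linear regulator for the
# same assumed 12-V-to-5-V, 1-A job. All values are illustrative; a real
# LM2575 adds switch, Schottky-diode, and inductor losses.
v_in, v_out, i_load = 12.0, 5.0, 1.0

# Linear regulator: the pass element drops (v_in - v_out) at full i_load,
# so efficiency can never exceed v_out / v_in.
eff_linear = v_out / v_in                 # about 41.7%

# Ideal buck converter: duty cycle D = v_out / v_in, and the average
# input current falls to D * i_load, so ideal efficiency approaches 100%.
duty = v_out / v_in                       # about 0.42
i_in_ideal = duty * i_load                # about 0.42 A drawn from the input

print(f"Linear best-case efficiency: {eff_linear:.1%}")
print(f"Buck duty cycle: {duty:.2f}, ideal input current: {i_in_ideal:.2f} A")
```

The larger the step-down ratio, the worse the linear regulator looks, which is exactly why these integrated switchers caught on so quickly.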
The energy crisis of the early 21st century had not yet arrived, so power was mostly taken for granted. Yes, hard-disk drives were improving on their densities and processors were making great strides, but the power requirements still weren’t a major concern. If your product had an electrical cord, you assumed you had at least 2000 W of available power to run your system. Those were the days!
Today’s Design Considerations
It’s no longer a luxury to provide efficiency. It’s expected. Systems not only need to provide their function, they also must do it with the fewest joules possible. Yes, we have components like high-density FPGAs complete with soft processor cores, supercomputer-like microprocessors, high-density RAM, ultrahigh-speed data converters, and amplifiers. But it all takes power to run them. Soon, every electronic product will have a sticker on its side that spells out how much energy it consumes per year. Energy is no longer a low-cost commodity that’s taken for granted.
Design engineers today have a daunting task—create systems that outperform their predecessors while using less power. Okay, when I was designing systems in the 1980s, my only concern was to get my design to fit into the box given to me by the mechanical engineers. I might even have convinced them to make it bigger if I needed more space. But not today.
Server farms using thousands of blade servers are running 24/7 to provide uninterrupted service to millions of customers worldwide. Power is everything. In July of 2007, Ellacoya Networks released data that showed YouTube accounting for 10% of Internet bandwidth— a staggering amount of power dedicated to watching video over the Internet. The growth of information in all forms is highly nonlinear and increasing with every second, and so is the power consumption associated with the storage and delivery of that information.
Today, it would not be uncommon in an engineering peer review to question the power consumption before the functional implementation. Power that goes into a system comes out as heat and impacts both the cost of ownership and the long-term reliability. Fans can fail, components can overheat, and systems can go down. By lowering the energy consumption of a system, engineers gain both improved reliability as well as lower energy costs.
But does saving 30 W in a server really matter? Saving 30 W (possibly a 10% improvement in the power supply or through improved system architecture) in each of 10,000 servers is a tremendous amount of power. If you account for the energy requirements of the air conditioning as well, the power can double. The 30-W savings across 10,000 servers is equivalent to the power required to run roughly 500 average U.S. homes for a year!
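The claim in that paragraph checks out with simple arithmetic. (The roughly 10,700-kWh-per-year figure for an average U.S. home is my assumption, in line with published utility averages, and the air-conditioning factor is taken as a straight doubling as the paragraph suggests.)

```python
# Sanity-check the server-farm savings figure from the paragraph above.
# The per-home annual consumption is an assumed round number.
savings_per_server_w = 30.0
servers = 10_000
ac_multiplier = 2.0            # cooling roughly doubles the savings
home_kwh_per_year = 10_700.0   # assumed average U.S. home consumption

total_w = savings_per_server_w * servers * ac_multiplier  # 600,000 W
home_avg_w = home_kwh_per_year * 1000 / 8760              # ~1,220 W continuous
homes = total_w / home_avg_w

print(f"Total savings: {total_w / 1000:.0f} kW")
print(f"Equivalent homes: {homes:.0f}")
```

The result lands right around 500 homes, matching the figure in the text.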
The Next Generation
Like today, the future of designing electronic systems will rely heavily on computers. Advances in software will give engineers many alternative architectural approaches, all optimized for their target specifications. Leading those specifications will be power consumption, and the semiconductor suppliers will be pressured to provide tools that understand the system requirements and suggest solutions—much like my old coworkers did in my peer design reviews.
As systems continue to become more complex, software will be key in improving performance as well as reducing power consumption. Expert systems have long been the dream of engineers—those automated systems that effectively “bottle” the expertise of hundreds of specialists. These systems will analyze designs or possibly suggest new designs based on the requirements provided. I can envision a day when engineers become so overwhelmed by the details, they will come to rely on a collective computer that works the fine aspects while allowing the designer to handle the big picture.
In the near term, there’s no doubt that we’ll rely more on manufacturers to supply either tools or complete integrated solutions, making the creation of systems much easier. Imagine having to build a CPU from scratch. Today, you can simply load it as a soft core into an FPGA or buy it as a completely self-contained device. FPGA tools already allow you to budget your power requirements. The simulations calculate the estimated amount of power a design needs running at a particular clock frequency.
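Those FPGA power estimators are largely built on the classic dynamic-power relation for CMOS logic, P ≈ α·C·V²·f. A toy version of that kind of estimate, in which every number is a made-up placeholder rather than data for any real device:

```python
# Toy CMOS dynamic-power estimate of the kind FPGA budgeting tools
# perform: P_dyn = alpha * C * V^2 * f. Every value here is an
# illustrative placeholder, not data for any real device.
alpha = 0.15     # average switching activity (fraction of nodes toggling per cycle)
c_total = 2e-9   # total switched capacitance, farads (2 nF, assumed)
v_core = 1.2     # core supply voltage, volts
f_clk = 100e6    # clock frequency, hertz

p_dynamic = alpha * c_total * v_core**2 * f_clk
print(f"Estimated dynamic power: {p_dynamic * 1000:.1f} mW")
```

Note the square-law dependence on supply voltage: it is the main reason core voltages have fallen so far since the 5-V logic days described earlier.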
Additional online tools help digital engineers become analog designers. Once the digital sub-systems are complete, these tools can help designers build highly efficient power supplies or a complete analog signal path. In the future, such tools will be “aware” of the entire system requirements and provide online advice as engineers turn to the manufacturers for assistance, even if that assistance is from artificial intelligence. Now that will be the day.