If one looks at the last 50 years of engineering boom-and-bust cycles and correlates them with the “stealth” technologies that emerged during those periods, one can see an encouraging pattern: breakthrough technologies take root during the crises and eventually transform the industry. Often, few people initially grasp these technologies or their potential. It’s also regrettably demonstrable that the actual pioneers have rarely been the ones to reap the big rewards.
Start a little more than 50 years ago, in 1958, right after Sputnik and about the time I built my first radio (a Knight “Space Spanner” from Allied Radio). Across the auto industry, tailfins grew to what would be their maximum useless size, yet auto sales still took a big hit (Fig. 1). Some corny industry ads exclaimed “You ‘auto’ buy now!” Consumer and wholesale prices rose across the board.
According to economists, that recession didn’t really end until early 1961, when President Kennedy kicked off the race to the moon. But in terms of engineering, the recovery and ensuing boom was in part due to earlier government investments in technology education (see “Give ’Em a Buck”).
This boom was also the first to build on the root technology that powered the latter half of the 20th century, the semiconductor IC. This technology goes back to the work of Jacobi, Dummer, Darlington, Kilby, and Noyce (Fig. 2). Kilby and Noyce were probably the first engineers who had a clear vision of where IC technology might lead. Not me. I must have been seduced by the warm glow of the 12AT7 in that Space Spanner.
I remember reading about Kilby’s work in 1959, when I was in high school, and naively wondering whether it would be possible to scale passives as readily as transistors. Actually, it was a long time before we moved beyond through-hole assembly, but passives scaled as rapidly as clock rates and rise times required them to.
Another technology from this era took longer to become pervasive—direct-sequence spread-spectrum (DSSS) modulation. One day in a senior EE circuits class, our TA, who was doing his PhD research at Bell Labs, came to class apparently lacking a planned lesson. Somewhat orthogonally to what we had been covering, he delivered a brilliantly clear lecture on autocorrelation and its application to signal processing.
He was so articulate that I could have replicated that lecture for at least several days afterward. (It wasn’t going to be on the final.) I recognized the concept nearly 20 years later when it reared its head in two applications in particular that have since become pervasive: GPS and wireless networking.
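The property at the heart of that lecture is easy to demonstrate. Below is an illustrative sketch (my example, not the TA’s): a pseudo-noise sequence correlates perfectly with itself at zero lag and almost not at all at any other lag, which is what lets a DSSS or GPS receiver dig a known code out of the noise. The generator polynomial (x⁵ + x³ + 1) and 31-chip length are arbitrary choices for the demo.

```python
# Illustrative sketch: periodic autocorrelation of a maximal-length (m-)sequence.
# The recurrence lags below implement y[n] = y[n-2] XOR y[n-5], i.e. the
# primitive polynomial x^5 + x^3 + 1; any maximal-length LFSR behaves the same.

def m_sequence(lags, seed, length):
    """Generate `length` bits via the recurrence y[n] = XOR of y[n - l] for l in lags."""
    y = list(seed)
    for n in range(len(seed), length):
        bit = 0
        for l in lags:
            bit ^= y[n - l]
        y.append(bit)
    return y

def autocorrelation(bits, tau):
    """Periodic autocorrelation of the +/-1-mapped sequence at shift tau."""
    n = len(bits)
    c = [1 if b else -1 for b in bits]
    return sum(c[i] * c[(i + tau) % n] for i in range(n))

code = m_sequence(lags=(2, 5), seed=[0, 0, 0, 0, 1], length=31)

peak = autocorrelation(code, 0)                               # 31: perfect match
sidelobes = [autocorrelation(code, t) for t in range(1, 31)]  # -1 at every other lag
```

That single sharp peak (31 versus a flat floor of -1) is the whole trick: slide the local copy of the code against the received signal until the correlator spikes, and you have found the signal and its timing.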
I had to set that lecture aside until IC capabilities caught up with the theory. Meanwhile, by the time I finished college in 1966, the aerospace boom had reached a crescendo, and I moved to Los Angeles to get a piece of the action—for a time.
Economists don’t label it a recession. But in late 1969, if you were an engineer working anywhere in the aerospace or defense industries, it sure felt like one. The folks in Los Angeles who had sent Eagle to the moon took it hard. Seattle, where Boeing lost the C5A jumbo military transport to Lockheed-Georgia and had to shut down the 2707 SST in 1971, took it really hard.
The company went from a peak of 100,800 employees in 1967 to 38,690 in April 1971. That month, two real estate agents with a taste for irony put the message “Will the last person leaving Seattle — Turn out the lights” on a billboard a few blocks down Pacific Highway from Sea-Tac airport.
By that point, the worst was past for many EEs. Boeing’s payroll was up to 53,300 by October 1971. Down in L.A., I had turned in my Yellow Cab taxi-driver’s hat and gone to work for an aircraft antenna company. However, few anticipated the stock market crash that started in January 1973 and lasted until December 1974. The Dow lost more than 45% (after picking up 15% in 1972). That’s before factoring in the effects of the Arab oil embargo. Things were worse overseas.
While aerospace crashed, never to recover the glamour it had during the 1960s, the microcontroller tiptoed into our lives. Intel released the 4004 in late 1971. I never saw one of the Busicom four-function handheld calculators that used it, but I was impressed when a hotshot consultant showed us salaried grunt engineers his new HP-35 in 1972. (It was early enough that his calculator had the famous “2.02 ln e^x bug”: key in 2.02, then ln, then e^x in reverse-Polish order, and instead of returning 2.02, the calculator truncated the result to 2.) The HP-35 CPU was a custom chipset, though, oriented to the calculator’s stack architecture (Fig. 3).
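For readers who never used one: reverse-Polish entry means operands go onto a stack first, and each function consumes operands from the top. A minimal sketch follows (a toy postfix evaluator of my own, not HP’s four-register firmware) showing how the famous key sequence should have behaved:

```python
import math

def rpn(tokens):
    """Evaluate a reverse-Polish token stream on a simple operand stack."""
    stack = []
    for tok in tokens:
        if tok == "ln":
            stack.append(math.log(stack.pop()))
        elif tok == "e^x":
            stack.append(math.exp(stack.pop()))
        elif tok == "+":
            y, x = stack.pop(), stack.pop()
            stack.append(x + y)
        else:
            stack.append(float(tok))  # operands are pushed as entered
    return stack[-1]

# Key 2.02, then ln, then e^x: the two functions should round-trip the operand.
# Correct math returns 2.02; the buggy early HP-35 units displayed a truncated 2.
result = rpn(["2.02", "ln", "e^x"])
```

Because operands precede operators, no parentheses are ever needed, which is part of what made the stack architecture such a natural fit for a keystroke-driven calculator.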
This was, in a word, awesome. At the time, engineers at the antenna company I worked for performed their heavy computations using a single Teletype ASR-33 to access Tymnet’s mainframe. We entered data offline, creating tapes on the Teletype’s built-in punch, and generally spooled output to the punch, which was faster than the TTY’s printer. We’d make readable printouts offline.
Actually, Tymnet was more useful than the HP-35 for big, repetitive jobs. (There are lots of iterations involved in modeling a horn antenna.) But the idea of the HP-35—that you could run rings around a slide rule and eliminate order-of-magnitude errors (while adding many digits of false precision) on a (nearly) affordable unit you could slip into your pocket—was striking.
Of course, desktop scientific calculators already existed. In its air-data computer division, my previous employer had not one but three Nixie-tubed Wang electronic slide rules for all of its engineers to share. But they were the size of the clackety-clack mechanical Marchant calculators, relics of the Manhattan Project, they replaced. In contrast, the HP-35 was half the size of a Star Trek tricorder!
The sneaky thing about those chips, and the Intel (and Motorola) 8- and 16-bit microcontrollers to follow, is that it took a while before many of us stopped thinking of them as number-crunchers that could be forced into other roles and started seeing them as “anything” machines. I was at Tektronix in the mid-’70s when Tek acquired a company that made the first non-Intel microprocessor development system—for the 8008 (Fig. 4). It thus became necessary to train a sales force comprising the world’s best analog oscilloscope experts on how to sell firmware development tools.
The architectural concepts weren’t much of a problem, but we had to get across what these things were good for. What were people going to do with them that wasn’t basically arithmetic? We guessed at a few simple control situations, though I don’t think anybody envisioned the scope of ever-evolving applications you find at today’s Embedded Systems Conferences.
Oh, we might have. Perhaps the clue should have been Nolan Bushnell’s “Pong” game. The legend of the coin box overstuffed with quarters is too well known (and a little exaggerated) to retell here. (See computermuseum.50megs.com/pong.htm.) But Sunnyvale was Sunnyvale and Beaverton was Beaverton, so some of us missed connecting with the birth of the gaming boom.
Notwithstanding early energy crises, by the later 1970s, a pervasive IC-based civilian electronics industry was helping most engineers bring home regular paychecks, their amounts ratcheting up to almost keep pace with inflation. Then came the recession in the U.S. that began in July 1981 and lasted until November 1982, said to be triggered by the Federal Reserve’s attempts to attack inflation by making credit harder to obtain.
By 1980, inflation was 13.5%. In response, the Fed raised interest rates, and the prime rate peaked at 21.5% at the end of 1980. The years 1983 through most of 1987 were less troublesome. But on October 19, 1987, the Dow dropped by 22.6%, and the savings and loan crisis and its congressional bailout soon followed. However, those early 1980s were precisely the time that silicon hardware caught up with the DSSS theory that Bell Labs and the military had been working on back when I was in college.
In the early 1980s, I worked in a PR agency that picked up a startup client run by a hard-driving sailboat racer named Charlie Trimble. Trimble had been an IC guy inside Hewlett-Packard and had worked on an HP skunk-works Global Positioning System program. When HP decided that GPS didn’t fit its business model, it let Trimble and his cohorts acquire the intellectual property and start Trimble Navigation.
My assignment at the time was to ghost-write a pile of articles for the RF and defense-industry magazines of the day about the GPS. So, as people inside and outside Trimble explained the mysteries of Gold codes and chipping sequences to me, I was tickled to realize that they were talking about the theory a certain TA had laid out in a stuffy classroom nearly 20 years before. (It would come up again, nearly 10 years later, when Harris Semiconductor introduced the first DSSS Wi-Fi chipset while I was working for Harris’ agency.)
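Those Gold codes and chipping sequences are concrete enough to sketch. A Gold code family is built by XOR-combining two maximal-length sequences generated from a so-called preferred pair of polynomials; the payoff is that any two codes in the family barely correlate, so many satellites can share one frequency. The degree-5 pair below (x⁵ + x³ + 1 and x⁵ + x³ + x² + x + 1) is a textbook-style choice for illustration only; GPS C/A codes use degree-10 registers and 1023-chip sequences, but work the same way.

```python
# Illustrative sketch: length-31 Gold codes from a preferred pair of m-sequences.

def m_sequence(lags, seed, length=31):
    """Generate bits via the recurrence y[n] = XOR of y[n - l] for l in lags."""
    y = list(seed)
    for n in range(len(seed), length):
        bit = 0
        for l in lags:
            bit ^= y[n - l]
        y.append(bit)
    return y

def cross_correlation(a, b, tau):
    """Periodic cross-correlation of +/-1-mapped sequences at shift tau."""
    n = len(a)
    ca = [1 if x else -1 for x in a]
    cb = [1 if x else -1 for x in b]
    return sum(ca[i] * cb[(i + tau) % n] for i in range(n))

seed = [0, 0, 0, 0, 1]
u = m_sequence((2, 5), seed)         # x^5 + x^3 + 1
v = m_sequence((2, 3, 4, 5), seed)   # x^5 + x^3 + x^2 + x + 1

# Each relative shift of v against u yields a different Gold code in the family.
gold_0 = [a ^ b for a, b in zip(u, v)]
gold_1 = [a ^ b for a, b in zip(u, v[1:] + v[:1])]

# For a preferred pair, the cross-correlation is three-valued: only -1, 7, or -9,
# never anywhere near the in-phase autocorrelation peak of 31.
values = {cross_correlation(u, v, t) for t in range(31)}
```

A receiver correlating against its own satellite’s code sees a peak of 1023 (here, 31), while every other satellite’s code contributes only these small bounded values, which is what makes the code-division sharing work.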
One cool thing about Trimble’s first product was that it wasn’t a locator. Not enough of the constellation of NavStar satellites had been launched to provide consistent coverage. But they could provide an extremely precise clock.
Trimble began offering a $15,000 benchtop unit that could be used to recalibrate laboratory atomic clocks without the need to send them to the National Institute of Standards and Technology in the U.S. or to the Bureau International des Poids et Mesures in France. Trimble’s product not only prevented lab downtime, it also eliminated the complex recalibration needed after an atomic clock’s return to service to account for the relativistic effects of shipment by air.
For me, the atomic-clock calibrator may have obscured the potential for cheap, handheld positioning units. A few years later, during the first Gulf War, Trimble’s handheld sales went through the roof as parents and spouses bought civilian units to send to loved ones in Iraq.
DOTCOM BUBBLE AND BUST
More important than GPS in those years was the penetration of the Internet into ordinary life, due to the convergence of hypertext and high-speed TCP/IP, delivered via cable and DSL, and disseminated around the home and office by that other embodiment of spread spectrum, IEEE 802.11 Wi-Fi, first found in Harris Semiconductor’s PRISM chipset.
It’s almost hard to remember the Internet in the days when acoustic couplers were in the home, with mom-and-pop ISPs and text-based applications, or Unix workstations in the office, with more text-based applications accessed from the C shell. Who would have expected in those days enterprises like Amazon, Google, PayPal, and eBay? Who would have predicted server farms that suck electrical power from the grid on the scale of medium-sized cities?
I wasn’t a particularly early adopter of social networking, but by 1994, I’d been a regular on various Usenet newsgroups and the Banjo-L list server for at least five years. Yet when Jim Clark and Marc Andreessen released the beta of Netscape Navigator in 1994, I happened to be working a couple of blocks down Mountain View’s Castro Street from the suite over the bakery occupied by Netscape at the time (a job that included Harris’ PRISM introduction).
I heard Netscape was handing out free discs, so I walked over one lunchtime. I didn’t get a disc, but the receptionist told me how to download the beta online, and I was wasting company time on the Web by that afternoon.
The roots of the technology go back to the Defense Advanced Research Projects Agency (DARPA) and Bell Labs, UC Berkeley, the Xerox Palo Alto Research Center (PARC), and Apple (and Ted Nelson). These organizations had laid the groundwork for what later happened at CERN and the National Center for Supercomputing Applications (NCSA) at the University of Illinois, resulting in the Web. Once again, it was a sleeper technology that bided its time until there was a hitch in the economy, at which point it exploded on the scene and rewrote the book of profits.
This takes us to the dotcom bust of 2000–2001, which was like the aerospace bust of 1969 in that it disproportionately affected technology workers (i.e., I eventually lost my job again). After peaking at 5132.52 on March 10, 2000 (the pinnacle of “irrational exuberance,” as Fed chairman Alan Greenspan once characterized the boom), NASDAQ, the techie index, began its collapse three days later, on March 13. On October 9, 2002, it struck its nadir: 1114.11.
NASDAQ’s highest value since that date (to mid-April of this year) was 4580, on October 31, 2007. That was the peak of the boom that started unraveling in early 2008 and whose low we probably haven’t yet seen. What will be the next transformational technology (or technologies), finding its start after the dotcom crash and growing through the present Great Recession? I wouldn’t be surprised if it has something to do with energy efficiency. Or, maybe it will involve the convergence of energy harvesting, thin-film batteries, and wireless mesh networks. It’s too soon to tell.
What’s next in the longer term? Maybe I’m not the best person to ask. I was gazing at vacuum-tube filaments (and tweaking a regenerative-feedback knob!) while the IC was being launched. I made TV tapes to tell oscilloscope salesmen that microprocessors might be good for telling washing machines when to rinse. I wrote articles explaining how GPS could calibrate atomic clocks. True, I was a premature adopter of social networking, but Usenet and listservers do not equate to eBay and Twitter.
On the other hand, here we are in another of those episodes that find thousands of EEs trying to figure ways to get their resumes into the short stack and out of the tall stack. (Hint: it’s good to know Python; forget FORTRAN IV; calibrate the hiring manager before you bring up Forth.) Maybe I’ve learned something. Still, I’m not going to pick technologies. Yet it does make sense for engineers to look at how the electronics business has been changing recently to see how these shifts align with their skills.
Certain changes have been obvious. Chip production has been largely based in Asia for decades, starting even before Intel abandoned the memory business in the 1980s. One can argue whether that’s because of lax environmental laws there, or more willingness to invest in expensive fabs, or both. The original equipment manufacturer (OEM) center of mass has more recently shifted to Asia, in turn creating growth in smaller original design manufacturers (ODMs) to support them.
OEMs are like traditional car manufacturers. They make branded products for end users. ODMs are like auto parts suppliers. They make anonymous subassemblies. Or, the two are like primes and subcontractors in the aerospace business. Pick your industry. Either way, OEM/ODM engineering tends toward industrial design and manufacturing efficiency. But of more interest is where the gee-whiz parts of the design are carried out.
Over the years, I’ve watched semiconductor vendors make parts with better and better specs in a perpetual game of leapfrog. Yet recently, there’s been a change. In the past, the better the parts performed, the harder they were to use. The broader the amplifier bandwidth, for instance, the more difficult it was to keep it from oscillating in a circuit. The current OEMs and ODMs have changed that.
They will now (somewhat reluctantly) pay a few extra pennies for components that are less sensitive to layout. In fact, they would rather not deal with basic components. They respond much more positively when a chip vendor works with them to provide maybe not a “black” box, but some shade of “gray” box—one that performs a higher-level function at a higher level of performance than a “white” box made of off-the-shelf parts.
In essence, they’re “reverse-outsourcing” the most challenging parts of their designs to chip makers or independent design houses (IDHs), or a combination of the two. (Good IDHs seem to be prime acquisition targets for chip companies.) And where the IDHs are located doesn’t seem important.
When Analog Devices decided that microelectromechanical systems (MEMS) microphones were the coming thing, it acquired AudioAsics, a Copenhagen-based IDH that had a separate design center in Bratislava. There are other hot analog IDHs in Dublin and Edinburgh, and ADI has a strong captive operation in Limerick. Cork is a hotbed of DSP design. It all seems to depend on certain professors and the universities that support them. Think in terms of Fred Terman, at Stanford, updated and cloned dozens of times around the globe.
What might be the roots of the next stealth technology? We’re due for something transformational to come along and create the next Google or iPod. I certainly don’t know, but it’s easy to guess that it might have a greenish cast. And it’s not too farfetched to think it might be connected to consumer demand from a rising middle class in China and/or India. The trick might lie in thinking how those 21st century middle classes would be different than the 19th and 20th century middle classes in Europe and the Americas.
Cypress’ T.J. Rodgers recently shared some interesting tidbits. I’m not claiming to know what T.J. thinks, but the notion I took away was that China is too big and culturally different to follow a Western pattern of growth, with interior cities linked by freeways and a seamless power and communications grid.
Instead, in the interior, think of separate, self-sufficient urban population centers with specialized manufacturing capabilities, self-powered by solar-thermal or pebble-bed nuclear plants and surrounded by agricultural resources. Then decide what you want to design for that kind of environment.