At one time a scientist or engineer trying to solve a tough problem with electronic computation had the choice of an analog computer, a digital computer, or both together in a hybrid configuration. For really complex math involving differential equations and other messy calculus, the analog computer was king. It could solve differential equations almost instantaneously. And since analogs could do parallel computing naturally, multiple parts of a problem could be solved concurrently—really speeding up the solution. This made the simulation of big, ugly physical systems fast and practical.
Digital computers were really slow back then, and while they could be programmed to run calculus and other higher-math algorithms, they were essentially impractical for simulation; it was rarely done. But as digital computers got faster, it soon became apparent they were going to give analog computers some competition. Today, almost any digital computer, from the biggest mainframe to the smallest PC, does calculus and other higher math in a flash and with unparalleled precision. Eventually, analogs could no longer compete and simply disappeared. Like so many other electronic technologies, these impressive machines ran their life-cycle course.
Nevertheless, it was and still is a cool technology. Analog computers represented constants and variables with proportional analog voltage levels. These were then processed by various electronic circuits that performed the mathematical operations in analog form. The key processing circuit was the operational amplifier. The op amp could be easily reconfigured to perform a wide range of math operations such as addition/subtraction, multiplication/division by a constant, integration and differentiation, and many others. The op amp could even be configured to do logarithmic and trigonometric operations with special non-linear feedback circuits. For operations like multiplication, special four-quadrant modules were available.
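To get a feel for what one of those computing elements did, here is a rough numerical sketch (the part values are illustrative, not from any particular machine) of the ideal inverting integrator: Vout = -(1/RC) times the integral of Vin. With R = 1 megohm and C = 1 microfarad, RC = 1 second, so a constant +1 V input produces a -1 V/s ramp at the output.

```python
import numpy as np

# Illustrative sketch: an ideal inverting op-amp integrator obeys
#   Vout(t) = -(1/(R*C)) * integral of Vin dt.
# R = 1 Mohm, C = 1 uF are assumed values chosen so RC = 1 s.

R, C = 1e6, 1e-6          # 1 Mohm, 1 uF -> RC = 1 s
dt = 1e-3                 # simulation time step, seconds
t = np.arange(0.0, 2.0, dt)
vin = np.ones_like(t)     # constant +1 V input

# Running sum stands in for the continuous integral; note the inversion.
vout = -np.cumsum(vin) * dt / (R * C)

# After 2 seconds of a +1 V input, the output should sit near -2 V.
print(round(vout[-1], 2))
```

On a real machine that ramp was a voltage you could watch on a meter or scope; here it is just an array, but the transfer function is the same.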
To program an analog computer, you simply wrote out the equations you wanted to solve, converted them to a block diagram, then wired the various computing elements together with pluggable cords on a large patch panel. Most problems were simulations of circuits, mechanical assemblies, and even big complex systems involving chemical processes and space travel. With analog components, the computation precision was not as great as what you could get with a digital computer, but it was good enough for most jobs. Answers with an error of no more than a few percent were possible. Lots of really tough problems got solved this way. These computers could give you calculus answers that you could not come up with yourself for lack of a suitable paper solution. When your table of integrals failed to reveal a form to fit your problem and you had an analog computer at your disposal, you were good to go. The math software available today readily takes care of nasty problems like that.
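The classic textbook patch illustrates that equations-to-block-diagram step. To solve a mass-spring-damper equation, x'' + 2*zeta*w*x' + w*w*x = 0, you solved for the highest derivative, fed it through one integrator to get x', through a second to get x, and closed the loop with summing and inverting amplifiers. This sketch (my own illustration, with the two hardware integrators emulated by semi-implicit Euler steps) mimics that wiring:

```python
import math

# Sketch of the classic analog patch for x'' + 2*zeta*w*x' + w*w*x = 0:
# the summing junction forms x'', the first "integrator" produces x',
# the second produces x, and the loop is closed. Semi-implicit Euler
# stands in for the continuous hardware integrators.

def spring_mass(w=2 * math.pi, zeta=0.0, x0=1.0, t_end=1.0, dt=1e-5):
    x, v = x0, 0.0
    for _ in range(int(round(t_end / dt))):
        a = -2 * zeta * w * v - w * w * x   # summing junction: x''
        v += a * dt                         # first integrator  -> x'
        x += v * dt                         # second integrator -> x
    return x

# Undamped case, w = 2*pi rad/s (1 Hz): after exactly one period,
# x should return very close to its initial value of 1.0.
print(round(spring_mass(), 3))
```

On the real machine you set w and zeta with coefficient potentiometers and watched x(t) directly; changing a pot and watching the response change was the whole charm of it.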
Back in the heyday of analog computers there were quite a few manufacturers. I used to work for the largest analog supplier, Electronic Associates in West Long Branch, NJ. I worked on their big analogs, first the PACE series then the 8800, as well as their 32-bit digital computer called the 8400. A set of A/D and D/A interfaces connected the two together in a hybrid system. One of the systems I worked on simulated the first Apollo mission to the moon at NASA Houston. The big digital system simulated the long flight from earth to moon while a couple of analog computers simulated the fast dynamics of the Apollo spacecraft. The ADCs and DACs in the interface let the analog and digital machines exchange data. You can imagine the equations that went into programming this monster.
Some of the other analog companies that come to mind are Applied Dynamics, Comcor, Systron Donner, Computer Systems Inc., and a few others I can no longer remember. I also worked for Hybrid Systems in Houston, which made big analogs and hybrids for aerospace and petrochemical companies. The digital computer we used was the IBM 360/44, which was the "hot" engineering/scientific Fortran machine of the day. I worked on one system that went to Bell Helicopter in Fort Worth, where they simulated their helicopter designs. Think Huey.
Most of these analogs were of the iterative type, where fast integrators were used and the problem was time scaled and solved repeatedly many times per second. This let you display the graphical results on an oscilloscope and interact with the solution dynamically, which helped you visualize the problem and optimize designs.
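The time-scaling trick behind repetitive operation is simple: substituting tau = t/k multiplies every integrator gain by k, so a solution that takes T seconds of "problem time" completes in T/k seconds of machine time with the same values. A little sketch with illustrative numbers (a simple exponential decay, gains and step counts are my own choices):

```python
# Time-scaling sketch: boosting the integrator gain by k compresses the
# solution time by k without changing the computed values. Numbers are
# illustrative, not from any particular machine.

def decay(gain, dt, steps=50_000):
    """Integrate dx/dt = -gain * x with simple Euler steps; return final x."""
    x = 1.0
    for _ in range(steps):
        x += -gain * x * dt
    return x

k = 1000.0
slow = decay(gain=1.0, dt=1e-4)   # 50,000 steps spanning 5 s of problem time
fast = decay(gain=k, dt=1e-7)     # same 50,000 steps spanning only 5 ms

# Same trajectory, swept 1000x faster -- fast enough for a flicker-free scope.
print(abs(slow - fast) < 1e-9)
```

Run the whole solution a few hundred times a second like that and the oscilloscope trace looks like a live, continuously updated plot.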
A few other companies used to make components for analog computers, many of which were built into general- and special-purpose machines. George A. Philbrick (GAP) was one of these. I first learned op amps using their K2-W vacuum tube op amp module with its two 12AX7 tubes. You may remember reading about this device in Bob Pease's column. Later GAP made solid state op amps, some of which Bob Pease designed. Burr-Brown (now part of TI) was another early solid state op amp maker whose early products I sampled.
When I worked for Heathkit in the late '70s and '80s, I saw some examples of their old kit analog computer, which had been discontinued by the time I worked there. The EC-1 was a small (9 op amps) but useful educational computer. Each op amp was a single 6U8 triode-pentode with an open-loop gain of only 1000, but that was enough to work moderately well. The main problem with all these early op amps was that you had to balance them to null out the DC offset for maximum precision. That was done manually with a pot, and once all your amps were balanced, one or more would start drifting out of balance, so you had to run your solution fast. Heathkit also had a larger machine called the EC-400, which had 15 op amps if I remember correctly. Both worked well as long as the DC drift was balanced out.
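A quick back-of-envelope calculation shows why an open-loop gain of only 1000 was workable but marginal. An inverting amplifier with ideal gain -Rf/Ri actually realizes -(Rf/Ri) / (1 + (1 + Rf/Ri)/A_ol), so finite open-loop gain A_ol shaves a little off the ideal value:

```python
# Closed-loop gain of an inverting amplifier with finite open-loop gain:
#   A_cl = -(Rf/Ri) / (1 + (1 + Rf/Ri) / A_ol)
# The resistor ratio of 10 below is just an example value.

def closed_loop_gain(ratio, a_ol):
    """Magnitude of inverting-amp gain for ideal ratio Rf/Ri and open-loop gain a_ol."""
    return ratio / (1 + (1 + ratio) / a_ol)

ideal = 10.0
actual = closed_loop_gain(ideal, a_ol=1000.0)
error_pct = 100 * (ideal - actual) / ideal

print(round(error_pct, 2))   # about a 1% gain error for a gain-of-10 stage
```

An error of around a percent per stage was tolerable for a teaching machine like the EC-1, and it is consistent with the few-percent overall accuracy mentioned earlier; the big commercial machines needed far higher open-loop gains to do better.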
Solid state op amps soon became available, but few could operate over the ±100-volt analog range used in most analogs. A few ±10-volt machines were available, but most did not give the required precision or dynamic range. Making a wideband op amp with that ±100-volt range back in the late 1960s and early 1970s was an amazing accomplishment. It still is today.
Today we have such good op amps, analog multipliers, and other superior analog computing components that I have sometimes wondered if we shouldn't bring back a modern version of the analog computer. Does that make sense? Maybe it could be a solution to some ugly, calculus-intensive problem in a research setting. A special analog computer could be built to simulate the equations quickly, with the ability to play around with the various variables and so on. It would certainly be smaller, cheaper, and more precise than the monsters of yesteryear. Analog is still inherently faster than digital, so maybe it could happen. Then again, maybe not.
Anyway, this is a lost technology. With digital computers faster than ever and now with DSP and super simulation and math software, who needs an analog computer? What a shame. But I guess we can say that about many other lost electronic technologies.