Designing The Old-Fashioned Way

Sept. 21, 2007
Some years ago, a prestigious investment firm ran television commercials in which a distinguished British actor proclaimed that “They make money the old-fashioned way. They earn it.”

In a similar vein, it’d be nice if it could be said of us electrical engineers that “we solve problems the old-fashioned way. We think.” Unfortunately, we electrical engineers tend to be victims of our own success in that we’re over-dependent on computers to do our analysis and measurements for us. Too often, we mistake the precision of a computer-generated result for accuracy, and those are not at all the same thing. Between these two lies the difference between true professionals and those who simply play with the tools.

Many engineers insist that simulation and measurement must agree. I think they’re two-thirds right. The whole story is that simulation, measurement, and theory must agree if one is to have a completely reliable result. In other words, the result must make sense based on what we know about the operation of the structures we build.

In addition to making results more reliable, theory can have the greater advantage of making a wider range of design alternatives available. A computer only does exactly what it has been programmed to do; thus, its creativity can be no greater than that of its programmer. In essence, the “box” can’t “think outside the box.” In contrast, if one understands how a given result occurred, then one can find ways to generate a different result.

I will illustrate my point by examining the way two types of tools are used: the various flavors of Spice and the numerous 3D field solvers. Both types of tools are very powerful and very useful. That’s precisely why both types of tools see not only overuse, but also misuse. While both can generate valuable and insightful results, all too often they’re used to generate nonsensical results. The difference between these two classes of results lies in the quality of the neural processing that preceded and followed the electronic computation.

In the case of Spice, how do you know that your use of the device models supplied by a semiconductor vendor falls within their range of application? You should know that some semiconductor vendors deliver excellent Spice device models that are reasonably accurate for a wide range of both analog and digital designs. On the other hand, other very popular semiconductor vendors have a reputation for providing device models that may be okay for designing digital gates, but are wildly inappropriate for any type of analog design. Do you know which is which?

Here are some more pertinent questions about models: Do you know how to evaluate a device model for a prospective application? Have you determined that the device models you’re using match your understanding of device physics? If not, how can you possibly know whether or not your Spice results are valid, especially if you’ve done anything at all creative? How confident are you that the extracted parasitics you’re working with are accurate? Have you laid out some simple structures and compared the extracted parasitics to your estimates?
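Comparing extracted parasitics to a hand estimate can be as simple as checking an extracted capacitance against the first-order parallel-plate formula. A minimal sketch follows; the plate geometry, dielectric constant, and the "extracted" value are illustrative assumptions, not numbers from any particular process or extractor.

```python
# Sanity-check an extracted plate capacitance against a hand estimate.
# All geometry values and the "extracted" number below are assumed for
# illustration only.

EPS0 = 8.854e-12  # F/m, permittivity of free space

def parallel_plate_c(area_m2, gap_m, eps_r):
    """First-order parallel-plate estimate, C = eps0*eps_r*A/d, ignoring fringing."""
    return EPS0 * eps_r * area_m2 / gap_m

# Example: 10 um x 10 um plate over a 1 um oxide gap, SiO2 (eps_r ~ 3.9)
estimate = parallel_plate_c(10e-6 * 10e-6, 1e-6, 3.9)

# Suppose the extractor reported this value (hypothetical number):
extracted = 3.8e-15  # farads

# Fringing fields make the real capacitance somewhat larger than the
# plate formula, so a ratio modestly above 1 is expected; a large
# discrepancy in either direction means the estimate or the extraction
# deck deserves a closer look.
ratio = extracted / estimate
print(f"estimate = {estimate:.3e} F, extracted = {extracted:.3e} F, ratio = {ratio:.2f}")
```

The point is not the formula itself but the habit: a structure simple enough to estimate by hand gives you a calibration point for trusting (or distrusting) the extractor on structures you can't.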

Does your design flow involve the use of a combination of extracted and estimated parasitics? If so, then how do you know that there aren’t any parasitics that are being accounted for twice? How do you know that there aren’t any parasitics that are slipping through the cracks? In my experience, it’s extremely difficult to get the extracted and estimated parasitics combined correctly at each stage of the design process.

How much are you depending on device matching, either within a cell, between cells, or across a chip? Due in no small part to the continued shrinkage of line widths, matched devices are becoming a thing of the past. What methods are you using to estimate the effects of device mismatch, whether it’s differential offset in a receiver or on-chip variation (OCV) in your chip-level clock distribution network? How rigorous is the derivation of these methods? How reliable (or at least plausible) is the device-mismatch data you've gotten from your vendor? Or does your vendor even have a mismatch model available (yet)?
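One rigorous starting point for mismatch estimates is the well-known Pelgrom model, in which the standard deviation of threshold-voltage mismatch scales as sigma(dVt) = A_vt / sqrt(W*L). The sketch below runs a small Monte Carlo on that model; the coefficient and device dimensions are assumed, illustrative values, and foundry-supplied mismatch data should replace them whenever it exists.

```python
# Hedged sketch: input-referred offset spread for a matched device pair
# under a Pelgrom-style threshold-mismatch model. A_VT, W, and L are
# illustrative assumptions, not data from any foundry.
import math
import random

A_VT = 3.5e-3 * 1e-6      # V*m (i.e., 3.5 mV*um) -- assumed coefficient
W, L = 2e-6, 0.2e-6       # gate width and length in meters -- assumed

sigma_dvt = A_VT / math.sqrt(W * L)   # analytic std dev of Vt mismatch

# Monte Carlo: sample pair-to-pair offsets and recover the spread empirically.
random.seed(1)
samples = [random.gauss(0.0, sigma_dvt) for _ in range(100_000)]
mean = sum(samples) / len(samples)
emp_sigma = math.sqrt(sum((x - mean) ** 2 for x in samples) / len(samples))

print(f"analytic sigma(dVt)  = {sigma_dvt * 1e3:.2f} mV")
print(f"empirical sigma      = {emp_sigma * 1e3:.2f} mV")
print(f"3-sigma offset budget ~ {3 * sigma_dvt * 1e3:.1f} mV")
```

A single Gaussian draw is the trivial case, but the same loop extends naturally to correlated across-chip variation once you have a model for the correlation, which is exactly where vendor mismatch data (or its absence) matters.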

The bottom line is that your confidence in your simulation results can be no better than the accuracy of your models and the rigor of the methods used to evaluate the results.

A similar set of questions applies to 3D field solvers. How do you know you’re modeling the right circuit in the field solver in the first place? I know of a case in which a well-respected electromagnetic (EM) modeling company was using its own products to model some interconnects for an equally well-respected systems company. Unfortunately, when they modeled one of the cable connectors, they left out a 0.2-in.-long unshielded section where the transmission medium was attached to the connector. This oversight was found by tracing a dribble-error problem in the system back to its source. It will never be known whether the systems house failed to give the EM modelers the right information, or whether the EM modelers chose to ignore this seemingly insignificant detail that turned out, in hindsight, to be critical. But what is clear is that they didn’t model the right circuit, they got the wrong answer, and the mistake affected system performance in a very visible way.

One way to make sure that you’ve modeled the right circuit is to compare simulation to measurement. Have you done that? A lot of people treat the output of a 3D EM solver as though it were equivalent to measured data, and they don’t even mention the possibility that one might make some measurements to confirm the result. That’s simply not realistic or professional. 

How do you know that the boundary conditions you’re using are correct and adequate? For example, the physics of a via involves radial transverse electromagnetic waves (TEM waves), and the losses due to these modes can become measurable above 3 GHz or so. Are you using boundary conditions that allow these effects to be evident?

There's also an EM modeling company that’s stirring up a lot of buzz these days about modeling transmission lines over split planes. If you’re modeling such a geometry, will your boundary conditions allow you to estimate the losses and/or EMI caused by the slot antenna such a geometry can create? For either a via or a split-plane geometry, would you be able to sketch out on a napkin where the ground currents are going? Are you sure your sketch would be consistent with the laws of physics? Can you name and describe the electromagnetic modes that would be used in a mathematical analysis of these geometries? If you can’t do that, or can’t be confident that you’ve done it correctly, how can you be sure that your boundary conditions are correct? And if you aren’t sure your boundary conditions are correct and adequate, I invite you to consider the merits of your result.

Are you sure you’ve spaced your frequency points closely enough? Some geometries can exhibit relatively high-Q modes that can be important to the analysis results. Do you have a sense for what Q values to expect and therefore what frequency spacing to choose?
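A resonance at frequency f0 with quality factor Q has a 3-dB bandwidth of f0/Q, so the highest Q you expect sets an upper bound on your frequency step. The sketch below applies that relationship; the example f0 and Q values, and the rule of thumb of about five points across the bandwidth, are assumptions for illustration rather than a standard.

```python
# Hedged sketch: bound the solver frequency step so that a resonance of
# quality factor Q is still resolved. The "5 points per 3-dB bandwidth"
# rule of thumb and the example numbers are assumptions.

def max_freq_step(f0_hz, q, points_per_bandwidth=5):
    """Largest frequency step that still places 'points_per_bandwidth'
    samples across the 3-dB bandwidth (f0/Q) of a resonance at f0."""
    bandwidth = f0_hz / q
    return bandwidth / points_per_bandwidth

# Example: a structure with a resonance near 8 GHz and Q ~ 50 (assumed)
step = max_freq_step(8e9, 50)
print(f"max step = {step / 1e6:.0f} MHz")  # prints: max step = 32 MHz
```

Run the other way, the same relation tells you the highest Q your chosen frequency grid can resolve, which is a quick way to judge whether a suspiciously smooth solver result might simply have stepped over a resonance.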

For both Spice and the 3D field solver, I’ve asked a lot of very direct questions. These questions can be difficult to answer; those answers are often difficult to verify. Nonetheless, there are professional engineers out there who can craft coherent and satisfactory answers to each and every one of these questions. These people have a firm grasp of the theory behind what they’re doing, and that’s at least part of what makes the difference. If you’re not already one of these people, then I suggest that you (or at least your employer) find such a person and start learning from them.

Author’s bio:

Michael Steinberger, Ph.D., leads SiSoft’s tool development for design and analysis of serial links in the 5- to 20-Gbit/s range. Before joining SiSoft, Dr. Steinberger led a large group of design engineers at Cray Inc. responsible for SERDES design, high-speed channel analysis, PCB design, and custom RAM design. He drove the development of in-house methodologies and software at Cray used to successfully design and validate 6+-Gbit/s serial links.

Dr. Steinberger has more than 29 years of experience in the design and analysis of very-high-speed electronic circuits. He began his career at Hughes Aircraft designing microwave circuits and then moved to Bell Labs, where he designed microwave systems that helped AT&T move from analog to digital long-distance transmission. Steinberger was instrumental in the development of high-speed digital backplanes used throughout Lucent’s transmission product line. Steinberger earned his doctorate from the University of Southern California and has been awarded seven U.S. patents.

Author’s e-mail address:

[email protected]
