IC designers now have a powerful weapon in the struggle against rising test costs: commercially available EDA solutions that provide fast and effective means to implement scan compression on-chip. By reducing the amount of data needed to thoroughly test digital circuits, compression frees up enough tester memory to add tests (e.g., transition delay pattern sets) that further improve quality. Because off-the-shelf tools have become increasingly automated and easier to use, semiconductor firms are rapidly embracing scan compression to lower costs at the tester.
It's because scan compression has proven so successful in reducing test costs that designers and managers alike often maintain, mistakenly, that more is better. Although it's reasonable to assume that ever-higher levels of compression will achieve ever-higher cost savings, the economics underlying the technology suggest otherwise. In fact, compression without limits increases costs.
In this article, we'll see why this is the case by exploring how compression reduces test time and what factors degrade compression performance and cost savings. We'll determine what level of savings designers can realistically expect and how to maximize these savings. In the process, we'll arrive at some practical implementation guidelines that will help designers reap the benefits of scan compression while avoiding its hazards.
Test Execution Cost and Test Time Reduction
Assume that a design with F scan flops has C scan chains of equal depth, each connected to a pair of dedicated scan I/O pins. Then without scan compression, the chain depth is F/C, and we can approximate the cost C_{T} of testing each yielding device on the tester as:

C_{T} = α · R · P_{B} · (F/C) / (f · Y_{0})    (Equation 1)

R is the tester cost ($/sec), P_{B} the number of basic scan ATPG patterns, Y_{0} the manufacturing yield, and f the tester scan shift frequency. The multiplier α reflects a slight decrease on average in test time due to failing die (Y_{0} ≤ α ≤ 1).^{1} C_{T} is referred to as the test execution cost.
Test application time reduction (TATR) is accomplished by increasing the number of scan chains by a factor of "x" so that the depth of each chain is reduced by the same factor. The variable "x" is loosely referred to as the amount of compression, but it's actually the compression ratio, the number of internal scan chains divided by the number of scan channels, C.
For example, if your design has C = 10 scan channels, then implementing x = 20 compression creates 20 × 10 = 200 chains, each 1/20th of the original depth. Compression and decompression circuits between the chain I/Os and the scan I/O pins ensure that the number of scan I/O pins remains the same. Sharing of internal inputs to the scan chains means that the number of bits in each scan ATPG pattern is reduced by the same factor, x. Likewise, reducing scan depth by the same factor makes it possible to scan in and test x times more patterns in the same amount of time.
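The chain-count arithmetic in this example can be captured in a couple of lines (the flop count is the example design's 1.3 million):

```python
def compressed_chains(F, C, x):
    """Number of internal chains and per-chain depth for compression ratio x."""
    chains = C * x          # x times more chains than scan channels
    depth = F // chains     # each chain is 1/x the original F/C depth
    return chains, depth

chains, depth = compressed_chains(F=1_300_000, C=10, x=20)
# 200 chains of depth 6,500, versus 10 chains of depth 130,000 without compression
```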
The cost savings ΔCost from TATR is the difference in test execution costs between basic scan and scan compression:

ΔCost = C_{T} − C_{T}/x = C_{T} · (1 − 1/x)    (Equation 2)
Dividing by C_{T} gives the percentage cost reduction: 1 − 1/x. Cost savings given by Equation 2 are ideal because the formula doesn't account for various negating effects that offset savings. Let's now examine each of these effects in turn, using a 65nm design consisting of 97.1 million gates, 1.3 million scan flops, and 10 scan channels for the examples. For this article we've adjusted measurements of tool-specific behavior in order to highlight the described phenomena.
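The diminishing returns in the 1 − 1/x ratio are easy to see numerically; this is a direct transcription of Equation 2's percentage reduction, not a full cost model:

```python
def ideal_savings(x):
    """Ideal fractional cost reduction from Equation 2: 1 - 1/x."""
    return 1 - 1 / x

# Doubling the ratio from 20 to 40 improves ideal savings by only 2.5 points.
for x in (2, 10, 20, 40):
    print(f"x = {x:3d}: ideal savings = {ideal_savings(x):.1%}")
```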
Pattern Inflation
As the compression ratio increases, more patterns are needed to maintain the
same high fault coverage. Pattern inflation from compression is always present
to some degree, although the use of multiple clock domains in today's systems-on-a-chip
tends to increase pattern inflation. That's because it increases the level of
unknown logic values propagating through the circuits. To compensate, commercial
compression tools employ various methods, including X-blocking, to ensure relatively
low and linear pattern inflation over a wide range of compression levels.
The number of patterns generated for a scan-compressed design, P'(x), is a function of the basic scan ATPG pattern count P_{B} and the pattern inflation rate ε, which is the percentage increase in pattern count per unit increase in the compression ratio x:^{2}

P'(x) = P_{B} · (1 + ε · x)    (Equation 3)
From a cost-savings perspective, the pattern inflation rate itself has only a minor impact as long as it remains linear in x. This is because differences in compressed test times for different inflation rates are insignificant compared with the test times using basic scan. To illustrate, the black curves in Figure 1 display tester cycle count for zero pattern inflation and for ε = 4%, an extraordinarily high pattern inflation rate.
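To see why a linear inflation rate barely dents the savings, compare total shift cycles with and without inflation. This sketch assumes the linear model of Equation 3 and the example design's parameters:

```python
def shift_cycles(P_B, F, C, x, eps=0.0):
    """Total scan shift cycles at compression ratio x with inflation rate eps."""
    patterns = P_B * (1 + eps * x)   # linearly inflated pattern count
    depth = F / (C * x)              # compressed chain depth
    return patterns * depth

base = shift_cycles(1100, 1_300_000, 10, x=1)                  # basic scan
flat = shift_cycles(1100, 1_300_000, 10, x=20)                 # eps = 0
inflated = shift_cycles(1100, 1_300_000, 10, x=20, eps=0.04)   # extreme 4%
# Even at a 4% inflation rate, the compressed cycle count remains less than
# a tenth of the basic scan baseline.
```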
Further impact on savings can result if there's a pronounced step increase in pattern count relative to P_{B} for any compression level x, such that the pattern count in Equation 3 is instead described by:

P'(x) = P_{S} · (1 + ε · x)    (Equation 4)

The step increase is illustrated in Figure 2, wherein the red line, representing P'(x) in Equation 4, is the least-squares fit of different compression data points. The basic scan pattern count is P_{B} = 1100, whereas the y-intercept of the line, P_{S}, is nearly 50% greater.
The blue curves in Figure 1 show the tester cycle count for ε = 0% and ε = 4%, assuming the step increase of Figure 2.
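A step increase shifts the whole pattern-count line upward. A minimal sketch of this model writes the intercept P_{S} as a fractional step above P_{B} (the 50% step value is illustrative):

```python
def patterns_with_step(P_B, step, eps, x):
    """Pattern count with a step increase: P'(x) = P_S * (1 + eps * x),
    where P_S = P_B * (1 + step) is the elevated y-intercept."""
    P_S = P_B * (1 + step)
    return P_S * (1 + eps * x)

# With P_B = 1100 and a 50% step, the intercept alone is 1,650 patterns,
# before any linear inflation is applied.
p0 = patterns_with_step(P_B=1100, step=0.5, eps=0.0, x=20)
```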
Growth in Die Size
In addition to compression and decompression circuits, compression tools insert
multiplexers and X-gating logic with each synthesized scan chain. Gate overhead
of compression contributes to a relatively small, linear increase in
the die size. As compression increases, however, there's increasingly high fanout
of decompressor outputs to scan chain inputs and fanin of scan chain outputs
to compressor inputs.
Due to wire-routing congestion, the area of wiring that connects all scan chains to the compression logic increases nonlinearly, so that it dominates the area overhead of compression. Routing congestion can be such a problem that it may become difficult to efficiently route a highly compressed design. Die size as a function of compression level can be described by:^{2}

A(x) = A_{0} + A_{F} + γ · x + ζ · x^{2}    (Equation 5)
A_{F} is the area of compression circuitry that's independent of compression level (cm^{2}), A_{0} the die size without compression (cm^{2}), γ a linear area-scaling factor, and ζ a nonlinear multiplier that accounts for the large area increase from scan chain interconnect. The variables affecting die size are both design- and tool-dependent.
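A sketch of this die-size model follows, assuming for illustration that the nonlinear interconnect term grows quadratically in x (the text only states that it is nonlinear, so the exponent, like every parameter value below, is an assumption):

```python
def die_area(A0, AF, gamma, zeta, x):
    """A(x) in cm^2: base die + fixed compression logic + linear gate
    overhead + (assumed quadratic) wiring overhead."""
    return A0 + AF + gamma * x + zeta * x * x

A0 = 1.0     # 1 cm^2 die without compression (illustrative)
AF = 0.001   # fixed compression circuitry area (illustrative)
pct_increase = 100 * (die_area(A0, AF, 1e-5, 2e-6, 64) - A0) / A0
# The linear gate term stays tiny; the wiring term dominates at high x.
```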
Measurements taken from the example design at different compression levels indicated that the gate area increased at γ = 1.1 µm² per unit increase in compression, or 11.8 gates per scan chain. This is the red curve in Figure 3, which plots the estimated percentage increase in A(x) across compression levels using Equation 5 with ζ = 0. When only gate area overhead is considered, the area increase is almost flat across the compression range. The blue curves, however, also account for scan chain interconnect area overhead, reflecting different rates of nonlinear growth in area due to wire-routing congestion.
Manufacturing yield is inversely proportional to die size, so increasing die size by adding compression circuitry decreases yield from Y_{0}, the yield of the design without compression, to Y(x), the yield of the design with compression level x. To illustrate, Figure 4 plots Y_{0}/Y(x) for several manufacturing yields using Equation 5 for A(x) and the exponential yield equation relating yield to die area and defect density,^{3} assuming the same parameters as in the previous example. The decrease in yield at higher compression levels makes it more costly to manufacture each yielding part.
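The yield penalty can be sketched with the exponential yield model cited above, Y = exp(−A·D0), where the defect density D0 is back-solved from the uncompressed yield. Areas and yields below are illustrative:

```python
import math

def yield_ratio(A0, Ax, Y0):
    """Y0 / Y(x) under the exponential yield model Y = exp(-A * D0)."""
    D0 = -math.log(Y0) / A0        # defect density implied by the base yield
    Yx = math.exp(-Ax * D0)        # yield of the larger, compressed die
    return Y0 / Yx

# A 2% die-area increase at 85% base yield costs only ~0.3% in yield...
r_small = yield_ratio(A0=1.0, Ax=1.02, Y0=0.85)
# ...but the penalty compounds as area (and therefore compression) grows.
r_large = yield_ratio(A0=1.0, Ax=1.10, Y0=0.85)
```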
Cost Savings from Compression
In the section on test execution cost, we described a simplistic cost-savings
model that ignored all of the underlying effects contributing to reduction in
cost savings from compression. Now that we understand the behavior of these
effects, let's observe their combined impact on cost savings.
Assume a design is implemented first without compression using basic scan, then with compression using compression ratio x. The cost savings ΔCost from test time reduction is then the difference in test execution costs for the two designs, C_{EXEC}, minus the silicon-area overhead cost of compression, C_{SILICON}:

ΔCost = C_{EXEC} − C_{SILICON}    (Equation 6)
We can formulate the cost savings per good die as two weighted terms: the test execution cost component, with coefficient C_{T} equal to the test execution cost of the basic scan design ($) given by Equation 1, and the silicon-area overhead cost component, with coefficient C_{S}, the cost of silicon ($/cm^{2}):

ΔCost = C_{T} · [1 − (P'(x)/P_{B}) · (1/x) · (Y_{0}/Y(x))] − C_{S} · [A(x)/Y(x) − A_{0}/Y_{0}]    (Equation 7)
The formulation for cost savings in Equation 7 includes all of the compression cost variables that offset ideal savings from test time reduction. To illustrate their relative contributions, Figure 5 displays compression cost savings as a percentage of total costs, calculated from Equation 7 for the design example (design and cost parameters are indicated in the figure). Four curves are shown, each reflecting a different set of assumptions about the cost variables, as summarized in the table.
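The scenario analysis can be reproduced with a small model that combines the pattern-count, area, and yield relationships developed earlier and sweeps x for the optimum. The structure mirrors Equation 7's two weighted terms, but the quadratic area term and every parameter value here are illustrative assumptions, so the optimum found will not match the article's x = 29:

```python
import math

def cost_savings(x, C_T, C_S, P_B, eps, step, A0, AF, gamma, zeta, Y0):
    """Cost savings per good die at compression ratio x (Equation 7 style)."""
    Px = P_B * (1 + step) * (1 + eps * x)        # inflated pattern count
    Ax = A0 + AF + gamma * x + zeta * x * x      # die area (quadratic assumed)
    D0 = -math.log(Y0) / A0                      # exponential yield model
    Yx = math.exp(-Ax * D0)
    exec_saving = C_T * (1 - (Px / P_B) * (1 / x) * (Y0 / Yx))
    silicon_cost = C_S * (Ax / Yx - A0 / Y0)     # extra silicon per good die
    return exec_saving - silicon_cost

params = dict(C_T=0.24, C_S=0.10, P_B=1100, eps=0.0042, step=0.6,
              A0=1.0, AF=0.001, gamma=1e-5, zeta=2e-6, Y0=0.85)
best_x = max(range(2, 200), key=lambda x: cost_savings(x, **params))
# Savings rise quickly, plateau, then fall once the growing silicon cost
# overtakes the shrinking test-time benefit.
```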
Scenario 1 assumes zero area overhead, so that A(x) = A_{0} and Y(x) = Y_{0}, and zero pattern inflation, so that ε = 0 and P'(x) = P_{B}. The expression in Equation 7 then reduces to the simplified form given by Equation 2. Therefore, this scenario reflects ideal savings from test time reduction. Without considering the factors that influence compression cost, one would reasonably assume that increasing compression ad infinitum is the most cost-effective strategy!
Scenario 2 takes into consideration pattern inflation, which tends to pull the cost curve downward. Most of this difference is due to a 60% step increase in pattern count from P_{B} that is common across all compression levels (the cost difference between a zero and a 0.42% pattern inflation rate is too negligible to be displayed separately on the graph).
Scenario 3 accounts for nonzero compression area overhead. Savings from test time reduction are offset by the silicon-area overhead cost of compression. An optimal compression level occurs at x = 32, at which the incremental increase in silicon cost equals the incremental decrease in test execution cost. Above this level, silicon cost increases at a faster rate than the decrease in execution cost; net savings decline precipitously with higher compression until a break-even point is reached at x = 196, above which compression actually increases costs.
Scenario 4 takes all of the negating factors into account by including the cost impact of decreasing yield due to compression. The break-even point shifts down to x = 174, while the optimal compression level decreases slightly to x = 29, where cost savings reach a maximum of 86%.
Conclusion: Steps to Maximize Savings
The preceding analysis indicates that real cost savings from scan compression
are substantial, though less than the ideal levels usually claimed by the marketers
of compression tools. You should expect typical savings from test time reduction
in the 80% to 95% range, depending on die size, manufacturing yield, tester
scan shift frequency, tester cost, cost of silicon, and the compression cost
variables, which are both design- and tool-dependent. You can maximize savings
by following these guidelines when implementing scan compression in your designs:
- Utilize as many I/O pins as feasible while avoiding very high compression levels.
Increasing the number of scan chains from C1 to C2 in the uncompressed design reduces the chain depth and requires less compression to reduce test time to a given level. The compression ratio needed to achieve the same test time is reduced by approximately 1 − C1/C2. For example, if your design utilizes 100 scan channels instead of 10, then the compression ratio needed to achieve the same test time is reduced by 90%. This is advantageous at nominal compression levels, but keep in mind that a very large number of chains at high compression levels could increase routing congestion.
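The trade-off in this guideline is a one-liner; the channel counts below are the ones from the example in the text:

```python
def ratio_reduction(C1, C2):
    """Fractional reduction in the compression ratio needed for the same
    test time when the scan channel count grows from C1 to C2."""
    return 1 - C1 / C2

r = ratio_reduction(10, 100)   # going from 10 to 100 channels: 90% less
                               # compression needed for the same test time
```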
- Select a compression ratio in the range of the optimal compression level.
Regardless of the number of scan channels, cost savings start to plateau (even under ideal assumptions) by x = 20. Maximum savings occur at a level higher or lower than this, depending on the compression cost variables. Use the cost-savings formula in Equation 7 to estimate the optimal compression level for your design.
- Minimize the number of unknown logic states.
Although it may not be possible to produce an "X-clean" design, there are ways to reduce the number of unknowns during scan testing. Timing exceptions associated with multiple internal clocks that aren't skew-balanced are a leading culprit of unknown logic states. The problem often occurs when using a single external clock (or a single internal clock controller) to control many internal clock domains. A better approach is to employ different on-chip clock controllers to generate separate capture clocks (one for each clock domain), thereby using the skew-balanced clock trees in test mode. Although there will be an area penalty associated with the additional clock controllers, no increase in routing congestion should occur.
Another way to reduce unknowns is to resolve all internal tri-state buses to known values in test mode. Finally, consider bypassing all embedded memories during stuck-at testing. For transition delay tests, techniques to propagate known memory states are possible, though more involved to implement.
- Minimize wire-routing congestion from scan chain interconnect.
Embedding the compression logic inside the design's physical hierarchy can reduce routing congestion. To illustrate, Figure 6 shows two large physical partitions, A and B, each containing its own compression circuits. Smaller cores C and D connect with compression circuits at the top level, along with other toplevel logic. Embedding compression in the largest physical blocks decreases routing at the top level; most connectivity is confined to the interior of the blocks, and this reduces the length of wires that connect the compression logic to the scan chain I/Os. To maintain compression performance, it's essential that all scan chains have approximately the same depth.
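The equal-depth requirement in the last sentence amounts to distributing flops so that no chain is more than one flop deeper than any other, since shift time is set by the deepest chain. A minimal round-robin sketch (flop and chain counts are illustrative):

```python
def balance_chains(num_flops, num_chains):
    """Split num_flops across num_chains so that chain depths differ by
    at most one; the deepest chain determines shift time."""
    base, extra = divmod(num_flops, num_chains)
    return [base + 1 if i < extra else base for i in range(num_chains)]

depths = balance_chains(1_300_005, 200)
# Deepest and shallowest chains differ by at most one flop.
```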
- Anticipate your toughest design challenges and select implementation tools accordingly.
Perhaps the most important cost not factored into the equations is implementation cost: the engineering time and effort required to implement compression. After all, reducing test time isn't very cost-effective if it adds weeks to your design cycle and delays your tape-out. To manage implementation cost, it might be best to invest in EDA solutions, such as Synopsys' Galaxy Design Platform for Test, that have a high degree of predictability and correlation. Predictability reflects the extent to which your compression performance goals are achieved by the tool.^{4} Correlation considers the impact of compression on area, timing, power, and routability; it reflects the self-consistency of the design platform as a front-to-back implementation flow. Thus, results you achieve at the logical level are observed in silicon, with a minimal amount of effort.
We hope these guidelines, while by no means exhaustive, will help you maximize savings from compression.
References

1. S. Wei, P.K. Nag, R.D. Blanton, A. Gattiker, and W. Maly, "To DFT or Not to DFT?" Proc. Int'l Test Conf., 1997, paper 23.3.
2. C. Allsup, "The Economics of Implementing Scan Compression to Reduce Test Data Volume and Test Application Time," Proc. Int'l Test Conf., 2006, lecture 2.2.
3. T.M. Michalka, R.C. Varshney, and J.D. Meindl, "A Discussion of Yield Modeling with Defect Clustering, Circuit Repair, and Circuit Redundancy," IEEE Trans. Semiconductor Manufacturing, vol. 3, no. 3, Aug. 1990, pp. 116-127.
4. C. Allsup, "Measuring Scan Compression Performance," EDA DesignLine (www.edadesignline.com), May 2007.