Get The Most Out Of Your Logic Analyzer

March 7, 2013
  For many applications, a modern logic analyzer will reveal the root cause of troublesome bugs in less time than alternative instruments. 
The logic analyzer is a versatile tool that can help engineers with digital hardware debug, design verification, and embedded software debug. Yet many engineers turn to a digital oscilloscope when they should be turning to a logic analyzer, often because they’re more familiar with oscilloscopes. But logic analyzers have come a long way over the last few years, and for many applications they will help reveal the root cause of troublesome bugs in less time than alternative instruments.

Digital Oscilloscopes Vs. Logic Analyzers

There are similarities between oscilloscopes and logic analyzers, but there are also important differences. To better understand how the two instruments can address your particular needs, it is useful to start with a comparative look at their individual capabilities.

The digital oscilloscope is the fundamental tool for general-purpose signal viewing. Its high sample rate and bandwidth enable it to capture many data points over a span of time, measuring signal transitions (edges), transient events, and small time increments. While the oscilloscope is certainly able to look at the same digital signals as a logic analyzer, it is typically used for analog measurements such as rise times and fall times, peak amplitudes, and the elapsed time between edges.

Oscilloscopes generally have up to four input channels. But what happens when you need to measure five digital signals simultaneously—or have a digital system with a 32-bit data bus and a 64-bit address bus? You then need a tool with many more inputs. Logic analyzers typically have between 34 and 136 channels. Each channel inputs one digital signal. Some complex system designs require thousands of input channels. Appropriately scaled logic analyzers are available for those tasks as well.  

Unlike an oscilloscope, a logic analyzer doesn’t measure analog details. Instead, it detects logic threshold levels. A logic analyzer looks for just two logic levels. When the input is above the threshold voltage (Vth), the level is said to be “high” or “1.” Conversely, the level below Vth is a “low” or “0.” When a logic analyzer samples input, it stores a “1” or a “0” depending on the level of the signal relative to the voltage threshold.
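
As a rough illustration of this thresholding, the Python sketch below quantizes a series of sampled voltages against an assumed threshold. The 1.4-V Vth and the sample values are invented for the example and are not tied to any particular logic family.

```python
# Minimal sketch: how a logic analyzer reduces each sample to a 1 or a 0.
# The threshold (Vth) and the sample voltages below are illustrative only.
VTH = 1.4  # assumed threshold voltage in volts

def quantize(samples_v, vth=VTH):
    """Return 1 for samples above the threshold, 0 otherwise."""
    return [1 if v > vth else 0 for v in samples_v]

analog_samples = [0.1, 0.3, 2.8, 3.1, 3.0, 0.4, 0.2, 2.9]
print(quantize(analog_samples))  # -> [0, 0, 1, 1, 1, 0, 0, 1]
```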

A logic analyzer’s waveform timing display is similar to a timing diagram found in a data sheet or produced by a simulator. All of the signals are time-correlated so that setup-and-hold times, pulse widths, and extraneous or missing data can be viewed. In addition to high channel count, logic analyzers offer important features that support digital design verification and debugging:

• Sophisticated triggering that lets you specify the conditions under which the logic analyzer acquires data.

• High-density probes and adapters that simplify connection to the system under test (SUT).

• Analysis capabilities that translate captured data into processor instructions and correlate it to source code.

Using a logic analyzer is like using other instruments. It involves four main steps: connect, setup, acquire, and analyze.

Connect To The SUT

Logic analyzer acquisition probes connect to the SUT. The probe’s internal comparator is where the input voltage is compared against the Vth and where the decision about the signal’s logic state (1 or 0) is made. The user sets the threshold value, ranging from transistor-transistor logic (TTL) levels to CMOS, emitter-coupled logic (ECL), and user-definable. Logic analyzer probes come in many physical forms.

General-purpose probes with “flying lead sets” handle point-by-point troubleshooting. High-density, multi-channel probes that require dedicated connectors on the circuit board can acquire high-quality signals with minimal impact on the SUT. And high-density compression probes, which attach without dedicated connectors, are recommended for applications that require higher signal density or a quick, reliable connector-less connection to the SUT.

The impedance of the logic analyzer’s probes (capacitance, resistance, and inductance) becomes part of the overall load on the circuit being tested. All probes exhibit loading characteristics. The logic analyzer probe should introduce minimal loading on the SUT while providing an accurate signal to the logic analyzer.

Probe capacitance tends to “roll off” the edges of signal transitions. This roll-off slows the edge transition by an amount of time represented as “∆t” in Figure 1. Why is this important? A slower edge crosses the logic threshold of the circuit later, introducing timing errors in the SUT. This problem becomes more severe as clock rates increase.

1. The impedance of the logic analyzer’s probe can affect signal rise times and measured timing relationships.

In high-speed systems, excessive probe capacitance can potentially prevent the SUT from working. It’s always critical to choose a probe with the lowest possible total capacitance. It’s also important to note that probe clips and lead sets increase capacitive loading on the circuits they are connected to. Use a properly compensated adapter whenever possible.
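
To get a feel for the magnitude of this loading effect, here is a back-of-the-envelope sketch that estimates the added threshold-crossing delay under a simple single-pole RC model. The 50-Ω source resistance, 3-pF probe capacitance, and 50% threshold are assumptions for illustration only.

```python
# Back-of-the-envelope sketch of the extra threshold-crossing delay caused by
# probe capacitance, assuming a single-pole RC model. The 50-ohm source
# resistance and 3-pF probe tip capacitance are illustrative assumptions.
import math

R_SOURCE = 50.0           # ohms, assumed driver/source impedance
C_PROBE = 3e-12           # farads, assumed probe tip capacitance
THRESHOLD_FRACTION = 0.5  # threshold as a fraction of the full swing

def added_threshold_delay(r_ohms, c_farads, frac=THRESHOLD_FRACTION):
    """Time for an RC-filtered step to reach the threshold fraction."""
    return r_ohms * c_farads * math.log(1.0 / (1.0 - frac))

delta_t = added_threshold_delay(R_SOURCE, C_PROBE)
print(f"Added delay ~ {delta_t * 1e12:.0f} ps")  # ~104 ps for these values
```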

Set Up The Logic Analyzer

Logic analyzers are designed to capture data from multi-pin devices and buses. The term “capture rate” refers to how often the inputs are sampled; it serves the same function as the time base in an oscilloscope. Note that the terms “sample,” “acquire,” and “capture” are often used interchangeably when describing logic analyzer operations. There are two types of data acquisition, or clock modes.

First, timing acquisition captures signal timing information. In this mode, a clock internal to the logic analyzer is used to sample data. The faster data is sampled, the higher the resolution of the measurement will be. There is no fixed timing relationship between the target device and the data acquired by the logic analyzer. This acquisition mode is used when the timing relationship between SUT signals is of primary importance.

Second, state acquisition is used to acquire the “state” of the SUT. A signal from the SUT defines the sample point (when and how often data will be acquired). The signal used to clock the acquisition may be the system clock, a control signal on the bus, or a signal that causes the SUT to change states. Data, which is sampled on the active edge, represents the condition of the SUT when the logic signals are stable. The logic analyzer samples when, and only when, the chosen signals are valid.

If you want to capture a long, contiguous record of timing details, then timing acquisition, the internal (or asynchronous) clock, is right for the job. Alternatively, you may want to acquire data exactly as the SUT sees it. In this case, you would choose state (synchronous) acquisition. With state acquisition, each successive state of the SUT is displayed sequentially in a listing window. The external clock signal used for state acquisition may be any relevant signal.
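
The difference between the two clock modes can be pictured with a small sketch. The waveform and clock-edge times below are invented; the point is simply that timing acquisition samples on a fixed internal interval, while state acquisition samples only on the SUT’s active clock edges.

```python
# Minimal sketch contrasting timing (asynchronous) and state (synchronous)
# acquisition. The waveform is hypothetical; each entry is (time_ns, level).
data_signal = [(t, 1 if 20 <= t < 60 else 0) for t in range(0, 100)]
ext_clock_edges_ns = [10, 30, 50, 70, 90]   # assumed SUT clock rising edges

def level_at(signal, t_ns):
    """Return the signal level at a given time."""
    return next(level for ts, level in reversed(signal) if ts <= t_ns)

# Timing acquisition: sample at a fixed internal interval (here every 5 ns).
timing_record = [(t, level_at(data_signal, t)) for t in range(0, 100, 5)]

# State acquisition: sample only on the SUT's active clock edges.
state_record = [(t, level_at(data_signal, t)) for t in ext_clock_edges_ns]

print("timing:", timing_record[:6])
print("state: ", state_record)
```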

Triggering is another capability that differentiates the logic analyzer from an oscilloscope. Oscilloscopes have triggers, but they have relatively limited ability to respond to binary conditions. In contrast, a variety of logical (Boolean) conditions can be evaluated to determine when the logic analyzer triggers. The purpose of the trigger is to select which data the logic analyzer captures. The logic analyzer can track SUT logic states and trigger when a user-defined event occurs in the SUT.

When discussing logic analyzers, it’s important to understand the term “event.” It has several meanings. It may be a simple transition, intentional or otherwise, on a single signal line. If you are looking for a glitch, then that is the “event” of interest. Or, an event may be the defined logical condition that results from a combination of signal transitions across a whole bus. Note that in all instances, though, the event is something that appears when signals change from one cycle to the next.
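
Conceptually, the trigger engine scans the incoming samples for a user-defined Boolean condition across many channels at once. The sketch below is a simplified software analogy of that idea; the bus names and the chosen condition are hypothetical.

```python
# Minimal sketch of logic-analyzer-style triggering: scan acquired samples and
# fire on a user-defined Boolean condition across several channels. The bus
# names and the trigger condition below are hypothetical.
samples = [
    {"addr": 0x1000, "data": 0x3A, "wr_n": 1},
    {"addr": 0x2000, "data": 0x7F, "wr_n": 0},
    {"addr": 0x2000, "data": 0x00, "wr_n": 0},   # <- matches the condition
]

def trigger_condition(s):
    """Fire when a write (wr_n low) hits address 0x2000 with data 0x00."""
    return s["addr"] == 0x2000 and s["wr_n"] == 0 and s["data"] == 0x00

trigger_index = next((i for i, s in enumerate(samples) if trigger_condition(s)), None)
print("trigger at sample", trigger_index)  # -> 2
```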

Acquire State And Timing Data

During hardware and software debug (system integration), it’s helpful to have correlated state and timing information. A problem may initially be detected as an invalid state on the bus. This may be caused by a problem such as a setup and hold timing violation. If the logic analyzer cannot capture both timing and state data simultaneously, isolating the problem becomes difficult and time consuming.

Some logic analyzers require a separate timing probe and separate acquisition hardware to acquire the timing information. These instruments require you to connect two types of probes to the SUT at once (Fig. 2). One probe connects the SUT to a timing module, while a second probe connects the same test points to a state module. This is known as “double-probing.” It’s an arrangement that can compromise the impedance environment of your signals. Using two probes at once will load down the signal, degrading the SUT’s rise and fall times, amplitude, and noise performance.

2. Double-probing requires two probes on each test point, decreasing the quality of the measurement.

It is best to acquire timing and state data simultaneously, through the same probe at the same time (Fig. 3). One connection, one setup, and one acquisition provide both timing and state data. This simplifies the mechanical connection of the probes and reduces problems. The single probe’s effect on the circuit is lower, ensuring more accurate measurements and less impact on the circuit’s operation.

3. Simultaneous probing provides state and timing acquisition through the same probe for a simpler, cleaner measurement environment.

The logic analyzer’s probing, triggering, and clocking systems exist to deliver data to the real-time acquisition memory. This memory is the heart of the instrument—the destination for all of the sampled data from the SUT, and the source for all of the instrument’s analysis and display.

Logic analyzers have memory capable of storing data at the instrument’s sample rate. This memory can be envisioned as a matrix with channel width and memory depth (Fig. 4). The instrument accumulates a record of all signal activity until a trigger event or the user tells it to stop. The result is an acquisition, essentially a multi-channel waveform display that lets you view the interaction of all the signals you’ve acquired with a very high degree of timing precision.

4. The logic analyzer stores acquisition data in deep memory with one full-depth channel supporting each digital input.

Acquiring more samples (time) increases your chance of capturing both an error and the fault that caused the error. Logic analyzers continuously sample data, filling up the real-time acquisition memory, and discarding the overflow on a first-in, first-out basis. Thus, there is a constant flow of real-time data through the memory. When the trigger event occurs, the “halt” process begins, preserving the data in the memory.

The placement of a trigger in the memory is flexible, providing the ability to capture and examine events that occurred before, after, and around the trigger event. This is a valuable troubleshooting feature. If you trigger on a symptom, usually an error of some kind, you can set up the logic analyzer to store data preceding the trigger (pre-trigger data) and capture the fault that caused the symptom. The logic analyzer can also be set to store a certain amount of data after the trigger (post-trigger data) to see what subsequent effects the error might have had.
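
This behavior can be modeled as a circular buffer that fills continuously and halts a programmable number of samples after the trigger. The memory depth and post-trigger count in the sketch below are arbitrary illustrative values.

```python
# Minimal sketch of the real-time acquisition memory as a circular (FIFO)
# buffer with user-selectable pre- and post-trigger depth. Sizes are arbitrary.
from collections import deque

MEMORY_DEPTH = 16       # assumed total record length, in samples
POST_TRIGGER = 4        # samples to keep after the trigger event

def acquire(sample_stream, is_trigger):
    memory = deque(maxlen=MEMORY_DEPTH)  # oldest samples discarded first
    post_count = None
    for sample in sample_stream:
        memory.append(sample)
        if post_count is None and is_trigger(sample):
            post_count = POST_TRIGGER    # trigger seen: start halt countdown
        elif post_count is not None:
            post_count -= 1
            if post_count == 0:
                break                    # memory now holds pre- and post-trigger data
    return list(memory)

record = acquire(range(100), is_trigger=lambda s: s == 50)
print(record)  # samples 39..54: pre-trigger history plus 4 post-trigger samples
```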

The logic analyzer’s main acquisition memory stores a long and comprehensive record of signal activity. Some of today’s logic analyzers can capture data at multi-gigahertz rates across hundreds of channels, accumulating the results in a long record length. Each displayed signal transition is understood to have occurred somewhere within the sample interval defined by the active clock rate. The captured edge may have occurred just a few picoseconds after the preceding sample or a few picoseconds before the subsequent sample or anywhere in between. The sample interval, then, determines the resolution of the instrument.
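
The relationship between sample rate and timing resolution is simple arithmetic, as the short calculation below shows for two assumed sample rates (they are not the specifications of any particular instrument).

```python
# Quick arithmetic on sample rate vs. timing resolution. The sample rates
# below are illustrative, not the specs of any particular instrument.
for rate_hz in (500e6, 2.5e9):
    interval_s = 1.0 / rate_hz
    print(f"{rate_hz/1e9:g} GS/s -> sample interval {interval_s*1e12:.0f} ps "
          f"(edge placement uncertain within that interval)")
# 0.5 GS/s -> 2000 ps; 2.5 GS/s -> 400 ps
```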

Evolving high-speed computing buses and communication devices are driving the need for better timing resolution in logic analyzers. The answer to this challenge is a high-speed buffer memory that captures information at finer sample intervals around the trigger point. Here too, new samples constantly replace the oldest as the memory fills. Every channel has its own buffer memory. This type of acquisition keeps a dynamic, high-resolution record of transitions and events that may be invisible at the resolution of the main memory acquisition.

Analyze And Display Results

The data stored in the real-time acquisition memory of the logic analyzer can be used in a variety of display and analysis modes. Once the information is stored in the system, it can be viewed in formats ranging from timing waveforms to instruction mnemonics correlated to source code.

The waveform display is a multi-channel detailed view that lets the user see the time relationship of all the captured signals, much like the display of an oscilloscope. Commonly used in timing analysis, it is ideal for:

• Diagnosing timing problems in SUT hardware.

• Verifying correct hardware operation by comparing the recorded results with simulator output or data sheet timing diagrams.

• Measuring hardware timing-related characteristics including race conditions, propagation delays, and the absence or presence of pulses.

Analyzing Glitches

The listing display provides state information in user-selectable alphanumeric form. The data values in the listing are developed from samples captured from an entire bus and can be represented in hexadecimal or other formats. Imagine taking a vertical “slice” through all the waveforms on a bus. In Figure 5, the slice through the four-bit bus represents one sample stored in the real-time acquisition memory. The numbers in the shaded slice are what the logic analyzer would display, typically in hexadecimal form. The intent of the listing display is to show the state of the SUT, allowing you to see the information flow exactly as the SUT sees it.

5. State acquisition captures a “slice” of data across a bus when the external clock signal enables an acquisition.
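
The idea of slicing across a bus and presenting each slice as a hexadecimal value can be sketched in a few lines. The four-bit bus contents here are invented and do not correspond to the data in Figure 5.

```python
# Minimal sketch of building a listing display: take a vertical "slice" across
# the channels of a bus at each sample and show it as a hex value. The 4-bit
# bus contents below are made up for illustration.
bus_channels = {            # one bit stream per channel
    "D3": [0, 1, 1, 0, 1],
    "D2": [1, 0, 1, 1, 0],
    "D1": [1, 1, 0, 0, 1],
    "D0": [0, 1, 1, 1, 1],
}

def listing(channels):
    names = sorted(channels, reverse=True)          # D3 down to D0
    depth = len(next(iter(channels.values())))
    rows = []
    for i in range(depth):                          # vertical slice at sample i
        value = 0
        for name in names:
            value = (value << 1) | channels[name][i]
        rows.append(f"{i}: 0x{value:X}")
    return rows

print("\n".join(listing(bus_channels)))  # e.g. 0: 0x6, 1: 0xB, 2: 0xD, ...
```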

State data is displayed in several formats. The real-time instruction trace disassembles every bus transaction and determines exactly which instructions were read across the bus. It places the appropriate instruction mnemonic along with its associated address on the logic analyzer display.

An additional display, the source code debug display, makes debug work more efficiently by correlating the source code to the instruction trace history. It provides instant visibility of what’s actually going on when an instruction executes. The source code display can be correlated to real-time instruction traces.

With the aid of processor-specific support packages, state analysis data can be displayed in mnemonic form. This makes it easier to debug software problems in the SUT. Armed with this knowledge, you can go to a lower-level state display (such as a hexadecimal display) or to a timing diagram display to track down the error’s origin.

Automated measurements provide the ability to perform sophisticated measurements on logic analyzer acquisition data. A broad selection of oscilloscope-like measurements can include frequency, period, pulse width, duty cycle, and edge count. These automated measurements deliver fast, thorough results, even on very large sample sets.
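
As a simplified software analogy of such automated measurements, the sketch below derives period, frequency, duty cycle, and edge count from a set of invented edge timestamps.

```python
# Minimal sketch of oscilloscope-like automated measurements computed from a
# digital capture, here represented as rising/falling edge timestamps (ns).
# The edge times are invented for illustration.
rising_ns  = [0.0, 100.0, 200.0, 300.0]
falling_ns = [60.0, 160.0, 260.0, 360.0]

periods = [b - a for a, b in zip(rising_ns, rising_ns[1:])]
pulse_widths = [f - r for r, f in zip(rising_ns, falling_ns)]

period_ns = sum(periods) / len(periods)
frequency_mhz = 1e3 / period_ns            # cycles per ns is GHz; x1000 -> MHz
duty_cycle = sum(pulse_widths) / len(pulse_widths) / period_ns
edge_count = len(rising_ns) + len(falling_ns)

print(f"period {period_ns:.1f} ns, frequency {frequency_mhz:.1f} MHz, "
      f"duty cycle {duty_cycle:.0%}, edges {edge_count}")
# -> period 100.0 ns, frequency 10.0 MHz, duty cycle 60%, edges 8
```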

Two use cases show how logic analyzers can be used to address common measurement problems.

Capturing Setup Or Hold Violations

Setup time is defined as the minimum time input data must be valid and stable prior to the clock edge that shifts it into the device. Hold time is the minimum time the data must be valid and stable after the clock edge occurs. Digital device manufacturers specify setup and hold parameters, and engineers must take great care to ensure their designs do not violate the specifications.

But today’s tighter tolerances and the widespread use of faster parts to drive more throughput are making setup and hold violations ever more common. In recent years, setup and hold requirements have narrowed to the point where it is difficult for most conventional general-purpose logic analyzers to detect and capture the events. The only real answer is a logic analyzer with sub-nanosecond sampling resolution.

The following example uses a synchronous acquisition mode that relies on an external clock signal to drive the sampling. Irrespective of the mode, the logic analyzer can provide a buffer of high-resolution sample data around the trigger point. In this case, the device under test (DUT) is a “D” flip-flop with a single output, but the example is applicable to a device with hundreds of outputs.

In this example the DUT itself provides the external clock signal that controls the synchronous acquisitions. The logic analyzer’s drag-and-drop trigger capability can be used to create a setup and hold trigger. This mode offers the ability to define the setup and hold timing violation parameters (Fig. 6). Additional submenus in the setup window are available to refine other aspects of the signal definition, including logic conditions and positive- or negative-going terms.

6. Setup and hold timing violation event parameters can be defined to create a trigger.

When the test runs, the logic analyzer actually evaluates every rising edge of the clock for a setup or hold timing violation. It monitors millions of events and captures only those that fail the setup or hold timing requirements. Figure 7 shows the resulting display. Here, the setup time is 2.375 ns, far less than the defined limit of 10 ns.

7. After the logic analyzer evaluates every rising edge of the clock, it shows the setup and hold timing violations.
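
Conceptually, the evaluation works as sketched below: for each rising clock edge, the time from the last data transition before the edge (setup) and to the first transition after it (hold) is measured and compared against the limits. This is only a software analogy of what the instrument does in hardware; the limits and timestamps are invented, with the 2.375-ns setup value merely echoing the example above.

```python
# Minimal sketch of a setup/hold check: for each rising clock edge, measure
# setup (last data transition before the edge) and hold (first transition
# after it), then flag violations. Limits and timestamps (ns) are illustrative.
SETUP_LIMIT_NS = 10.0
HOLD_LIMIT_NS = 2.0

clock_edges_ns = [100.0, 200.0, 300.0]
data_transitions_ns = [85.0, 197.625, 250.0]   # 197.625 -> only 2.375 ns of setup

def violations(clock_edges, data_edges):
    found = []
    for clk in clock_edges:
        before = [clk - d for d in data_edges if d < clk]
        after = [d - clk for d in data_edges if d > clk]
        setup = min(before) if before else float("inf")
        hold = min(after) if after else float("inf")
        if setup < SETUP_LIMIT_NS or hold < HOLD_LIMIT_NS:
            found.append((clk, setup, hold))
    return found

for clk, setup, hold in violations(clock_edges_ns, data_transitions_ns):
    print(f"edge @ {clk} ns: setup {setup:.3f} ns, hold {hold:.3f} ns")
# -> edge @ 200.0 ns: setup 2.375 ns, hold 50.000 ns
```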

Signal Integrity

Direct signal observations and measurements are the only way to discover the causes of signal-integrity-related problems. For the most part, signal integrity is measured with the same familiar instruments found in almost any electrical engineering lab. These instruments include the logic analyzer and the oscilloscope, with probes and application software rounding out the basic toolkit. In addition, signal sources can be used to provide distorted signals for stress testing and evaluation of new devices and systems.

When troubleshooting digital signal integrity problems, especially in complex systems with numerous buses, inputs, and outputs, the logic analyzer is the first line of defense. It offers high channel count, deep memory, and advanced triggering to acquire digital information from many test points, and it then displays the information coherently. Because it is a digital instrument, the logic analyzer detects threshold crossings on the signals it is monitoring and then displays the logic signals as seen by logic ICs.

The resulting timing waveforms are clear and understandable, and they can easily be compared with expected data to confirm that things are working correctly. These timing waveforms are usually the starting point in the search for signal problems that compromise signal integrity. These results can be further interpreted with the help of disassemblers and processor support packages, which allow the logic analyzer to correlate the real-time software trace (correlated to source code) with the low-level hardware activity (Fig. 8).

8. This logic analyzer display shows timing waveforms and real-time software traces correlated to source code.

However, not every logic analyzer qualifies for signal integrity analysis at today’s extremely high (and increasing) digital data rates. The table provides some specification guidelines that should be considered when choosing a logic analyzer for advanced signal integrity troubleshooting. With all the emphasis on sample rates and memory capacities, it is easy to overlook the triggering features in a logic analyzer.

Yet triggers are often the quickest way to find a problem. After all, if a logic analyzer triggers on an error, it is proof that an error has occurred. Most current logic analyzers include triggers that detect certain events that compromise signal integrity—events such as glitches and setup and hold time violations. These trigger conditions can be applied across hundreds of channels at once—a unique strength of logic analyzers.

Summary

Logic analyzers are indispensable for digital troubleshooting at all levels. As digital devices become faster and more complex, logic analyzers are keeping pace. They deliver the speed to capture the fastest and most fleeting anomalies in a design, the capacity to view all channels with high resolution, and the memory depth to untangle the relationships between tens, hundreds, or even thousands of signals over many cycles.

Triggering can confirm a suspected problem or discover an entirely unexpected error. Most importantly, triggering provides a diverse set of tools to test hypotheses about failures or locate intermittent events. A logic analyzer’s range of triggering options is a hallmark of its versatility. Furthermore, high-resolution sampling architectures can reveal unseen details about signal behaviors.

Single-probe acquisition of both state and high-speed timing data is helping designers gather volumes of data about their devices and then analyze the relationship between the timing diagram and the higher-level state activity. Other characteristics such as acquisition memory, display and analysis features, integration with analog tools, and even modularity join forces to make logic analyzers the tool of choice to find digital problems fast and meet aggressive design schedules.

Chris Loberg is a senior technical marketing manager at Tektronix responsible for oscilloscopes in the Americas region. He has held various positions with Tektronix during his more than 13 years with the company, including marketing manager for the Tektronix Optical Business Unit. His extensive background in technology marketing includes positions with the Grass Valley Group and IBM. He earned an MBA in marketing from San Jose State University. 
