Does SNR Measure Up For Capacitive Touchscreens?

Oct. 25, 2011
Cautions and recommendations for characterizing capacitive touchscreens' performance in the presence of noise from displays, power supplies, and other sources.

Fig 1. In capacitive touchscreens, the signal in SNR is the measured amount of change in mutual capacitance as a direct result of finger capacitance. Finger capacitance depends on the sensor cover thickness, finger size, DUT stray capacitance to ground, and sensor pattern. The noise component depends on internal controller noise and external noise sources.

Fig 2. Here, SNR is determined when there’s spiky noise. The finger signal (CF) is measured by calculating the difference of the mean value of 100 samples (about one second) of data before a finger touchdown and the mean value of 100 samples after a finger touchdown.

Touchscreen controller manufacturers often cite an array of varying specs and metrics to help distinguish their products from others. One frequently mentioned differentiator is signal-to-noise ratio (SNR).

However, even if the numbers are impressive, it doesn’t necessarily mean SNR is a good indicator of system performance in the presence of noise. Thus, it’s important to have a solid understanding of SNR and how it’s calculated, as well as its impact on system performance. It’s also a good idea to become intimately aware of alternative metrics that better represent touch performance.

What Is SNR?

Simply, SNR is an industry-standard performance metric for capacitive touchscreen systems (Fig. 1). The problem is that no standard methodologies exist to measure, calculate, and report SNR, especially when considering the high variability of noise-contributing components in a typical system (e.g., a mobile phone). The two components (signal and noise) of this measurement and calculation depend heavily on the device under test (DUT).

Although the legitimacy of SNR as a measure of performance is widely accepted, industry experts understand that most marketing claims of extremely high SNR don’t hold up when put to real-world use cases. In addition, delivering high SNR isn’t nearly as important to performance as meeting functional specifications in noisy conditions.

In projected capacitive touchscreens—the touch technology used in every new smart phone—noise bombards the touch sensor whenever it’s in use. Noise coupled from the display, which can be either an LCD or active-matrix organic LED (AMOLED) type, to the touch sensor is trending higher as advances in touchscreen manufacturing allow for thinner substrates between the display and the touch sensor. Without analog display synchronization, LCD-generated noise typically becomes spiky.

Noise generated by USB chargers is also spiky in nature. On top of that, it’s the most variable, since the construction and the components in the ac-dc converter differ for every device. Third-party, low-cost chargers are particularly prone to such noise spikes. Consequently, USB chargers create the biggest headaches for OEMs when touch controllers don’t incorporate noise-cancellation technology such as Cypress’ Charger Armor.

The touch controller is expected to operate without reporting false finger touches or a jittery finger position in the presence of all these simultaneous external noise sources. None of them can be characterized as having a normal, or Gaussian, distribution. This situation presents a problem for engineers and marketers who typically specify the SNR, using RMS noise, of analog-to-digital converters (ADCs) in the absence of noise.

With so much variance in measurement conditions, it’s a wonder that SNR is still used as a quantitative metric. Moreover, RMS noise-based SNR measurements can’t predict jitter (also known as noise-free resolution) and false-touch reports—the most important, and quantifiable, noise-related performance parameters of a touchscreen system. Fortunately, though, there’s an SNR measurement technique that can predict jitter in the presence of non-Gaussian noise.

Noise’s Impact On Touchscreens

SNR affects system robustness to false touches and positional jitter. A finger near the touchscreen interferes with the fringing electric field of the capacitor at the intersection of two transparent electrodes. The capacitance between those two electrodes, formed largely by this fringing field, is known as the mutual capacitance.

Intersections form when orthogonally aligned transmit and receive electrodes cross each other. Hundreds of these intersections exist on a mobile-phone touchscreen. The touchscreen controller measures the change in capacitance for every intersection and converts the measured data into a quantized array of raw data. By measuring each intersection, rather than an entire electrode, the controller can create a two-dimensional map of the touchscreen sensor capacitance.
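The two-dimensional map described above can be pictured as a simple difference array. The sketch below (hypothetical sensor size and counts, not any vendor's actual firmware) builds such a map for a small sensor and locates the touched intersection:

```python
import numpy as np

# Hypothetical 4 (TX) x 3 (RX) sensor: baseline counts with no touch,
# and the current scan with a finger over one intersection.
baseline = np.full((4, 3), 800)
current = baseline.copy()
current[1, 1] += 1037            # finger changes the count at TX1/RX1

# Quantized 2-D map of capacitance change, one entry per intersection
diff_map = current - baseline
peak = np.unravel_index(diff_map.argmax(), diff_map.shape)
print(peak, diff_map[peak])      # (1, 1) 1037
```

Because every intersection is measured individually, a single map unambiguously resolves multiple simultaneous touches, which a per-electrode measurement cannot.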

If a large noise spike occurs on one of the intersections that are near the finger, an error term is added to the position calculation algorithm. The algorithm then converts raw data to coordinates. Depending on the size of the noise spike, the coordinates of the reported finger position may jitter, or it may alternate between two coordinates, when the finger is stationary. For example, unintended input or selection may occur when using the touch interface on a smart phone while it is plugged into a USB wall charger.
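The effect of such a spike on a centroid-style position calculation can be shown with a toy example (a plain weighted average over one axis, not any vendor's actual algorithm; all counts are made up):

```python
import numpy as np

def centroid(raw):
    """Report finger position as the weighted average of raw counts."""
    idx = np.arange(len(raw))
    return np.sum(idx * raw) / np.sum(raw)

clean = np.array([0.0, 50.0, 200.0, 50.0, 0.0])  # finger centered on node 2
spiky = clean.copy()
spiky[3] += 80.0                                  # noise spike on a neighboring node

print(centroid(clean))   # 2.0
print(centroid(spiky))   # about 2.21: the reported position jumps
```

If spikes come and go between scans, the reported coordinate hops back and forth even though the finger never moves, which is exactly the jitter described above.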

While it may not be very noticeable at low levels, jitter can create a variety of problems for a user interface. As the finger coordinates change, the gesture-decoding algorithms may misinterpret a swipe or pan gesture in a way that’s not only noticeable, but can also cause a misfire when you’re playing Angry Birds.

Worse yet, in extreme cases, the noise generated by a charger can cause the touchscreen controller to report multiple fingers when only one finger is touching the sensor. This creates a condition often called ghost fingers, which can lead to an inoperable interface for mobile apps designed for use with only one finger, or an inoperable gesture decoding algorithm.

Extensive intellectual property, diligent analog design, and advanced signal-processing algorithms allow modern touch controllers, such as the fourth-generation TrueTouch (TMA440) controller, to remain immune to charger noise.

Specmanship And Noise

Calculating and reporting SNR is even trickier than setting up the conditions for a representative measurement. Because spiky, temporal noise causes the most severe problems, the SNR reported in a datasheet should adequately represent spiky noise. So, what kind of measurement should be used to quantify SNR? Based on the way noise is counted, there are two possibilities: standard-deviation (RMS) measurements or peak-to-peak measurements.

In a system with Gaussian noise, it’s safe to use the standard deviation to calculate SNR. That’s because a scalar conversion can be used—the standard deviation noise value is multiplied by six to calculate the peak-to-peak (p-p) value (with 99.7% confidence).
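This rule of thumb is easy to check numerically. A sketch with synthetic Gaussian noise (an arbitrary sigma of 20 counts, not taken from any real measurement):

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 20.0, 1000)    # 1000 samples of Gaussian noise

pp_from_sigma = 6 * noise.std()        # +/-3 sigma spans 99.7% of samples
pp_observed = noise.max() - noise.min()
print(pp_from_sigma, pp_observed)      # for Gaussian data the two track closely
```

For truly Gaussian data the 6-sigma estimate and the observed peak-to-peak value typically agree within roughly ten percent, which is what makes the scalar conversion safe.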

When the display is off and no chargers are present, touchscreen-system noise is solely Gaussian, and SNR isn’t a concern. SNR becomes crucial when the touchscreen is integrated into a device like a mobile phone.

Peak-to-peak is another way to count noise in an SNR calculation. Let’s take a closer look at both methods employing a raw dataset (no digital filters applied) that exhibits a typical noise level with a charger and LCD in the system.

The finger signal (CF) is measured by taking the difference of the mean value of 100 samples (about one second) of data before a finger touchdown and the mean value of 100 samples after a finger touchdown (Fig. 2):

CF = Mean(Finger) – Mean(NoFinger) = 1850 – 813 = 1037

Next, determine the amount of noise present in the system (CNS). System noise is the difference between the maximum and minimum capacitance measured at a sensor over a one-second interval.

This value represents the amount of measured noise, but it doesn’t include quantization error. Adding one least significant bit (LSB) worth of noise restores quantization error. This is especially important for systems with lower ADC resolution.

The noise measurement is taken when the finger is touching to reproduce the most worrisome condition. At this point, the standard deviation or the p-p route can be taken. The standard deviation when the finger is touching measures 20.6 counts, while the peak-to-peak noise is 155 counts as calculated by:

CNS(p-p) = (Max(NoFinger) – Min(NoFinger)) + 1 = (900 – 746) + 1 = 155

The SNR calculated using peak-to-peak noise is 6.7, while the SNR calculated using standard deviation is 49.9. It becomes clear which result is preferable for a product’s datasheet, but which one better represents the functionality of the system?
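The worked example above can be reproduced in a few lines, using the article's own counts (the small difference from the quoted 49.9 comes from rounding of the 20.6-count standard deviation):

```python
# Raw-count values from the article's dataset
mean_finger, mean_no_finger = 1850, 813
max_no_finger, min_no_finger = 900, 746
sigma = 20.6                               # standard deviation, finger touching

cf = mean_finger - mean_no_finger          # finger signal: 1037 counts
cns_pp = (max_no_finger - min_no_finger) + 1   # 155 counts, +1 LSB for quantization

snr_pp = cf / cns_pp                       # about 6.7
snr_std = cf / sigma                       # about 50
print(snr_pp, snr_std)
```

The two ratios differ by more than a factor of seven from the same dataset, which is the heart of the specmanship problem.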

With standard deviation, a quiet set of data with a single large noise spike (that is, large enough to look like a finger) results in the same noise as a data set with a low-amplitude Gaussian distribution.

Very high SNR would be evident, even though the touch controller doesn’t meet the functional specifications of the user interface. If the same dataset was measured using peak-to-peak noise, SNR would be close to one, which immediately indicates that there’s a problem in the system.
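This failure mode is easy to demonstrate: a quiet channel with one finger-sized spike can have nearly the same standard deviation as low-amplitude Gaussian noise, while its peak-to-peak value gives the spike away. A sketch with arbitrary, illustrative amplitudes:

```python
import numpy as np

rng = np.random.default_rng(1)
gaussian = rng.normal(0.0, 20.0, 1000)   # low-amplitude Gaussian noise
spiky = rng.normal(0.0, 5.0, 1000)       # quiet channel...
spiky[500] += 600.0                      # ...plus one finger-sized spike

# Standard deviations are similar, so std-based SNR looks fine either way
print(gaussian.std(), spiky.std())       # both near 20 counts

# Peak-to-peak exposes the spike immediately
print(np.ptp(gaussian), np.ptp(spiky))   # the spiky data is several times larger
```

One sample at 600 counts among a thousand quiet ones barely moves the standard deviation, yet it is exactly the sample that can register as a false finger.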

As noted earlier, converting standard deviation to peak-to-peak involves scaling by a factor of six to get to a 99.7% confidence interval. If we apply the same thinking to the above dataset, the peak-to-peak noise estimate is off by 32 counts, or 20% (see the table).
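With the article's numbers, the size of that estimation error checks out directly:

```python
sigma = 20.6                    # standard deviation with finger touching, in counts
pp_measured = 155               # measured peak-to-peak noise, in counts

pp_from_sigma = 6 * sigma       # 123.6: the Gaussian 99.7% estimate
error = pp_measured - pp_from_sigma
print(error, error / pp_measured)   # about 31 counts, or about 20%
```

In other words, the Gaussian conversion underestimates the real peak-to-peak noise by roughly a fifth, because the spikes sit far outside a Gaussian distribution's tails.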

When reading a datasheet, remember that without the dataset, standard-deviation SNR calculation provides no quantitative or qualitative representation of the touchscreen system’s performance or functionality. On the other hand, peak-to-peak SNR calculation can qualitatively determine if there’s a significant level of noise and whether it may affect performance.

Beyond SNR

SNR is a poor performance metric without a standardized measurement procedure. Touchscreen controller suppliers and mobile-device OEMs use defined performance metrics with measurement procedures and calculation steps to thoroughly quantify touch performance. These specs are necessary to ensure repeatable test outcomes for regression testing changes to touchscreen hardware or firmware, as well as to prove touchscreen performance.

A typical performance test setup requires metal finger emulators and jigs, an oscilloscope, a function generator, and a robot, in addition to the touchscreen hardware and an interface to the controller. For example, standard jitter measurement entails a seven-step process for recording the temporal noise in the reported coordinates that represent finger position.

The measurement indicates how much movement, in units of distance, is expected from a stationary finger. This relatively simple measurement of a parameter has a direct and immediate effect on a user interface.

By contrast, the effect of SNR on touch performance is less direct. Digital filters and position calculation algorithms can remove jitter even in noisy conditions, reducing the value of SNR as a performance metric. Relying on SNR as an indicator of performance isn’t advisable, since it ultimately doesn’t deliver a true sense of system functionality.

Just as the thread count in linens doesn’t reflect the quality of the bed sheet, SNR won’t indicate how well a system will respond to touch. That’s why touch-controller manufacturers like Cypress, with its TrueTouch family, offer a suite of tests and measurements that can evaluate the performance of new touchscreen designs.
