Multi-Touch Technologies Evolve To Meet The Demands Of Larger Screens

March 1, 2012

Fig. 1. A resistive touchscreen comprises several layers, the most important of which are two thin, electrically conductive layers separated by a narrow gap.

Fig. 2. A mutual-capacitance system will detect two touches as (x1,y3) and (x2,y0), whereas a self-capacitance system will detect simply (x1,x2,y0,y3).

Fig. 3. However large-format touch computing evolves, application developers will want the flexibility to take full advantage of new kinds of touchscreen interactions.

Touchscreens represent more than the latest consumer craze. They also signal a fundamental shift in the way people interact with information and the computing hardware that delivers it. Their growth has unleashed a flurry of activity among device manufacturers, who are actively porting touchscreen technologies from tiny smart phones to large-format hardware.

But the transition from small screens and simple touch-enabled applications to a new paradigm, where hands and fingers are the primary tools for interacting with full-scale computers, isn’t necessarily straightforward. Manufacturers need to rethink how consumers will use touchscreens and address a new and more demanding set of requirements. Most importantly, the move to larger screen sizes has made multiple-touch capability essential.

Smaller phone displays can rely on a single finger touch to control and select the phone’s operation. But while a few finger strokes on today’s 5-in. screens are sufficient, what’s required on a 12-in. or 40-in. device, or when multiple users are interacting simultaneously using both hands? What wildly popular new applications will emerge for large-format touchscreens, and how can manufacturers ensure that their devices will support them?

Resistive Touchscreens

A resistive touchscreen comprises several layers, the most important of which are two thin, electrically conductive layers separated by a narrow gap (Fig. 1). Pressing a point on the panel’s outer surface connects the two conductive layers at that point. The connected layers then act as a voltage divider, producing a change in the electrical signal that is registered as a touch event and sent to the controller for processing.
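As a rough sketch of the readout principle, the touch position along one axis can be recovered from an ADC sample of the divided voltage: the touch point splits the layer’s resistance, so the measured voltage is proportional to position. The ADC range and screen width below are illustrative assumptions, not values from the article.

```c
#include <stdint.h>

/* Illustrative values: a 12-bit ADC and an 800-pixel-wide panel. */
#define ADC_MAX 4095u
#define SCREEN_WIDTH_PX 800u

/* Convert the ADC reading taken across one resistive layer into an
 * x coordinate. The touch divides the layer's resistance, so the
 * measured voltage scales linearly with position. */
static uint32_t resistive_x_from_adc(uint32_t adc_reading)
{
    return (adc_reading * SCREEN_WIDTH_PX) / ADC_MAX;
}
```

A second measurement with the roles of the two layers swapped yields the y coordinate in the same way.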

Resistive touchscreens were long favoured because they were cheaper to produce and offered excellent stylus support, which found many adherents, particularly for Asian character-based input. But resistive technology could not support the drive toward multi-touch applications.

The multiple layers or “stacks” give these displays poor visibility in sunlight due to reflections. The stacks also severely attenuate display brightness. And since these displays require a flexible outer layer that comes into contact with the stylus (and anything else that touches or strikes the screen), they are vulnerable to scratches, moisture, and dust as well.

Projected Capacitance Technology

The competing technology, projected capacitance, senses touch through an electric field projected above the sensor. It has quickly won support from users since it has a solid, glossy outer surface that looks great cosmetically and is completely sealed against dust and moisture. Following consumer demand, most manufacturers appear to have adopted capacitive touch as the way forward.

The technology measures small changes in capacitance, the ability to hold an electrical charge, when an object (such as a finger) approaches or touches the surface of the screen. But not all capacitive touchscreens are created equal. Choices in the capacitance-to-digital conversion (CDC) technique and the spatial arrangement of the electrodes that collect the charge determine the overall performance and functionality the device can achieve.

Device manufacturers have two basic options for arranging and measuring capacitance changes in a touchscreen: self-capacitance and mutual-capacitance. Most early capacitive touchscreens relied on self-capacitance, which measures an entire row or column of electrodes for capacitive change. This approach is fine for one-touch or simple two-touch interactions.

However, it presents serious limitations for more advanced applications, because it introduces positional ambiguity when the user touches two places. Effectively, the system detects touches at two (x) coordinates and two (y) coordinates, but has no way to know which (x) goes with which (y). This leads to “ghost” positions when interpreting the touch points, reducing accuracy and performance.

Alternatively, mutual-capacitance touchscreens use transmit and receive electrodes arranged as an orthogonal matrix, allowing them to measure the point where a row and column of electrodes intersect. This way, they detect each touch as a specific pair of (x,y) coordinates. For example, a mutual-capacitance system will detect two touches as (x1,y3) and (x2,y0), whereas a self-capacitance system will detect simply (x1,x2,y0,y3) (Fig. 2).
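The ambiguity can be made concrete in code. The sketch below, using the line indices from Fig. 2, enumerates the candidate intersections a self-capacitance controller is left with when two touches are present; two of the four are ghosts, and a mutual-capacitance controller would instead report only the two measured (x,y) pairs.

```c
#include <stddef.h>

typedef struct { int x; int y; } touch_t;

/* With self-capacitance sensing, two touches activate two x lines and
 * two y lines with no pairing information, so the controller can only
 * enumerate all four candidate intersections; two are "ghost" points. */
static size_t self_cap_candidates(const int xs[2], const int ys[2],
                                  touch_t out[4])
{
    size_t n = 0;
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++) {
            out[n].x = xs[i];
            out[n].y = ys[j];
            n++;
        }
    return n; /* four candidates for two real touches */
}
```

For the touches in Fig. 2, xs = {1, 2} and ys = {0, 3} produce the candidates (1,0), (1,3), (2,0), (2,3), with no way to tell the real pair from the ghost pair.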

The underlying CDC technique also affects performance. The receive lines are held at zero potential during the charge acquisition process, and only the charge between the specific transmitter X and receiver Y electrodes touched by the user is transferred.

Other techniques are available, but the key advantage of the CDC is its immunity to noise and parasitic effects. This immunity allows additional system design flexibility. For example, the sensor IC can be placed either on the flexible printed circuit (FPC) immediately adjacent to the sensor or farther away on the main circuit board.

Sensors And Chips

Electrode pitch, a key parameter in sensor design, refers to the density of electrodes—or more specifically, (x,y) “nodes”—on the touchscreen. To a large extent, it determines the touchscreen resolution, accuracy, and finger separation. Naturally, different applications have different resolution requirements. But today’s multi-touch applications, which need to interpret fine-scale touch movements such as stretching and pinching fingertips, require high resolutions to uniquely identify several adjacent touches.

Typically, touchscreens need a row and column electrode pitch of approximately 5 mm or less (derived from measuring the tip-to-tip distance between the thumb and forefinger when pinched together). This allows the device to properly track fingertip movements, support stylus input, and, with proper firmware algorithms, reject unintended touches. When the electrode pitch falls between 3 and 5 mm, the touchscreen can support input with a stylus with a finer tip—a boost in accuracy that will enable the device to support a broader range of applications.
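One common way a controller turns a coarse electrode grid into fine position data is centroid (weighted-average) interpolation across adjacent nodes, so the reported position can fall between electrodes. The sketch below assumes the 5-mm pitch from the text; the signal values are illustrative.

```c
/* Electrode pitch from the text: 5 mm, expressed in micrometres so the
 * arithmetic stays in integers. */
#define PITCH_UM 5000L

/* signals[] holds the measured capacitance deltas on consecutive
 * electrodes; returns the interpolated position, in micrometres from
 * electrode 0, as the signal-weighted centroid of the node positions. */
static long centroid_position_um(const int signals[], int n)
{
    long weighted = 0, total = 0;
    for (int i = 0; i < n; i++) {
        weighted += (long)signals[i] * i * PITCH_UM;
        total += signals[i];
    }
    return total ? weighted / total : 0;
}
```

A touch centred on electrode 1 with symmetric spill onto its neighbours reports exactly 5000 µm; asymmetric spill shifts the reported position between nodes, which is what gives sub-pitch resolution.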

At the core of any successful touch sensor system is the underlying chip and software technology. As with any other chip design, the touchscreen driver chip should have high integration, a minimal footprint, and close to zero power consumption along with the flexibility to support a broad range of sensor designs and implementation scenarios. Driver chips are measured by the balance of speed, power, and flexibility they achieve.

Delivering True Multi-Touch

Users of the Apple iPhone and other contemporary devices will be familiar with today’s multi-touch gestures, typically pinching or stretching two fingers. With a larger screen, however, it becomes possible to envision much more complex multi-touch gestures (Fig. 3).

Imagine painting and music applications for young students that involve gesturing with all 10 fingers and thumbs or new tablet-based games that pit two or more users against each other on the same screen. However large-format touch computing evolves, application developers will want the flexibility to take full advantage of new kinds of touchscreen interactions. Device manufacturers don’t want to stand in their way—and they certainly don’t want to build a device that can’t support the next hugely popular touch application.

As large-format touch applications begin using four, five, and 10 touches, it’s important to consider not just how new applications might exploit these capabilities, but also how the controller chip will use this richer information to create a better user experience. For example, the ability to track incidental touches around the edge of a screen and classify them as “suppressed” is even more important on a large-format device than on a small one.

Just as mobile phone touchscreens need to be able to recognize when users are holding the phone or resting the screen against their cheek, a larger-format system must account for the different ways that users will hold and use the device, such as resting the edge of the hand on the screen when using a stylus or resting both palms when using a virtual keyboard.

And it’s not enough to simply identify and suppress incidental touches. The device must track them so they remain suppressed even if they stray into the active region. The more touches that a controller can unambiguously resolve, classify, and track at once, the more intuitive and accurate the user experience can be.
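The “sticky” suppression described above can be sketched as follows: the classification is made once at touch-down and deliberately never revisited on move events. The border width and screen dimensions are illustrative assumptions, not values from the article.

```c
#include <stdbool.h>

/* Illustrative geometry: a 300 x 200 mm panel with a 10 mm
 * suppression border around the edge. */
#define EDGE_BORDER_MM 10
#define SCREEN_W_MM 300
#define SCREEN_H_MM 200

typedef struct {
    int x_mm, y_mm;
    bool suppressed; /* sticky: decided where the touch first lands */
} tracked_touch_t;

static bool in_edge_border(int x, int y)
{
    return x < EDGE_BORDER_MM || x > SCREEN_W_MM - EDGE_BORDER_MM ||
           y < EDGE_BORDER_MM || y > SCREEN_H_MM - EDGE_BORDER_MM;
}

/* On touch-down, classify the contact once. */
static void touch_down(tracked_touch_t *t, int x, int y)
{
    t->x_mm = x;
    t->y_mm = y;
    t->suppressed = in_edge_border(x, y);
}

/* On move, update position but keep the original classification, so an
 * edge touch stays suppressed even if it drifts into the active area. */
static void touch_move(tracked_touch_t *t, int x, int y)
{
    t->x_mm = x;
    t->y_mm = y; /* suppressed flag intentionally unchanged */
}
```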

When designing a touchscreen application, engineers need to carefully consider a number of factors. The first consideration in creating a high-performance, responsive display is usually the required accuracy: the fidelity with which the touchscreen reports the user’s finger or stylus location. An accurate touchscreen should report touch position to better than ±1 mm.

Hand in hand with accuracy is linearity, which measures how “straight” a line drawn across the screen is. Linearity depends on sound screen pattern design, and it also should be accurate within ±1 mm. Another practical consideration concerns the size of the screen’s active area and the number of potential touch spots the application may present. Our fingertips can only be brought together so far before they may be interpreted as a single touch, so finger separation is key.
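One simple way to quantify linearity is to sample touch reports while a straight horizontal line is drawn and measure the worst deviation from the line through the endpoint reports. The sketch below works in tenths of a millimetre to stay in integer arithmetic; the sample values in the test are illustrative.

```c
#include <stdlib.h>

/* y_tenth_mm[] holds reported y positions (in 0.1 mm units) sampled
 * along a nominally straight horizontal stroke. Returns the maximum
 * deviation from the line through the two endpoint reports; per the
 * text, this should stay within 10 (i.e. +/- 1 mm). */
static int max_linearity_error(const int y_tenth_mm[], int n)
{
    int max_err = 0;
    for (int i = 0; i < n; i++) {
        /* expected y, linearly interpolated between the endpoints */
        int expected = y_tenth_mm[0] +
            (y_tenth_mm[n - 1] - y_tenth_mm[0]) * i / (n - 1);
        int err = abs(y_tenth_mm[i] - expected);
        if (err > max_err)
            max_err = err;
    }
    return max_err;
}
```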

The resolution of the screen needs attention as well. This is the smallest detectable increment of finger or stylus motion. The resolution needs to be a fraction of a millimetre for a number of reasons, chief among them enabling stylus-based handwriting and drawing applications.

From the user perspective, one of the most important evaluations of a touchscreen-based device is the response time. Response time measures how long it takes the device to register a touch and respond. For basic touch gestures such as tapping, the device should register the input and provide feedback to the user in less than 100 ms. Factoring in various system latencies, that typically means touchscreens need to report a first qualified touch position in less than 15 ms. Applications such as handwriting recognition require even faster response.
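The budget described above can be expressed as a simple check: the controller gets a fixed slice of the total, and the remainder is shared among the other system latencies. The split among OS, application, and display stages is an illustrative assumption; the 100-ms and 15-ms figures are from the text.

```c
/* Latency figures from the text: total perceived response under 100 ms,
 * first qualified touch report from the controller under 15 ms. */
#define TOTAL_BUDGET_MS 100
#define CONTROLLER_BUDGET_MS 15

/* Returns 1 if both the controller's slice and the end-to-end total
 * fit the budget, 0 otherwise. The stage breakdown is illustrative. */
static int system_latency_ok(int controller_ms, int os_ms, int app_ms,
                             int display_ms)
{
    if (controller_ms > CONTROLLER_BUDGET_MS)
        return 0;
    return (controller_ms + os_ms + app_ms + display_ms)
           <= TOTAL_BUDGET_MS;
}
```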

Another factor that affects the user experience, though perhaps less apparent to the user, is the signal-to-noise ratio (SNR). This refers to the touchscreen’s ability to discriminate between real touches and capacitive signals arising from noise.

Capacitive touchscreen controllers measure very small changes in the row-to-column coupling capacitance. The way those measurements are performed has a strong influence on the controller’s susceptibility to external noise. Large-format touchscreens are especially challenging in this regard, as one of the most significant noise generators is the LCD itself.
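A first-order SNR estimate compares the capacitance delta produced by a touch against the peak-to-peak variation of untouched baseline readings at the same node. The counts below are illustrative ADC units, not values from the article.

```c
/* touch_delta is the capacitance change (in ADC counts) produced by a
 * touch at one node; baseline[] holds n untouched readings of that
 * node. Returns the ratio of signal to peak-to-peak noise. */
static int snr_estimate(int touch_delta, const int baseline[], int n)
{
    int min = baseline[0], max = baseline[0];
    for (int i = 1; i < n; i++) {
        if (baseline[i] < min) min = baseline[i];
        if (baseline[i] > max) max = baseline[i];
    }
    int noise_pp = max - min;
    return noise_pp ? touch_delta / noise_pp : touch_delta;
}
```

On a large panel, LCD noise inflates the peak-to-peak baseline spread, which is exactly why the denominator here, and hence the measurement technique behind it, matters so much.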

As touchscreens get larger and support more simultaneous touches—and more complex interactive content—achieving top performance in all of these categories becomes even more important.

