“Stylus? Yuck! Who wants a stylus?”
So said Steve Jobs in January 2007, introducing the iPhone at the Moscone Center in San Francisco. Jobs insisted on the superiority of Apple’s finger-only capacitive multi-touch technology over existing stylus-controlled devices. Perhaps Apple’s aversion to the stylus can be traced to the Newton, the company’s most notable product failure, which Jobs killed shortly after his return.
The Newton rests in the stylus graveyard among countless other dead-on-arrival products including tablet PCs, PDAs, and early smartphones. Over the last two decades, poor implementations and repeated design mistakes have turned many product designers and marketers against stylus-centric interfaces.
In spite of its burdensome legacy, however, the stylus is becoming popular again.
Actually, We Want A Stylus
“The finger-only approach works well for a pure consumption tablet but falls far short of what is required to move tablets forward as true creation devices. The ideal paperless solution would allow users to interact with the electronic content in the same way one does with the media it is replacing,” said Rhoda Alexander of market research firm IHS iSuppli.
“This is particularly true in education environments where students and educators want more flexibility than a soft keyboard provides. They want the ability to make notes in margins and underline critical passages with the simple motion of a stylus, to jot down thoughts as they occur, and to sketch out a diagram or a mathematical solution,” Alexander said.
No surprise, then, that we’re seeing a new wave of stylus-empowered tablets coming to market, including products from Lenovo, Samsung, Wacom, and others, as well as increased industry competition. So, the real question is why the stylus should succeed now.
Failure To Launch
We first need to examine why the stylus initially failed to captivate users. There are two primary reasons: fundamental design mistakes and technical issues.
Using a stylus to replace a keyboard may sound like a great idea, but it’s not nearly as fast for office applications since typing is still the quickest way to create text. In addition, real-time handwriting recognition is inherently frustrating and not at all transparent. It’s analogous to speaking through an interpreter.
The stylus as a mouse replacement was similarly doomed. Anyone who has ever used a Windows tablet PC or a legacy WinMob phone knows how painful navigation via stylus can be. But the mouse could not replace the keyboard, either. These two input techniques are only efficient when combined.
We have rarely seen devices combining a stylus with other input techniques. A stylus is extremely powerful for sketching, annotating, and capturing ideas. Its utility can only be fully realized, however, when it is integrated with more intuitive and faster input techniques (including multi-touch) for basic operations.
This leads to the first technical issue engineers and product marketers have encountered while designing stylus-centric devices. PC operating systems are built upon the WIMP paradigm (Windows, Icons, Menus, Pointer), designed to leverage the keyboard-mouse tandem. Most mobile operating systems now support multi-touch events and gestures as a primary input. In contrast (until recently), the stylus was either poorly supported or not supported at all.
One example is Windows 7. While it officially supported multi-touch and stylus inputs, it was not multi-modal. The touch events were automatically disabled when a stylus was detected in the vicinity of the screen. Consequently, users could never know when and where the UI would respond to touch.
But software issues are not the only problem. Hardware issues have actually presented a greater challenge. In the past, stylus solutions were either electromagnetic or resistive. The former delivered amazingly accurate and responsive handwriting capture but was inherently incapable of detecting finger input: the sensor could respond only to its dedicated electromagnetic stylus, not to a passive finger.
Even worse: if you lost the stylus, you could no longer use the device. Analog resistive sensors used in most PDAs and early smartphones were capable of detecting either finger or stylus touch in theory. In practice, only a single contact could be detected at a given moment. Moreover, this less precise sensor required frequent recalibration.
The New Generation
So, what has changed? First, new hardware technologies permit concurrent stylus and multi-touch input with dramatically improved responsiveness and reliability. Companies such as N-trig and Stantum provide two alternative means to this end:
- N-trig combines a projected-capacitive sensor with an electrostatic stylus. The company has also proposed a battery-powered stylus that can be capacitively coupled to pro-cap touch panels.
- Stantum’s technology is based on pressure sensing. A single matrix sensor detects and locates any contact point occurring on the screen (stylus, finger or virtually any tool). Real-time processing is applied to discriminate stylus contact from other touches.
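To make the idea of discriminating a stylus from a finger concrete, here is a minimal sketch in Java. It classifies a contact by its measured contact area, on the reasoning that a stylus tip touches a far smaller region than a fingertip. The `Contact` record and the area threshold are illustrative assumptions, not Stantum’s actual algorithm.

```java
// Toy contact classifier: tells a stylus tip from a fingertip by contact area.
// The 5 mm^2 cutoff is an arbitrary illustrative value, not a vendor spec.
public class ContactClassifier {
    // A detected contact point: position plus measured contact area (mm^2).
    public record Contact(float x, float y, float areaMm2) {}

    public enum Tool { STYLUS, FINGER }

    private static final float STYLUS_MAX_AREA_MM2 = 5.0f;

    public static Tool classify(Contact c) {
        // A stylus tip presses on a much smaller area than a fingertip.
        return c.areaMm2() <= STYLUS_MAX_AREA_MM2 ? Tool.STYLUS : Tool.FINGER;
    }
}
```

In a real controller this decision would run per frame on every contact, so the same screen could accept ink from the stylus while ignoring (or separately handling) the palm and fingers resting on it.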
On the software side, both Microsoft and Google announced important steps toward improved stylus and finger support. Windows 8 introduced a brand new touch-centric UI with stylus support. Google’s latest iteration of Android, known as Ice Cream Sandwich (each version is developed under a code name based on a dessert), features an enhanced event system that includes a new class of dedicated stylus messages.
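The payoff of dedicated stylus messages is that an application can route the same screen contact to different behaviors depending on the tool that produced it. The sketch below is a simplified, self-contained model of that per-pointer dispatch, loosely patterned on the tool-type query Ice Cream Sandwich added (`MotionEvent.getToolType()`); the `PointerEvent` class and the behaviors here are hypothetical stand-ins, not the real Android API.

```java
// Simplified model of tool-type dispatch. Real Android code would call
// android.view.MotionEvent.getToolType(pointerIndex) instead of this
// hypothetical PointerEvent record.
public class ToolDispatch {
    public enum ToolType { FINGER, STYLUS, ERASER }

    public record PointerEvent(ToolType tool, float x, float y) {}

    public static String handle(PointerEvent e) {
        // Touch and stylus coexist: each tool gets its own behavior,
        // instead of one input mode disabling the other.
        return switch (e.tool()) {
            case STYLUS -> "draw ink stroke at (" + e.x() + ", " + e.y() + ")";
            case ERASER -> "erase at (" + e.x() + ", " + e.y() + ")";
            case FINGER -> "pan/scroll from (" + e.x() + ", " + e.y() + ")";
        };
    }
}
```

This is precisely what Windows 7’s either/or approach made impossible: with per-pointer tool information, a note-taking app can let a finger scroll the page while the stylus keeps inking.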
It is still too early to assess whether stakeholders in this industry have learned from past failures. But there are encouraging signs. Students, creative professionals, knowledge-based workers, and other users silently dreaming about a pen-and-paper-like experience on their tablet are now recognizing that the stylus may be ready (finally!) for prime time.