We often take consumer audio technology for granted. Users obviously need it to make phone calls or listen to music. But how much attention do they pay to its quality and delivery? With audio use cases rapidly increasing and diversifying, and users placing ever-greater demands on their devices, this is all about to change.
The Push For HD
HD display advances in televisions generated high expectations among early smartphone and tablet users for equally impressive graphical capabilities in their devices. Historically, however, R&D investment in overall picture quality has far outstripped the pace of audio evolution in the same products.
Audio became something of a reliable but unremarkable element, neither fully satisfying the increasingly diverse use cases nor providing any sort of wow factor. With the modern consumer becoming more discerning, however, the value of quality audio as a device differentiator has increased dramatically.
We are now moving toward HD audio as a standard rather than a premium feature, as consumers begin to demand sound quality on par with the other default features of their smart devices. The HD tagline will continue to define the highest-quality audio solutions available, but shifting expectations in a competitive market will see users turn their backs on devices that don’t offer the level of audio quality now viewed as standard.
This is true across all audio use cases. Whether speaking, listening, recording, or sharing, users now expect crystal-clear quality regardless of the immediate environment. Wherever they are, users can now enjoy more natural, private voice calls and exceptional audio experiences.
Beyond these more traditional audio capabilities, advanced microphone and audio processing technology is allowing manufacturers to enable gesture and voice control as core functions of many smart devices. The same technology is fuelling myriad innovative use cases emerging from the apps design community. This exciting technology is sure to see widespread adoption, so it’s absolutely crucial that “traditional” audio such as calls and media playback is seamlessly integrated into the device user interface.
The Cost Of Diversification
The flipside of adding so much extra functionality to smart devices is that they inevitably require more processing MIPS, software, and memory, potentially creating more drain on already stretched portable power sources. Form factors have become thinner and sleeker, and this demand for continually smaller consumer electronics has led to a need for highly optimised system architecture. However, the reduction in product size does not allow for any compromise in performance. In fact, the performance markers are continually being raised.
Multimedia players, many of which once performed one primary function, now encapsulate a plethora of features: audio playback, video playback, photo storage, Internet connectivity, Wi-Fi, touch panels, and HD screens, all of which require more memory and more MIPS. The internal electronics therefore need to be smaller but faster than their predecessors, with far more functionality integrated into a single IC. With such an array of functions competing for power, unless advanced audio features are designed with minimum power consumption in mind, they could ultimately become something of a burden.
To meet these processing demands without draining the battery, standalone audio systems-on-chip (SoCs), combining leading mixed-signal analogue with fast, low-power digital signal processing, consolidate all audio functionality in one highly efficient place. They are becoming a clear choice for original equipment manufacturers (OEMs) because they provide a flexible, portable, and powerful processing engine that can perform a huge range of functions efficiently and autonomously in a way no other IC in the handset can. These SoCs are in turn evolving into what can almost be regarded as processing cores in their own right, in the same class as other task-optimised cores such as modems, graphics cores, and application processors.
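One workload such a dedicated low-power audio core typically runs while the application processor sleeps is an “always on” voice trigger. The sketch below is a minimal, illustrative energy-based voice activity detector of the kind that might gate a host wake-up; the frame size, sample rate, and threshold are assumptions for illustration, not any vendor’s actual parameters.

```python
# Illustrative energy-based voice activity detector (VAD). A low-power
# audio core could run logic like this continuously and wake the host
# application processor only when sound is detected.

FRAME_SIZE = 160          # 10 ms of audio at 16 kHz (assumed)
ENERGY_THRESHOLD = 0.01   # normalised energy level treated as "activity" (assumed)

def frame_energy(samples):
    """Mean-square energy of one audio frame (samples in [-1.0, 1.0])."""
    return sum(s * s for s in samples) / len(samples)

def detect_activity(frames, threshold=ENERGY_THRESHOLD):
    """Return indices of frames whose energy exceeds the threshold.
    A host wake-up would be triggered on the first such frame."""
    return [i for i, f in enumerate(frames) if frame_energy(f) > threshold]

# Example: one silent frame followed by one loud frame
silence = [0.0] * FRAME_SIZE
loud = [0.5 if i % 2 == 0 else -0.5 for i in range(FRAME_SIZE)]
active = detect_activity([silence, loud])
```

In a real product this cheap energy gate would be followed by a more expensive keyword-spotting stage, which is exactly the kind of workload that justifies a task-optimised audio core running independently of the application processor.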
Moving Toward Dis-integration
Most major mobile OEMs have introduced dis-integrated audio architectures into their latest products and are beginning to roll them out across other smart converged devices. As adoption continues, a tipping point (expected in 2013) will be reached where dis-integrated architectures become the rule rather than the exception. This process of evolving toward full audio SoCs brings a number of key benefits, including a reduction in size and bill-of-materials (BOM) cost.
There are also advantages in terms of significantly more MIPS in the device, enhanced analogue performance, lower power consumption, and efficient memory utilisation. These all provide benefits to the end user: fantastic-sounding calls wherever they are; rich, clear music and movie playback; “always on” voice control; distortion-free audio recording from the quietest to the loudest sounds; and hands-free conference calling as if they were in the same room.
OEMs can, in turn, market these benefits, which increase call time and the amount of digital content and services users are likely to enjoy. By providing authentic analogue-to-digital conversion (and vice versa) together with a complete noise cancellation solution, dis-integrated HD audio solutions give users immersive audio experiences regardless of their immediate environment.
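To illustrate the principle behind such noise cancellation, the sketch below subtracts a scaled reference-microphone signal (dominated by ambient noise) from the primary microphone’s speech-plus-noise signal. This is a deliberately simplified model: real solutions use adaptive filters and far more sophisticated processing, and the fixed gain here is purely an assumption for clarity.

```python
# Simplified dual-microphone noise suppression: the reference mic picks
# up mostly ambient noise, which is scaled and subtracted from the
# primary (speech + noise) signal.

def suppress_noise(primary, reference, alpha=1.0):
    """Subtract the scaled reference signal from the primary signal.
    alpha models the gain of the noise path (fixed here; adaptive in
    real systems)."""
    return [p - alpha * r for p, r in zip(primary, reference)]

# Example: a speech signal buried in correlated noise
speech = [0.3, -0.3, 0.3, -0.3]
noise = [0.1, 0.1, 0.1, 0.1]
primary = [s + n for s, n in zip(speech, noise)]
cleaned = suppress_noise(primary, noise)
```

In practice the gain would be adapted continuously, because the acoustic path between the two microphones changes as the user moves the device.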
From ECM To MEMS
Another disruptive trend in audio is the shift from electret condenser microphones (ECMs) to microelectromechanical-systems (MEMS) microphones. The range of new devices and form factors in the market is driving this transition and enabling new audio use cases. These devices all require more microphones, which must be ever smaller while offering better performance, greater sensitivity, and different package orientations.
Perhaps ECM technology simply doesn’t have the capacity to keep pace with a continually evolving industry. With their larger form factor and lack of a bottom-port option, ECMs are unsuitable for many applications, and they also suffer from performance shift after automated reflow soldering.
MEMS microphones, however, offer the consistent performance required for advanced sound processing. They are also surface-mountable, allowing automated high-volume manufacturing. Their compact scale minimises the space they occupy within a device, freeing room for extra components and increasing the overall capability of the device without compromising on cost or quality.
Fully integrated MEMS microphones, in which the ASIC and transducer are fabricated on the same silicon chip, have arrived. In addition to greatly improved operational efficiencies, these microphones are 50% smaller and 25% thinner than existing models, which is ideal for integration into the complex, thin forms of modern devices. The seamless combination of audio SoC and MEMS microphone technologies, coupled with advanced algorithms and deep audiology experience, ensures a true “mouth to ear” HD audio experience.
The Future Of Audio
With the explosive growth of audio use cases and the availability of technology to fully showcase them, the stage is set for OEMs to introduce the end consumer to a new level of audio quality across smart devices such as smart phones and tablets; converged devices such as TVs, gaming consoles, set-top boxes, laptops, and PCs; and connected smart devices such as cars and home automation systems. With the shift to MEMS microphones and fully integrated audio SoCs, OEMs can offer users a differentiated experience like never before.
Eddie Sinnott has been the portfolio and strategy director at Wolfson since 2008. He holds a BSc in laser physics and optoelectronics from the University of Strathclyde and an MBA from the London Business School.