The Sound Development: The Hearing-Aid-On-a-Chip

April 12, 2007
DSP technology is the driving force behind one-chip hearing-aid systems.

Semiconductor manufacturers now offer solutions for digital hearing aids based on DSP technologies. Such technologies enable far more accurate results than can be achieved using analogue signal processing. As a result, hearing-aid response can be better adjusted to suit each individual's requirements and the wide range of listening situations encountered in everyday life.

A digital hearing aid first converts the sound picked up by its microphone into a digital representation. Once the digital processing is complete, the hearing aid transforms that representation back into an acoustic signal detectable by the human ear. Figure 1 shows the key elements of a digital hearing aid. By using DSP technology to process the sound, digital hearing aids can perform functions not possible with analogue-based hearing aids.

Digital devices, for example, can divide the sound information into many components based on frequency, time, or intensity. They can then apply different processing techniques to each component, tuning the signal precisely for the benefit of the hard of hearing.
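The idea of splitting sound into bands and shaping each one can be sketched in a few lines of Python. This is a minimal illustration only: the two-band split, the moving-average filter length, and the gain values are assumptions chosen for clarity, not taken from any actual hearing-aid design.

```python
import math

def moving_average(signal, n=8):
    """Crude low-pass filter: n-point trailing moving average."""
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - n + 1):i + 1]
        out.append(sum(window) / len(window))
    return out

def two_band_compensate(signal, low_gain=1.0, high_gain=4.0):
    """Split the signal into a low band and a high band, apply a
    different gain to each, and recombine them."""
    low = moving_average(signal)                  # low-frequency band
    high = [s - l for s, l in zip(signal, low)]   # residual = high band
    return [low_gain * l + high_gain * h for l, h in zip(low, high)]

# A test signal with a slow component and a weak fast component; the
# fast (high-frequency) part is boosted 4x relative to the slow one.
sig = [math.sin(2 * math.pi * 0.01 * t) + 0.1 * math.sin(2 * math.pi * 0.25 * t)
       for t in range(200)]
shaped = two_band_compensate(sig)
```

A real device would use many more bands and per-band gains fitted to the user's audiogram; the structure, however, is the same split-shape-recombine pattern.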

Algorithms can filter out unwanted noise and perform tasks such as automatic feedback suppression, speech enhancement, noise reduction, directional processing, and echo cancellation. Advanced algorithms are also capable of pattern recognition; therefore, the hearing aid can automatically change processing modes based on the sound environment.

The digital hearing aid's ability to accurately process sound also enables advanced multi-microphone processing techniques, which can provide benefits in noisy listening situations (e.g., a restaurant). Some of the latest hearing aids feature two (or more) microphones. By using sophisticated DSP techniques, these aids can perform spatial processing over various frequency bands that gives users consistent directionality based upon a "listen where you look" paradigm. These techniques can be further enhanced by allowing a "steerable null," which helps attenuate unwanted noise sources that emanate from a particular direction.
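A "steerable null" can be demonstrated with a toy two-microphone differential beamformer: subtracting a delayed copy of the rear-microphone signal from the front-microphone signal cancels any source arriving with exactly that inter-microphone delay. The sample delays and signals below are illustrative assumptions, not parameters of a real product.

```python
import math

def delay(signal, samples):
    """Delay a signal by an integer number of samples (zero-padded)."""
    return [0.0] * samples + signal[:len(signal) - samples]

def steerable_null(front_mic, rear_mic, null_delay):
    """Differential beamformer: a source whose wavefront reaches the
    rear microphone null_delay samples before the front microphone
    is cancelled by the subtraction."""
    return [f - r for f, r in zip(front_mic, delay(rear_mic, null_delay))]

# A noise source arriving from behind: the rear mic hears it two
# samples before the front mic. Steering the null to that delay
# removes the noise from the output.
noise = [math.sin(0.3 * t) for t in range(100)]
rear = noise                 # rear mic hears the noise first
front = delay(noise, 2)      # front mic hears it 2 samples later
out = steerable_null(front, rear, null_delay=2)
```

Because the inter-microphone delay depends on the arrival angle, changing `null_delay` steers the cancellation direction; real aids do this per frequency band with fractional delays.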

However, studies have shown that directionality isn't always desired, especially in situations where the noise level is lower or when the user wishes to listen to music. Thus, some DSP hearing aids can switch between directional and non-directional (omni) modes of operation.

Finally, digital hearing aids also enable sophisticated noise-reduction techniques. These evaluate the level of background noise during pauses in speech and then subtract this estimated noise from the speech signal, a technique known as "spectral subtraction." When implemented over a sufficient number of processing bands (such as 64 or 128), along with psychoacoustic-based post-processing to eliminate so-called "musical noise," spectral subtraction lets hearing-aid developers build highly effective, high-audio-quality noise-reduction systems.
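The core of spectral subtraction can be sketched as follows: the average noise magnitude in each frequency bin, estimated while only noise is present (a speech pause), is subtracted from each frame's magnitude spectrum, with a floor to limit the over-subtraction artifacts that cause musical noise. The 64-sample frame, the floor factor, and the tone/noise signals are illustrative assumptions.

```python
import cmath, math

def dft(frame):
    """Naive discrete Fourier transform (fine for a sketch)."""
    n = len(frame)
    return [sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(spectrum):
    """Inverse DFT, returning the real part of each sample."""
    n = len(spectrum)
    return [sum(spectrum[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def spectral_subtract(frame, noise_mag, floor=0.05):
    """Subtract the estimated noise magnitude in each bin, keeping
    the noisy phase; the floor limits over-subtraction artifacts."""
    cleaned = []
    for bin_val, nmag in zip(dft(frame), noise_mag):
        mag = max(abs(bin_val) - nmag, floor * abs(bin_val))
        cleaned.append(cmath.rect(mag, cmath.phase(bin_val)))
    return idft(cleaned)

# A tone buried in a steady interfering tone; the noise spectrum is
# "estimated" from the noise alone, as if measured in a speech pause.
n = 64
tone = [math.sin(2 * math.pi * 4 * t / n) for t in range(n)]
noise = [0.3 * math.cos(2 * math.pi * 13 * t / n) for t in range(n)]
noise_mag = [abs(x) for x in dft(noise)]
out = spectral_subtract([s + v for s, v in zip(tone, noise)], noise_mag)
```

After subtraction, `out` is close to the clean tone: the interferer's bins are driven down to the floor while the tone's bins are untouched. The psychoacoustic post-processing the article mentions would further smooth the residual.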


While DSP technology makes all of this possible, whether a hearing aid can offer one or more of these listening-enhancement features depends on the DSP's performance. And in traditional DSPs, as computing capability increases, so do power consumption and the corresponding drain on the hearing-aid battery.

In the early days of digital-signal-processing technology, hearing-aid manufacturers specified fully customised, fixed-function DSPs for semiconductor companies to develop. These DSPs would exactly compute the algorithms for which they were designed. Yet, when new algorithms were conceived, new DSPs had to be developed. The development of next-generation DSPs to run new and more sophisticated algorithms also required new semiconductor processes and system approaches to ensure that power consumption would remain within acceptable limits.


The constant challenge for the hearing-aid designer, therefore, lies in the tradeoff between DSP computing capability and DSP power consumption. By taking advantage of continuously evolving semiconductor process technologies, hearing-aid manufacturers have been able to specify more sophisticated DSPs for their products—and, thus, introduce new features—without increasing power consumption. This, in turn, has fueled the development of smaller, more sophisticated hearing aids that didn't need bigger batteries.

In theory, it might be assumed that manufacturers could simply rely on Moore's Law to solve their DSP needs. In practice, however, this poses a few problems.

Design cycles using new and sophisticated semiconductor process technologies are longer simply because they're more complex. This means that once audio-processing algorithms are conceived, likely requiring a new DSP engine, it still takes a long time to bring the product to release.

Hearing-aid manufacturers have attempted to include as many algorithm ideas as possible in one DSP, so that the processor lasts a number of years before the next one is needed. In many cases, though, not all ideas will fit in the same DSP, or the best idea comes after the DSP's "feature lockdown" development phase. Today's market is seeing a boom of new hearing-aid products, which requires hearing-aid manufacturers to modify or specify new DSPs more frequently to keep up with the competition.


The solution to this dilemma involves flexible DSP technology. When using this technology, the portions of a signal-processing algorithm that may change are written in software code (or microcode). Common portions of signal-processing algorithms that will not change can be coded in hardware or in microcoded hardware, which is generally more power-efficient than a fully software-programmable system.

AMI Semiconductor (AMIS) calls this combination a reconfigurable application-specific signal processor, or RASSP. With RASSP, hearing-aid manufacturers can now reprogram their DSP for new features when new algorithms are conceived. The RASSP is tightly integrated with the other required functional blocks of a hearing aid.

Such blocks include the analogue front end to interface with microphones; the output driver to interface with the receiver; and program, volume, and fitting control functions. It also includes battery management, which allows the hearing aid to operate with both disposable and rechargeable batteries. Effectively, the RASSP (Fig. 2) isn't just a DSP, but rather an entire hearing-aid system-in-package (SiP) that offers the advantages of power efficiency and flexibility.

To realise such systems, semiconductor companies such as AMIS have had to develop specific DSP architectures that exploit the commonality found in many signal-processing schemes. At the same time, these architectures had to offer sufficient flexibility for a wide range of applications, while meeting the inherent hearing-aid power-consumption constraints. With these architectures, manufacturers can create their algorithms in software (and microcode) without changing the DSP hardware. The flexibility in creating new products and rapidly addressing new markets comes from RASSP solutions, not from crafting new DSPs each time.


Moving forward, semiconductor companies are working on RASSPs that offer higher-precision DSP operations and further improved sound quality. To render the full range of audible frequencies, RASSPs must feature good audio front-end and back-end stages, as well as ample available computing precision and capability.

Chip manufacturers are also enhancing the circuitry that captures signals from the microphone(s), so that a greater range of audio frequencies can be considered when processing the signal.

Combined with greater computing precision from the DSP, the resulting audio will maintain much better quality. Not only will the essential sounds related to speech be adjusted to the individual user, but the listener will also get the full depth of sounds that equally contribute to the emotional experience of hearing.

