Digital ICs: DSP

Jan. 7, 2002
Harnessing The Power Of DSPs

Before semiconductor building blocks or single-chip solutions came on the scene, digital signal processing (DSP) problems, primarily in the military arena, were solved using DEC minicomputers and IBM mainframes. With the emergence of semiconductor multipliers, accumulators, registers, and logic, these non-real-time imaging and filtering problems slowly and steadily shifted to DSP building blocks in the early '70s. Despite these advances, commercial applications were unthinkable. No one envisioned that this technology would one day revolutionize the communications world and enable many other applications.

All of that changed with the first single-chip DSP, created in 1978 by AMI and soon after by AT&T and NEC Microcomputer (www.nec.com). DSP pioneers had laid the groundwork for a new industry, which got the shot in the arm it needed from the first commercially successful DSP chip, introduced by Texas Instruments (www.ti.com) in the early '80s. From then on, nothing would stop this marvel. DSP rapidly expanded beyond its traditional role into countless nontraditional applications. It has mushroomed into a several-billion-dollar market that keeps growing.

Today, DSPs deliver billions of instructions per second from a tiny package, at a supply voltage a battery can provide and at an affordable price, so electronic appliances can process images and video in real time. From digital cameras to video conferencing, DSPs are fueling every conceivable application. As this technology evolves and moves forward, its reach is limited only by the imagination of developers.

DSPs will become more reconfigurable to meet future needs. We can also expect more powerful cores driving such chips with hundreds of processing elements (PEs). The future will bring multicore designs for broadband communications, a greater interaction between DSP and field-programmable gate array (FPGA) designs, more powerful fixed- and floating-point DSPs, high-level DSP programming languages, improved development tools, and system-level integration. The DSP will become as fundamental a computational element of electronic circuits as the microprocessor.

Because standards are changing rapidly and processing-horsepower needs are rising quickly, traditional programmable DSP processors may soon run out of steam. To meet these needs, dynamic reconfigurability with dramatic improvements in power and code efficiency is being pursued. Several reconfigurable DSP architectures, with parallelism at the computational and instruction levels, have been proposed. Depending on the application, designers will be able to configure the computational units and parallel datapaths, as well as map instruction sets onto the architecture, to maximize code density with minimal power consumption. Plus, such reconfigurable processors will offer shared-memory accesses to keep power consumption low. Being highly programmable and adaptable, adaptive computing machines (ACMs) will offer an alternative to traditional DSPs.

Many emerging portable applications require a DSP processor and high-performance RISC combination to handle signal-processing and control tasks separately. Developers combine these two cores on the same die and add a rich set of peripherals, serial interfaces, I/Os, and the correct amount of memory to cut size, power, and system costs. Moreover, to ensure true open-source flexibility, they're supporting embedded OSs like Linux.

To tackle the monumental signal-processing tasks of forthcoming broadband multichannel infrastructure equipment and of high-density voice-processing boards used in communications gateways, developers are readying multicore designs with multiple identical DSP cores connected in highly parallel formats. In this architecture, each core is a powerful DSP engine.

FPGA makers have been eyeing DSP applications for some time. Now they're becoming more aggressive because FPGA densities have soared to new levels and development tools are in place. In fact, they're developing methodologies to help DSP engineers transfer their skills to programmable logic devices.

Toward that end, FPGA makers are crafting seamless design flows that link their software directly to The MathWorks' algorithm-development tool, Simulink. In addition, they're offering comprehensive development kits for designing, prototyping, and debugging high-performance DSP applications. In fact, some have targeted software-defined radio (SDR) applications, asserting that programmable devices deliver performance with flexibility. They can provide a configurable radio for multiple wireless standards.

Also, suppliers say that FPGAs and PLDs can perform multiply-accumulate (MAC) operations two orders of magnitude faster than traditional DSP devices. Similarly, they perform finite-impulse-response (FIR) filtering, fast Fourier transforms (FFTs), and other DSP tasks much faster.
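
For readers less familiar with why the MAC operation matters so much, the sketch below shows a plain-C direct-form FIR filter. The inner multiply-accumulate loop is exactly the workload that a DSP's MAC unit executes in a single cycle and that an FPGA fabric can unroll across many parallel multipliers. The four-tap moving-average coefficients and sample values are purely illustrative, not drawn from any vendor's example.

/* Minimal direct-form FIR filter: y[n] = sum over k of h[k] * x[n-k].
 * Each output sample costs NTAPS multiply-accumulate (MAC) operations.
 * Coefficients and input data below are illustrative only. */
#include <stdio.h>

#define NTAPS 4

static const float h[NTAPS] = { 0.25f, 0.25f, 0.25f, 0.25f }; /* 4-tap moving average */

/* Compute one block of outputs; x must hold nsamp + NTAPS - 1 samples
 * so the history needed for the first outputs is available. */
static void fir(const float *x, float *y, int nsamp)
{
    for (int n = 0; n < nsamp; n++) {
        float acc = 0.0f;
        for (int k = 0; k < NTAPS; k++)
            acc += h[k] * x[n + NTAPS - 1 - k];   /* the MAC inner loop */
        y[n] = acc;
    }
}

int main(void)
{
    float x[8] = { 0, 0, 0, 1, 1, 1, 1, 1 };  /* 3 history samples + a step input */
    float y[5];
    fir(x, y, 5);
    for (int n = 0; n < 5; n++)
        printf("y[%d] = %.2f\n", n, y[n]);     /* prints 0.25, 0.50, 0.75, 1.00, 1.00 */
    return 0;
}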

DSP processors optimized for motor-control applications are merging the traditional control functions of an MCU with the horsepower of a DSP. Plus, motor-specific functions are implemented to address a wide variety of applications. On-chip flash provides low-cost nonvolatile memory for reprogramming the device. Lately, the addition of motor current sensing to these DSPs has cut component count and bill-of-materials costs.

Today's DSP architectures are tailored for programming in high-level languages like C and C++. Consequently, compilers and DSP architectures are being developed in tandem to ensure that the silicon is well matched to the compiler for the highest code efficiency. That trend will continue, pushing code efficiency higher until virtually all DSP programming is done in C or C++.
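
To illustrate what compiler-friendly DSP code looks like, here is a hypothetical dot-product kernel in C99, not tied to any vendor's toolchain. The restrict qualifiers and the simple counted loop give a DSP C compiler the aliasing and trip-count information it needs to software-pipeline the loop and keep the MAC units busy every cycle.

/* Sketch of a compiler-friendly DSP kernel in plain C99.
 * 'restrict' promises the compiler that a and b never alias,
 * which is what allows aggressive software pipelining. */
float dot_product(const float * restrict a,
                  const float * restrict b,
                  int n)
{
    float acc = 0.0f;
    for (int i = 0; i < n; i++)
        acc += a[i] * b[i];    /* one MAC per iteration */
    return acc;
}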

Floating-point versions continue to evolve to serve applications that need precision, such as high-quality digital audio, imaging, and robotics, with some of the latest applications moving toward 32-bit precision. It's not just the horsepower, but the price-performance ratio and ease of programming that drive many applications toward floating-point solutions. Over the years, floating-point performance has more than doubled, while costs have dramatically dropped. Texas Instruments, for instance, will sample a DSP this year that boasts 1350 million floating-point operations per second. Furthermore, it will offer an appropriate mix of peripherals, memory, and interfaces to keep system cost low. Another major competitor advancing on this front is Analog Devices (ADI, www.analog.com). The company is touting a 1-GFLOPS DSP based on its TigerSHARC architecture, with the right mix of peripherals, memory, buses, and interfaces to achieve lower system cost. This year, this next-generation DSP processor is slated to be fabricated in 0.13-µm CMOS. ADI's roadmap points toward 0.1-µm features in the near future.
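
Part of the ease-of-programming argument can be seen in a filter kernel such as the direct-form-II biquad sketched below. In floating point, the C source maps almost one-to-one onto the algorithm, with none of the coefficient scaling or saturation bookkeeping that fixed-point code requires. The structure and names here are generic placeholders rather than a designed filter.

/* Direct-form-II biquad section in single-precision floating point.
 * No scaling or saturation logic is needed: the wide dynamic range of
 * float handles coefficient and signal magnitudes directly.
 * Coefficients are placeholders; design them for a real response. */
typedef struct {
    float b0, b1, b2;   /* feed-forward coefficients */
    float a1, a2;       /* feedback coefficients (a0 normalized to 1) */
    float w1, w2;       /* delay-line state */
} biquad_t;

static float biquad_step(biquad_t *f, float x)
{
    float w0 = x - f->a1 * f->w1 - f->a2 * f->w2;
    float y  = f->b0 * w0 + f->b1 * f->w1 + f->b2 * f->w2;
    f->w2 = f->w1;
    f->w1 = w0;
    return y;
}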

Whether developers embed DSP cores in an ASIC or program single-chip DSP processors, fully integrated development tools support them. Furthermore, those tools are refreshed continually to take advantage of the latest architectural enhancements and to let DSP developers get started before samples arrive.

Core developers are unleashing raw processing power and slashing power consumption by combining the advances in CMOS processes with architectural enhancements. While traditional suppliers have been fortifying their fixed-point DSP cores with more MAC units and datapaths with special-purpose instructions, a number of fabless design houses have created powerful synthesizable DSP cores with extreme parallelism on-board. Expected to give traditional DSPs tough competition, the new cores combine over 100 PEs in parallel configurations to let DSP developers map the connectivity and processing needs of an algorithm to the architecture.

Fixed-point DSPs have enabled cellular communications and continue to drive many new wireless applications. So the need to get "more for less" is forcing suppliers to keep improving the DSP architecture. Merging very-long-instruction-word (VLIW) and single-instruction, multiple-data (SIMD) techniques has squeezed maximum performance from a minimal die, with a several-fold improvement in power consumption. Dedicated coprocessors and optimized peripherals support this architecture to realize optimized solutions for specific market segments. On the packaging front, these devices will move from BGAs to chip-scale packages in the near future.
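
The flip side of fixed-point efficiency is the scaling discipline it demands of the programmer. The sketch below, written in portable C with illustrative helper names, shows a saturating Q15 multiply-accumulate: a fixed-point DSP collapses all of this shifting and clamping into a single MAC instruction, which is where its power and cost advantages come from.

/* Q15 is a 16-bit fractional format representing [-1, 1). A Q15 x Q15
 * multiply yields a Q30 product that must be shifted back to Q15 and
 * saturated on overflow. Helper names are illustrative only. */
#include <stdint.h>

static int16_t q15_sat(int32_t x)
{
    if (x >  32767) return  32767;     /* clamp to Q15 maximum */
    if (x < -32768) return -32768;     /* clamp to Q15 minimum */
    return (int16_t)x;
}

/* y = saturate(acc + a*b), with acc, a, b, and y all in Q15 */
static int16_t q15_mac(int16_t acc, int16_t a, int16_t b)
{
    int32_t prod = ((int32_t)a * (int32_t)b) >> 15;   /* Q30 -> Q15 */
    return q15_sat((int32_t)acc + prod);
}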

See associated timeline.
