Which Compression Format Provides The Most Bang For The Buck?

Sept. 8, 2009

We all want value for our money, especially during the current economic downturn. But when it comes to compression algorithms, what’s the best way to determine value? Let’s take a look at audio compression algorithms and figure out which ones provide the best “bang for the buck.”

Audio compression algorithms take advantage of psychoacoustics to remove bits that most human listeners won’t notice anyway. The more clever an algorithm is at identifying and removing those unnoticeable bits, the higher the resulting compression ratio. We expect cleverness to be roughly proportional to MIPS, so a higher compression ratio often requires more CPU cycles. The table quantifies both characteristics by listing the effort required (MIPS) against the result achieved (compression ratio).

The right-most column in the table lists a “bang for the buck” audio compression value metric, using MIPS as an indirect indicator of price. It divides the increase in compression ratio by the increase in MIPS. Interestingly, this metric is above 1.0 for just one of the compression algorithms listed. Since 1994, when MPEG-2 was standardized, audio compression MIPS have increased by a factor of four, from 10 MIPS to 40 MIPS, while the compression ratio has only increased threefold, from 4:1 to 12:1. What’s going on here?
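
To make the arithmetic explicit, here is a minimal Python sketch of that value metric; the function name is only illustrative, and the 10-MIPS/4:1 and 40-MIPS/12:1 figures are the ones quoted above.

def value_metric(ratio_old, ratio_new, mips_old, mips_new):
    # "Bang for the buck": compression-ratio gain per unit of MIPS gain.
    return (ratio_new / ratio_old) / (mips_new / mips_old)

# Figures cited above: 1994 MPEG-2 audio (10 MIPS, 4:1) versus a modern codec (40 MIPS, 12:1).
print(value_metric(4, 12, 10, 40))   # 3x ratio gain / 4x MIPS gain = 0.75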

Compression algorithms are subject to a law of diminishing returns. For example, the venerable Lempel-Ziv (LZ) text compression algorithm from 1978 achieves 2:1 compression on many computer files. Despite 30 years of effort by the world’s leading text compression researchers, today’s state-of-the-art text compression algorithms, such as Burrows-Wheeler (BW), are only about 10% more effective than LZ.
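
To get a feel for that 2:1 figure, a quick experiment with Python’s zlib module (whose DEFLATE algorithm is an LZ77 derivative) is enough; the file name below is a placeholder, and the exact ratio depends on the input text.

import zlib

text = open("sample.txt", "rb").read()   # any ordinary English text file
packed = zlib.compress(text, 9)          # DEFLATE: LZ77 matching plus Huffman coding
print(len(text) / len(packed))           # roughly 2:1 to 3:1 on typical prose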

The science of information theory provides a way to measure the entropy, or inherent information content, of a stream of symbols such as ASCII characters or audio samples. Across many text files, LZ was already within 5% to 10% of the theoretical entropy limits of text compression. Subsequent text compression algorithms, like BW, could thus only achieve up to 10% improvement over LZ.
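
A rough feel for that limit comes from estimating entropy directly. The sketch below computes the zeroth-order Shannon entropy of a byte stream in bits per byte; it ignores inter-symbol context, so it only bounds what a memoryless coder could do, but it illustrates the idea.

import math
from collections import Counter

def shannon_entropy(data):
    # Zeroth-order entropy in bits per byte; ignores correlations between symbols.
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

# 8 bits per byte divided by the entropy is an upper bound on the compression
# ratio achievable by an order-0 coder; real text allows more compression
# because of context, which is exactly what LZ and BW exploit.
data = open("sample.txt", "rb").read()   # placeholder file name
print(8 / shannon_entropy(data))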

A similar phenomenon is at work with audio compression algorithms. MP3 took off around 1998, when Intel CPUs finally got fast enough to run a real-time MP3 decoder. Thanks to Moore’s Law, audio compression MIPS steadily became cheaper. Since 1998, audio compression algorithms have used these lower-cost MIPS to increase the audio compression ratio from 4:1 (192 kbits/s) to 12:1 (64 kbits/s). But the increases will soon end, because many audio researchers believe that transparent audio coding isn’t possible below 48 kbits/s (16:1 compression), regardless of the MIPS applied. The law of diminishing returns strikes again!
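
Those ratios are consistent with an assumed uncompressed reference of 768 kbits/s per channel (16-bit samples at 48 ksamples/s); a few lines of arithmetic reproduce the figures quoted above.

SOURCE_KBPS = 16 * 48          # 768 kbits/s: 16 bits at 48 kHz, one channel (assumed reference)

for coded_kbps in (192, 64, 48):
    print(coded_kbps, "kbits/s ->", round(SOURCE_KBPS / coded_kbps), ": 1")
# 192 -> 4:1, 64 -> 12:1, 48 -> 16:1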

Other Compression Applications

“Bang for the buck” for compression algorithms that target industrial, scientific, and medical (ISM) applications is calculated differently than it is for audio compression algorithms, because the care-abouts of ISM designers and end users are different. Rather than spending additional MIPS, FPGA lookup tables (LUTs), or ASIC gates to squeeze more compression out of a single audio channel, ISM applications use compression to relieve multi-channel and high-sample-rate bottlenecks.

For instance, radar engineers eliminate analog front-end components by sampling radar signals at Gsamples/s, with all subsequent processing done digitally in FPGAs. Radar compression would apply extra FPGA LUTs or ASIC gates not to achieve more compression, but to compress signals at ever-faster sample rates.

Medical ultrasound transducers are evolving from 128-element linear arrays to 2000-element matrices. In ultrasound, additional FPGA LUTs or ASIC gates are better spent hosting multiple compression cores to accommodate the roughly 16x increase in channel count. For ISM applications, “bang for the buck” is therefore measured in computation resources (LUTs or gates) per megahertz of sample rate. This metric makes it easy to compare competing ISM compression solutions as both sample rates and channel counts rise.
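
For example, a hypothetical comparison of two compression cores (the numbers below are purely illustrative and not taken from any product) shows how the metric rewards throughput rather than absolute size:

def luts_per_mhz(luts, sample_rate_mhz):
    # ISM value metric: FPGA resources per MHz of sample rate (lower is better).
    return luts / sample_rate_mhz

# Hypothetical cores, for illustration only.
core_a = luts_per_mhz(4000, 200)    # 20.0 LUTs per MHz
core_b = luts_per_mhz(9000, 800)    # 11.25 LUTs per MHz
print(core_a, core_b)               # core_b is the better value despite using more LUTs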

Multicore processors are becoming I/O-limited rather than compute-limited. Similarly, as analog-to-digital converter (ADC) sample rates and channel counts increase year after year, the traditional ways of attaching data converters to FPGAs and ASICs are morphing due to I/O challenges. When sample rates, bit resolutions, and channel counts were low, parallel buses were adequate. As sample rates rose, low-voltage differential signaling (LVDS) replaced parallel buses. As system channel counts rose, vendors replaced one-channel and two-channel converters with quads and octals.

These days, even LVDS pin counts are getting out of hand, as illustrated by the 56 double-data rate (DDR) LVDS pins (at 1.25 GHz) on the AD9739 digital-to-analog converter (DAC) (one channel, 2.5 Gsamples/s, 14 bits/sample). Serializer-deserializer (SERDES) I/O is now appearing on devices such as the AD9239 ADC (four channels, 250 Msamples/s, 12 bits/sample), which uses four 4-Gbit/s transceivers.

Provisioning high-speed ADCs and DACs with SERDES interfaces lowers pin counts, at the cost of significantly higher board design and layout complexity. Of course, the FPGA receiving all that data has to provide multiple SERDES and LVDS pins as well, and FPGA SERDES I/O is expensive. However, integrating real-time compression into high-speed, high-channel-count DSP systems would reduce these I/O problems by two to four times regardless of interface. Requiring fewer SERDES transceivers and using smaller packages would also lower DSP system costs, complexity, and power consumption. Now that’s a real bang for the buck!
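
A back-of-the-envelope check, assuming 8b/10b encoding on each SERDES lane, shows how the AD9239’s four transceivers line up with its raw data rate, and how a 2:1 compression ratio would halve the lane count:

import math

# Payload from the AD9239 figures above: 4 channels x 250 Msamples/s x 12 bits.
payload_gbps = 4 * 250e6 * 12 / 1e9      # 12 Gbits/s of raw sample data
lane_gbps = 4.0                          # line rate of one SERDES transceiver
usable_gbps = lane_gbps * 8 / 10         # assuming 8b/10b encoding overhead

print(math.ceil(payload_gbps / usable_gbps))       # 4 lanes with no compression
print(math.ceil(payload_gbps / 2 / usable_gbps))   # 2 lanes at a 2:1 compression ratio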

About the Author

Al Wegener

Al Wegener is the CTO and founder of Samplify Systems, a fabless semiconductor startup in Santa Clara, Calif. He holds 17 patents and is named on additional Samplify patent applications. He earned a BSEE from Bucknell University and an MSCS from Stanford University.
