
New Techniques Help Compression Surpass Its Practical Limits

Sept. 14, 2012
Uncertainty Quantification (UQ) Opens New Vistas

Compression technology benefits our daily lives so thoroughly that it goes unnoticed. It’s been a long, bizarre trip since compression’s early days in the 1950s, when mathematicians first developed lossless encoding as a laboratory curiosity with few practical applications. Compression technologies continue to be integrated into new electronic designs, especially in high-volume consumer products like smart phones and tablets.

The demand for data is only going to grow, though, while current compression techniques approach their limits for managing large volumes of data. Designers will have to tap new innovations to prevent potential bottlenecks.

Compression’s Early Days: 1950-1995

In 1951, MIT graduate student David Huffman opted out of a final exam by accepting his professor’s alternative, a term paper assignment: prove which binary code was most efficient. From that paper, compression’s famous Huffman code was born, with token lengths determined by symbol likelihood.
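A minimal sketch of the idea in Python, assuming a plain string as input (the helper name huffman_code is mine, not something from the article): symbols are merged from least to most frequent, so common symbols end up with short codes and rare symbols with long ones.

```
import heapq
from collections import Counter

def huffman_code(text):
    """Build a Huffman code: frequent symbols get short tokens."""
    freq = Counter(text)
    if len(freq) == 1:                      # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # Each heap entry: (weight, tiebreaker, {symbol: code_so_far})
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        # Prepend "0" to codes in one subtree and "1" to the other's
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_code("abracadabra")
print(codes)   # e.g. {'a': '0', 'c': '100', 'd': '101', 'b': '110', 'r': '111'}
```

Running it on “abracadabra” assigns the frequent “a” a one-bit code and the rare letters three-bit codes.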

Thanks to Huffman’s term paper, Huffman codes were easy to construct and provably optimal. But they also came with drawbacks in algorithm complexity, adaptability to changing symbol statistics, flexibility, and processing speed, only a few of which remain in today’s compression techniques.

In the 1970s, AT&T Bell Labs developed what was arguably the first commercial compression success story. Adaptive differential pulse-code modulation (ADPCM) compressed telephone conversations to half their original bit rate, from 64 kbits/s to 32 kbits/s. AT&T reaped the financial benefits because ADPCM allowed the same hardware to carry twice as many phone calls.

While 10:1 audio compression ratios and 50:1 video compression ratios have recalibrated our expectations, modest ratios of 2:1 to 8:1 still deliver economically impressive savings. Imagine if compression made double-data-rate (DDR) memory effectively four times faster, or L3 caches effectively two times larger.

The Lempel-Ziv (LZ) text compression algorithm was born in 1977, using a dictionary of recent character strings to downsize computer files by a factor of two. While IBM owned the original LZ patents, Unisys made more licensing revenue from its 1984 LZW (Terry Welch is the “W” in LZW) patent because LZW was integrated into an image compression format called GIF, which was widely deployed in early Web pages.
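The same dictionary-plus-Huffman recipe survives today in DEFLATE, which Python exposes through its standard zlib module. A quick sketch, with the caveat that the ratio printed depends entirely on how repetitive the input text is:

```
import zlib

# Repetitive text compresses well with an LZ-style dictionary coder.
text = ("Compression technologies continue to be integrated into new "
        "electronic designs, especially in high-volume consumer products. ") * 50
raw = text.encode("utf-8")
packed = zlib.compress(raw, 9)  # DEFLATE: LZ77 dictionary matching + Huffman coding
print(f"{len(raw)} bytes -> {len(packed)} bytes "
      f"({len(raw) / len(packed):.1f}:1)")
```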

During the 1990s, cable TV, DVD players, digital satellite broadcast receivers, digital video cameras, and eventually MP3 players rapidly adopted the MPEG-1 (1992) and MPEG-2 (1996) video and audio compression standards. During the first half of the 1990s, speech and audio compression algorithms were implemented in software, since sample rates were below 50 ksamples/s. Image and video compression still required an ASIC.

Compression’s Golden Years: 1995-2012

After the MP3 audio compression standard languished for almost six years, x86 CPUs became fast enough by 1995 to decompress MP3 in real time at 44.1 ksamples/s. By 2000, audio compression had become an integral part of the burgeoning Internet, causing legal problems for music-sharing Web sites like MP3.com.

In 2001, Apple introduced the first iPod music player and, more importantly, in 2003 the iTunes Music Store, which secured the guarded cooperation of music publishers. Apple revolutionized the music industry with $0.99 single downloads and a cool, integrated music shopping and listening experience. By the early 2000s, mobile phones had become ubiquitous, enabled by 12-kbit/s speech compression that allowed six times more customers to share the mobile spectrum.

Apple again reinvented consumer electronics in 2007 with the introduction of the iPhone, offering telephony, e-mail, Web access, a camera, and music playback in a single mobile device. Subsequent iPhones and smart phones included multiple hardware and software compression algorithms that supported consumers’ insatiable desire for digital media, especially video. Today, video sites like YouTube, Netflix, and Hulu account for a significant portion of Web traffic.

These new products wouldn’t exist without 10:1 audio and 50:1 video compression technology, and building them has trained a new generation of compression experts in electronic design. Modern smart phones perform speech and audio encoding and decoding in software, but image sensors require hardware support for compression of still images and video. Companies like Apple (A5), Samsung (Exynos), Qualcomm (Snapdragon), and Texas Instruments (OMAP) provide driver-level software that makes it easier than ever to access compression hardware accelerators.

The Sunset Of Media-Specific Compression: 2013

Unfortunately, media-specific compression is entering its sunset years. Compression ratios for text, speech, audio, and video have reached, or are about to reach, practical entropy limits. The table compares the leading compression algorithm for each medium with its predecessor.

In the right-hand column, a compression figure of merit compares each leading algorithm with its predecessor by dividing the compression-ratio improvement by the complexity increase required to achieve it. These ratios are less than 1.0, meaning that doubling a compression algorithm’s complexity no longer doubles the compression ratio. The table implies that electronic designers shouldn’t expect significant additional compression gains for text, speech, audio, images, or video.
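As a worked illustration of that figure of merit (the numbers below are hypothetical, not taken from the article’s table):

```
def figure_of_merit(ratio_new, ratio_old, complexity_new, complexity_old):
    """Compression-ratio improvement divided by the complexity increase."""
    return (ratio_new / ratio_old) / (complexity_new / complexity_old)

# Hypothetical successor codec: 1.5x better compression,
# but 2x the operations per sample, so the merit falls below 1.0.
print(figure_of_merit(ratio_new=75, ratio_old=50,
                      complexity_new=200, complexity_old=100))  # 0.75
```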

Compression’s Future in Electronic Design

Although media-specific compression is reaching its limits, bandwidth and storage bottlenecks continue to grow. Such bottlenecks could in theory be reduced by a “universal compressor,” if it could operate fast enough on the necessary data. For example, memory and network bandwidth per core has decreased in the last four years, because pin counts and speeds have not kept pace with the number of cores per socket.

In high-performance computing (HPC), several applications utilize less than 10% of the peak MIPS because they can’t get enough numerical operands to the HPC cores. Similarly, even at 2 Tbytes of memory per server, Web servers running today’s leading applications are often capacity-limited. Could compression reduce any of these new computing bottlenecks?

Speech, audio, and video compression algorithms can use lossy compression because consumers are satisfied with “good enough” quality, rather than “perfect” quality. Lossy numerical compression combines “good enough for intended use” results with a new technique called uncertainty quantification (UQ) to reduce computer system bottlenecks for numerical data — integer and floating-point numbers.

Just as audio and video signals have an underlying noise level, most numerical data are noisy and thus inaccurate. Could numerical compression be used at a setting where its distortion is below the noise level of the numerical signal? A related question: how would users know the noise level of their signal, and thus determine when lossy compression would be “good enough?”
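A minimal NumPy sketch of that test, assuming the data’s noise level is known or can be estimated; the uniform quantizer below is just a stand-in for a real lossy numerical encoder, not an implementation of any particular product:

```
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "measurement": a smooth signal plus instrument noise.
t = np.linspace(0.0, 1.0, 100_000)
signal = np.sin(2 * np.pi * 5 * t)
noise = 1e-3 * rng.standard_normal(t.size)
data = signal + noise

# Stand-in lossy encoder: uniform quantization with a chosen step size.
step = 1e-4
decoded = np.round(data / step) * step

noise_power = np.mean(noise**2)
distortion_power = np.mean((decoded - data)**2)
margin_db = 10 * np.log10(noise_power / distortion_power)
print(f"Quantization distortion sits {margin_db:.1f} dB below the noise floor")
```

As long as that margin stays comfortably positive, the lossy step discards nothing that the measurement noise had not already obscured.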

UQ estimates uncertainty in numerical measurements. In numerical computations, UQ measures and carries forward the numerical uncertainty of input and intermediate operands, enabling users to quantify the accuracy of their results. Rather than saying “we used 32-bit floats for all operations,” researchers using UQ techniques can quantify the accuracy of their results, such as “our results are accurate to ±2.4%.”
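A toy example of carrying uncertainty forward through a computation, using the textbook first-order rule that independent relative errors add in quadrature for a product; this illustrates the idea only and is not the APAX Profiler’s method:

```
import math

def product_uncertainty(relative_errors):
    """First-order relative uncertainty of a product of independent
    quantities: relative errors add in quadrature."""
    return math.sqrt(sum(e * e for e in relative_errors))

# Hypothetical operands: a sensor reading good to 1%, a calibration
# constant good to 0.5%, and 32-bit float rounding (about 6e-8).
rel = product_uncertainty([0.01, 0.005, 6e-8])
print(f"Result accurate to roughly ±{100 * rel:.1f}%")  # ~±1.1%
```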

Samplify’s APAX Profiler software tool measures UQ for both integer and floating-point data, allowing HPC scientists to know (perhaps for the first time) the uncertainty and accuracy of their numerical data. The figure illustrates the APAX Profiler output for an HPC climate data variable called cloud fraction (clt.dat).

The APAX Profiler software tool illustrates that for an HPC climate variable, APAX compression at 6:1 preserves the accuracy of numerical computations.

At an APAX encoding rate of 5.93:1, this NetCDF variable has a minuscule uncertainty of 9 × 10⁻⁷%. The spectral plot (lower left-hand graph in the figure) shows that at 6:1 compression, the distortion introduced by APAX encoding is nearly 23 dB below the noise floor of the cloud fraction variable. At that level, the added distortion is effectively unnoticeable, sitting well below the uncertainty already present in the original signal. In general, APAX encoding reduces computing’s bandwidth and storage bottlenecks by up to 6:1, and users control the rate-distortion tradeoff.

While media-specific compression is approaching its limits, electronic designers will have innovative compression algorithms that reduce today’s computational bandwidth and storage bottlenecks for numerical data.
