MIT: The Faster-than-Fast Fourier Transform

Feb. 21, 2012

At the Association for Computing Machinery's Symposium on Discrete Algorithms (SODA), a group of MIT researchers will present a new algorithm that, in a large range of practically important cases, improves on the fast Fourier transform. Under some circumstances, the improvement can be dramatic - a tenfold increase in speed. The new algorithm could be particularly useful for image compression, enabling, say, smartphones to wirelessly transmit large video files without draining their batteries or consuming their monthly bandwidth allotments.

Like the FFT, the new algorithm works on digital signals. A digital signal is just a series of numbers - discrete samples of an analog signal, such as the sound of a musical instrument. The FFT takes a digital signal containing a certain number of samples and expresses it as the weighted sum of an equivalent number of frequencies.
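To make that concrete, here is a minimal NumPy sketch (illustrative only, not the researchers' code): the FFT of an n-sample signal produces n complex weights, one per frequency, and summing those weighted frequencies back together recovers the signal exactly.

```python
import numpy as np

n = 64
t = np.arange(n)
signal = np.sin(2 * np.pi * 5 * t / n)     # a pure tone at frequency bin 5

weights = np.fft.fft(signal)               # n complex weights, one per frequency
reconstructed = np.fft.ifft(weights).real  # the weighted sum of frequencies
assert np.allclose(signal, reconstructed)  # the round trip is exact
```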

"Weighted" means that some of those frequencies count more towards the total than others. Indeed, many of the frequencies may have such low weights that they can be safely disregarded. That's why the Fourier transform is useful for compression. An eight-by-eight block of pixels can be thought of us as a 64-sample signal, and thus as the sum of 64 different frequencies. But as the researchers point out in their new paper, empirical studies show that on average, 57 of those frequencies can be discarded with minimal loss of image quality.

Signals whose Fourier transforms include a relatively small number of heavily weighted frequencies are called "sparse." The new algorithm determines the weights of a signal's most heavily weighted frequencies; the sparser the signal, the greater the speedup the algorithm provides. Indeed, if the signal is sparse enough, the algorithm can simply sample it randomly rather than reading it in its entirety.
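What "sparse" means is easy to see numerically. This sketch (again illustrative, with arbitrarily chosen frequencies) builds a 1,024-sample signal out of just three tones; its full Fourier transform has only a handful of heavy weights, which is exactly the structure the new algorithm exploits.

```python
import numpy as np

n = 1024
heavy = [17, 200, 730]                    # three arbitrary frequency bins
t = np.arange(n)
signal = sum(np.cos(2 * np.pi * f * t / n) for f in heavy)

spectrum = np.abs(np.fft.fft(signal))
top = np.sort(np.argsort(spectrum)[-6:])  # each real tone shows up as a
print(top)                                # conjugate pair: bins f and n - f
# -> [  17  200  294  730  824 1007]; the other 1,018 weights are ~0
```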

"In nature, most of the normal signals are sparse," says Dina Katabi, one of the new algorithm's developers. Consider, for instance, a recording of a piece of chamber music: The composite signal consists of only a few instruments each playing only one note at a time. A recording, on the other hand, of all possible instruments each playing all possible notes at once wouldn't be sparse - but neither would it be a signal that anyone cares about.

The new algorithm, which Katabi and Piotr Indyk, both professors in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), developed together with their students Eric Price and Haitham Hassanieh, relies on two key ideas. The first is to divide a signal into narrower slices of bandwidth, sized so that a slice will generally contain only one frequency with a heavy weight.
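A drastically simplified way to see the bucketing idea: subsampling a signal in time folds its n frequencies into a much smaller number of buckets, and if the signal is sparse, most buckets end up holding at most one heavy frequency. The actual algorithm isolates slices of bandwidth with carefully designed filters and randomly permutes the spectrum first; the bare aliasing trick below is only a stand-in for that step.

```python
import numpy as np

n, B = 1024, 32                  # fold 1,024 frequencies into 32 buckets
p = n // B
t = np.arange(n)
signal = np.exp(2j * np.pi * 200 * t / n)   # one heavy frequency, at bin 200

# A B-point FFT of every p-th sample aliases frequency k into bucket k mod B,
# touching only B of the n samples.
buckets = np.fft.fft(signal[::p])
print(np.argmax(np.abs(buckets)), 200 % B)  # both print 8: the tone's bucket
```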
