
You Feel the Need, the Need for Speed

July 16, 2021
“Make it go faster” seems to be the mantra overriding much of the electronics industry these days. Lou Frenzel presents 11 keys to achieving that goal.

What you'll learn:

  • 11 essentials to focus on when trying to increase data-rate speeds.
  • Dealing with bandwidth constraints.
  • Modulation and MIMO methods.

What’s the one dominating factor driving virtually all new electronic product design today? To some it might be low power consumption, but it’s not hard to identify higher speeds and faster data rates as the main objective guiding most new ICs, equipment, systems, and technology development.

Year after year, all electronic devices just keep getting faster. Memory speed, processor speed, Wi-Fi, and other communications data rates continue to rise. Tom Cruise’s character Maverick in the popular 1980s movie Top Gun stated in this blog’s headline what we all feel about electronics today.

We like higher speeds because we hate to wait. In our instant gratification society, we want split-second video downloads, lower-latency everything, and no waiting for whatever the application. The whole industry has responded with a continuous stream of faster products. And that speed quest continues.

How do we make data go faster? Just out of curiosity as an intellectual exercise, I’ve summarized the factors that I could think of to achieve higher communications data rates, for whatever it’s worth:

1. Bandwidth and the communications medium. The available wired and wireless media dictate how fast you can go. For wire data transfers, we use varying types of media like twisted-pair, coax, and fiber-optic cable. All of these have upper frequency limits that determine the bandwidth available to carry fast data.

For wireless, the carrier frequency ultimately determines the maximum possible data rate. Again, the available bandwidth sets the upper limit. As data rates go up, more bandwidth is needed, which means pushing into higher frequencies to make that happen. However, we’re constrained by regulations imposed by the FCC or other agencies. Furthermore, electronic components must be available to implement the required higher frequencies. Bottom line, bandwidth is the main determining factor in achieving high speed.

2. Constrained by physics. A simplified, noise-free form of the Shannon-Hartley law, often credited to Nyquist and Hartley, states the relationship between bandwidth and data rate for two-level signaling:

C = 2B

Here C is the channel capacity or serial data rate in bits per second and B is the bandwidth in hertz. It says that with a bandwidth of 3,000 Hz, you can have a maximum data rate of 2(3,000) = 6,000 bits per second. Data rate C is determined by the bit time (t) in a serial data stream or C = 1/t. Of course, this is all theoretical and assumes a perfect medium and no noise. If noise is present, the data rate will be even less.
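
For a feel for the numbers, here's a minimal Python sketch of that calculation; the 3,000-Hz channel is just the example above.

    def nyquist_capacity(bandwidth_hz: float) -> float:
        """Noiseless two-level channel: C = 2B bits per second."""
        return 2 * bandwidth_hz

    B = 3_000                   # bandwidth in Hz (the example above)
    C = nyquist_capacity(B)     # 6,000 b/s
    t = 1 / C                   # bit time in seconds, roughly 167 microseconds
    print(C, t)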

3. The effect of noise. Here’s the complete Shannon-Hartley law:

C = B log2 (1 + S/N)

The signal (S) and noise (N) values are given in power. The signal-to-noise ratio (SNR) clearly affects the data rate. The greater the noise level, the lower the maximum data rate that can occur for a given bandwidth.
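
Plugging a few signal-to-noise ratios into the full formula shows how noise caps the rate for the same 3,000-Hz example channel (a minimal sketch; the SNR values are arbitrary illustrations):

    from math import log2

    def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
        """Shannon-Hartley limit: C = B * log2(1 + S/N), with S/N as a power ratio."""
        return bandwidth_hz * log2(1 + snr_linear)

    B = 3_000                                # Hz
    for snr_db in (10, 20, 30):              # arbitrary example SNRs
        snr = 10 ** (snr_db / 10)            # convert dB to a linear power ratio
        print(snr_db, "dB ->", round(shannon_capacity(B, snr)), "b/s")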

4. Boosting data rate in a fixed bandwidth. One popular way to get more bits per second through a narrow channel is to use a type of coding that translates bits into symbols. A symbol is a signal using multiple voltage levels, multiple phase shifts, multiple frequency shifts, or a combination of similar schemes to represent two or more bits within the same symbol interval. When such schemes are applied, higher data rates can be achieved in the narrower bandwidth:

C = 2B log2 N

where N is the number of symbol levels used (a quick calculation follows the examples below). Examples include:

  • 8PAM uses eight voltage levels, with each level representing three bits—000 through 111—for a data rate that’s 3X the symbol rate.
  • 4FSK utilizes four different frequencies, each representing two bits, so every transmitted symbol carries two bits and doubles the data rate.
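
A minimal sketch of the C = 2B log2 N calculation, reusing the 3,000-Hz example channel from above:

    from math import log2

    def multilevel_capacity(bandwidth_hz: float, levels: int) -> float:
        """Noiseless capacity with N-level symbols: C = 2B * log2(N)."""
        return 2 * bandwidth_hz * log2(levels)

    B = 3_000                               # Hz
    print(multilevel_capacity(B, 2))        #  6,000 b/s: binary signaling
    print(multilevel_capacity(B, 8))        # 18,000 b/s: 8PAM, 3 bits per symbol
    print(multilevel_capacity(B, 4))        # 12,000 b/s: 4FSK, 2 bits per symbol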

5. Signal-to-noise ratio (SNR). The viability of a data link, or its efficiency, is determined by the bit error rate (BER): the ratio of bit errors to total bits transmitted. For a good channel, the BER is low, typically in the range of 10^-5 to 10^-11. As the data rate increases, so does the number of bit errors. That's because noise distorts the bit voltage transitions, making the bits harder to recognize. Reducing the data rate then has the effect of lowering the BER: bit times become longer, so each bit carries more energy relative to the noise.
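
To see that effect, here's a minimal Monte Carlo sketch assuming simple two-level (+/-1) signaling in additive Gaussian noise. Raising the SNR, which is effectively what a longer bit time does for a fixed noise level, drives the error rate down:

    import random

    def simulated_ber(snr_db: float, n_bits: int = 200_000) -> float:
        """Estimate BER for +/-1 signaling in additive Gaussian noise."""
        amplitude = 10 ** (snr_db / 20)           # signal amplitude for unit-variance noise
        errors = 0
        for _ in range(n_bits):
            bit = random.choice((-1, 1))
            received = bit * amplitude + random.gauss(0, 1)
            if (received > 0) != (bit > 0):       # decide by sign; a disagreement is a bit error
                errors += 1
        return errors / n_bits                    # errors divided by total bits sent

    for snr_db in (0, 6, 12):                     # arbitrary example SNRs
        print(snr_db, "dB ->", simulated_ber(snr_db))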

6. Forward error correction (FEC). Several methods of detecting and correcting errors on-the-fly have been developed. Adding extra bits to the data stream enables bit errors to be identified and corrected. These methods are implemented in hardware, software, or a combination of the two. The end effect is as if transmit power had been increased, thereby permitting higher data rates.

This outcome is referred to as coding gain. For instance, using FEC can produce results equivalent to increasing signal power by 3 dB, and a 3-dB increase is equivalent to doubling transmit power. The resulting lower BER allows for an increased data rate. The downside is that the extra bits add overhead, which reduces net throughput.
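
The text above doesn't single out a particular code, but a classic textbook illustration is the Hamming(7,4) code: three parity bits protect four data bits, and any single flipped bit in a 7-bit block can be corrected. A minimal sketch:

    def hamming74_encode(d):
        """Encode 4 data bits as a 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4
        p2 = d1 ^ d3 ^ d4
        p3 = d2 ^ d3 ^ d4
        return [p1, p2, d1, p3, d2, d3, d4]

    def hamming74_decode(c):
        """Correct any single-bit error, then return the 4 data bits."""
        c = list(c)
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]           # parity check over positions 1, 3, 5, 7
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]           # parity check over positions 2, 3, 6, 7
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]           # parity check over positions 4, 5, 6, 7
        syndrome = s1 + 2 * s2 + 4 * s3          # 0 means clean; otherwise the errored position
        if syndrome:
            c[syndrome - 1] ^= 1                 # flip the bad bit back
        return [c[2], c[4], c[5], c[6]]

    word = hamming74_encode([1, 0, 1, 1])
    word[5] ^= 1                                 # inject a single bit error "in transit"
    print(hamming74_decode(word))                # recovers [1, 0, 1, 1]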

7. Spectral efficiency. The way we measure and express how much speed we can get through a given bandwidth is spectral efficiency, given in bits per second per hertz (b/s/Hz). If we measure 780 Mb/s through a 40-MHz channel, the spectral efficiency is 780/40 = 19.5 b/s/Hz.

8. Modulation. Modulation methods also can produce higher speeds. Examples you’ve heard of include QAM and OFDM. Quadrature amplitude modulation (QAM) employs a combination of amplitude and phase shifts to produce symbols that increase data rates. 64QAM uses 64 distinct symbols, each representing a 6-bit code, so every transmitted symbol carries six bits and the bit rate is 6X the symbol rate.
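
As a rough illustration of how one symbol carries several bits, here's a minimal 64QAM mapper. The bit-to-level assignment is simplified for clarity; real standards specify Gray-coded constellations.

    # 64QAM: 6 bits select one of 8 x 8 = 64 combined amplitude/phase (I/Q) points.
    LEVELS = [-7, -5, -3, -1, 1, 3, 5, 7]               # 8 amplitude levels per axis

    def qam64_map(bits):
        """Map a group of 6 bits to one complex constellation point."""
        assert len(bits) == 6
        i_index = bits[0] * 4 + bits[1] * 2 + bits[2]   # first 3 bits pick the in-phase level
        q_index = bits[3] * 4 + bits[4] * 2 + bits[5]   # last 3 bits pick the quadrature level
        return complex(LEVELS[i_index], LEVELS[q_index])

    print(qam64_map([1, 0, 1, 0, 1, 1]))                # one transmitted symbol, six bits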

Orthogonal frequency-division multiplexing (OFDM) uses multiple subcarriers in the assigned bandwidth. The data to be transmitted is divided into multiple lower-speed streams, each modulating one of the subcarriers. Using hundreds or thousands of subcarriers multiplies these lower rates into an extremely high aggregate data rate.
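
Here's a minimal NumPy sketch of the core idea, assuming QPSK (two bits) on each of 64 subcarriers; as in practical OFDM transmitters, an inverse FFT combines all of the subcarriers into one time-domain signal.

    import numpy as np

    def ofdm_symbol(subcarrier_data: np.ndarray) -> np.ndarray:
        """Each complex value modulates its own subcarrier; the IFFT sums them
        into a single time-domain OFDM symbol."""
        return np.fft.ifft(subcarrier_data)

    rng = np.random.default_rng(0)
    n_subcarriers = 64                                        # a small, Wi-Fi-like FFT size
    bits = rng.integers(0, 2, size=2 * n_subcarriers)
    qpsk = (1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])   # 2 bits -> one QPSK value per subcarrier
    time_signal = ofdm_symbol(qpsk)
    print(time_signal.shape)                                  # 64 time samples carrying 128 bits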

9. Multiple input, multiple output (MIMO). MIMO takes advantage of multiple transmitters (TX), receivers (RX), and antennas. The data to be transmitted is divided into multiple lower-rate streams, and each is transmitted over the same bandwidth to multiple receivers. This process multiplies the data rate by the number of antennas and TX/RX paths. A typical arrangement is 4×4, or four TX and four RX.

At higher frequencies, the smaller antennas make it practical to pack in dozens or hundreds of them, creating multiple data paths that boost the data rate. Add in agile beamforming with phased arrays in the GHz bands, which boosts effective power levels, and you can increase data rates even further.
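
Here's a minimal sketch of 4×4 spatial multiplexing with zero-forcing recovery, assuming a noiseless channel that the receiver knows perfectly; real systems also contend with noise and channel estimation.

    import numpy as np

    rng = np.random.default_rng(1)
    n_streams = 4                                         # 4x4 MIMO: four TX and four RX antennas

    # Random channel matrix: how each RX antenna hears each TX antenna.
    H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

    tx = np.sign(rng.normal(size=n_streams)) + 0j         # four independent +/-1 streams, same time and frequency
    rx = H @ tx                                           # every RX antenna receives a mix of all four streams

    recovered = np.linalg.pinv(H) @ rx                    # zero-forcing: undo the mixing with the channel's pseudo-inverse
    print(np.round(recovered.real), tx.real)              # all four streams recovered -> 4x the single-antenna rate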

10. Compression. I started to add data compression to this list, but that’s not so common in data transmission. Data compression is used mainly to reduce the storage requirement for massive data like video. However, in certain types of communications, shorter messages translate into higher data throughput.
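
As a quick illustration of that last point, a hypothetical repetitive payload shrinks considerably under ordinary zlib compression, so it spends less time on the channel:

    import zlib

    message = b"temperature=21.5;humidity=40;" * 20   # hypothetical repetitive telemetry payload
    packed = zlib.compress(message)
    print(len(message), "->", len(packed), "bytes")    # far fewer bytes to transmit per message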

11. Combinations. In practice, several of the methods described above are combined to achieve significant data rates. Newer versions of Wi-Fi and 5G cellular use most of the above techniques to deliver data rates above 1 Gb/s.

What I wonder is, what new techniques will we see in the future to create even higher speeds? Have we nearly reached the limit of technology? Or can we expect to see terabit speeds soon?
