Today, wireless communication relies on numerous independent devices, each employing its own protocols and operating within a fixed segment of the frequency spectrum. Because these devices can't communicate with each other, they collectively make highly inefficient use of the spectrum. They also can interfere with, or even "jam," each other's transmission and reception, especially when a high-power transmission occurs simultaneously with highly sensitive reception.
It's not unusual to see people with more than one cell phone, a Wi-Fi card, a notebook with a NIC, a Bluetooth-enabled PDA, and a GPS device in their car. Eventually, though, users will want to consolidate and coordinate all of these communication appliances into one intelligent device.
Adaptive radio consolidates these functions into one device that communicates across all of these frequency ranges with the appropriate protocol. When communication is initiated, the adaptive radio senses its environment and, for example, chooses the best available frequency. During transmission and reception, it might change protocols and frequencies several times to ensure a reliable, high-speed connection. In the fullest implementation of this vision, billing for these services would be aggregated onto one monthly invoice, and customers would no longer pay attention to which specific protocol or technology was handling their wireless communications.
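As a rough illustration of that behavior, the following Python sketch picks the best-scoring channel at session start and re-evaluates it during the session. The channel names and the random quality metric are stand-ins for real spectrum sensing, not any actual implementation.

```python
import random

# Hypothetical candidate bands; a real radio would discover these.
CHANNELS = ["2.4-GHz ISM", "5.8-GHz ISM", "900-MHz ISM"]

def sense_quality(channel):
    """Stand-in for real spectrum sensing; returns an SNR-like score."""
    return random.uniform(0.0, 30.0)

def pick_best():
    return max(CHANNELS, key=sense_quality)

current = pick_best()
for _ in range(3):                      # re-evaluate during the session
    candidate = pick_best()
    if candidate != current:
        print(f"switching from {current} to {candidate}")
        current = candidate
```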
This vision isn't nearly as far off as it might seem. Intel Research has been developing the hardware and software building blocks necessary for implementing adaptive communications. The goal is to create inexpensive silicon components and software modules that vendors could use to assemble devices with adaptable intelligence. Many innovative technologies have been developed, yet several obstacles remain.
Spectrum Policy Issues

One of the biggest hurdles facing adaptive radio is a policy issue rather than a technology challenge: there's insufficient allocated radio spectrum for large-scale, high-speed, cross-technology wireless communications. A significant part of the problem stems from the longstanding policy of allocating fixed bands of spectrum to dedicated forms of communication. For instance, TV channels are assigned fixed frequencies that can be used only by the assignee. When that assignee is off the air, the channel bandwidth remains unused and unusable by other devices. Likewise, the frequencies for TV channels that have no station assigned to them in a particular area can't be used by other devices, or even by other television broadcasters in the same area. So, these channels remain fallow.

Also, there's unused spectrum between the channels. The rationale for these swaths of unused bandwidth derives from the high power levels transmitted by television broadcasters and the potential for interference between them. In some circumstances, TV tuners can't pick up broadcasts cleanly unless a wide spectrum separation exists between the transmitted channels. In the UHF band, this separation spans many TV channels, called "taboo channels."
Many parties are working with the U.S. Federal Communications Commission (FCC) to consider a new form of spectrum assignment that would move from the current fixed-bandwidth allocation to a rules-based assignment, under which multiple parties could access unused bandwidth without (in concept) causing interference to the primary user. Called an overlay approach, this solution creates an overlay of potentially shared secondary communication channels on the existing, assigned primary-user bandwidths. This sharing is negotiated on a voluntary basis, either before use or even potentially in real time. One example involves the possible sharing of the public-safety and TV bands. Here, low-power devices could operate at much smaller separation distances from TV broadcast stations without causing harmful interference.
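A minimal sketch of the overlay rule described above, assuming a simple on-air check and a separation-distance threshold; the function and its parameters are hypothetical illustrations, not FCC rules.

```python
def secondary_may_transmit(primary_on_air: bool, distance_km: float,
                           min_separation_km: float) -> bool:
    """A secondary device transmits only when the primary assignee is
    off the air or the device sits beyond a safe separation distance."""
    return (not primary_on_air) or distance_km >= min_separation_km

print(secondary_may_transmit(True, 2.0, 10.0))   # False: too close to an active station
print(secondary_may_transmit(False, 2.0, 10.0))  # True: the channel is fallow
```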
A second approach being researched is an underlay approach, where assigned bandwidth already in use serves double duty. In underlay, low-power transmissions occur on the same spectrum used by other devices, such as Wi-Fi and cordless phones. These transmissions work for short-distance communication within the home or a small office. Because they spread across an extremely wide bandwidth of several gigahertz, their power density in any given frequency band is correspondingly low. So when these transmissions overlap with other signals using the same spectrum, they appear merely as noise to those devices, which can use filtering technology to ignore the unwanted transmissions and avoid any conflicts.
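A back-of-the-envelope calculation shows why such an ultra-wideband underlay looks like noise; the 1-mW total power and 3-GHz bandwidth figures are illustrative assumptions, not regulatory limits.

```python
import math

tx_power_mw = 1.0     # assumed total transmit power: 1 mW (0 dBm)
bandwidth_hz = 3e9    # assumed spreading bandwidth: 3 GHz

density_mw_per_mhz = tx_power_mw / (bandwidth_hz / 1e6)
density_dbm_per_mhz = 10 * math.log10(density_mw_per_mhz)

print(f"{density_mw_per_mhz:.6f} mW/MHz = {density_dbm_per_mhz:.1f} dBm/MHz")
# About -34.8 dBm/MHz: far below a narrowband signal's in-band power,
# so a Wi-Fi or cordless-phone receiver sees only a slight noise-floor rise.
```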
The need for more bandwidth comes directly from the nature of adaptive communication. Consider what happens today when a cellular system receives more calls than it can handle: new calls are refused. Denying access conserves bandwidth. An adaptive radio, by contrast, would switch to another available channel, so the bandwidth would be consumed and the communication would occur.
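The contrast can be sketched in a few lines; the channel names and availability flags are hypothetical.

```python
def connect(cellular_free_slots: int, alternatives: dict) -> str:
    """A fixed system blocks when its band is full; an adaptive radio
    hunts for capacity on other channels instead."""
    if cellular_free_slots > 0:
        return "connected on cellular"
    for name, free in alternatives.items():
        if free:
            return f"connected on {name}"
    return "call blocked"  # the fixed system's only outcome when full

print(connect(0, {"Wi-Fi 5 GHz": False, "unused TV channel": True}))
# -> connected on unused TV channel
```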
Another reason for requiring extra bandwidth is the demand created by constantly available, reliable, high-speed communications. Of course, new applications will arise to use that bandwidth. For example, homes will soon stream video and audio wirelessly to entertainment centers and televisions located throughout the house. Wire connections between stereos and speakers will become a thing of the past, as will wires between computers and peripherals and between computing devices. Every one of these connections will consume spectrum bandwidth.
Adaptive Radio Requirements

Adaptive radios need to recognize the environment in which they are placed. They must be able to detect and recognize the available frequencies. Then, they have to inspect those channels for interference, noise, and other factors that affect signal quality. They also must recognize existing licensed networks and quickly connect and negotiate for the required throughput. Having detected this information, adaptive radios must be able to analyze the channels, intelligently choose the optimal channel, and determine the appropriate protocol and modulation scheme to use. Finally, the device should reconfigure itself to use that protocol and modulation on the appropriate frequency.
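The detect-analyze-choose-reconfigure sequence might be sketched as follows; the Channel fields, the scoring rule, and the modulation policy are illustrative assumptions, not the actual decision logic.

```python
from dataclasses import dataclass

@dataclass
class Channel:
    freq_mhz: float
    noise_db: float
    interference_db: float
    licensed: bool

def usable(ch: Channel) -> bool:
    # Respect licensed networks unless access has been negotiated.
    return not ch.licensed

def score(ch: Channel) -> float:
    # Lower noise and interference yield a higher score.
    return -(ch.noise_db + ch.interference_db)

def configure(ch: Channel) -> dict:
    # Placeholder policy: denser modulation on cleaner channels.
    modulation = "64-QAM" if score(ch) > -20 else "QPSK"
    return {"freq_mhz": ch.freq_mhz, "modulation": modulation}

scan = [Channel(2412, 8, 15, False),   # results of a hypothetical sweep
        Channel(5180, 5, 3, False),
        Channel(600, 2, 1, True)]
best = max(filter(usable, scan), key=score)
print(configure(best))  # {'freq_mhz': 5180, 'modulation': '64-QAM'}
```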
Once communication is established, the adaptive radio must monitor the channel continuously to detect any significant drop in quality or the availability of significantly better channels. If either situation is detected, the radio has to switch seamlessly to the new frequency and protocol. As a result, the radio must be opportunistic in its search and reconfigurable on the fly to exploit the benefits of better channels.
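One plausible way to implement "significant" in that monitoring loop is hysteresis, sketched below; the thresholds are assumptions for illustration.

```python
QUALITY_FLOOR = 10.0   # below this, the current link is considered failing
SWITCH_MARGIN = 5.0    # a candidate must beat the current link by this much

def should_switch(current_quality: float, best_alternative: float) -> bool:
    """Switch only on a badly degraded link or a clearly better channel."""
    return (current_quality < QUALITY_FLOOR or
            best_alternative > current_quality + SWITCH_MARGIN)

assert should_switch(8.0, 9.0)        # degraded link: switch
assert should_switch(15.0, 25.0)      # much better channel: switch
assert not should_switch(15.0, 17.0)  # marginal gain: stay, avoid thrashing
```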
The ability to seamlessly switch between frequencies and protocols also implies that the radio must be able to concurrently configure and connect to multiple radios. Consequently, adaptive radios are sometimes called cognitive, reconfigurable radios. Given that the challenge of multiple communications devices with distinct protocols has confronted users for several years, why is the solution of cognitive reconfigurable radios only emerging now?
Possible Approaches

Most networking devices today, such as NICs and Wi-Fi components, are built to address a distinct, narrowly defined requirement: communicate over one channel using one protocol as inexpensively as possible. Limited adaptive techniques are employed today (for example, determining the best data rate for communications), but there isn't any switching between protocols or networks. These inflexible devices typically use dedicated hardware.

Some flexibility can be gained by using a DSP and an ASIC. The DSP processes the signal and offloads some processing of protocol algorithms to the ASIC. Other designs use DSPs with closely coupled accelerators. DSPs, however, consume lots of power, and this type of radio would need multiple DSPs. All of the previously mentioned devices favor a simple antenna-PHY-MAC hardware architecture that's well established and well understood.
A soft solution, where a reconfigurable processor is employed to handle the necessary protocol, hasn't been viewed as a viable alternative. Such processors have lacked the raw processing power, or couldn't reconfigure themselves on the fly while handling an ongoing communications protocol. But as ICs have realized the benefits articulated by Moore's Law, the power of reconfigurable processors to execute complex algorithms in real time has increased dramatically. As a result, for the first time, adaptive radios can be pursued as a viable technology. These adaptive radios will boost spectrum efficiency by taking better advantage of unused spectrum and determining the optimum spectrum as a function of time, space, and frequency. The result will be better communications for users.
Reconfigurable Communication Core

We've been developing the silicon building blocks for implementing such inexpensive adaptive radios. Current research includes prototype radios that comprise a mesh of heterogeneous processing elements (PEs) with multiple, routable paths between them. Many researchers have chosen homogeneous arrays that permit lots of flexibility, incorporating ALUs and processors at each node. These arrays typically suffer because their ALUs and processors must decode and relay instructions every cycle. Due to their general-purpose nature and the extreme flexibility allowed, they require significant amounts of memory and many cycles to compute a given function. The cost in size and power can be several orders of magnitude greater than that of dedicated logic (Fig. 1). Each tick mark on the MOPS/mW (millions of operations per second per milliwatt) axis represents an order of magnitude. Dedicated logic handles only one protocol, whereas the coarse-grain and DSP/FPGA approaches can support multiple protocols. The difference, of course, is that the DSP/FPGA can handle anything. But with this virtually infinite flexibility comes a severe power penalty, as shown in Figure 1.
The reconfigurable communication core (RCC) is the result of extensively examining many wireless protocols and identifying computationally heavy processing kernels that are similar across them. Special reconfigurable accelerators with a much coarser granularity than FPGAs are then designed to execute these functions. Optimally, each function is configured only once, avoiding cycle-by-cycle reconfiguration, and data is "streamed" into and out of the PEs. Processing is done both spatially and temporally, with a significant amount of spatial parallelism.
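The "configure once, then stream" idea can be modeled in software: each stage is set up before data flows, rather than being re-instructed every cycle. The PE kernels here (a tiny FIR filter and a scaler) are illustrative stand-ins, not the RCC's actual accelerators.

```python
def make_fir(taps):
    """One-time configuration of a filter PE with fixed coefficients."""
    state = [0.0] * len(taps)
    def fir(sample):
        state.insert(0, sample)
        state.pop()
        return sum(t * s for t, s in zip(taps, state))
    return fir

# Configure a two-stage pipeline once (filter -> scale)...
pipeline = [make_fir([0.25, 0.5, 0.25]), lambda x: 2.0 * x]

def stream(samples):
    # ...then stream data through the configured PEs, no per-cycle setup.
    for s in samples:
        for pe in pipeline:
            s = pe(s)
        yield s

print(list(stream([1.0, 0.0, 0.0])))  # impulse response: [0.5, 1.0, 0.5]
```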
As much as possible, the processing mirrors dedicated hardware (complete spatial parallelism). Time-division multiplexing is then used up to the clock frequencies the semiconductor process allows. Thus, processing frequencies aren't tied to small multiples of the analog-to-digital converter (ADC) rate, as in dedicated hardware. The resulting area savings mitigate the area added by reconfigurability. Architectures such as RCC hold the promise of approaching the power and size of dedicated hardware while maintaining enough reconfigurability to address most wireless protocols.
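A quick back-of-the-envelope shows the time-division-multiplexing payoff; all numbers are assumed for illustration, not measured silicon figures.

```python
adc_rate_msps = 40     # assumed ADC sample rate (dedicated logic would clock here)
logic_clock_mhz = 320  # assumed clock rate the semiconductor process allows
taps_needed = 64       # assumed filter taps to compute per sample

reuse_factor = logic_clock_mhz // adc_rate_msps   # 8 operations per sample period
multipliers = -(-taps_needed // reuse_factor)     # ceiling division -> 8
print(f"{multipliers} shared multipliers instead of {taps_needed} dedicated ones")
```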
The PEs include components such as microcoded filter accelerators, forward-error-correction accelerators with interface elements, and controller processing elements. The radio can reconfigure itself for various protocols simply by controlling the route that data packets take between the different PEs, as well as by reconfiguring the PEs themselves.
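Reconfiguration by routing can be modeled as choosing a path through a fixed pool of PEs; the PE names, kernels, and routes below are hypothetical.

```python
# A fixed pool of PEs, each standing in for a hardware accelerator.
PES = {
    "filter": lambda d: f"filtered({d})",
    "fec":    lambda d: f"fec({d})",
    "ctrl":   lambda d: f"framed({d})",
}

# Different protocols reuse the same PEs via different routes.
ROUTES = {
    "protocol_a": ["filter", "fec", "ctrl"],
    "protocol_b": ["filter", "ctrl"],
}

def process(protocol, data):
    for pe in ROUTES[protocol]:
        data = PES[pe](data)
    return data

print(process("protocol_a", "pkt"))  # framed(fec(filtered(pkt)))
```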
The reconfigurable approach conserves power and size compared to the alternative solutions of multiple dedicated cores and other software-defined radio approaches. It also has the eventual advantages of easier programming and of greater portability and scalability of elements to other platforms and future protocols using existing tools. The scalability comes from the "building block" mesh connect, which enables connections between new elements and their associated communications infrastructure (e.g., routing nodes), much as new roads are added when more houses are built in a neighborhood (Fig. 2). The new "roads," or interconnect infrastructure, don't affect the existing infrastructure. This architecture builds on the fact that for communications PHY applications, the operations are highly pipelinable and, for the most part, require communication only with the nearest-neighbor processing element. If interconnect congestion occurs in very large networks of processing elements, a simple hierarchy can easily be added to the interconnect architecture.
Continued research is driving further componentization of adaptive-radio elements. The goal is to make them sufficiently effective and inexpensive that they become the standard solution for always-on, high-quality wireless communication in the home, at work, and in between. For this vision to become reality, however, spectrum-allocation policies will need modification as well, so that all possible bandwidth is available for constant communication. The table summarizes the design guidelines for adaptive radios.
Advances in technology are enabling more efficient spectrum usage and influencing reforms in spectrum policy. The results of current research into cognitive, reconfigurable radios will permit ubiquitous wireless communication across multiple protocols, networks, and spectrum bands.