Wireless Systems Design

Turning Promise Into Practice: How to Build a Practical Wireless Sensor Network

Wireless Sensor Networks will open the floodgates to the wireless revolution. But building a practical wireless network can be a daunting challenge unless the concepts are kept simple.

Nobody can question that the wireless revolution has already begun—CDMA or GSM for long-range voice and data, Wi-Fi for WLANs (Wireless Local Area Networks) and Bluetooth for consumer-oriented PANs (Personal Area Networks) are all flourishing. Although each is a very successful commercial technology, all are restricted to particular application areas by virtue of range, bandwidth and power requirements. Bluetooth v1.2, for example, can only support a PAN of eight nodes (including the master) over a typical range of 10 meters, and it has a nominal raw data bandwidth of 1 Mbit/s.

For a wireless revolution of the magnitude predicted by The Economist’s report to occur, there needs to be a truly pervasive networking technology that can build networks consisting of hundreds of nodes (even though at present the overwhelming majority of practical wireless networks comprise tens of nodes or fewer). These nodes will need to be capable of communicating with each other at any time without being compromised by interference from other RF sources. The wireless networks these nodes build up will be characterized by inexpensive, ultra low-power radios with modest bandwidth requirements—able to transmit small amounts of sensor data perhaps a few times a second—and typically operating in the globally accepted, license-free 2.4 GHz Industrial, Scientific, Medical (ISM) band.

There are some major design constraints: if a network is going to comprise many nodes then each has to be inexpensive—of the order of less than $5 today and even lower in the long term—and virtually maintenance free. There are also likely to be nodes sited in inaccessible places, so battery life of months or years from inexpensive cells is vital.

The technology to do all this is available now. ZigBee, the IEEE 802.15.4-based solution championed by the ZigBee Alliance, is one option, and there are a slew of proven proprietary alternatives (including the technology from the company I work for, ANT). Yet at present no single technology dominates or has even been installed in high volumes. That’s because designing a Wireless Sensor Network (WSN) can be so difficult that designers struggle to come up with commercial solutions.

It’s not just a case of switching on one radio and expecting it to talk to another. Technical challenges that have to be resolved include: how to avoid interference between nodes and other RF sources; whether the network is scalable; how many nodes can be supported; whether nodes can be added in an ad hoc manner without reconfiguring the rest of the network; what bandwidth is required; how power consumption can be minimized; and what microcontroller resources will be needed.

What is a network?

If you’ve followed the trade press closely you’ll probably think there’s only one way to build a WSN, and that’s by creating a so-called mesh. Mesh networks are touted as the best way to maximize the potential of ultra low-power wireless sensors, with every node able to communicate with many (or even all) of its neighbours in a self-managing, self-healing topology.

Unfortunately, while mesh networks make for compelling academic debate, in commercial implementations with even a modest number of nodes they invariably prove difficult to set up, and in effect they introduce a level of complexity that can’t be justified for almost all contemporary practical applications. Engineers quickly conclude (often after a lengthy development program) that mesh networks are overly complicated, demanding significant computing resources and electrical power, and expensive. Fortunately, 99.5% of all envisaged wireless sensor-network applications can be designed without a mesh, eliminating the need to waste time grappling with the challenge.

Virtually all practical networking problems can be resolved using a simple predetermined structure comprising two to several dozen nodes at most. The simplest of these is the peer-to-peer network where only one node communicates with another. The simplest application example of this peer-to-peer networking is a humble switch controlling a light.

One step up from this simple example is a network comprising several peripheral nodes communicating with a single receiving node. The fitness and health sectors are a prime, proven example of this type: right now millions of such practical wireless networks are operating flawlessly on a daily basis across the world (see figure 1). Consider, for example, a cyclist wearing a sports watch (node 1), where node 2 is a GPS tracker, node 3 a speed indicator and node 4 a heart rate monitor, all communicating simultaneously with the sports watch via their own dedicated channels A, B, and C (see figure 2). This type of network is often referred to as a star network because it features a central hub that can be schematically shown communicating in a star-like fashion with its peripheral nodes.

Star networks can be further connected to other star networks to form complex systems often referred to as tree or cluster networks (see figure 3). Taking the example above, the cyclist’s network could be extended by the sports watch communicating via channel D with a PDA to download the distance, speed, position, and heart-rate data as the ride progresses. Further, temperature sensor and humidity-meter nodes could also connect to the PDA node via channels E and F. Each of the channels in this network is bidirectional so, for example, when the temperature rises above a threshold the PDA informs the sports watch, which sounds an alarm instructing the cyclist to take on more fluid to maintain optimum performance.

Although we’ve used the sports sector as an example, these types of practical wireless networks are applicable to many other sectors, such as the burgeoning medical device monitoring segment, home automation, and industrial control. These examples show that most practical networks don’t require every node to communicate with several of its neighbours, provided each node offers equal functionality. For example, in the tree network illustrated in figure 3, the temperature sensor (node 7) won’t need to send data to the heart rate monitor (node 4). However, the temperature sensor’s data is of use to the sports watch in order to trigger an alarm if the temperature rises too high, and it’s easily routed via the PDA (node 5).

Unlike the mesh network in figure 4, the tree network routes communications via an intermediary before they reach their intended destination. This dramatically reduces the number of communication channels required, producing a practical, pre-determined network that is efficient, requires few system resources, and consumes less power.

Importance of the protocol

Practical wireless networks must be low cost, immune to interference from other radio sources (including neighbouring nodes), reliable and, perhaps most importantly, frugal with power. The last thing a user wants is an otherwise reliable network compromised by the need to change batteries every few days.

Each node requires a silicon radio allied to a microcontroller, often referred to as the physical layer (PHY), forming the hardware that drives the node. The PHY supports a protocol stack and an application layer that forms the specific instruction set for the application supported by the network. Some modern 2.4 GHz radios integrate the radio and microcontroller into a single chip.

The protocol is perhaps the most vital element in ensuring that a practical wireless network performs to expectations. It determines how the node communicates across a wireless link with other nodes by establishing standard rules for co-existence, data representation, signalling, authentication, and error detection.

Some engineers are tempted to create their own protocols, reasoning that it would be a good way to cut costs. However, while it’s possible to come up with something that will get the radios communicating, designing an efficient and robust protocol is another matter. It’s something most engineers—after a long and painful development cycle—realize is best left to the specialists.

One way to compare the various offerings from wireless communications companies is to consider the protocol’s efficiency by comparing a packet’s ratio of overhead (the information required to set up the communication with a specific node and to determine how the information will be reliably sent) to payload (the actual useful data). If the packet comprises around 50% useful data, the protocol is efficient; if it’s around 20%, it’s inefficient. A high ratio of payload to overhead shortens the time the radio spends transmitting (when power consumption is highest), so the radio can return to its ultra low-power sleep mode sooner.
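This comparison can be sketched in a few lines. The packet sizes below are hypothetical, chosen only to illustrate the arithmetic, and don’t correspond to any particular protocol’s frame format:

```python
# Hypothetical packet sizes, for illustration only.
def efficiency(payload_bytes, overhead_bytes):
    """Fraction of the transmitted packet that is useful data."""
    return payload_bytes / (payload_bytes + overhead_bytes)

def on_time_us(payload_bytes, overhead_bytes, bitrate_bps=1_000_000):
    """Time the radio must spend in its high-power 'on' state per packet."""
    total_bits = (payload_bytes + overhead_bytes) * 8
    return total_bits / bitrate_bps * 1e6  # microseconds

# An 8-byte payload with 9 bytes of overhead is ~47% useful data
# (close to the 'efficient' 50% mark) and needs 136 us of air time
# at 1 Mbit/s.
print(round(efficiency(8, 9), 2), on_time_us(8, 9))
```

Shrinking the overhead, or raising the bit rate, directly shortens the radio’s on-time and hence the energy spent per message.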

There is a bit more to it than this, though. The bandwidth and hardware efficiency of the radio itself, allied with how well these are managed at the PHY level during communication, are vital. The bandwidth of the radio broadly correlates with how much time it must spend transmitting in a relatively high-power ‘on’ mode for a given amount of data. Theoretically, the wider the bandwidth, the faster the transmission and the less time the radio needs to spend out of sleep mode. In the real world, bandwidth costs power, and the optimal trade-off point is generally considered to be 1 Mbit/s, beyond which the added power losses begin to outweigh the gains.

But all these radio hardware efficiency savings can be swept away in an instant by flabby physical-layer efficiency. Power consumed by the radio when ‘on’ has the biggest effect on overall power consumption because it is usually orders of magnitude higher than the power consumed when the radio is off (although that figure matters too, given how much time a radio spends in that state). The problem is that the radio will diligently transmit whatever the protocol tells it to, and unless the data is packaged in a way that optimizes ‘off time per bit of data sent,’ the proportion of time the radio spends on will rise significantly. The real challenge, therefore, is to maximize the amount of time the radio spends off or in minimum-power sleep mode.

WSNs are characterized by small amounts of data sent occasionally, such as a temperature reading updated every two seconds. Usually, if a message is occasionally lost this isn’t a problem, because updated data follows soon after. This approach suits sensor applications and is field proven as the most economical method of operation.

However, if it’s essential that every piece of data is received—for instance during a backup procedure—the protocol should include instructions for the receiving node to return an acknowledgement that the message was received successfully. If the acknowledgement is not received, the message is resent.

In addition, some technologies provide burst messaging: a multi-message transmission technique that uses the full data bandwidth and runs to completion. The receiving node acknowledges receipt and reports corrupted packets, which the transmitter then resends. The packets are sequence-numbered for traceability. This technique suits data-block transfers where the integrity of the data is paramount.
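A minimal sketch of the idea, with sequence numbering and selective retransmission, looks as follows. The packet handling and loss model here are assumptions for illustration, not ANT’s actual burst protocol:

```python
import random

def send_burst(packets, loss_rate=0.2, rng=None):
    """Deliver every packet exactly once, resending only the ones
    the receiver reports as lost or corrupted."""
    rng = rng or random.Random(42)           # deterministic demo
    received = {}                            # seq -> payload at the receiver
    pending = list(enumerate(packets))       # (sequence number, payload)
    while pending:                           # run to completion
        failed = []
        for seq, payload in pending:
            if rng.random() < loss_rate:     # simulate a corrupted packet
                failed.append((seq, payload))
            else:
                received[seq] = payload
        pending = failed                     # the ack reports gaps; resend only those
    return [received[seq] for seq in sorted(received)]  # reassemble in order

blocks = [b"blk0", b"blk1", b"blk2", b"blk3"]
assert send_burst(blocks) == blocks          # integrity preserved end to end
```

The sequence numbers are what make selective resending possible: without them the transmitter would have to repeat the whole burst on any failure.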

Building practical wireless infrastructure

Identifying a good silicon radio and efficient protocol is only part of designing a practical wireless network. Whether your network is going to comprise two, ten, or a hundred nodes, the biggest challenge is linking those nodes into a reliable, scalable network.

The key to this is to choose a technology where, at the physical connection layer, all nodes have equal functionality, so each is capable of acting as a “slave” or “master” within a practical wireless network and can swap roles at any time. In other words, the nodes should be able to perform as transmitters, receivers or transceivers in order to route traffic to other nodes. (Note that at the network layer, as opposed to the physical connection layer, there may still be a requirement for certain special-function nodes, such as concentrators, to ensure robust communications.)

In addition, every node should be capable of determining the best time to transmit based on the activity of its neighbours, eliminating the need for a network-restricting coordinator or supervisory node. This combination of features makes it easy to add another node to a network of any topology in an ad hoc fashion. There is no need to plan ahead to determine what type of node will be required to extend the network, or to provision a coordinator node to tie the network together once it reaches a certain size.

Some technologies complicate network building at the physical connection layer by introducing reduced function (i.e. slaves), full function (i.e. masters) and coordinator nodes of varying functionality. Typically, the coordinator has to first form a subset cluster and then has to handle requests from neighbouring coordinator nodes wishing to attach their clusters to the mesh (figure 5). Reduced function nodes can’t act as masters and because of this, coordinators have to be distributed throughout the network to supervise subsets of nodes, adding complexity and preventing nodes joining or leaving the network in an ad hoc manner. Worse still, computing resources to manage these complex systems are expensive and consume a lot of power.

Setting up a network to perform a practical function requires more than establishing a network of nodes that can communicate with each other. The nodes need to be configured to perform a function such as measuring temperature, humidity, or heart rate. Configuring each node can be made considerably easier by selecting the appropriate technology: some technologies enable configuration, testing, and debugging via a PC-based GUI, so that engineers of any skill level can complete the job in hours rather than days.

In operation, the sensor is configured at start-up with a flash-memory stored-sensor profile and the relevant sensor communication protocol. An application host MCU isn’t required, further cutting system cost, power consumption, and size.

Eking out the power

Ultra-low power is essential for a practical wireless network because the coin-cell batteries powering the nodes need to last for months or years to minimize maintenance. Let’s look at some typical numbers for the proprietary technology developed by my company (ANT), running on a 2.4-GHz radio from Oslo-based Nordic Semiconductor. (See the sidebar for detailed calculations.)

For an application sending 8 bytes of data once per second for an hour a day (for example, a foot pod communicating with a sports watch), the battery lives of the transmitter and receiver are 6.4 and 5.6 years, respectively. Note that longevity is heavily dependent on the application, and this example is a low-usage case. Figure 7 shows how battery life varies with message frequency for a particular use case. In an industrial setting, for instance, the sensor may be required to be in use 24 hours a day with a message rate of typically 0.5 Hz. Using the formula detailed in the sidebar, transmitter battery life would be 7.2 months, and the receiver’s life would be 6.3 months.
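The sidebar’s exact formula isn’t reproduced here, but the underlying average-current arithmetic can be sketched as follows. All currents and capacities below are illustrative placeholders, not the sidebar’s figures, so the result won’t match the article’s numbers (which also account for receiver and microcontroller overheads):

```python
# Back-of-envelope battery-life model: battery life follows from the
# time-weighted average current drawn from the cell.
def battery_life_years(capacity_mah, active_ma, active_s_per_day, sleep_ua):
    seconds_per_day = 24 * 3600
    avg_ma = (active_ma * active_s_per_day
              + (sleep_ua / 1000) * (seconds_per_day - active_s_per_day)
              ) / seconds_per_day
    hours = capacity_mah / avg_ma
    return hours / 24 / 365

# Placeholder figures: a CR2032 coin cell (~225 mAh), 15 mA radio-on
# current, 150 us on-air time per message, one message per second for
# one hour a day, and 1 uA sleep current.
active_s = 3600 * 150e-6   # about 0.54 s of radio-on time per day
print(round(battery_life_years(225, 15, active_s, 1), 1))
```

Even this toy model shows the article’s central point: with sub-millisecond on-air times, the sleep current, not the transmit current, ends up dominating the battery budget.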

This compares favourably with a commercially available ZigBee solution. Battery lives for ZigBee transmitters and receivers in industrial applications using a transmitter current of 28 mA (above the peak-current threshold for a CR2032 battery), receiver current of 24 mA, and a much longer transmission period than the proprietary example (because of a lower bandwidth and less efficient protocol) would be 8 to 10 weeks.

Staying tuned

In keeping with many contemporary 2.4-GHz technologies, wireless sensor networks operate in an increasingly crowded part of the radio spectrum. Network nodes will have to compete with Wi-Fi, Bluetooth, cordless phones and each other when trying to get their message through. However, network nodes have one big advantage: they don’t have to transmit very often, and when they do, it’s only for a very short time. Nonetheless, an interference-avoidance strategy is vital. In fact, regulations governing the license-free Industrial, Scientific and Medical (ISM) parts of the spectrum state that “a device must expect interference.”

There are three common techniques for minimizing the impact of interference for devices operating in the 2.4-GHz band. These are a time-slot-allocation scheme such as the one our proprietary product employs, Direct Sequence Spread Spectrum (DSSS) such as that used by ZigBee, and a Frequency Hopping Spread Spectrum (FHSS) such as that utilized by Bluetooth.

Both DSSS and FHSS work well, but both require the transmitter and receiver to be synchronized: in the case of FHSS, to ensure the devices are tuned to the same narrow band at the same moment; in the case of DSSS, so that the received signal can be de-spread with the same pseudo-random sequence used to spread it in the first place. Synchronization adds complexity to the network and increases power consumption. Although synchronization can be switched off to save power when communication isn’t needed, re-acquisition can take several seconds and consumes even more power.

ANT’s proprietary technology uses an adaptive isochronous network scheme that takes advantage of the fact that the radio only has to transmit for a very short period (less than 150 µs per message), allowing a single channel to be divided into many timeslots. The messaging period determines exactly how many. For example, taking a 4-Hz reporting rate and with each node occupying a 2.5-ms interval (transmission time plus guard bands), the quotient of 250 ms and 2.5 ms is 100 timeslots. In other words, 100 nodes can report four times per second with no possibility of interference.
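The slot arithmetic in that example can be checked directly:

```python
# Timeslot arithmetic for the adaptive isochronous scheme described above:
# a 4-Hz reporting rate with each node occupying a 2.5-ms slot
# (sub-150-us transmission plus guard bands).
message_period_ms = 1000 / 4      # 4 Hz reporting -> 250 ms between messages
slot_ms = 2.5                     # per-node interval on the shared channel
slots = int(message_period_ms / slot_ms)
print(slots)                      # 100 nodes can share one channel
```

A slower reporting rate lengthens the message period and so raises the node count proportionally; at 1 Hz the same channel would carry 400 slots.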

In operation, transmitters start broadcasting at regular intervals, but if interference from a neighbour is detected on a particular timeslot, the transmission scheme is modified until a clear slot is found. If the radio environment is even more crowded (for example, when other 2.4-GHz sources such as Bluetooth and Wi-Fi are present, or several tens of nodes are in close proximity; see figure 1), the system is endowed with a frequency-agility scheme that allows an application-microcontroller-controlled hop to a different 1-MHz slot within the 2.4-GHz band (see figure 6).

Ubiquitous wireless connectivity

Wireless sensor networks have the potential to make wireless connectivity ubiquitous and truly unleash the full power of the wireless revolution in myriad applications—most of which have yet to be conceived. However, this revolution won’t start unless networks become a lot simpler to set up, maintain, and scale.

A fixation with complex mesh networking—which is simply not required for almost all practical applications—has led to the development of technologies that are expensive, require lots of computing resource, are difficult to construct, and require large batteries.

Practical wireless networks use nodes of common functionality employing efficient silicon radios and protocols. They can be scaled in an ad hoc manner, are inexpensive, and can run for months or years on coin cell batteries.

What wireless networks equipped with tens or even hundreds of nodes will eventually be used for is open to debate. Pundits cite lighting control in large buildings, or humidity monitoring of vineyards, or smoke detection in oil refineries as a few possible applications. But the truth is that we don’t yet know what most of the applications will be. We do know that WSNs have huge potential.

For more information, contact Brian Macdonald at [email protected].
