Products based on wireless technology have been steadily insinuating their way into our lives since the 1980s. Wireless devices such as Apple’s iPhone and Amazon’s Kindle have become common items in many of today’s households. Similarly, within the industrial market segment, the use of small wireless sensor devices is becoming more widespread within office buildings and on factory floors.
Equipment manufacturers are incorporating and connecting these inconspicuous technologies into wireless sensor networks (WSNs). A WSN consists of a set of spatially distributed devices that work together to send data and control information among each other, as well as to a main gateway, to perform some meaningful function. Each device can operate autonomously under the control of an embedded firmware application. Sensors are used to monitor physical or environmental conditions, such as temperature and motion.
WSN devices have proliferated along two technological paths. The first is the introduction of entirely new products. The second is the upgrading of established products, where the primary driver is usually the need to convert a product’s previously wired interface into a wireless one. A furnace and its wireless, wall-mounted temperature-control unit is one such example.
In high-node-count networks, there’s a strong need to drive the cost of each individual node as low as possible. A primary cost driver for each device is the amount of memory required to run the application.
Wireless device manufacturers and network designers should consider three strategies as they strive to achieve a WSN device with a low memory footprint (LMF): simplifying the topology and routing, streamlining the device feature set, and distributing some specialized tasks among different devices in the network.
MEMORY PARTITIONS IN WSN MCUs
WSN device applications reside in, and are executed from, what is generically referred to as an MCU’s “program memory.” In practice, program memory is a well-organized structure, comprising several functional partitions that must be maintained to ensure the device’s proper operation. These functional areas consist of different types of storage:
• The Program Text and Data sections of memory contain the application’s executable machine code instructions. This is generally flash memory, the contents of which are only changed during firmware updates. The size of the text area is fixed, and it is determined at the time the code is compiled and linked. The Data memory contains the variables used by the application at run time. This size is fixed at link time as well.
• The Stack and Heap sections of memory are two runtime mechanisms used by modern programming languages to store information that’s dynamic in nature. The Stack stores parameters that are passed during subroutines or function calls, for example. The size of the Stack varies during the execution lifecycle of an application. For applications that use dynamic memory allocation at runtime, the Heap provides the means of storage.
Like the Stack, the Heap doesn’t have a fixed size. The application developer must set aside sufficient Stack and Heap space so the device can operate in a stable manner regardless of the network it’s deployed into. The Stack and the Heap, on account of the dynamic nature of their information content and the speed at which this content must be accessed, suggest the use of random access memory (RAM).
• Nonvolatile memory (NVM) is a third type of storage medium employed by WSN MCUs. This storage is used to retain important information that’s specific to each device, which must be retained across power failures and/or device resets. Information such as security keys and the MCU’s media access controller (MAC) address are prime candidates for this storage area. In practice, this is usually EEPROM. Table 1 shows the primary memory areas that must be maintained and properly managed by the WSN MCUs.
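The Stack-sizing concern described above is often addressed with a "stack painting" check during development. The sketch below is illustrative only: the 512-byte region, the 0xA5 fill pattern, and the use of a static array standing in for the real MCU stack are all assumptions, not any vendor's actual mechanism.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical "stack painting" sketch for sizing the Stack partition.
   STACK_SIZE and the fill pattern are illustrative choices; a static
   array stands in for the MCU's real stack region. */
#define STACK_SIZE 512u
#define STACK_FILL 0xA5u

static uint8_t stack_region[STACK_SIZE];

/* Fill the reserved stack region with a known pattern at boot. */
void stack_paint(void)
{
    for (size_t i = 0; i < STACK_SIZE; i++)
        stack_region[i] = STACK_FILL;
}

/* After the device has run its worst-case workload, count how many
   bytes at the far end still hold the pattern: that is the unused
   headroom, and STACK_SIZE minus it is the high-water mark.
   (Approximate: a live stack byte could coincidentally equal the
   fill pattern.) */
size_t stack_high_water_mark(void)
{
    size_t unused = 0;
    /* Assume the stack grows downward from the top of the region,
       so untouched bytes accumulate at the low indices. */
    while (unused < STACK_SIZE && stack_region[unused] == STACK_FILL)
        unused++;
    return STACK_SIZE - unused;
}
```

Running the worst-case workload once with this instrumentation gives a defensible Stack reservation instead of a guess.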
Table 2 shows the upper limit of the memory partitions for what is defined as LMF WSN MCUs. An LMF device has less than 64 kbytes of program memory, along with less than 2 kbytes each of RAM and NVM storage.
Microchip and other microcontroller manufacturers offer a range of MCUs that fit within these limits. The challenge becomes how to structure the MCU’s application firmware so it can operate within these low-memory constraints. Again, three different strategies can be applied.
NETWORK TOPOLOGIES AND ROUTING
The first strategy is to simplify the topology and routing mechanism. In wireless networking, “topology” refers to the arrangement, configuration, and relationship of the individual devices or nodes to each other on the network. It also involves the data-transmission pathways throughout that configuration.
ZigBee and other wireless protocols support two broad topology categories: star and mesh. The choice of network topology deployed can directly impact the MCU’s memory usage. Three different types of WSN devices can be used to form particular topologies:
• Coordinator: This device forms the network and enables other devices to join. It’s the central network device and provides many services, such as security, performance monitoring, and network configuration. The Coordinator is a form of a full-function device (FFD).
• Routers: The primary role of routers is to extend the network transmission range by relaying messages to other devices. They provide multiple paths to destination devices and redundancy, and they are used to extend the size of the network by supporting other child devices. They are a form of an FFD.
• End devices: These are generally of limited capability, but perform a specialized task. End devices can send and receive messages and directly communicate only with their parent. They don’t support child devices. They are a form of a reduced-function device (RFD).
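The three device roles above can be captured in firmware as a simple role type with capability queries. This is a hypothetical sketch, not Microchip's actual stack API; the enum and function names are invented for illustration.

```c
#include <stdbool.h>

/* Illustrative sketch of the three ZigBee-style device roles and the
   capabilities described above. Names are hypothetical. */
typedef enum {
    ROLE_COORDINATOR,  /* FFD: forms the network, central services */
    ROLE_ROUTER,       /* FFD: relays messages, supports children  */
    ROLE_END_DEVICE    /* RFD: communicates only with its parent   */
} device_role_t;

/* Only full-function devices (Coordinator, Routers) accept children. */
bool supports_children(device_role_t role)
{
    return role == ROLE_COORDINATOR || role == ROLE_ROUTER;
}

/* End devices relay nothing; FFDs forward messages for others. */
bool can_route(device_role_t role)
{
    return role != ROLE_END_DEVICE;
}
```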
In the star topology, all of the devices are children of a central Coordinator device and are within its direct radio communication range (Fig. 1). Each device is within two hops of another. The RFDs don’t communicate directly with each other, but instead send messages to the Coordinator, which then forwards them to the destination device.
The mesh topology consists of a lattice of interconnected devices, including both routers and end devices (Fig. 2). All FFDs within radio range of each other can communicate directly.
Outside of radio range, communication among devices is supported by passing data along the most effective links until the destination is reached. This is commonly referred to as “multi-hop” communication. In practice, routers are interspersed throughout the network and then connected to provide at least two paths to each device to avoid a single point of failure.
TOPOLOGY IMPACT ON MEMORY USAGE
The star topology is much easier to operate, and it can have a significant impact on reducing the memory footprint of network devices. Complexity and the amount of data required are two key drivers:
• Code size and complexity: The star topology is much simpler to implement at the network layer of the device protocol stack, and the routing algorithm is straightforward: every message is simply sent to the Coordinator, which forwards it to its final destination. By contrast, the routing algorithm required to support a mesh network device is very complex, and with complexity comes increased program memory size.
As an example, Microchip’s mesh routing subsystem within its ZigBee PRO stack requires about 4 kbytes more program memory than a similar, more streamlined mechanism employed in its MiWi protocol. Both use the IEEE 802.15.4 MAC protocol.
• NVM resource usage: The mesh topology requires routers and the Coordinator to maintain a routing table. This table is used to maintain the best paths to other devices in the network. In its simplest form, the MCU memory space required to maintain a routing table will grow proportionally with the network size.
Some wireless protocols, including ZigBee, require the preservation of important information across power failures and device resets. This includes the preservation of the routing table. Thus, to support the mesh topology and its related routing table, NVM storage is required. A typical application will require about 100 bytes of NVM for maintaining the routing table of a mesh network of up to 20 devices.
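The ballpark figure above can be made concrete with a minimal routing-table entry. This is a sketch, not any vendor's actual table layout; the field widths are assumptions modeled on 16-bit 802.15.4 short addresses.

```c
#include <stdint.h>

/* Sketch of a minimal mesh routing-table entry, sized to match the
   article's ballpark of roughly 100 bytes of NVM for a 20-device
   network. Field widths are illustrative assumptions. */
typedef struct {
    uint16_t dest_addr;     /* 16-bit short address of the destination */
    uint16_t next_hop_addr; /* neighbor to forward toward it           */
    uint8_t  status;        /* e.g. active / under discovery / failed  */
} route_entry_t;

#define MAX_ROUTES 20u

/* The table that must survive resets, hence its NVM residency. */
static route_entry_t routing_table[MAX_ROUTES];

/* 5 payload bytes per entry x 20 entries = 100 bytes, before any
   compiler padding -- in line with the figure quoted above. */
enum { ROUTE_TABLE_NVM_BYTES = 5u * MAX_ROUTES };
```

On a 2-kbyte-NVM LMF device, this table alone would consume about 5% of the budget, which is why avoiding it matters.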
While some applications may require support for a mesh topology, a star topology is sufficient for many others. In such instances, the use of the star topology will yield a reward of lower memory usage. The primary benefits are in two areas: code space savings gained from a simplified routing mechanism, and RAM or NVM storage savings gained from not having to maintain a complex routing table.
Another topology-related technique for saving memory is to use what is commonly referred to as “tree routing.” Rather than maintain a complex routing table and perform computationally expensive link cost calculations, devices make simple routing decisions with tree routing. They either pass any given packet up the tree to its parent or down the tree to one of its descendants. In practice, this decision is based on the destination device’s address. Generally, tree routing requires about 60% less code space to implement than a full mesh routing algorithm.
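The up-or-down decision at the heart of tree routing can be sketched in a few lines. In a real ZigBee tree the subtree's address span comes from the Cskip() address-allocation formula; here `block_size` is simply a parameter so the decision logic stands alone, and all names are illustrative.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hedged sketch of the tree-routing decision: no routing table, no
   link-cost calculation, just an address comparison. */
typedef struct {
    uint16_t addr;       /* this node's own tree address          */
    uint16_t block_size; /* how many addresses its subtree spans  */
} tree_node_t;

/* True if dest lies inside this node's subtree: route DOWN toward a
   child. Otherwise the packet goes UP to the parent. */
bool route_down(const tree_node_t *node, uint16_t dest)
{
    return dest > node->addr &&
           dest < (uint16_t)(node->addr + node->block_size);
}
```

A comparison this cheap, replacing table lookups and path-cost computation, is where the roughly 60% code-space saving comes from.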
APPLICATION FEATURE SET SELECTION
The second strategy for reducing the memory footprint is to streamline the feature set of each WSN device. To support this strategy, the device application must be designed in such a way that, for each deployment, only the required core feature set is actually included within the MCU’s memory space. Unused features are excluded, directly reducing the required memory footprint. The architecture of the device must be designed with this strategy in mind.
Using the free Microchip ZigBee protocol stack as an example, all major protocol-stack supported features, particularly those that may be optional for some deployments, are designed so they can be either included or excluded from a given device realization. This is accomplished by providing a feature selection tool and coupling its use with judicious use of compile-time options.
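A minimal sketch of such compile-time feature selection follows. The macro names are hypothetical, not Microchip's actual configuration symbols; in practice a generated configuration header would define them.

```c
/* Illustrative compile-time feature selection. Each undefined macro
   removes the corresponding feature's code from the build entirely. */

/* #define ENABLE_FRAGMENTATION */   /* excluded in this build */
#define ENABLE_SECURITY              /* included in this build */

int features_included(void)
{
    int count = 0;
#ifdef ENABLE_FRAGMENTATION
    count++;   /* fragmentation support would be compiled in here */
#endif
#ifdef ENABLE_SECURITY
    count++;   /* security/encryption support compiled in here */
#endif
    return count;
}
```

Because exclusion happens at compile time, an unused feature costs zero bytes of program memory, rather than sitting in flash behind a runtime flag.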
FEATURE SELECTION IMPACT ON MEMORY USAGE
The ZigBee protocol feature called “fragmentation” illustrates the effectiveness of this streamlining strategy. The IEEE 802.15.4 specification, which ZigBee is based on, limits each packet to 127 bytes. Applications that require larger payloads aren’t directly supported. To address this issue, the ZigBee Alliance introduced the fragmentation feature, whereby packets larger than 127 bytes of payload can be broken into smaller blocks. These blocks are sent and reassembled at the receiver.
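The split-and-reassemble idea can be illustrated with a short fragmentation sketch. The 64-byte per-block payload is an assumption for illustration; the real usable payload is the 127-byte frame minus MAC and network headers.

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of fragmentation: splitting a payload that exceeds the
   802.15.4 frame limit into fixed-size blocks. MAX_FRAGMENT is an
   illustrative per-block payload size. */
#define MAX_FRAGMENT 64u

/* Number of blocks needed to carry `len` bytes (ceiling division). */
size_t fragment_count(size_t len)
{
    return (len + MAX_FRAGMENT - 1) / MAX_FRAGMENT;
}

/* Copy fragment `index` of `payload` into `out`; returns the number
   of bytes in that fragment (the last one may be short), or 0 if the
   index is past the end. */
size_t fragment_copy(const uint8_t *payload, size_t len,
                     size_t index, uint8_t *out)
{
    size_t offset = index * MAX_FRAGMENT;
    if (offset >= len)
        return 0;
    size_t n = len - offset;
    if (n > MAX_FRAGMENT)
        n = MAX_FRAGMENT;
    for (size_t i = 0; i < n; i++)
        out[i] = payload[offset + i];
    return n;
}
```

Note that a typical 20- to 40-byte packet yields a single fragment, which is exactly why many deployments can exclude this feature.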
While this is a useful feature, it may not be a necessary option, since most transmitted packets fall in the range of 20 to 40 bytes. The feature requires about 1.5 kbytes of code space to implement. More importantly, a large block of dynamic memory must be set aside to manage fragments, even if a fragmented packet is never actually sent.
To accommodate both users and nonusers of a particular feature set, the strategy is to offer a convenient way of including or excluding them from a particular WSN device. A configuration tool is used to select the desired feature set, and it, in turn, generates the desired configuration file. This file is used by the application at build time to create devices that only contain the selected feature set.
In a wireless sensor network, common features that lend themselves to being included or excluded from a particular deployment include security and encryption (when excluded, packets are sent in the clear); mesh routing (replaced by the simpler tree routing); and frequency agility, the ability to dynamically switch channels to avoid interference (replaced by the simpler fixed-channel mode of operation).
For customers who are deploying a private WSN, where interoperability with other manufacturers’ devices is not a requirement, such a feature-selection approach can be a significant cost saver. When used effectively, the memory footprint of the WSN device can be lowered, which directly affects the final cost of the device.
DISTRIBUTIVE SPECIALIZATION
The third strategy is to distribute certain specialized operations to a limited number of devices, sometimes referred to as distributive specialization (DS). Under this approach, the network is configured so that certain operational functionality is concentrated in a very limited number of “helper” devices, rather than building all of the functionality into every device of a given type. Common features that lend themselves to this strategy include:
• The management and distribution of security keys (a Trust Center device)
• The monitoring and management of the physical channels (a Channel Manager)
• The collector and aggregator of network data (a Collector device)
• The keeper and distributor of routing information (a Route Concentrator device).
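One way to realize this pattern is to have each node advertise which helper services it hosts via a capability bitmask, so that memory-heavy services live on only a few FFDs. The sketch below is illustrative; the service names echo the list above, but the types and functions are hypothetical.

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of distributive specialization: helper services as a
   capability bitmask. A plain RFD advertises no services at all. */
enum {
    SVC_TRUST_CENTER    = 1u << 0,  /* security key management       */
    SVC_CHANNEL_MANAGER = 1u << 1,  /* physical channel monitoring   */
    SVC_COLLECTOR       = 1u << 2,  /* network data aggregation      */
    SVC_ROUTE_CONC      = 1u << 3   /* routing information keeper    */
};

typedef struct {
    uint16_t addr;
    uint8_t  services; /* OR of SVC_* bits; 0 for a plain RFD */
} node_t;

/* Find the first node in the network hosting a given service;
   returns NULL if no helper provides it. */
const node_t *find_helper(const node_t *nodes, size_t n, uint8_t svc)
{
    for (size_t i = 0; i < n; i++)
        if (nodes[i].services & svc)
            return &nodes[i];
    return NULL;
}
```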
Distributing these operations across the network’s cooperating devices can significantly reduce the overall average cost of each device. In practice, the RFDs would predominantly be low-memory, lower-cost MCUs, while “helper” devices would be strategically placed throughout the network in much smaller numbers to perform the more memory-expensive tasks.
Designers can take advantage of three potential strategies for conserving memory in WSNs: using the simplest topology and routing mechanism; designing device features so they can be easily excluded when not required by a particular application; and distributing some specialized services among different devices in the network so as not to burden each device type with all the services.
Employing all or some of these strategies is an effective way to lower the memory-footprint threshold requirements for WSN devices, which in turn can lead to meaningful cost savings for wireless product and network providers.