Use a Data-Centric Publish/Subscribe Framework for IoT Applications

March 30, 2018
With a data-centric connectivity framework, developers and system architects can more effectively build and enable IIoT applications.

The Internet of Things (IoT). The Industrial Internet of Things (IIoT). Perhaps you have heard these buzzwords once, or twice, or more likely a few hundred times. But hearing about them and actually developing for them are two completely different things.

Let’s say you’ve been assigned to (or volunteered for) a project to upgrade an existing device or system with new connectivity requirements. Or maybe you’re on a project to design a distributed system from scratch. Chances are that this system is complex, with multiple data sources, destinations, communications mechanisms, operating systems, etc.

These distributed systems fall into two types of applications. In the first type, each deployed system has just one or a few devices that periodically push their data up to the cloud. This is prevalent in consumer IoT applications, such as smart thermostats or sprinkler systems. The second type is more industrial in nature: several devices and sensors work in unison to complete some kind of localized control application. In this case, there’s simply too much data to push up to the cloud while still responding in real time to changing conditions. These more complex systems are typical of IIoT applications.

These IIoT systems require more localized processing elements that can provide the real-time control that’s needed, and then push lower-volume processed data up to the cloud for long-term analytics. Traditional communication protocols don’t suffice here. What’s needed is a data-connectivity framework with the performance, scalability, reliability, and resilience that this more complex industrial system requires. A connectivity framework provides more complete data-communication capabilities and greatly reduces the need to implement those capabilities in the application software.

If you’ve inherited an existing system, chances are pretty good that the number of lines of code to implement the communication infrastructure is in the thousands or tens of thousands. Typical design details that need to be implemented in a complex distributed system include:

  • Data filtering (How can I limit data transfer to only the data of interest?)
  • Data encoding (How is data serialized and sent over the network? XML? JSON? Binary?)
  • Initialization (How do I get the system booted up correctly?)
  • Addressing (How do I know where to send or get data?)
  • Congestion (What happens when messages take too long?)
  • Failures (How do I know when a communication link or a device fails?)

What if there were an easy way to create high-performance peer-to-peer communication in a distributed system in only 35 lines of code? There is, if you use a connectivity framework; check out the blog post referenced at the end of this article for more information. We’ll explore some of the key capabilities of a connectivity framework for IIoT systems in the following sections.

Defining the Pub/Sub Pattern

You’re probably already familiar with the publish/subscribe (pub/sub) communication pattern, whether you know it or not. For example, did you know that you use a pub/sub communication pattern with Facebook, Twitter, or any other social-media channel? On these platforms, you’re subscribing to people or organizations (otherwise known as “following”) that publish their current status or update. You then receive the status or update as it’s posted, and there’s no need to continually ask them how they’re doing or what’s new.

The same pattern can be applied to industrial systems, with applications subscribing to sensor updates, alarms, or system commands. This is implemented by the Data Distribution Service (DDS), a connectivity standard managed by the Object Management Group (OMG) and based on pub/sub. It’s easy to map your different modules or tasks as publishers of data, subscribers to data, or both. The main difference between DDS and lower-level pub/sub IoT protocols, such as MQTT or AMQP, is that DDS is a data-centric pub/sub framework. Data-centricity means that the data itself becomes the application interface, without artificial wrappers like messages and objects. With a data-centric framework, applications are freed from the burden of managing the complexities of data exchange and data lifetimes.

To build a data-centric system, you first define the data structures that represent the flow of information from machine to machine. Defining the data structure (or model) first allows for a completely modular approach to designing a system, and it lets your system designers focus on what’s most important in your system: the data.

A database is a great example of a traditional data-centric system that we’ve all used in some way. Database applications first define the structure of what will be stored in the database as unique data tables. Data access is then provided by INSERT, UPDATE, SELECT, and DELETE operations. New applications can work alongside old applications just fine, because they’re all working with a single common point of access: the data. Software applications are cleanly modular, integration is straightforward, and systems can scale much more easily. All of this is enabled by data-centricity.

Just like a database, DDS is data-centric. It provides the same types of operations, but for data in motion rather than data at rest. Building a system around the data being exchanged enables a more modular approach than building on lower-level networking or messaging protocols.

Let’s consider a simple collision-avoidance example (see figure). We’ll make this as basic as possible, with a proximity sensor and a collision-avoidance engine. We can define the proximity info as follows:

(Figure: the “ProximityInfo” topic and its “ProximityInfoType” data structure)

This structure represents the state of the proximity sensor that will be published by the sensor application. Notice the first parameter in the structure: the “id” field identifies the specific proximity sensor, and it carries a special designation, “//@key”. Key fields let DDS use a single topic for many sources of the same kind of information, much like a primary-key column in a database table. For each proximity sensor, just fill in a unique “id” value and DDS will manage all of these different sources for you. For more information on key fields, visit the RTI community site.
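
Putting that together, here is a minimal sketch of what that structure could look like in OMG IDL, the language DDS uses to define datatypes. The “id”, “objectDetected”, and “confidence” fields are referenced elsewhere in this article; the exact field types are assumptions for illustration:

    // A sketch of the topic's data structure in OMG IDL. The "//@key"
    // designation marks "id" as the key field, like a primary key.
    struct ProximityInfoType {
        long id; //@key
        boolean objectDetected;
        float confidence;
    };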

The data-centric approach that DDS enables provides many more benefits and capabilities, such as content filtering, data flows shaped by Quality of Service (QoS) policies, and automatic discovery. Let’s take a look at these DDS framework capabilities in more detail.

Getting Started: Discovery

Our proximity-sensor module will be a publisher of this information. Now we just need to define a topic, which is basically a description of the information being transferred. In this case, we’ll define a topic named “ProximityInfo” with the datatype “ProximityInfoType” associated with it.
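
As a concrete sketch, creating that topic and publishing on it might look roughly like this with the standard ISO C++ API for DDS (DDS-PSM-Cxx). The header name and the generated setter methods are assumptions; they depend on your DDS vendor’s code generator:

    #include <dds/dds.hpp>        // ISO C++ DDS API
    #include "ProximityInfo.hpp"  // hypothetical header generated from the IDL

    int main() {
        // Join DDS domain 0; discovery happens among participants in a domain
        dds::domain::DomainParticipant participant(0);

        // The topic: a name plus the strongly typed data structure
        dds::topic::Topic<ProximityInfoType> topic(participant, "ProximityInfo");

        // The sensor application writes samples on that topic
        dds::pub::Publisher publisher(participant);
        dds::pub::DataWriter<ProximityInfoType> writer(publisher, topic);

        ProximityInfoType sample;
        sample.id(42);               // key field: which sensor this is
        sample.objectDetected(true);
        sample.confidence(0.9f);
        writer.write(sample);        // publish; DDS handles the delivery
        return 0;
    }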

On the collision-avoidance side, the subscriber listens for announcements of publishers on that topic. When one of these announcements is received, the subscriber validates the topic, data type, and offered Quality of Service (QoS). Once validated, the subscriber automatically creates a connection with that publisher. This process is called Discovery, and it’s performed automatically by DDS.
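
On the subscribing side, the matching sketch looks like this (same ISO C++ API, with the participant and topic created as in the earlier sketch); once Discovery completes, samples simply arrive:

    // Create a DataReader on the same topic. Discovery automatically
    // connects it to any matching publisher with compatible QoS.
    dds::sub::Subscriber subscriber(participant);
    dds::sub::DataReader<ProximityInfoType> reader(subscriber, topic);

    // Poll for data (a real application might use a listener or waitset)
    dds::sub::LoanedSamples<ProximityInfoType> samples = reader.take();
    for (const auto& sample : samples) {
        if (sample.info().valid()) {
            // feed sample.data() into the collision-avoidance engine
        }
    }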

By default, discovery starts with a multicast message from the publisher announcing its topics. Available subscribers answer in unicast to the specific publisher. The DDS connectivity software automatically configures the connections between publishers and subscribers, which is really handy when deploying a distributed system.

Discovery not only simplifies your life regarding system configuration, but also allows your system to scale easily and decouples applications from specific locations. It’s rare that you have the full hardware system available on your desk or in your lab during development. Perhaps you use some virtual machines to simulate the actual deployed hardware, or run many processes on a single machine that will be scattered across a network on multiple machines in the final configuration.

With DDS, the publishers and subscribers will discover each other in all of these configurations and will function the same way. You needn’t recode any of your application as you move from desktop to lab to deployed system. New applications are discovered as they’re added to the network and share information about their data types.

Connectivity Contract—Quality of Service (QoS)

Through data-centricity, we define the data that will be sent between applications in our distributed system. However, just defining the data isn’t enough; we should also define how that data will be communicated between applications. QoS defines this behavior: for example, how much historical data should be retained, how frequently new data should be delivered, the lifespan for which data is valid, and how missed updates should be handled. During the Discovery phase, the publisher and subscriber share their QoS requirements. If the publisher offers a set of QoS that fulfills the QoS requested by the subscriber, the connection is made.

(Figure: an example of how DDS could be used in a collision-avoidance application)

In the collision-avoidance example, the collision-avoidance engine needs not only the current proximity-sensor information, but also a small amount of historical sensor information. The proximity sensor can have its QoS set to provide current and historical data, along with the ability to deliver data reliably and at the proper rate. The Durability, Reliability, and Deadline QoS policies define those behaviors in DDS (see figure, again). The system can recover if the connection is momentarily lost, be aware of a lost connection, and keep enough historical data to send the missing samples once reconnected.
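
As a sketch, those three policies could be set on the sensor’s DataWriter like this with the ISO C++ DDS API, reusing the publisher and topic from the earlier sketch (the ten-sample history depth and 100-ms deadline are illustrative assumptions, not values from the article):

    // Reliable delivery, historical data for late joiners, and a promise
    // to publish a new sample at least every 100 ms.
    dds::pub::qos::DataWriterQos writer_qos = publisher.default_datawriter_qos();
    writer_qos << dds::core::policy::Reliability::Reliable()
               << dds::core::policy::Durability::TransientLocal()
               << dds::core::policy::History::KeepLast(10)
               << dds::core::policy::Deadline(
                      dds::core::Duration::from_millisecs(100));

    dds::pub::DataWriter<ProximityInfoType> writer(publisher, topic, writer_qos);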

Sometimes you need the same data shared with multiple subscribers, yet each subscriber may have its own data-delivery QoS requirements. If a car has a display to show the driver the information from the proximity sensor, this would be a new subscriber.

In this case, though, we don’t need the data reliably at a high rate: The driver won’t notice if a single update is lost, and there’s no need to update the display more than once or twice a second. The display application can request best-effort delivery, which suppresses acknowledgements to save network bandwidth. It can also apply a time-based filter as part of its subscription; the DDS connectivity software then downsamples the inbound data instead of your application having to do it.
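
For the display subscriber, a sketch with the ISO C++ DDS API might look like this (the 500-ms minimum separation is an assumed value matching “once or twice a second”):

    // Best effort: no acknowledgements or retries. The time-based filter
    // asks DDS to deliver at most one sample per 500 ms to this reader.
    dds::sub::qos::DataReaderQos display_qos = subscriber.default_datareader_qos();
    display_qos << dds::core::policy::Reliability::BestEffort()
                << dds::core::policy::TimeBasedFilter(
                       dds::core::Duration::from_millisecs(500));

    dds::sub::DataReader<ProximityInfoType> display_reader(
        subscriber, topic, display_qos);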

This QoS contract can be tightened even further with more specific configuration. In a collision-avoidance algorithm, we’re only interested in those proximity samples that have detected an object with a confidence value greater than a given confidence level. You can add this requirement to your subscriber, and DDS will transfer to the application only those samples where objectDetected is true and the confidence value is greater than the requested level. This filter is transferred to the publisher during the Discovery phase, and the filtering is done even before the data is sent. This way, information that isn’t needed by any subscriber specifying this filter is never sent.

Let’s look at these QoS policies, provided by DDS, in more detail:

Content Filtering

We start off with content filtering because it’s one of the biggest differences between DDS and other IoT protocols. At the beginning of this article, we discussed how DDS is data-centric and the value of that capability. Content filtering is a primary benefit that you get with a data-centric connectivity framework.

In DDS, you typically send well-defined data structures rather than opaque data payloads (octet arrays). Because the data structure is strongly defined, like the “ProximityInfoType” we defined earlier, subscribing applications can create a Content Filtered Topic in which they define the topic name, topic type, and a filter expression. Without this capability, developers would have to build the filtering logic directly into the application, and all received data would have to be run through this logic, potentially throwing out the majority of the data received.

DDS does this filtering for you, delivering to your application only the data that matches the filter. Because the filter requests are made at run-time, your application can request different data at different times. DDS allows different subscribing applications to each have their own filter, giving you the ability to architect a distributed application with many different requirements.
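
Here’s a sketch of the collision-avoidance filter from earlier, expressed as a Content Filtered Topic with the ISO C++ DDS API. The filter expression uses the SQL-like subset defined by DDS; the 0.85 threshold and the filtered-topic name are assumptions:

    // Subscribe only to samples where an object was detected with high
    // confidence. The filter can also be evaluated on the publishing side.
    dds::topic::ContentFilteredTopic<ProximityInfoType> filtered_topic(
        topic,
        "HighConfidenceProximity",
        dds::topic::Filter("objectDetected = TRUE AND confidence > 0.85"));

    dds::sub::DataReader<ProximityInfoType> filtered_reader(
        subscriber, filtered_topic);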

Content Filters can be propagated from subscribers to publishers, allowing filtering at the sending application. This saves data-transmission overhead and reduces bottlenecks on the network. It can yield substantial bandwidth savings on standard Ethernet networks, and is even more valuable when communicating over intermittent RF links or pay-per-use satellite networks. With Content Filters, data gets sent only when necessary and only to those subscribers that need it, reducing network usage and enabling the creation of much larger systems.

Reliability

Most network applications support the reliable delivery of data by using TCP on top of an IP-based network. But what if your application doesn’t have a network with sufficient bandwidth, latency, and reliability to efficiently support TCP? Or what if the level of reliability you need isn’t the same for all applications and data flows on a given node?

TCP provides reliable, guaranteed delivery of data, but it doesn’t let you tune when retries occur or specify that some minor data loss can be tolerated. TCP also doesn’t guarantee the latency of data delivery, which is critical to real-time applications. While TCP is good for simple point-to-point communications, it was never designed to support the streaming sensor data flows that are prevalent in IoT applications.

To overcome these inefficiencies of TCP, and to enable reliable communication over transports that don’t provide reliability, DDS has a full reliability mechanism built in as one of its QoS policies. This policy can be configured to provide strict reliability, as in TCP, but it can also be tuned to exactly the level of reliability the application requires.

Because the DDS software lives above the transport protocol, and because reliability is part of the connectivity implementation, the net result is that you can achieve reliability across any unreliable transport, such as UDP, including via multicast. This gives you the efficiency of one-to-many delivery via multicast while ensuring that each subscribing node in the multicast address group receives the data in the order it was published. Retries are performed via unicast, and only to those nodes that didn’t initially receive the data via multicast. In fact, DDS is one of the only IoT connectivity frameworks that provides reliable multicast.
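
As a sketch of that tuning with the ISO C++ DDS API, compare a strict, TCP-like configuration with a lighter one that retries lost samples but lets old ones be overwritten (the 100-ms blocking time and depth of 5 are assumed values):

    // Strict reliability: never drop a sample; block the writer briefly
    // if slow readers have not yet acknowledged older samples.
    dds::pub::qos::DataWriterQos strict_qos = publisher.default_datawriter_qos();
    strict_qos << dds::core::policy::Reliability::Reliable(
                      dds::core::Duration::from_millisecs(100))
               << dds::core::policy::History::KeepAll();

    // Tuned reliability: retry lost samples, but only the latest 5 matter;
    // older samples may be overwritten if the network falls behind.
    dds::pub::qos::DataWriterQos tuned_qos = publisher.default_datawriter_qos();
    tuned_qos << dds::core::policy::Reliability::Reliable()
              << dds::core::policy::History::KeepLast(5);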

Deadline

Applications that communicate their data with some kind of time constraint typically have to implement application logic to monitor the timeliness of data delivery. If data isn’t delivered on time, the receiving application must detect it as well as mitigate the impact of the missed data. Because failure handling is truly application-specific, it’s not appropriate for a middleware to provide this function. However, the detection of missed data based on time is something that an IoT connectivity framework could provide.

DDS has a Deadline QoS policy that’s shared between publishers and subscribers during discovery. If a subscriber requests a deadline period that’s longer than the period at which the publisher asserts it will publish the data, a connection is made between the publisher and subscriber. If the publisher’s offered deadline is longer than the subscriber’s requested deadline, DDS doesn’t make the connection, and it notifies both applications that the subscriber is requesting data faster than the publisher can deliver it.

Such contract enforcement gives system architects peace of mind: an application is connected only if its data source can deliver data with the timeliness that it requires. Once the applications connect, if a subscribing application doesn’t receive data within its requested deadline, DDS notifies that application with a missed-deadline event. The same is true for the publishing application: if the deadline offered by the publisher isn’t met, DDS notifies the publishing application with a missed-offered-deadline event. The net result is a fully monitored connectivity solution that enables applications to easily deal with the timeliness of data delivery.
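
A sketch of how a subscribing application might handle that notification, using a listener from the ISO C++ DDS API and the reader from the earlier sketch (the mitigation itself is application-specific and left as a comment):

    #include <iostream>

    // No-op listener base class: override only the callback we care about.
    class DeadlineListener :
        public dds::sub::NoOpDataReaderListener<ProximityInfoType> {
        void on_requested_deadline_missed(
            dds::sub::DataReader<ProximityInfoType>&,
            const dds::core::status::RequestedDeadlineMissedStatus& status) override
        {
            // Application-specific mitigation goes here, e.g. switch to
            // a backup sensor or degrade gracefully.
            std::cerr << "Deadline missed, total count: "
                      << status.total_count() << std::endl;
        }
    };

    DeadlineListener deadline_listener;
    reader.listener(&deadline_listener,
                    dds::core::status::StatusMask::requested_deadline_missed());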

Durability

While the Reliability QoS recovers missing data during short interruptions in communications, applications sometimes go down or lose connectivity for longer periods of time. In these cases, those subscribers may need to recover data that was published while they weren’t connected. The Durability QoS allows a publisher to maintain some amount of historical data that can be selectively provided to rejoining or late-joining subscribers. Each subscriber specifies how much historical data it should receive at startup. This historical data is sent only to the subscribers that request it, minimizing the effect on the network and on other subscribers.
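
Building on the earlier writer-QoS sketch, Durability has two halves: the writer keeps history, and a late-joining reader asks for it (ISO C++ DDS API; the depth of 10 is an assumed value):

    // Writer side: keep the last 10 samples for late-joining subscribers.
    writer_qos << dds::core::policy::Durability::TransientLocal()
               << dds::core::policy::History::KeepLast(10);

    // Reader side: request durable data; at startup DDS delivers the
    // retained historical samples before the live ones.
    dds::sub::qos::DataReaderQos reader_qos = subscriber.default_datareader_qos();
    reader_qos << dds::core::policy::Durability::TransientLocal();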

QoS Continued

So far, we’ve discussed only a few QoS policies (Reliability, Deadline, Durability, Content Filtering), but DDS implements more that can help with defining and managing data-flow behaviors. For example (a combined sketch follows the list):

  • Time-based filter: The ability to downsample high-frequency data to a lower rate of data delivery.
  • Liveliness: The ability to configure a “heartbeat” signal for a publisher or subscriber of data that lets connected applications know whether that entity is still alive to send or receive its data. Loss of liveliness generates DDS events for which the application can provide error handling.
  • History: The ability to specify how much historical data should be kept in RAM on the sending side, the receiving side, or both.
  • Lifespan: The ability to define a time period after which a particular data sample is considered “stale.” DDS removes any stale data from the sending and receiving history caches once its lifespan has expired.
  • Ownership: The ability to have multiple sources of the same data for fault-tolerance reasons, with a strength that specifies which source is the primary owner of that data. This gives subscribing applications the ability to fail over seamlessly to a secondary or backup source of data.
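
Here’s the combined sketch mentioned above, setting a few of these policies on the earlier writer QoS with the ISO C++ DDS API (all values are illustrative assumptions):

    // Liveliness: assert a heartbeat at least once per second, or other
    // applications will consider this writer dead.
    // Lifespan: samples become stale and are purged after 10 seconds.
    // Ownership: exclusive, with a strength used for hot failover between
    // redundant publishers of the same data.
    writer_qos << dds::core::policy::Liveliness(
                      dds::core::policy::LivelinessKind::AUTOMATIC,
                      dds::core::Duration(1, 0))
               << dds::core::policy::Lifespan(dds::core::Duration(10, 0))
               << dds::core::policy::Ownership::Exclusive()
               << dds::core::policy::OwnershipStrength(10);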

There are additional QoS policies provided by DDS, but this article gives you an idea of the basic ones that can be used to build the most common IIoT applications.

Conclusion

There are many ways to send data between applications. Some are based on raw UDP or TCP sockets. Others are built on higher-level messaging protocols like AMQP or MQTT. All of these approaches just send data from source applications to receiving applications. They’re not data-centric solutions, so it’s up to the end applications to do all of the data handling and filtering. With a data-centric connectivity framework like DDS, all of that work and more is done for you. Therefore, your developers and system architects can spend their time building and creating your differentiating application logic without having to become experts in data distribution, connectivity, management, filtering, etc.

If you’re interested in learning more about DDS and its capabilities to help you build out your IoT or IIoT application, please see the following site on the Object Management Group’s page for DDS: OMG DDS. If you are interested in learning more about RTI’s implementation of DDS, check out the following page: RTI Connext DDS.

Reference:

RTI Blog: "Create A P2P Distributed Application In Under 35 Lines Of C++11 Code!

About the Author

Sara Granados Cabeza | Lead Field Application Engineer

Sara is a Lead Field Application Engineer for the EMEA region at RTI, where she works directly with customers to help them integrate RTI Connext DDS into their systems and take full advantage of its benefits. Sara joined RTI in 2011 as a software engineer and, as part of the RTI Tools team, was responsible for developing the RTI DDS Toolkit for LabVIEW and the C/C++ Distributed Logger, among other products. She has more than eight years of experience with middleware, software development, hardware programming, and computer vision in both development and customer-facing roles.

Sara graduated with an MS degree in Computer Engineering in 2006 and obtained her PhD in Computer and Networking Engineering (Cum Laude) in 2012, both from the University of Granada, Spain.
