11 Myths About IoT Messaging

Dec. 14, 2017

Everyone loves a good myth. It makes for entertaining dinner-table conversation. But sometimes myths get in the way of solving problems, and unfortunately in the technology world, this can mean wasted time, money, and resources.

The corporate Internet of Things (IoT) market is a competitive place where application-development myths proliferate. Many of these myths involve real-time data handling and delivery—the heart of successful corporate IoT application development. The challenges of IoT application development are speed, scale, and reliability for data exchanged among people, machines, sensors, and devices. Let’s debunk some of those myths…

1. Latency is a function of distance.

Latency is a big challenge when it comes to the IoT. Consider first responders in an emergency, or a more consumer-focused example such as Google Glass. When latency creeps in, data arrives too late to drive any useful action or response. The efficacy of many IoT applications therefore depends on data being sent and received in real time. One solution, according to a Wired article, is that “organizations will need to put their data and computing infrastructure in close proximity to users and the devices, and be able to connect directly to their trading partners and digital supply chain.”

However, this isn’t always possible. You can’t put a data center in the middle of the ocean (even though Google discussed the idea a few years back) to reduce latency for marine shipping, for example, to handle RFID updates from containers on ships or to communicate with planes flying overhead. And in these situations, if there’s an emergency, the information still must be transmitted as quickly as possible.

Computing location is only part of the story. Getting the right data from the right device at the right time isn’t just about hardware and sensor location; it’s about data intelligence. A system that understands its data and distributes only what’s important, at the application level, is more powerful than any amount of hardware thrown at the problem.

A network alone can’t prioritize data because it doesn’t understand data; it just moves it. Prioritization belongs at the application level, where the logic lives. Combine that with data caching at the network edge and you have a solution that reduces latency, as sketched below.
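As a rough illustration of what application-level prioritization plus edge caching can look like, here is a minimal sketch. The sensor names, threshold, and publish callback are hypothetical and not from the article; the point is simply that the filter decides what matters before anything touches the network.

```python
import time

# Hypothetical edge-side filter: forward a reading only when it changes
# meaningfully, and cache the last value sent upstream so redundant
# updates never leave the edge.

SIGNIFICANT_DELTA = 0.5   # assumed threshold; tune per sensor type
edge_cache = {}           # sensor_id -> last value sent upstream

def should_publish(sensor_id, value):
    """Decide at the application level whether this reading matters."""
    last = edge_cache.get(sensor_id)
    return last is None or abs(value - last) >= SIGNIFICANT_DELTA

def handle_reading(sensor_id, value, publish):
    """Filter first, then publish; the network never sees the noise."""
    if should_publish(sensor_id, value):
        edge_cache[sensor_id] = value
        publish({"sensor": sensor_id, "value": value, "ts": time.time()})

if __name__ == "__main__":
    # A chatty temperature sensor produces near-identical readings,
    # but only the meaningful changes are distributed downstream.
    for v in (21.0, 21.1, 21.2, 22.0, 22.1):
        handle_reading("temp-01", v, publish=print)
```

In this sketch only two of the five readings are forwarded; everything else is absorbed at the edge, which is exactly the kind of reduction no amount of raw network hardware can provide on its own.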

To achieve low latency, IoT applications require a combination of intelligent data distribution and an architecture designed to put the data as close to the end user as possible—whether that’s a machine, device, or person.

2. The IoT is just like mobile data distribution.

Distributing data over a mobile network can be an issue: a huge amount of data is sent from a server to a device somewhere in the world, and because the network can be unreliable, speed and performance suffer. For the IoT, that model is often reversed. A huge volume of data from many devices, machines, or sensors comes in over an unreliable network to a few aggregation points for analysis and action.
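To make that reversed, many-to-few model concrete, here is a minimal device-side sketch. It is hypothetical and not tied to any particular messaging product: readings are buffered locally and drained toward the aggregation point whenever the unreliable uplink happens to be available, so nothing is lost while the connection is down.

```python
import random
from collections import deque

# Hypothetical device-side buffer for the many-to-few IoT model:
# queue every reading locally, then drain toward the aggregation
# point whenever the unreliable uplink is up.

buffer = deque()

def uplink_send(reading):
    """Stand-in for the real network call; fails when the link drops."""
    if random.random() < 0.3:          # simulate an unreliable network
        raise ConnectionError("uplink unavailable")
    print("delivered to aggregation point:", reading)

def record(reading):
    """Always enqueue first, so nothing is lost when the link is down."""
    buffer.append(reading)
    drain()

def drain():
    """Send oldest-first; stop (and keep the rest) on the first failure."""
    while buffer:
        try:
            uplink_send(buffer[0])
            buffer.popleft()
        except ConnectionError:
            break                       # retry on the next record/drain

if __name__ == "__main__":
    for i in range(5):
        record({"device": "sensor-07", "seq": i})
    drain()  # one last attempt to flush anything still buffered
```

Multiply that pattern by thousands of devices and the aggregation side, not the individual device, becomes the part of the system that has to handle scale, ordering, and gaps in the incoming stream.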
