The cloud, tablets, and lots of cores will be driving software, systems, and designs in the embedded space. It has never been more exciting and confusing to be a developer.
The cloud is as nebulous as its name. It addresses everything from Web-based services to cloud computing infrastructure. All of these areas are hot and will impact embedded developers’ tools and design choices from now on.
Tablets and touch interfaces should be front and center for a lot of embedded designers. Many designers are still concentrating on e-readers and Web browsers, though, overlooking the use of tablets as control and monitor devices for embedded hardware. Apple’s iPad is fun, but Android and Windows tablets will bring embedded options to fruition.
Embedded hardware and systems will require lots of cores. Asymmetric multicore architectures are moving from ASICs to standard parts. GPU computation abilities are making them a major component in system designs. FPGAs are sleeping giants here.
Application arenas will benefit from a range of computing platforms and integrated digital interface sensors coming out of the smart-phone market. For example, mobile robots are sporting dozens of sensors, making home and industrial use more economical.
Also, Federal Aviation Administration (FAA) changes could make a big difference in the micro-UAV (unmanned aerial vehicle) market. The automotive and health industries will benefit from better sensors, standards, and wireless technology.
Military and avionics designers are taking commercial-off-the-shelf (COTS) to heart with more open platforms. And, security and anti-counterfeiting technology has moved to the foreground in new designs.
Using The Embedded Cloud
Enterprise and commercial use of the cloud has been going on for a couple of years with service-oriented architectures (SOAs). To users, the cloud is a smart-phone or tablet app linked to the Internet. To programmers and designers, it’s much more.
Get ready for Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Variants include Monitoring as a Service (MaaS), as outfits like Digi International, with its iDigi platform/service (see “The Embedded Cloud Floats Everywhere” at electronicdesign.com), and Eurotech, with Everyware (see “The Embedded Cloud Runs Java” at electronicdesign.com), make it easier to link embedded network devices to the cloud. At the extreme is Anything as a Service (XaaS).
Developers are also seeing development tools going online (see “Virtualizing the Application Lifecycle in the Cloud” at electronicdesign.com). Some, like Tabula’s ABAX FPGA design tools, are exclusively online (see “Web-Based Development Tools Target FPGA” at electronicdesign.com).
Other vendors like IBM are trying to deliver the same tools online and locally, raising a host of compatibility and security issues as well as short-term and long-term availability issues. Losing Internet access isn’t the only way to lose access to services. Read the fine print: even cloud backup services can go bust, get sold, or change their terms of service.
Although not specifically related to cloud-based solutions, tool and application integration are becoming key issues for developers. No one wants to spend time making various tools work together. Developers want their compiler, debugger, and operating system (OS) to work together right out of the box so they can concentrate on their application.
Proprietary OS and hardware vendors typically provide this type of support, but many platforms are based on or incorporate open-source tools. This brings up the support issue. Many vendors are addressing the integration and support issues and even adding open source to the mix. Texas Instruments’ latest Code Composer Studio is based on Eclipse (see “IDE Based On Stock Eclipse Also Adds Advanced Debug Capabilities” at electronicdesign.com).
Virtualization And The Cloud
Virtualization made the cloud possible. SaaS can work without virtualization, but PaaS and IaaS require it. Combine virtualization with clustering, and scalable systems become the norm. Virtualization is also leading to the availability of large compute platforms like Hadoop, which is an open-source distributed computing environment that’s being exploited by companies like eBay, Facebook, and Google in part because it’s scalable (Fig. 1).
Embedded developers may take advantage of Hadoop and similar enterprise-class services in a variety of ways, from using it as a cloud-based service to incorporating it into an embedded system. Doing so means checking out the ecosystem, which includes the Hadoop Distributed File System (HDFS), the MapReduce approach to application and data distribution, and data access frameworks like HBase, Pig, and Hive.
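The MapReduce idea itself is simple even though Hadoop’s machinery is not: a map phase emits key/value pairs from independent data shards, and a reduce phase combines pairs that share a key. Here’s a minimal, hypothetical Python sketch of that flow (not Hadoop’s actual API) using the classic word-count example:

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in one document shard
    return [(word.lower(), 1) for word in document.split()]

def reduce_phase(pairs):
    # Shuffle/reduce: group pairs by key and sum the counts per word
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

# Each shard could live on a different HDFS node; the map phase
# runs on each shard independently, which is what makes it scale.
shards = ["the cloud scales", "the cloud is elastic"]
mapped = chain.from_iterable(map_phase(s) for s in shards)
word_counts = reduce_phase(mapped)
```

Because each `map_phase` call touches only its own shard, the work distributes across as many nodes (or cores) as you have, which is the property Hadoop exploits.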
Hadoop scales up, but it can also fit on smaller platforms given the rapid increase in multicore hardware. It works equally well on a single node.
Parallel processing computational tools that are closer to the metal continue to gain in popularity. OpenCL is providing a bridge between GPUs and CPUs and now FPGAs. It has the potential to bring the power of FPGAs to developers just as it did for GPUs.
It’s possible to move even closer to the multicore metal with tools like Thread Building Blocks (TBB) and Cilk (see “Getting Ready For Some Hard Work With Multicore Programming” at electronicdesign.com). If you have more than one core to play with, then these tools will be of interest.
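TBB and Cilk are C++ technologies, but the fork/join pattern they express is easy to illustrate in a few lines of Python: split the data into chunks, compute partial results concurrently, then combine them. This is a hedged sketch of the pattern, not TBB or Cilk code:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker reduces its own chunk independently (the "fork")
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Split the data into one chunk per worker, then combine the
    # partial results (the "join") -- the same fork/join structure
    # TBB's parallel_reduce and Cilk's spawn/sync express in C++.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

total = parallel_sum(list(range(1000)))
```

The payoff only appears when you actually have multiple cores and enough work per chunk to amortize the scheduling overhead, which is why these tools matter most on the multicore hardware discussed above.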
But back to virtualization, because embedded developers already have a handle on it. A platform that can run Android or other virtual memory OSs probably also has virtualization support, allowing multiple OSs to coexist—and a lot of these platforms are available for developers.
Virtualization enables a variety of systems to run on a single platform, including legacy application environments as well as new computational platforms. Isolation and security, though, often are more important to embedded developers. That’s why multiple independent levels of security/safety (MILS) implementations are built on virtualization hardware.
And, watch for how security and virtualization mesh with storage like self-encrypting drives (SEDs). SEDs supporting multiple regions are now available. But not all OSs, especially legacy ones, can take advantage of SEDs. The LynuxWorks LynxSecure separation kernel manages SEDs for multiple client OSs. Some can work with a single SED region, while others simply access the disk without knowing about the encryption aspects.
Very Local Networking
More designers will be counting on local-area networks (LANs) and personal-area networks (PANs) this year, typically by using wireless networks like Bluetooth, 802.15.4, ZigBee, and Wi-Fi. Also, look for dynamic and on-demand peer-to-peer networks this year. AllJoyn, a software technology for connecting application environments, provides an object-oriented programming model with device and service discovery and managed communication.
Developers also should get to know the Wi-Fi Alliance’s Wi-Fi Direct. It requires WPA2 security but is different from ad-hoc Wi-Fi. Wi-Fi Direct targets device connectivity and could be a direct challenge to Bluetooth. Broad adoption of 32-bit microcontrollers will allow this type of wireless and wired connectivity.
Another networking aspect for developers to keep in mind is IPv6 support. IPv4 is not going away yet, but given the plethora of wired and wireless endpoints, the larger address space and new functionality of IPv6 are where things are headed.
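The scale of the change is easy to underestimate. IPv4 offers 2^32 addresses in total, while a single standard IPv6 /64 subnet holds 2^64. Python’s standard `ipaddress` module makes the comparison concrete:

```python
import ipaddress

# A typical IPv4 subnet versus a single standard IPv6 subnet
v4_subnet = ipaddress.ip_network("192.0.2.0/24")      # documentation prefix
v6_subnet = ipaddress.ip_network("2001:db8::/64")     # documentation prefix

v4_total = 2 ** 32                  # every IPv4 address that can ever exist
v6_subnet_size = v6_subnet.num_addresses

# One IPv6 subnet dwarfs the entire IPv4 address space
headroom = v6_subnet_size // v4_total

addr = ipaddress.ip_address("2001:db8::1")
is_v6 = addr.version == 6
```

With that much headroom, every wired and wireless endpoint in an embedded deployment can get a globally unique address without NAT, which is the main draw for device networks.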
User Interface And Multimedia
If it isn’t multitouch, it isn’t cutting edge. The Apple iPhone and iPad have made swiping and pinching cool and ubiquitous. These gestures have penetrated the embedded space as hardware has become more available and less expensive with the support of a range of software. Many developers are already using touch interfaces, but simpler interfacing and lower costs will make it viable for many more applications.
Touch and screens go together nicely, but a screen isn’t a requirement. Gestures and simple buttons and sliders work very well on a variety of surfaces. Improved noise immunity will make touch interfaces even more reliable and desirable.
Streaming video is very desirable as well, and wireless video streaming via 4G and Wi-Fi is on a steep rise. Higher bandwidth and screen resolution are putting HD-quality videos on smart phones and tablets. Asymmetric multicore processors are making low power and high quality possible too.
Small 3D displays may take off this year as autostereoscopic 3D displays become available. 3D content is still limited and much of it is found in animated movies, but more is promised, and the horde of 3D HDTVs already in living rooms provides a target for 3D content. Unfortunately, 3D has not been as compelling as many companies would like it to be.
HTML5 continues to emerge in a way that will hopefully unify video delivery mechanisms, but watch out for proprietary delivery systems driven by copy protection issues. HTML5 and CSS3 support will be needed soon. The standard is not finalized, though various implementations are already in place, and sites using HTML5 are on the rise.
HTML5 offers a number of features in addition to streaming video support that embedded developers can utilize. This includes taking advantage of hardware acceleration.
Modules such as COM (computer-on-module) platforms are gaining traction as developers of mid-range to high-end embedded systems try to take advantage of the latest technology. The processing complex typically requires the greatest level of printed-circuit board (PCB) complexity.
By keeping it all on the module, a selection of processors can be used in a design while reducing the complexity and cost of the carrier board. Peripherals on module (POMs) like Mini PCIe or Diamond Systems’ FeaturePak (see “Module Packs I/O Features” at electronicdesign.com) complement COM, though POMs tend to wind up on single-board computer (SBC) motherboards.
High-end ATX motherboards continue to host the fastest interfaces, focusing on USB 3.0, 6-Gbit/s SATA, PCI Express, and HDMI and DisplayPort for video. The shrinking PC market may take a toll on the variety we have enjoyed to date.
Last year was the 10-year anniversary of the Mini-ITX form factor from VIA Technologies. The 17- by 17-cm form factor has been adopted by a wide range of motherboard vendors. The Nano-ITX and compact Pico-ITX form factors followed it.
PC/104 still dominates the stackable market with its ISA bus. There are high-speed serial options and even a StackableUSB stack (see “Micro/sys Dishes Out StackableUSB For Embedded IO” at electronicdesign.com). However, designers aren’t flocking to these options yet as PC/104 often provides sufficient functionality, and it’s hard to beat the price points of existing boards.
The Unified Extensible Firmware Interface (UEFI) is creeping into many PC motherboards, replacing the venerable PC BIOS. Large hard drives and secure boot using the Trusted Platform Module (TPM) are just a couple of reasons for migrating to UEFI. You can find UEFI on x86-style motherboards now, but it’s processor agnostic. There’s an ARM-binding for UEFI as well.
Enterprise systems show almost as much variability as embedded systems in terms of choice and markets. Soon, 1- and 10-Gbit Ethernet interconnects will be ubiquitous. Also, 40-Gbit Ethernet and InfiniBand are part of the mix with PCI Express 4.0 on the horizon.
Embedded systems that can process more data at higher speeds need these alternatives to avoid I/O bottlenecks. RDMA technologies such as RoCE (RDMA over Converged Ethernet) and iWarp will increasingly be employed to boost throughput, reduce latency, and reduce CPU loading.
Look for NVMe (see “NVM Express Delivers PCIe SSD Access” at electronicdesign.com) and SCSI Express to bring some solid-state storage to the PCI Express interconnect. The storage hierarchy just got a little higher.
Finding the right platform for an application’s computational requirements will be key. SeaMicro’s 10U 768 Atom core SM10000-64HD (see “Server Packs 768 Atom Cores To Take On The Cloud” at electronicdesign.com) works well with lots of individual cloud services, but a symmetric multiprocessing (SMP) collection of Intel Xeon chips is often needed for large computational exercises.
GPGPU and CPU/GPU pairings add to the options for enterprise and embedded server platforms. Floating-point support has been key to the GPU’s success on the computational side of the design equation. GPU adoption in these spaces will continue to grow this year because of the industry-leading GFLOPS/Watt, especially for SWaP-constrained (space, weight, and power) applications. These days, that describes almost any application.
Many-core chips like Tilera’s Tile-Gx line (see “100 x 0.5W Cores = Cloud Computing” at electronicdesign.com) offer compelling performance in fixed-point applications such as image processing and packet inspection. Intel’s MIC (Many Integrated Core) platform will be available for evaluation.
Sensor Integration In 2012
Sensor integration will dominate system design in 2012 in application areas as diverse as robotics, automotive, military, and avionics. SWaP is a requirement for these areas as well.
Robotics platforms can take advantage of advances in sensors initially designed for smart phones, cameras, and tablets. Accelerometers and gyros are useful for chores other than gaming or determining the proper orientation of an e-book.
External forces such as the FAA’s rules will have an impact on demand for small UAVs. Telepresence robots will be more common, and robot development platforms like the Robot Operating System (ROS) will improve robot time-to-market (see “Cooperation Leads To Smarter Robots” at electronicdesign.com). Even swarm research robots will be less expensive (see “Swarming Robots Get Cheaper,” p. xx).
Automotive systems will get a special version of Ethernet. The OPEN (One-Pair Ether-Net) Alliance delivers 100-Mbit/s Ethernet plus power on two wires. The support chips meet demanding automotive requirements and can be used for networking devices such as cameras to support advanced driver assistance system (ADAS) applications (Fig. 2).
The ISO 26262 functional safety standard has been released, and implementation will start this year. Expect software vendors to promote their capabilities to support ISO 26262.
Safety and security are also critical for the medical and health care industry. The Continua Health Alliance is one organization addressing interoperability standards.
Health gateways will tie together many devices, especially in the home. Based on Freescale’s Home Health Hub (HHH), Digi International’s iDigi Telehealth Application Kit addresses this area. The i.MX28 platform provides interfaces such as USB, Bluetooth, Wi-Fi, and ZigBee. The HHH can hook into the iDigi cloud-based monitoring and control service.
This type of aggregator approach is likely to be common, permitting developers to concentrate on the sensors and devices that will interact with the hub. The reimbursement climate for remote patient monitoring (RPM) in 2012 remains cloudy.
Open standards aren’t restricted to the health care realm. Open architectures like OpenVPX are becoming more of a priority for the U.S. Department of Defense. The branches of the military are handling them a little differently.
The Army’s VICTORY (Vehicular Integration for C4ISR/EW Interoperability) architecture defines the box/networking level. The Air Force’s Modular Open Systems Approach (MOSA) defines hardware and software interfaces, scalability, and more. These standards are designed to improve interoperability and reduce costs, which will be necessary given the current economic climate.
Cutting-edge technology is making a big difference in military applications. For example, high-speed serial interconnects help move radar sensor data through multicore computational platforms. Robotics in the form of UAVs and unmanned ground vehicles (UGVs) are changing the layout of the battlefield.
Security And Counterfeits
It’s critical to have the right systems doing the right things for COTS military applications. But this is also true of just about any application area, such as medical and communications. This means security is paramount and counterfeits are a serious threat. These issues are related but quite different.
Security starts with the boot process. UEFI and TPM can help there, but they have to be used. Virtualization and MILS follow from a secure boot process. A wide variety of micros already supports such security and incorporates hardware encryption support, but look for security to be even more prevalent this year as it moves from a requirement to deployment.
This hardware can be used for other security-related tasks such as communication and authentication, which need to be used more often in this connected world. It also needs to operate at line speed on links like 10-Gbit Ethernet, which is already commonly used. Higher speeds are coming and will need not only encryption support but also packet analysis tools that can keep up.
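Authenticated communication doesn’t require exotic machinery at the protocol level. A message authentication code (MAC) computed over each message with a shared key lets the receiver detect tampering; the hardware crypto blocks mentioned above accelerate exactly this kind of primitive. Here’s a hedged Python sketch using the standard library’s HMAC-SHA-256 (the key and message are placeholders for illustration):

```python
import hashlib
import hmac

def sign(key, message):
    # Compute an HMAC-SHA-256 tag over the message; both endpoints
    # must share `key` out of band (e.g., provisioned at manufacture)
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key, message, tag):
    # compare_digest is a constant-time comparison, which avoids
    # leaking information through timing side channels
    return hmac.compare_digest(sign(key, message), tag)

key = b"shared-secret"              # placeholder key for illustration
tag = sign(key, b"sensor reading: 42")
ok = verify(key, b"sensor reading: 42", tag)
tampered = verify(key, b"sensor reading: 43", tag)
```

At 10-Gbit/s line rates, software alone won’t keep up; this is where the on-chip encryption engines in current microcontrollers and network processors earn their keep.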
Anti-tamper and anti-counterfeit technology encompasses the systems engineering activities intended to prevent and/or delay exploitation of critical technologies. This technology has often been used for critical applications such as smart cards or very expensive devices like enterprise network switches where it prevents tampering with the system.
Developers should keep in mind that this level of prevention is not always required. In many cases it’s sufficient to discourage exploitation or reverse-engineering, or to make such efforts so time-consuming, difficult, and expensive that even a successful attack arrives only after the next generation has replaced the critical technology.
This coming year holds a lot of promise for embedded applications as well as the usual crop of design challenges.