alt.embedded

These 2017 Embedded Trends Will Thrive in 2018

What trends in 2017 will have staying power? Senior Technology Editor Bill Wong looks at some of the hottest embedded trends that should keep percolating this year.

Trends rarely follow yearly boundaries, and many significant trends hang around for a long time. What follows are trends that emerged last year and continue to grow in importance in the embedded space.

RISC-V

RISC-V is an instruction-set architecture (ISA), not a chip design. That distinction matters: RISC-V requires a hardware implementation to be usable, but the standard doesn’t define one. Instead, it defines a base instruction set plus optional extensions that can be combined and implemented in hardware, allowing applications to be ported between compliant platforms.

Bolstered by support from the likes of Microsemi and its Mi-V Infrastructure (Fig. 1), RISC-V has become a key player in the FPGA space. It provides a cross-platform solution with common programming tools that will work with system-on-chip (SoC) RISC-V implementations like SiFive’s 64-bit, multicore U54 or its 32-bit E310 microcontroller.

Machine Learning

OK, machine learning (ML), and deep neural networks (DNNs) in particular, have been hot topics for a couple of years. Nonetheless, advances in hardware are changing how embedded developers look at solving problems. At the high end are chips that do the heavy lifting in the cloud, such as Google’s Tensor Processing Unit (TPU), built to accelerate TensorFlow workloads.

2. Intel’s $79 Movidius Neural Compute Stick is a USB 3.0 dongle that contains a Movidius Myriad 2 VPU.

At the other end of the spectrum, and of more interest to embedded developers, are low-power, high-performance chips like Intel’s Movidius series that can be found in its USB 3.0 development stick (Fig. 2), or in products such as DJI’s SPARK drone. The drone can recognize people and gestures using the on-board HD camera. Gestures can be used to move the drone and take photos. It definitely beats a smartphone for cool control.

3. The Volta GPGPU, developed by Nvidia, targets a range of compute and graphical applications, including machine learning.

Situated between the two extremes are GPGPUs like Nvidia’s Volta (Fig. 3), which brings 5,120 CUDA cores to bear on deep-learning applications. Look for more DSPs and GPGPUs to be tuned for ML applications. They will compete with chips along the lines of Movidius that target specific applications or approaches. In fact, they’re already making an impact in drones and will do the same in automotive advanced driver-assistance systems (ADAS).

ADAS, Radar, and LiDAR

Cameras and ML systems will have a major effect on how ADAS works in smart and self-driving cars, but radar and LiDAR will complement visual technology. Though radar has been available for a while, it has typically been large and expensive, found mostly on higher-end vehicles and usually as a single, forward-looking sensor. LiDAR has seen even more limited use, also because of cost and size.

4. Texas Instruments’ mmWave looks to shrink radar support for ADAS systems.

Small, low-cost, 3D radar and LiDAR like Texas Instruments’ mmWave radar chipset (Fig. 4) are going to change how many applications will use these technologies. Right now, radar has an advantage over LiDAR in that it can be hidden behind panels. It’s also more immune to bad weather.

That said, the landscape will change for LiDAR as well. 3D LiDAR was announced in 2017 and will be delivered in 2018. Companies such as LeddarTech were showing off prototypes at the Consumer Electronics Show (CES) in 2017. Products will be on hand at this year’s show in Las Vegas, and we can expect cars in 2019 and 2020 sporting this technology.

Voice Control

Voice-control systems and smart speakers like Amazon’s Echo were a hot item at CES 2017. Voice support will likely be found in everything from refrigerators to backscratchers at CES 2018.

5. These development kits from XMOS (left) and Cirrus Logic (right) can get your application talking to Amazon’s Alexa.

Behind the scenes at CES will be the latest dev kits such as XMOS’s Alexa AVS Dev Kit and Cirrus Logic’s Alexa Voice Capture Development Kit for Amazon AVS (Fig. 5). These provide the audio-processing support necessary to deliver a clear stream of information for voice recognition. The cloud actually handles the speech processing and natural-language analysis so that commands can be recognized and acted on, but it takes good hardware and software in the device to make it all work.
