
A Deeper Look at Deep-Learning Frameworks

July 12, 2016
Deep learning is one aspect of artificial intelligence gaining ground thanks to multicore hardware such as GPGPUs, with tools and frameworks also providing more accessibility to the technology.

Deep learning is one aspect of artificial intelligence (AI) that continues to advance, owing to performance improvements in multicore hardware such as general-purpose computation on graphics processing units (GPGPUs). Tools and frameworks have also made deep learning more accessible to developers, but no dominant platform has yet emerged the way C did among programming languages. The plethora of choices can be confusing, and not all platforms are created equal. It's also an area where the rapid pace of cutting-edge development makes it difficult to build new applications on top of a stable base.

Deep learning is another name for deep neural networks (DNNs). This type of neural network has many layers, which drives computation requirements: As the size of each layer and the number of layers increase, so do those requirements. In addition, wide neural networks, which are shallow by comparison, can be useful for many applications. In fact, some frameworks make it possible to mix the two (see figure). Related techniques in this space include recurrent neural networks, convolutional neural networks, and logistic regression.
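To make the layering concrete, here's a minimal sketch of a forward pass through a small deep network, written in plain NumPy rather than any of the frameworks discussed below; the layer sizes and random weights are purely illustrative.

import numpy as np

def relu(x):
    """Rectified linear unit, a common per-layer nonlinearity."""
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# Three stacked layers: 8 inputs -> 16 -> 16 -> 4 outputs.
# Each layer is a weight matrix plus a bias vector.
layers = [
    (rng.standard_normal((8, 16)), np.zeros(16)),
    (rng.standard_normal((16, 16)), np.zeros(16)),
    (rng.standard_normal((16, 4)), np.zeros(4)),
]

def forward(x, layers):
    """Propagate an input through every layer in sequence."""
    for weights, bias in layers:
        x = relu(x @ weights + bias)
    return x

sample = rng.standard_normal(8)   # one illustrative input vector
print(forward(sample, layers))

Each additional layer adds another matrix multiply, and each wider layer makes that multiply bigger, which is exactly the kind of work GPGPUs parallelize well.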

Some of the more popular open-source deep-learning frameworks include Caffe, CNTK, TensorFlow, Torch, and Deeplearning4j. Caffe, developed at the Berkeley Vision and Learning Center (BVLC), probably has the greatest following and support. Microsoft's Computational Network Toolkit (CNTK) is an active open-source project. Theano is a Python library that provides deep-learning support, while Torch is a framework built on the Lua scripting language. MatConvNet is a toolbox designed for MathWorks' MATLAB.

Some frameworks allow wide (left) and deep-learning (right) neural networks to be combined within a single implementation.

Google created TensorFlow and released it as open source. The TensorFlow Playground is a website where you can experiment with predefined networks to see how changes affect the recognition process and its accuracy. Deeplearning4j, developed by Skymind, is a deep-learning framework written in Java that's designed to run on a Java Virtual Machine (JVM). The Skymind Intelligence Layer (SKIL) is built on Deeplearning4j, ND4J, and LibND4J (an n-dimensional array library).

DNN Underpinnings

Most DNN platforms utilize new or existing computational frameworks to do the heavy lifting required by applications. Two well-known computational frameworks are OpenCL and Nvidia's cuDNN (CUDA DNN). OpenCL has the advantage of running on a range of hardware, from multicore CPUs to GPGPU arrays. Nvidia's solution targets its own GPUs, including the Tesla P100 based on the latest Pascal architecture (see "GPU Targets Deep Learning Applications").
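As a quick way to see what OpenCL-capable hardware a system actually exposes, the following sketch uses the third-party pyopencl package (an assumption; it isn't mentioned in the article) to enumerate platforms and devices.

import pyopencl as cl  # third-party OpenCL bindings; pip install pyopencl

# Walk every OpenCL platform (e.g., an Nvidia or Intel driver stack)
# and list the devices it exposes, CPU and GPU alike.
for platform in cl.get_platforms():
    print("Platform:", platform.name)
    for device in platform.get_devices():
        print("  Device:", device.name)
        print("    Type:", cl.device_type.to_string(device.type))
        print("    Compute units:", device.max_compute_units)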

DNN applications often require significant amounts of training on large computational clusters to determine the weights associated with the nodes, or neurons, within a neural net. The upside is that the resulting trained network can be deployed on much simpler hardware, potentially including microcontrollers.
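To illustrate why deployment can be so much lighter than training, here is a hypothetical sketch that dumps a trained layer's weights into a C header; the file and array names are made up for the example.

import numpy as np

def export_c_array(name, array, path):
    """Write a NumPy weight matrix as a constant C array.

    The trained values become compile-time data, so the target
    device needs no training framework at all."""
    flat = array.flatten()
    with open(path, "w") as f:
        f.write(f"/* Auto-generated: {array.shape} weights */\n")
        f.write(f"const float {name}[{flat.size}] = {{\n")
        f.write(",\n".join(f"    {v:.8f}f" for v in flat))
        f.write("\n};\n")

# Illustrative stand-in for weights produced by a training run.
trained_weights = np.random.default_rng(1).standard_normal((16, 4))
export_c_array("layer1_weights", trained_weights, "layer1_weights.h")

On the device, inference then reduces to fixed multiply-accumulate loops over the constant array, well within the reach of a microcontroller.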

The biggest challenge for developers is becoming familiar with DNNs. The frameworks typically include a number of preconfigured networks for sample applications, such as image recognition. The tools can be used for much more, but it often takes an expert to develop and tune new configurations.
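As an example of how little code a preconfigured network can require, here is a minimal sketch using the Keras API bundled with current TensorFlow releases (a newer interface than the ones the article describes); the image path is a placeholder.

import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

# Load a network that was pretrained on the ImageNet dataset.
model = MobileNetV2(weights="imagenet")

# "photo.jpg" is a placeholder; MobileNetV2 expects 224x224 input.
img = image.load_img("photo.jpg", target_size=(224, 224))
batch = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# Print the top three predicted labels with their scores.
for _, label, score in decode_predictions(model.predict(batch), top=3)[0]:
    print(f"{label}: {score:.3f}")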

Unfortunately, commercial support is only available for some frameworks. Some companies, such as Nvidia, offer active support programs with tools like the Deep Learning GPU Training System (DIGITS), which is designed to handle image-classification and object-detection tasks. It can help with a range of functions, such as training multiple networks in parallel.

DNNs and their associated tools are not applicable to all applications. However, they can make a significant difference in capability and performance across many application areas, from automotive systems to face recognition on smartphones.
