
Learning Machine Learning

July 14, 2017
Looking to sharpen and expand your knowledge of the accelerating field of machine learning? Then check out these online resources.


Machine learning is a hot topic for developers, but where can one learn about how to use the technology?

A lot depends on your current background and your long-term goals. I have already written about the basic differences between machine-learning techniques, but this was done at a relatively high level. Getting into the details can range from learning about machine-learning methodologies at an abstract level to examining deep-learning frameworks used to develop applications.

Here, we’ll take a more detailed look at some of the online resources available to you, and include links to websites with much more information about machine-learning classes, frameworks, and resources.

Classes and Learning Resources for Machine Learning

It will help to have at least a general understanding of machine learning before trying to take advantage of the ML frameworks and hardware. The links here are also a useful starting point for working up more extensive expertise in the area.

Massive Open Online Courses (MOOCs) are a good starting point, with a lot to offer. The problem is that they’re massive in terms of the resources and topics covered. Machine learning tends to be a bit more specialized than calculus or basic electronics. The trick is finding the right resources.

The KDnuggets website has news and other articles focused on knowledge-based topics like machine learning. The article entitled “Top Machine Learning MOOCs and Online Lectures: A Comprehensive Survey” lists a number of good resources. It actually led me to the Udacity website.

Udacity is one of many online course systems. If you haven’t yet delved into artificial intelligence and machine learning, then the Intro to Artificial Intelligence course might be useful.

Many engineering universities are making their courseware available for free as well. Schools like MIT, Georgia Tech (my alma mater), Caltech, and Stanford, just to mention a few, are doing work with machine learning and related topic areas (the links are to the open online courses). Coursera is a site that ties together a number of MOOCs, including those from Stanford. Some sites can get you started working on a degree as well.

MOOCs are handy, but they have pluses and minuses. Some free courses offer limited courseware materials; they're simply the notes for an in-person lecture. Others are more extensive, including interactive examples and tests tied into a learning system, or lecture and demonstration videos along with programming resources.

A number of good resource sites are available for machine learning, too. The Deep Learning website has links to software, datasets, and tutorials. There are even links to job-listing sites for once you have some expertise in this arena. The challenge with these sites is finding ones that maintain and extend their links over a long period of time. On the plus side, machine learning is a relatively new hot topic, so most of these links are still current.

Free books on machine learning can also be found online, such as Neural Networks and Deep Learning by Michael Nielsen. The Deep Learning Tutorial works with software stored on GitHub.

Machine-Learning Framework Resources

OK, you know all about machine-learning methodologies, so now it's time to get some work done. Writing a deep-neural-network (DNN) system for machine learning from scratch might make a great doctoral thesis, but it's a massive project, and building and supporting such a system over the long term isn't easy. This is an area where open-source projects abound and where closed-source systems are kept close to the vest. You will encounter the latter when working for, or with, specific companies. Here, though, we're looking at what's available to everyone, which typically means open-source projects.

1. The TensorFlow Playground lets you select data sources and the number of nodes and layers to see how these choices affect the accuracy of the results.

The challenge for developers using open-source projects concerns their long-term goals. If those goals include products, then support is an issue. Many open-source frameworks are available from vendors that provide support, and third parties have cropped up to support them as well. Likewise, many hardware vendors provide support for compatible frameworks.

The list here isn't definitive; frameworks continue to emerge, and others are abandoned, on a regular basis. The Caffe Deep Learning Framework is probably one of the more famous platforms available. It was built on Nvidia's CUDA and was initially optimized for vision applications. It runs on hardware like Nvidia's Jetson TX2.

TensorFlow came out of Google, which released it as an open-source project. It runs on Google's Tensor Processing Unit (TPU) in addition to more generalized CPU and GPU platforms. The TensorFlow Playground (Fig. 1) is a web-based interface to a TensorFlow system, complete with graphical reconfiguration and data sources. It does help to know a bit about machine learning and DNNs before using it.
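To give a feel for the framework's style, here's a minimal sketch using the TensorFlow 1.x-era Python API that was current when this article was written. The placeholder shape and the single sigmoid "neuron" are illustrative assumptions, not a recommended architecture.

```python
# A minimal TensorFlow (1.x-era) sketch: build a tiny computation graph
# for a single sigmoid "neuron" and evaluate it in a session.
import tensorflow as tf

# Placeholders stand in for the data fed to the graph at run time.
x = tf.placeholder(tf.float32, shape=[None, 2], name="x")
w = tf.Variable(tf.random_normal([2, 1]), name="weights")
b = tf.Variable(tf.zeros([1]), name="bias")
y = tf.nn.sigmoid(tf.matmul(x, w) + b)  # one sigmoid unit

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [[1.0, 0.5]]}))
```

Training would add a loss function and an optimizer on top of this graph, but the define-then-run pattern is the same.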

Theano targets Python programmers when it comes to scripting. The Theano 0.7 documentation presents a good overview of neural networks. Torch is another open-source platform; it supports Lua and C. MXNet is an open-source Apache Incubator project that offers pre-trained models via the MXNet Model Zoo. It supports convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
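As a point of comparison, here's a minimal sketch of Theano's symbolic style, based on the 0.x-era API. The weight values and the logistic activation are illustrative assumptions.

```python
# A minimal Theano sketch: define a symbolic expression for a logistic
# unit and compile it into a callable function.
import numpy as np
import theano
import theano.tensor as T

x = T.dvector("x")                                  # symbolic input vector
w = theano.shared(np.array([0.2, -0.1]), name="w")  # shared weight values
b = theano.shared(0.0, name="b")                    # shared bias
out = 1.0 / (1.0 + T.exp(-(T.dot(x, w) + b)))       # logistic activation

predict = theano.function([x], out)                 # compile the graph
print(predict([1.0, 2.0]))                          # dot product is 0, so 0.5
```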

The Chainer open-source framework garners support from IBM, Intel, Microsoft, and Nvidia. Preferred Networks is the company behind Chainer.

Machine-Learning Data Resources

I haven’t talked in detail about how machine learning and neural networks work, but one aspect of their operations is the need to train a network to do something useful. This requires “more input,” as Number 5 in the movie Short Circuit would say. In most cases, the more input, the better the outcome. Of course, the type of input depends on the application, which may mean images, videos, recordings, etc. Simply archiving and maintaining this information can be taxing.

Luckily, many sites on the internet provide this information, often for free. Many framework sites offer data sets, too, or link to them. For example, the MXNet website lists a number of data-set sources for CNNs and RNNs. The Deep Learning site has a list of sources as well.

Third-party data sets can be useful, since collecting this information on your own tends to be expensive and arduous. For example, self-driving cars with neural networks incorporated into one or more of their systems typically need context-sensitive data. Recording the range of conditions and environments encountered by these vehicles, or a normal driver, means a lot of data must be obtained.

Of course, many data sets or combinations of data sets could be used with neural networks. My article “Big Data and the Voting Booth” talks about voter information as a data set.

If you’re just starting out with machine learning, then data sets will likely be at the tail end of your search. Often the framework or pre-configured neural network will dictate what data is needed.

Machine-Learning Hardware and Software Examples

Most machine-learning algorithms can run on any computer, from microcontrollers up to cloud-based clusters. Specialized hardware like Google's TPU comes in handy when crunching very large amounts of data. GPUs are being tasked with machine-learning chores, and optimized versions of GPUs and DSPs, like Cadence's Vision C5 DSP, have been created to address the growing number of machine-learning applications. These changes are necessary because CPUs, GPUs, and DSPs are normally optimized for wide data types, such as 64-bit integers and double-precision floating point, whereas dealing with big arrays of bytes or even bits makes a lot more sense for many neural-network applications.
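A quick sketch illustrates why those narrow data types matter. Quantizing 64-bit floating-point weights down to 8-bit integers cuts storage by a factor of eight while keeping values approximately intact; the array size and the simple scaling scheme here are illustrative assumptions.

```python
# Quantize float64 weights to int8: eight times less storage,
# at the cost of a small, bounded approximation error.
import numpy as np

weights = np.random.randn(1000).astype(np.float64)  # training-style weights
scale = np.abs(weights).max() / 127.0               # map the range onto int8
quantized = np.round(weights / scale).astype(np.int8)

print(weights.nbytes, "bytes as float64")           # 8000
print(quantized.nbytes, "bytes as int8")            # 1000
restored = quantized.astype(np.float64) * scale     # approximate recovery
print(np.abs(weights - restored).max(), "max error")
```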

2. Nvidia’s Jetson TX2 will find homes in the high-performance mobile space.

There's also the issue of training versus deployment when it comes to machine learning. Those big data sets aren't brought along with an application platform. Instead, the deployment platform only needs the neural-network data generated by the training process, sometimes referred to as "weights," and it requires significantly less performance and storage than the training system. This means an array of GPUs might be used for training, while deployment may be done on a microcontroller.
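A hedged sketch of that split, using plain NumPy: only the learned weights travel to the target, not the training data. The file name, layer shapes, and single ReLU layer are illustrative assumptions.

```python
# Training/deployment split: save the learned weights on the training
# machine, load only those weights on the deployment target.
import numpy as np

# -- On the training machine (GPU array, workstation, etc.) --
weights = {"w1": np.random.randn(784, 32), "b1": np.zeros(32)}
np.savez("model_weights.npz", **weights)  # kilobytes, not the gigabytes of training data

# -- On the deployment target (possibly a microcontroller-class host) --
loaded = np.load("model_weights.npz")

def infer(x):
    # One ReLU layer using only the stored weights.
    return np.maximum(0.0, x @ loaded["w1"] + loaded["b1"])

print(infer(np.random.randn(784)).shape)  # (32,)
```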

Silicon vendors with high-end processing hardware, such as Intel, Nvidia, and AMD, are also targeting this space. Some of the hardware already in your design or desktop will often fill the bill—it just needs the right software, which those companies will be happy to supply. They also provide a range of training materials that I didn't mention in the first section, because these materials, or at least their underlying support, tend to be vendor-oriented.

For example, Intel's Nervana AI Academy is a good place to learn about machine learning in general, and about Intel solutions in particular. Intel's Python-based Neon framework, from Nervana (now an Intel company), supports platforms like Apache Spark, TensorFlow, Caffe, and Theano. The company's range of platforms supports machine-learning training and deployment. This includes everything from the Xeon and Xeon Phi platforms through the desktop and mobile processors that often incorporate GPUs.

Although it's somewhat of an anomaly with respect to Intel's portfolio, I will mention the tiny Curie platform that I covered in "The Case of the Curious Curie 96Board." It has a 128-neuron network built into the hardware, making this low-power platform suitable for training or deployment.

Nvidia has been pushing its GPUs in the deep-learning space. You can download drivers for your existing Nvidia GPU to take advantage of the CUDA-based software, including the Deep Learning SDK with the CUDA Deep Neural Network library (cuDNN). The cuDNN support underlies the implementations of TensorFlow and Caffe, which Nvidia supports on its hardware.
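Before relying on cuDNN acceleration, it's worth confirming that a framework actually sees the GPU. A quick check like the following works with the TensorFlow 1.x-era releases; the internal device_lib module shown here is an assumption based on that era's API.

```python
# List the devices TensorFlow can use and flag whether a CUDA GPU
# (and thus cuDNN-accelerated kernels) is available.
from tensorflow.python.client import device_lib

devices = device_lib.list_local_devices()
for d in devices:
    print(d.name, d.device_type)  # e.g. "/device:GPU:0 GPU"

has_gpu = any(d.device_type == "GPU" for d in devices)
print("CUDA GPU available:", has_gpu)
```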

Nvidia's Jetson TX2 (Fig. 2) targets the high-performance mobile space. This SoC module includes a GPU with 256 CUDA cores; two 64-bit, ARM-compatible Denver 2 cores; and a quad-core Cortex-A57 cluster. CUDA code runs on the GPU, while the ARM cores handle the rest of an application.

AMD's CPUs, GPUs, and APUs support machine-learning applications, although the emphasis on this area on the firm's website is a bit more sedate than that of its competitors. AMD uses OpenCL to build much of its support for machine-learning framework platforms. Some of it can be found if you know where to look, like the press release for the Radeon Vega Frontier Edition, which supports the Radeon Open Compute (ROCm) platform that allows systems with multiple GPUs to operate efficiently.

3. NXP’s i.MX8 platform features a pair of GPUs. One can be combined with Au-Zone’s DeepView to take on image-processing challenges like object recognition.

ARM's new Cortex-A55/A75 and Mali-G72 combination targets machine learning. The challenge with ARM's approach is that ARM doesn't deliver hardware; that instead falls to its customers, and new platform announcements tend to take a while before products are generally available. Still, ARM platforms are being used now for neural-network solutions. For example, NXP's i.MX8 platform combined with Au-Zone's DeepView is taking on challenges like object recognition (Fig. 3).

Machine learning is a broad topic area that addresses an even wider application space. It's being used for everything from improving PCB and SoC designs to image recognition on drones. The topic is easy to talk about, and it's easy to get the gist of what's going on. However, moving to the next level of understanding and actually using the software is a significant step that shouldn't be taken lightly. On the plus side, a plethora of learning materials and readily available hardware exist for working in this space.

Gauging where machine learning will fit in your development repertoire is going to take some work, but it could be worth it in the long run. The various techniques aren't applicable to all computational areas; nonetheless, the number of areas where they apply continues to grow. This space is still expanding, with much of the work still in the research stage. That said, a significant portion of it is applicable to production now, including areas that may require certification.
