Every day, people avail themselves of AI systems without ever realizing it. AI systems are used to recognize text and speech, to make purchase recommendations on e-commerce websites, to recognize your friends in Facebook photos, and to suggest films for you to watch on Netflix. The core element of those AI systems is something known as machine learning.
The term “machine learning” sounds daunting, especially because it’s related to the field of artificial intelligence, but it really just refers to the ability of computers to recognize patterns. The key, of course, is not just for a computer to recognize existing patterns that it has already been trained to learn, but also to recognize patterns that it has never seen before. The “learning” in “machine learning,” then, refers to the ability of a computer to recognize an ever-growing number of patterns without being explicitly programmed to handle each one.
The way computer scientists train computers to recognize very basic concepts is via algorithms, which are essentially series of steps that explain how to do something. Say, for example, you’re working on an image-recognition system. You would need to create an algorithm that explains, very specifically, how to identify what an image contains with a high degree of precision.
The Dogs vs. Cars Problem
The head of AI at Facebook, Yann LeCun, has explained this image-recognition problem with the basic example of teaching a computer to recognize the difference between a dog and a car. You would need a specific algorithm that the computer can use to distinguish between a dog and a car. The algorithm would explain how to break down each image of a dog or car into a series of pixels, assign a numeric value to each of these pixels, and based on those values, determine whether it’s a dog or a car.
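The pixel-by-pixel idea above can be sketched in a few lines of code. This is a toy illustration, not a real vision algorithm: the pixel values and the “brightness” rule are invented here to show why a hand-written rule breaks down.

```python
# A toy sketch: an image as a grid of numeric pixel values (0-255),
# plus a hand-written rule that tries to classify it.

image = [
    [12, 40, 200, 210],
    [15, 44, 205, 220],
    [10, 35, 198, 215],
]

# Flatten the grid into one list of numbers, as an algorithm would see it.
pixels = [value for row in image for value in row]

# A hand-coded rule: classify by average brightness. Real dogs and cars
# can't be told apart this way -- which is exactly why writing such
# rules by hand fails.
average = sum(pixels) / len(pixels)
label = "dog" if average < 128 else "car"
print(label)
```

The failure is the point: no simple rule over raw pixel values captures “dog-ness,” which is what pushes us toward learning from examples instead.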
The idea of creating such an algorithm may sound ridiculous, of course, because even toddlers can recognize the difference between a dog and a car without even thinking about it. But for an AI system, this is a very difficult problem, mostly because it’s very hard for humans to explain to computers why dogs are different from cars. It’s obvious, right?
Yet, if humans can’t explain the difference, they can’t write the proper algorithm. That’s why the initial attempts at “teaching” machines were essentially limited to showing a machine millions of images of dogs and millions of images of cars and training it to recognize which ones were dogs and which ones were cars.
However, machine learning has progressed to the point where it’s now possible for a machine to recognize images it hasn’t seen before. But this requires training. So, if you showed a computer a picture of a dog, and it said “dog,” you wouldn’t refine the algorithm. But if you showed it a picture of a dog, and it said “car,” you would need to tweak the algorithm until the computer got it right. The real trick, of course, is creating an algorithm powerful enough to give the computer real predictive ability, so that it can succeed even when presented with an image of a dog it has never seen before.
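That show-check-tweak loop can be sketched with a perceptron-style update rule, one of the simplest learning algorithms. The two-number “images” and their labels below are invented purely for illustration:

```python
# Each example: (features, label) where label 1 = "dog", 0 = "car".
# The two made-up features might stand for "furriness" and "metallic-ness".
examples = [
    ([0.9, 0.2], 1),
    ([0.8, 0.1], 1),
    ([0.1, 0.9], 0),
    ([0.2, 0.8], 0),
]

weights = [0.0, 0.0]
bias = 0.0

def predict(features):
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score > 0 else 0

# If the guess is right, leave the algorithm alone; if it's wrong,
# nudge the weights toward the correct answer -- the "tweak" step.
for _ in range(10):
    for features, label in examples:
        error = label - predict(features)
        if error != 0:
            weights = [w + error * x for w, x in zip(weights, features)]
            bias += error

print([predict(f) for f, _ in examples])  # matches the labels after training
```

The key design point is that the human never writes the dog-vs.-car rule; the human only says “right” or “wrong,” and the numbers adjust themselves.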
One of the most talked-about subfields within AI is deep learning, which is what gives computers the power of speech recognition and computer vision. The “deep” doesn’t refer to any kind of psychological or emotional “deepness”—it simply refers to the fact that a computer goes through different layers of processing before it can deliver a final result. The more layers, the “deeper” the learning.
For example, take the images of dogs again. One way to teach computers about dogs is to train them to recognize the unique features that you can find in a dog. So, the first layer of analysis might look for the typical eyes found on a dog. The second layer might look for the typical nose found on a dog. And the third layer might look for the typical ears found on a dog. If an image contains all of these features of “dog-ness,” then it must be a dog.
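As a rough sketch, you can picture each layer as a function that checks one feature, with the final verdict combining them all. The feature checks below are hypothetical stand-ins; a real deep network learns its feature detectors from data rather than having them written by hand:

```python
# Each "layer" checks one hypothetical dog feature in a labeled image.

def has_dog_eyes(image):    # layer 1
    return image.get("eyes") == "round"

def has_dog_nose(image):    # layer 2
    return image.get("nose") == "wet"

def has_dog_ears(image):    # layer 3
    return image.get("ears") == "floppy"

def classify(image):
    layers = [has_dog_eyes, has_dog_nose, has_dog_ears]
    # Pass the image through every layer; all features together add up
    # to "dog-ness".
    if all(layer(image) for layer in layers):
        return "dog"
    return "not a dog"

print(classify({"eyes": "round", "nose": "wet", "ears": "floppy"}))  # dog
print(classify({"eyes": "headlights"}))                              # not a dog
```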
Deep Learning in Action
So how do we know that a computer has really learned? After all, it could be the case that the computer is simply guessing and got the answer right. The true test of learning, say computer scientists, is whether a computer can perform a task with improving ability over time. If performance improves, then one can say that it has “learned.” (In much the same way, a human “learns” a foreign language by gradually using it with fewer and fewer mistakes.)
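One way to picture “learning as improving performance” is a trivial learner that memorizes its corrections, with mistakes counted on each pass. The clues and labels are made up; the point is the downward trend in errors, not the (deliberately simple) learner:

```python
# A learner that remembers each correction it receives.
examples = [("bark", "dog"), ("engine", "car"), ("tail", "dog"), ("wheel", "car")]
memory = {}

for round_number in range(1, 4):
    mistakes = 0
    for clue, answer in examples:
        guess = memory.get(clue, "car")  # default guess before learning
        if guess != answer:
            mistakes += 1
            memory[clue] = answer        # remember the correction
    print(f"round {round_number}: {mistakes} mistakes")
```

Because the error count falls round over round and stays down, we can say, by the computer scientists’ test, that this system has “learned.”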
You can think of this using the example of self-driving cars, a popular application for deep learning. You can define the task of a self-driving car as getting from Point A to Point B without crashing. Over time, a self-driving car will be able to navigate a specific course without crashing. But can it handle the demands of being on the road with other cars and drivers? If a driverless car is able to negotiate a course repeatedly without crashing, then one can say it has learned.
Other Applications for Deep Learning
Of course, the applications for machine learning go way beyond just recognizing cars and dogs in images. What other patterns can computers learn to recognize? We’ve already seen machine learning applied to speech recognition and text recognition, but anywhere there’s a pattern, the potential exists for machine learning.
For example, scientists are now teaching machines to recognize different painters and different painting styles. You could show a machine a Picasso, and it will recognize the key characteristics of what makes a Picasso a “Picasso.” In fact, one of the hottest app crazes of 2016 was the launch of AI-powered apps like Prisma, which convert any photo you might have on your phone into a Picasso (or a Van Gogh).
And some Silicon Valley VCs say that AI systems could soon replace doctors. If you simplify what doctors do all day, you could say that they recognize patterns of disease. If you come into a doctor’s office during the winter with a cough, a runny nose, and a slight fever, a doctor might recognize that pattern as being “the flu.” You can see where computers could do even better than doctors, as long as the data can be broken down into digital bits. They might be able to recognize patterns in brain scans, for example, and detect a malignant tumor. Again, it’s the “dog vs. car” problem, but this time applied to medicine.
Why Are People So Scared?
Based on the above, it seems that AI, machine learning, and deep learning have the potential to make our lives easier, not harder. So why do top innovators like Elon Musk, Bill Gates, and Stephen Hawking have such concerns about the future of AI? At some point, they say, machines may become smarter than humans, and we won’t know what they’re thinking or what they will do. Even worse, say AI theorists like Nick Bostrom (author of “Superintelligence”), artificially intelligent machines may pose an existential risk to humans.
Just as a chess novice has no chance against a chess grandmaster, it may be the case that humans will have no chance against computers. Even ignoring all of the dystopian scenarios for superintelligent machines, what happens if a computer misdiagnoses a patient and a human is unable to provide a second opinion because that computer is seen as infallible? What happens if computers programmed to recognize patterns in the stock market decide to all start selling at once, resulting in a “flash crash”? What happens when autonomous weapons are trained incorrectly, and we end up with battlefield fatalities as a result of “friendly fire” between man and machine?