
Algorithm, explain yourself!

Dec. 4, 2017

We are surrounded by proprietary algorithms that advise us, make decisions for us and about us, and monitor what we do. My phone can measure my heart rate and SpO2 level and track where I go and how fast. At least for heart rate, SpO2, and distance, it provides numerical answers that can be checked against other devices. But it also measures stress, presumably using heart-rate variability and similar parameters, and reports a result on a scale from low to high that is difficult to interpret. If data about my stressful life leaks, could I be denied favorable insurance rates?
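How such a score is actually derived is anyone's guess, which is part of the problem, but the general shape of it can be sketched in a few lines. The Python snippet below computes RMSSD, a standard heart-rate-variability metric, from beat-to-beat intervals and buckets it into a rough low/medium/high scale. The thresholds and the function name are made up for illustration; this is a guess at the kind of opaque mapping involved, not any vendor's actual algorithm.

```python
import numpy as np

# Illustrative sketch only: thresholds are invented, not a vendor's algorithm.
def stress_level(rr_intervals_ms: np.ndarray) -> str:
    """rr_intervals_ms: consecutive beat-to-beat (R-R) intervals in milliseconds."""
    diffs = np.diff(rr_intervals_ms)
    rmssd = np.sqrt(np.mean(diffs ** 2))   # RMSSD, a common HRV metric
    # Lower heart-rate variability is generally associated with higher stress.
    if rmssd > 50:
        return "low"
    elif rmssd > 25:
        return "medium"
    return "high"

print(stress_level(np.array([820, 810, 845, 790, 830, 805], dtype=float)))
```

Even with the arithmetic laid bare, the interpretation still hinges on thresholds the user never sees, which is exactly what makes a "low to high" readout hard to trust.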

On a more serious note, proprietary algorithms are used to determine credit-worthiness and influence the sentences convicted criminals receive. If an algorithm gives evidence against you in a court of law, should you have a right to cross-examine its developer? In the case of deep learning, the developer might have no idea why the algorithm recommended six years in prison instead of six months of probation.

Even if you are not ensnared in the criminal-justice system, an AI could flag you as a likely future perpetrator and advise police to pay extra attention to you, perhaps based on faulty data. In a New York Times article titled “‘Intelligent’ policing and my innocent children,” Bärí A. Williams, a legal and operations executive in the tech industry, writes, “Unjust racial profiling and resulting racial disparities in the criminal justice system certainly don’t depend on artificial intelligence. But when you add it…things get even scarier for black families.”

Such concerns have led to the European Union’s General Data Protection Regulation, which goes into effect in May 2018 and requires companies offering products and services in Europe to follow privacy-by-design principles. If you want to deploy your algorithm in Europe, you’ll need to comply or risk fines of up to 4% of your company’s global turnover, according to Rand Hindi, writing in Entrepreneur.

The regulation addresses issues such as explicit consent, the right to be forgotten, data portability, and—significantly for AI—algorithm transparency. “This one is particularly tricky, as it states that European residents have a ‘right to explanation’ when an automated decision was made about them,” writes Hindi. “The logic behind it is to avoid discrimination and implicit bias by enabling people to go to court if they feel unfairly treated.” He cautions that a requirement for algorithm transparency could effectively prohibit the use of deep learning. Consequently, he adds, “Many researchers are working on explaining how neural networks make decisions, as this will be a requirement before we can hope for AI to enter areas such as medicine or law.”

Cliff Kuang addresses this issue in a New York Times Magazine article titled “Can AI be taught to explain itself?” He describes XAI, or explainable AI, which tackles the disconnect between how machines make decisions and how humans do, with the goal of enabling machines to account for what they are doing in ways humans can understand.

To demonstrate the complexity of the problem, Kuang cites the work of psychologist Michal Kosinski, who fed 200,000 publicly available dating profiles, including images, into an open-source facial-recognition deep-neural-network algorithm. He found that by looking at a picture, the algorithm could predict the sexual orientation of the subject with accuracies from 83% to 91%, vs. about 60% for humans.
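Kuang's description implies a fairly conventional pipeline: convert each photo into a feature vector with a pretrained deep network, then train a simple classifier on those features. The sketch below shows that structure using scikit-learn; the embed_face() helper is a hypothetical stand-in for the open-source face-recognition network, and nothing here is Kosinski's actual code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for the open-source deep face-recognition network:
# maps a photo to a fixed-length feature vector.
def embed_face(image_path: str) -> np.ndarray:
    raise NotImplementedError("Replace with a real face-embedding model.")

# profiles: list of (image_path, label) pairs drawn from the public dating data
def train_classifier(profiles):
    X = np.stack([embed_face(path) for path, _ in profiles])
    y = np.array([label for _, label in profiles])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf, clf.score(X_te, y_te)   # held-out accuracy, analogous to the reported figures
```

The unsettling part is that every step of this pipeline is routine; the inscrutability lives entirely inside the embedding network, whose features no one can readily name.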

What accounts for the algorithm’s success rate? It might have been discovering hormonal signals in facial structure, but the evidence remained fragmentary. In the end, writes Kuang, “…it was impossible to say for sure.”

Kuang notes that AI’s success at playing Go or Jeopardy! is usually attributed to data-crunching power. However, he continues, “Kosinski’s results suggest something stranger: that artificial intelligences often excel by developing whole new ways of seeing, or even thinking, that are inscrutable to us.”

Kuang comments on the work of David Gunning, whom Kuang describes as “one of the most consequential people in the emerging discipline of XAI.” Kuang reports on an encounter Gunning had with an intelligence analyst, who told him she would need to be able to justify any decision an AI might recommend. Writes Kuang, “The analyst was pointing to a legal and ethical motivation for explainability: even if a machine made perfect decisions, a human would still have to take responsibility for them—and if the machine’s rationale was beyond reckoning, that could never happen.”

Gunning is now a program manager at DARPA overseeing an XAI initiative with $75 million in funding. Kuang quotes him as saying, “The real secret is finding a way to put labels on the concepts inside a deep neural net.” One possible solution is designing deep neural networks made up of smaller, more easily understood modules. Another is the Hamlet strategy. If the goal is image recognition, pair the image-recognition network with a language-translation network—the latter could deliver a soliloquy describing what the former is doing.
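In code, the Hamlet strategy amounts to wiring a vision encoder to a language decoder so the second network narrates what the first one sees. The PyTorch sketch below is a minimal, illustrative pairing; the class name, layer sizes, and architecture are assumptions of mine, not anything specified by Gunning or Kuang.

```python
import torch
import torch.nn as nn

# Minimal sketch of pairing an image-recognition net with a language net
# that "narrates" its features. Sizes and structure are illustrative only.
class NarratingClassifier(nn.Module):
    def __init__(self, vocab_size=10000, feat_dim=512, hidden_dim=512):
        super().__init__()
        self.encoder = nn.Sequential(             # stand-in image-recognition network
            nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.decoder = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.to_vocab = nn.Linear(hidden_dim, vocab_size)
        self.init_h = nn.Linear(feat_dim, hidden_dim)

    def forward(self, image, caption_tokens):
        feats = self.encoder(image)                    # what the vision net "sees"
        h0 = self.init_h(feats).unsqueeze(0)           # seed the language net with it
        c0 = torch.zeros_like(h0)
        out, _ = self.decoder(self.embed(caption_tokens), (h0, c0))
        return self.to_vocab(out)                      # word scores: the "soliloquy"
```

The design choice is the point: the explanation is generated by a second learned network, so its faithfulness to the first network's reasoning is itself something that has to be verified.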

Kuang also describes the work of Chris Olah, a research scientist at Google. Olah and fellow researchers developed a feature-visualization tool for probing a deep neural network. The tool starts with a random image of visual noise and then gradually tweaks the image to see how the network responds. The image might eventually morph into one of a dog that the network can recognize, letting researchers study how the network reached its conclusion.
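The procedure Kuang describes resembles what is commonly called activation maximization: start from noise and use gradient ascent to find the input that most strongly excites a chosen output. The sketch below shows the general idea in PyTorch, with a stock pretrained ImageNet classifier standing in for the network under study; it is not Olah's tool, and real feature-visualization tools add regularization so the result looks like a natural image rather than adversarial noise.

```python
import torch
import torchvision.models as models

# A pretrained ImageNet classifier stands in for the network being studied.
model = models.resnet18(weights="IMAGENET1K_V1").eval()

target_class = 207   # an ImageNet dog class (golden retriever), chosen for illustration
img = torch.randn(1, 3, 224, 224, requires_grad=True)   # start from visual noise
optimizer = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    logits = model(img)
    loss = -logits[0, target_class]   # gradient ascent on the target class score
    loss.backward()
    optimizer.step()

# `img` now approximates the input the network "wants to see" for that class.
```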

Such a tool might be useful in explaining the success Kosinski’s AI exhibited in predicting sexual orientation. Kuang concludes that “…with the tool Olah showed me, or one like it, Kosinski might have been able to pull back the curtain on how his mysterious AI was working. It would be as obvious and intuitive as a picture the computer had drawn on its own.”

About the Author

Rick Nelson | Contributing Editor

Rick is currently Contributing Technical Editor. He was Executive Editor of EE from 2011 to 2018. Previously, he served on the staffs of several publications, including EDN and Vision Systems Design, and he has received awards for signed editorials from the American Society of Business Publication Editors. He began his career as a design engineer at General Electric and Litton Industries and earned a BSEE from Penn State.
