
Can We Trust AI?

March 2, 2018
As the pace of advances in artificial intelligence accelerates, mechanisms must be put in place to address responsibility and transparency. Otherwise, there’s a chance the technology could run amok.

Artificial Intelligence (AI) is at the heart of the future of technology. Rarely has there been a technology with more potential to benefit society. AI systems learn from vast amounts of complex, unstructured information and turn it into actionable insights. It’s not unreasonable to expect that within the next couple of decades, due to the growing volumes of data that can be gathered and analyzed, we could make significant medical advances, better analyze climate change, and manage the complexity of the global economy.

Just as it is in people, trust in AI systems will have to be earned over time. However, that doesn’t mean that time alone will solve the issue of trust in AI. We trust things that behave as we expect them to, but like people, AI will make mistakes. This is because despite the rapid advances, AI is still very much in its infancy. Most of the systems we read about today use deep learning, which is just one form of AI. Deep learning is ideal for finding patterns and using them to recognize, categorize, and predict things such as shopper recommendations.
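To make the “finding patterns and predicting” idea concrete, here is a deliberately simplified sketch of a shopper recommendation in plain Python. It uses nearest-neighbor similarity over purchase histories rather than a deep network, and the shoppers and items are invented for illustration, but it captures the same intuition: match a shopper to others with similar behavior, then predict what they might want next.

```python
import math

# Cosine similarity between two purchase vectors (1 = bought, 0 = not).
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

ITEMS = ["laptop", "mouse", "keyboard", "monitor"]
HISTORIES = {                       # hypothetical shoppers and purchases
    "alice": [1, 1, 0, 0],
    "bob":   [1, 1, 1, 0],
    "carol": [0, 0, 1, 1],
}

def recommend(shopper):
    mine = HISTORIES[shopper]
    # Find the other shopper with the most similar purchase history.
    peer = max((s for s in HISTORIES if s != shopper),
               key=lambda s: cosine(mine, HISTORIES[s]))
    # Suggest items the peer bought that this shopper hasn't.
    return [ITEMS[i] for i, (theirs, ours)
            in enumerate(zip(HISTORIES[peer], mine)) if theirs and not ours]

print(recommend("alice"))  # ['keyboard'], learned from bob's history
```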

Nonetheless, these systems can still make mistakes, stemming either from limitations in the training data or from unrecognized bias in the algorithms, born of an incomplete understanding of how the neural-network models operate. The results of these errors are scare stories of “AI going rogue,” such as chatbots that post racist and sexist messages on Twitter, or AI programs that exhibit racial and gender bias. What needs to happen for society to place its trust in AI-based systems?

Why is Society Hesitant About AI?

Advances in AI and its embodiment—robotics—happen daily. Today, robots can perform many human tasks, from vacuuming the carpet to farming, saving us from tedious, time- and labor-intensive work and transforming many professions. However, when AI and robots are mentioned, especially in the media, they often provoke concern and suspicion, with people typically imagining scenes from Blade Runner or Terminator in which the human race kneels to the robots. That is not reality, nor is it likely to be.

There are more warranted discussions, such as those about robots taking jobs and leaving people without the opportunity to earn a living. In some instances AI will replace people, but that has always been the case as technology evolves. Take autonomous vehicles as an example. AI will affect professions such as taxi and truck driving. It will also disrupt the wider automotive industry because it will change car ownership: why buy a car, which typically costs over £2,000 a year to run excluding fuel, when you could request one as and when you need it and not have to worry about parking, servicing, and the many other expenses that come with owning a car?

Another concern is around intelligence, and whether robots could one day be more intelligent than people and “take over.” Robots are already very intelligent, but will they develop cognitive or behavioral intelligence such as feelings or morals? That is the main difference between humans and robots—instinctively knowing what is right and wrong. Moral intelligence is not something that has been mastered with AI, and in all honesty, it’s debatable whether it ever will be, because there’s no worldwide code of ethics from which an algorithm could be created. It also highlights the issue of bias in AI.

“Algorithmic bias” arises when seemingly harmless programming takes on the prejudices of its creators or of the data it’s fed, as in the examples at the start of this article. Put simply, machine bias is human bias. Dealing with the issue depends on technology companies, engineers, and developers all taking visible steps to safeguard against accidentally creating an algorithm that discriminates. By carrying out algorithmic auditing and maintaining transparency at all times, we can go a long way toward keeping bias out of our AI algorithms.
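What might an algorithmic audit check? One basic test, sketched below under simplifying assumptions, compares a model’s rate of favorable outcomes across demographic groups. The predictions, groups, and review threshold are all hypothetical; real audits use richer metrics and real deployment data.

```python
from collections import defaultdict

# Fraction of favorable (positive) predictions per demographic group.
def positive_rates(predictions, groups):
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical model outputs (1 = approved) and group labels.
preds  = [1, 1, 1, 1, 0,  1, 0, 0, 0, 0]
groups = ["a"] * 5 + ["b"] * 5

rates = positive_rates(preds, groups)
gap = max(rates.values()) - min(rates.values())
print(rates)   # {'a': 0.8, 'b': 0.2}
if gap > 0.2:  # illustrative threshold, not an industry standard
    print(f"parity gap of {gap:.1f}: flag the model for human review")
```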

The Importance of Transparency in AI Systems

Going forward, transparency from technology companies and the AI systems they create will be key to addressing any concerns—performance benefits alone will not create acceptance of AI. Yes, we all find digital personal assistants such as Siri and Google Now helpful when we want to find a restaurant or play our favorite song, but that doesn’t mean AI is fully accepted. As consumers become more familiar with AI, awareness of and demand for transparency will grow too, but one question will remain: How do we actually make these systems transparent?

AI can be divided into two main categories: transparent and opaque. For companies to communicate the workings of their AI system, they first need to clarify which category their systems fit into.

Transparent systems use self-learning algorithms that can be audited to show their workings and how they arrived at a decision. Opaque systems cannot; they work things out for themselves and are unable to show their reasoning.

Simpler forms of automated decision-making, such as predictive analytics, tend to follow fairly transparent models. Opaque AI can uncover deep insights beyond the dreams of its developers, but in exchange it takes a degree of control away from them. For example, Google’s speech-to-text recognition and Facebook’s facial recognition are mostly accurate, but the algorithms cannot explain their workings. In the long term, therefore, focus needs to be placed on designing and using systems that don’t just think, but can think and explain, to ensure full transparency.
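To make the distinction tangible, here is a minimal, hypothetical sketch of a “transparent” decision: a linear scorer whose per-feature contributions can be reported alongside its output. The features and weights are invented for illustration; an opaque deep network would return only the decision, with no comparable per-feature account of its millions of learned weights.

```python
# Invented features and weights for a toy loan decision.
WEIGHTS = {"income": 0.5, "years_employed": 0.3, "existing_debt": -0.7}

def score_with_explanation(applicant):
    # Each feature's contribution to the final score is explicit.
    contributions = {f: w * applicant[f] for f, w in WEIGHTS.items()}
    total = sum(contributions.values())
    decision = "approve" if total > 0 else "decline"
    return decision, contributions  # the "workings" an auditor can inspect

decision, why = score_with_explanation(
    {"income": 2.0, "years_employed": 1.5, "existing_debt": 2.5}
)
print(decision)  # decline
for feature, contribution in sorted(why.items(), key=lambda kv: kv[1]):
    print(f"{feature}: {contribution:+.2f}")
# existing_debt: -1.75, years_employed: +0.45, income: +1.00
```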

How Can Developers Help?

Developers have an important role to play in creating and deploying trustworthy AI systems. To be accepted, these systems need to be designed so that they function in accordance with the values of the people and the society they will be part of. But how?

When creating new applications, developers should question the motives behind them. For example, will it make people’s lives better, and would they want it in their own lives? With this mindset, developers will start coming up with the right kinds of applications. It’s also important that developers continuously question how and why machines do what they do, and challenge the process.

There are also things that can be done at the grass-roots level. Technology companies need to involve a broad range of people when creating a new application, product, site, or feature. Diversity means that algorithms are fed a wider variety of data, and there’s a greater chance that any issues will be spotted if a range of people are continuously analyzing the output.

So, Can We Trust AI?

To trust AI, we all have a responsibility to educate ourselves about the advances and the terminology, while technology companies must also be responsible and ensure they are transparent about the AI systems they create and what those systems can do. For example, is a facial-recognition system looking only to verify a person’s identity, or is it also analyzing and recording facial reactions?

More conversations also need to take place regarding the impact of AI on society and culture. For example, how much time should we expect to work in the future, and should there be social policies on pay if certain jobs are done by robots? There’s also the question Bill Gates raised about whether robots should be paid and, in turn, pay taxes, as well as the wider and more complex issue of whether capitalism can survive AI in the coming decades.

The ethical programming of robots must also be a priority—not just the technological. As Professor Stephen Hawking said, “Computers will overtake humans with AI at some stage in the next 100 years. When that happens, we need to make sure the computers have goals aligned with ours.” This can only be done through transparency and collaboration in the responsible advancement of AI.

AI is not going away; the pace of advances will only increase with time. So to ensure we can trust AI, we all have a role to play in making sure it’s designed and used responsibly.

Russell James is Vice President of Vision & AI, PowerVR, at Imagination Technologies.
