Ethical Considerations in Autonomous-System Design

With the lightning-quick pace of innovation in driverless cars and other autonomous systems, are manufacturers and designers neglecting the human aspects of this burgeoning industry?

Technologies that operate without human intervention have been emerging rapidly in diverse applications, from commercial transportation such as driverless cars to military vehicles like remote-controlled tanks. Support from eager investors is abundant; they are perhaps anticipating a market boom akin to that of the Internet of Things (IoT).

While an autonomous system (abbreviated AS for the rest of this article) has its perks, relegating tedious manual tasks to machines in favor of a more comfortable lifestyle for consumers, it attracts a lot of worry because the concept is new and lacks a firm foundation for guaranteeing safety. The risk of an unanticipated catastrophe is very real. There is still not enough literature that addresses questions of safety, and much of the literature that does address our qualms falls short of a satisfactory answer.

Electronic Design’s Lou Frenzel shares his objections to autonomous vehicles in his blog posts (read “Forget this Self-Driving Car Nonsense” and “Just Say No to the Driverless Car”). Personally, I share a similar distaste for autonomous vehicles, because I do not find machine learning (or at least the current state of its algorithms) sufficient for unmanned driving. I do not condemn it altogether, though. Perhaps a more accurate GPS technology working in concert with a more effective learning algorithm will boost my confidence.

Aside from immediate consequences, such innovations are potentially deleterious in the long run, necessitating regulation and proper training for the designers and engineers involved. As the overused saying goes, “An ounce of prevention is worth a pound of cure.” Integrating preemptive measures against plausible long-term threats into an AS can save a company from lawsuits or bankruptcy, and a consumer his or her life. After all, human well-being is always the top priority, no matter the case (have you heard of Aristotle’s eudaimonia?).

Let’s now discuss some significant issues and considerations in designing an AS.

The Autonomous Intelligent System vs. Human Beings

The interaction between an AS and a human being is unique to the situation. But how do we know that human rights aren’t infringed, given varying social and cultural norms? Obviously, it isn’t realistic to specify a universal set of constraints for everyone to follow. Rules tailored to where, and for what purpose, a system will be deployed have to be laid out, and a clear delineation is a must to avoid compromising situations. The artificial intelligence (AI) of an industrial robot that handles equipment on a production line, for example, must differ greatly from the AI of a robot in healthcare, which requires far more stringency in environmental sensitivity.
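The idea of deployment-tailored rules can be sketched as a simple lookup of safety profiles that constrain what the machine may do. This is a minimal illustration only; the profile names and thresholds below are hypothetical, not drawn from any published standard:

```python
from dataclasses import dataclass

# Hypothetical safety profiles: every name and threshold here is
# illustrative, not taken from any real regulation or standard.
@dataclass(frozen=True)
class SafetyProfile:
    name: str
    min_human_distance_m: float   # minimum separation from people
    max_speed_mps: float          # speed cap for moving parts
    soft_contact_required: bool   # padded / force-limited actuators

PROFILES = {
    "industrial": SafetyProfile("industrial", 2.0, 1.5, False),
    "healthcare": SafetyProfile("healthcare", 0.0, 0.25, True),
}

def check_action(profile: SafetyProfile,
                 human_distance_m: float,
                 speed_mps: float) -> bool:
    """Return True only if the proposed motion respects the profile."""
    return (human_distance_m >= profile.min_human_distance_m
            and speed_mps <= profile.max_speed_mps)

# A healthcare robot may work in contact with a patient, but slowly:
assert check_action(PROFILES["healthcare"], 0.0, 0.2)
# The same motion violates an industrial profile's separation rule:
assert not check_action(PROFILES["industrial"], 0.0, 0.2)
```

The point of the sketch is that the constraint set is data, not hard-coded behavior, so the same robot can be redeployed under a different profile without rewriting its control logic.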

Many existing documents define the rights of an individual, the “Universal Declaration of Human Rights” being the most well-known. Cultural diversity, however, makes any such definition particular to a location. Because of this multiplicity of norms, the odds of conflict between values aren’t remote; an AS can thus carry algorithmic biases that disadvantage a particular group no matter what.

In my opinion, there is one overriding element in defining human rights, regardless of geography or culture: the safeguard against physical harm of any form. Moral offenses, while still unacceptable, can be resolved after the problem is discovered; arising tensions can be pacified when both sides approach each other with tolerance and maturity. Physical harm, or at worst death, is irreversible. It can set off unprecedented chain reactions of violence, hatred, and anger. Remember what the assassination of an Austrian archduke, on the fateful day of June 28, 1914, did to the world?

The Need for Methodologies to Guide the Design Process

When robotics engineers are asked how they designed their robots to establish an adequate level of trust with the people they will interact with, they can’t just respond with “Oh! I just kept in mind Isaac Asimov’s three laws of robotics. I’m sure it won’t even hurt a fly. Sha-la-la la-la…”

Even after all the definitions are identified, the actual act of merging them into the design process remains convoluted. How do I make my robot comply with this culture’s taboos? Should I program it with the same level of sensitivity toward cattle that it has toward people (cattle are considered sacred in Hinduism)? Will my robot offend anyone when it makes this gesture, or if it’s shaped this way? Again, in the absence of a clear official guide to such pressing concerns, the answers will vary widely.

Academic institutions seldom prioritize courses that discuss ethics in AS. Maybe such a fiddly topic doesn’t necessitate in-depth study or discussion; the underlying arguments, after all, seem to follow from common sense. But do we really need models for intercultural education to account for specific issues in an AS?

How often do you see a news article where an unmanned aerial vehicle (UAV) has struck the wrong target? Imagine those poor victims, whose lives were unreasonably cut short because of… what? A measly glitch in the AS? How about accidents involving driverless cars? Is there a need to raise awareness of such loopholes so that the tech community can provide prompt solutions? Do we need better documentation practices for such events so that the next designers will not repeat the same flaws?

In such accounts, the AS is obviously accountable. But what about cases where accountability is obscure? There is also the challenge of creating a system that can properly identify when the AS is at fault, so that an effective solution can be implemented.
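One prerequisite for settling accountability is a trustworthy record of what the AS sensed and decided at the moment of an incident. As a minimal sketch (the event fields are hypothetical), an append-only, hash-chained log makes after-the-fact tampering with the record detectable:

```python
import hashlib
import json

class IncidentLog:
    """Append-only event log. Each entry's hash covers the previous
    entry's hash, so altering any past entry breaks the chain."""

    GENESIS = "0" * 64  # placeholder hash before the first entry

    def __init__(self) -> None:
        self.entries = []
        self._prev_hash = self.GENESIS

    def record(self, event: dict) -> None:
        payload = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256(
            (self._prev_hash + payload).encode()).hexdigest()
        self.entries.append({"event": event, "hash": digest})
        self._prev_hash = digest

    def verify(self) -> bool:
        """Recompute the chain; False means some entry was altered."""
        prev = self.GENESIS
        for entry in self.entries:
            payload = json.dumps(entry["event"], sort_keys=True)
            expected = hashlib.sha256(
                (prev + payload).encode()).hexdigest()
            if expected != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = IncidentLog()
log.record({"t": 0.0, "sensor": "lidar", "decision": "brake"})
log.record({"t": 0.1, "sensor": "camera", "decision": "steer_left"})
assert log.verify()
log.entries[0]["event"]["decision"] = "accelerate"  # simulated tampering
assert not log.verify()
```

Such a record does not assign blame by itself, but it gives investigators and third-party auditors evidence they can trust was not rewritten after the fact.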

The AS designer is also exposed to the risk of self-bias: an ostensible sense of security can be reached even when peril is imminent. A third party responsible for the AS’s value alignment can avert such a danger.

Finally, an AS is bound to evolve as technology moves forward. In effect, both the ethical and the safety issues will change, and it is incumbent on the AS’s manufacturers to adjust appropriately.

I usually end my articles with an interrogative sentence to provoke afterthought, but I’m afraid I’ve given enough above (in fact, my college professors once complained that I ask too many questions). So, instead, I end with a declarative sentence: a resolution that any autonomous project I embark on in the future will be weighed against the thoughts and insights mentioned in this article.
