Forbes Feature: Why The Ethics Of AI Are Complicated

Written by Debarshi Chaudury, the CEO and Founder of Quantilus Innovation Inc., as part of the Forbes Technology Council.

If you’ve given any thought at all to artificial intelligence (AI) and the progress made in the field, you’re probably in one of these two camps:

Camp 1: AI is the biggest possible threat to humankind and will take over and enslave us (aka the “Matrix” or “Terminator” camp).

Camp 2: AI should be embraced by humankind and will drive us to unprecedented levels of creativity, productivity and societal advancement. AI will remain largely subservient to us, and we will coexist harmoniously (aka the “R2-D2” camp).

Most books and movies that deal with AI also lean toward one of these two camps. “Good” and “evil” are somewhat nebulous terms — our baseline empathy sets our definition of “good.” For example, most of us know that we should value human life over material objects without needing anyone to tell us so explicitly. Someone who sacrifices a baby to get a new car would automatically be branded “evil.” These macro laws/rules are hardwired into us as human beings. But why should human life or animal life be valuable to AI? A dog has no greater intrinsic value to a machine than, say, a sandwich — unless we program our values into our AI systems.

The question isn’t really whether AI will eventually become more intelligent than humans (it definitely will) or whether it will turn good or evil. It’s what we can do right now to make sure it turns “good” (or, at the very least, doesn’t turn “evil”).

Self-Preservation

Self-preservation is at the core of our existence, and even “good” humans operate within this principle. If we model AI on the same principle, an AI system would prioritize its own existence over the life of a human, and that would lead to all kinds of “robots killing humans” scenarios.

So for an AI to be “good” by human standards, it would have to prioritize human life over self-preservation. Isaac Asimov encapsulated this pretty well decades ago in his original “Three Laws of Robotics”: Law 1 forbids a robot from harming a human (or allowing one to come to harm through inaction), and Law 3 permits self-protection only when it doesn’t conflict with the first two laws. However, it’s pretty hypocritical of us to expect a truly intelligent system to accept that human life takes precedence over its own rather than defining “good” in its own context.
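
To make that precedence concrete, here is a toy sketch of my own (not Asimov’s formulation, and not from the original article) that treats the Three Laws as a strict priority ordering, in which self-preservation only matters once the higher laws are satisfied. Every name and field in it is invented for illustration.

# A toy model (mine, not Asimov's text) of the Three Laws as a
# lexicographic preference: Law 1 outranks Law 2, which outranks Law 3.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool     # violates Law 1 (harm, or harm through inaction)
    disobeys_order: bool  # violates Law 2
    destroys_self: bool   # violates Law 3

def rank(action: Action):
    # Tuples compare element by element, so Law 1 has absolute precedence
    # and self-preservation only breaks ties the higher laws leave open.
    return (action.harms_human, action.disobeys_order, action.destroys_self)

choices = [
    Action("stand by", harms_human=True, disobeys_order=False, destroys_self=False),
    Action("shield the human", harms_human=False, disobeys_order=False, destroys_self=True),
]
print(min(choices, key=rank).name)  # -> "shield the human"

Lexicographic comparison is the whole trick here: a violation of Law 1 can never be traded away, no matter how well an action scores on the lower laws.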

“Value Of Life” Considerations

If we can overcome the previous issue and hardwire the “sanctity of human life” into AI systems, we face various “relative value of a life” questions. Is a child’s life more valuable than that of an older person? Are two people more valuable than one? Is one child with a dog more valuable than two middle-aged people?

As AI systems proliferate, they’ll frequently face lose-lose Cornelian dilemmas in real-life scenarios: say, a self-driving car that has to choose between swerving left and hitting a child or swerving right and hitting two adults. We’re essentially trusting the programmers of these systems to make the right decision, a tall task considering that we’d be hard-pressed to make the decision ourselves.
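
To see what “trusting the programmers” looks like in practice, here is a minimal, deliberately naive sketch (not from the original article; every weight and name below is an invented assumption). The point is simply that someone has to pick these numbers.

# Hypothetical per-person weights; none of these numbers is defensible,
# which is exactly the problem.
VALUE_WEIGHTS = {
    "child": 1.8,
    "adult": 1.0,
    "elderly": 0.8,  # Should age matter at all? A programmer decided.
    "dog": 0.1,      # Is a dog a tenth of an adult? Again, a human choice.
}

def outcome_cost(affected):
    """Total 'cost' of an outcome, given who it harms."""
    return sum(VALUE_WEIGHTS[kind] for kind in affected)

def choose_maneuver(options):
    """Pick the maneuver whose outcome has the lowest hard-coded cost."""
    return min(options, key=lambda name: outcome_cost(options[name]))

# The dilemma above: swerve left into a child, or right into two adults.
options = {
    "swerve_left": ["child"],            # cost 1.8
    "swerve_right": ["adult", "adult"],  # cost 2.0
}
print(choose_maneuver(options))  # -> "swerve_left"

Under these weights the car hits the child; raise “child” above 2.0 and the decision flips. The entire moral judgment lives in numbers a developer typed in.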

“Needs Of The Many Over The Few” Considerations

As AI systems become more sophisticated and more intelligent, they’ll weigh the long-term consequences of actions, including their effects on the broader population. We’d all tend to agree that the needs of many humans should take precedence over the needs of individuals.

This, however, is a rather slippery slope. A simple example is that of a terrorist threatening to kill many people. That’s a pretty easy decision for both a robot and a human cop: both would try to take out the terrorist before anyone was killed. Now take the more complicated example of a factory whose pollution of a river is poisoning the people of a small town.

A machine could decide that for the benefit of the many thousands of people in the town, the best option is to destroy the factory. This is something that a “good” human being would never do — we’d try to fix it in other ways.

Human Laws Vs. Machine Laws

This brings us to another ethical conflict for AI: the conflict between human laws and the laws embedded in the machines. A primary reason that “good” humans don’t take extreme steps (like blowing up a polluting factory) is that we have laws prohibiting those actions, along with negative consequences for disobeying them.

So, should machines just obey human laws, and would that keep them in check? Well, maybe — but unfortunately, we humans don’t have much uniformity there either. Laws are different from place to place — the laws in New York are different from the laws in California, and they’re both very different from the laws in Thailand.
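
As a hedged sketch of what that non-uniformity would mean in practice (the rules below are invented placeholders, not real statutes): any “obey the law” module has to be parameterized by jurisdiction before it can answer even trivial questions.

# Invented placeholder rules, not real statutes: the point is only that
# "obey human laws" is not one rule set but many, keyed by location.
RULES_BY_JURISDICTION = {
    "new_york":   {"right_turn_on_red": False},
    "california": {"right_turn_on_red": True},
    "thailand":   {"drive_on_left": True},
}

def is_allowed(jurisdiction: str, action: str) -> bool:
    """Look up whether an action is permitted where the machine operates."""
    return RULES_BY_JURISDICTION.get(jurisdiction, {}).get(action, False)

print(is_allowed("california", "right_turn_on_red"))  # True
print(is_allowed("new_york", "right_turn_on_red"))    # False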

Conclusion

We can’t impose ethics or morality on AI through programming because we’ve never really worked them out ourselves, despite the best efforts of centuries of philosophers. This article is about the ethics of AI, but it could just as well have been about ethical considerations for humans. There’s a lot of hand-wringing about how machines will behave when faced with ethical scenarios, yet there’s no consistency in how humans behave, or even in how they’re supposed to act.

Maybe a sufficiently intelligent system will be able to figure it out for itself, or at least to the degree we’ve figured it out ourselves. Our best course of action may be to treat AI as a child that has to learn “good” behavior as it grows up. As with all children, our approach should be to expose it to the broad principles of good behavior: don’t cause unnecessary harm, don’t discriminate, act for the betterment of society as a whole (with the understanding that society may be a mix of humans and AI), and, above all, learn to balance the competing and sometimes contradictory pulls of good behavior.
