The Ethics of Robots: Is There an Algorithm for Morality?

Picture this: you’re driving through an intersection, and suddenly a group of people run into the street. Would you swerve out of the way, potentially endangering yourself? What if it was a group of children? A gaggle of geese?

What if you’re in the car, but you aren’t the driver? No, it’s not your wife, brother, or best friend driving. The car is driving. How do you think your self-driving car would react? And, perhaps more interestingly, how would you prefer it to react?

The million-dollar question is: should a robot be programmed to decide right from wrong?

Self-Driving Cars

Put on your seatbelts; the future is here.

Ubers are already driving themselves around Pennsylvania. Check out this video of two Pittsburgh Steelers riding in a self-driving car through Pittsburgh.

And while self-driving cars won't roll out to a much larger audience for a few more years, both Ford and Volvo have announced plans to debut completely self-driving cars by 2021. Unlike the self-driving cars Uber currently uses, those vehicles won't include pedals or a steering wheel.

Robots in the Battlefield

The military is increasingly placing robots in decision-making roles that touch on life-or-death situations: whom to save first, whose life is more valuable, and whether a perceived enemy is really just an innocent civilian. This may look like robots responding ethically to different situations, and it raises an important question: should a robot be programmed to act ethically, or should we purposefully leave out sophisticated moral agency?

Currently, the military prohibits fully autonomous robots from being used for lethal purposes. Instead, an authorized human operator selects targets for the robot, and most military robots are controlled remotely by human operators.

Military robots fall into two broad categories: unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs). UGVs move on wheels or tracks and perform a wide variety of tasks, including reconnaissance, threat assessment, and examining potentially hostile situations.

(Image credit: Roland Hoskins, Daily Mail UK)

Unmanned aerial vehicles, better known as drones, are essentially automated or remotely controlled helicopters and planes. They are used mainly for reconnaissance and intelligence gathering.

(Image source: Reed Switch Developments)

The Ethics of a Robot Soldier

Imagine you’re a military commander with the mission of taking out an enemy surrounded by civilians. Do you put those civilians at risk, or hold off and wait for another opportunity?

How do you prioritize the lives of civilians and your mission?

These are terribly difficult decisions for human commanders to make. Should we program a robot to make such choices?

More and more, robots will be the agents of action in ethically tricky situations like these.

Recommended Reading: Six Robots That Sum Up 2016

A Robot’s Ethical Limitations

Robots are electronic devices programmed to perform tasks that humans can do. But are there tasks robots shouldn't be involved in? The role of robots in ethical decision-making is being contested in the robotics community.

Can robots be qualified to make ethical decisions? Or are humans uniquely ‘programmed’ with a moral compass?

The Source of a Robot’s Ethical Code

If we do program ethics into robots, who defines these ethics?

A robot that makes moral decisions inherits its ethical system from its designer. The ethics of the designer behind the robot are at the root of a growing debate in the robotics community.

Whose ethical system should be the basis for these robots, and in whose interest should they be used? As we begin implementing ethical systems in military and police robots in particular, we need to ask whether it's ethical to use robots in the interest of one group against another.

This became a hot-button issue after the Dallas shooting in July 2016, when a gunman opened fire on Dallas police, killing five officers. Police responded by sending in a bomb squad robot carrying an explosive, which they detonated, killing the shooter. Following the incident, many questioned what this meant for the future use of robots in such situations.

The 3 Degrees of Robot Ethics

Whether it's navigating a war zone or city traffic, a robot must be equipped with an appropriate ethical framework to respond to whatever situations arise.

Members of the ‘Technology and Ethics’ community differ on how these decision-making skills should be programmed into robots. According to one such expert, Wendell Wallach, there are three degrees of morality that can be programmed into a robot:

Operational morality means the designer programs moral responses for every situation he or she can predict the robot will encounter.

Functional morality goes a step further, covering situations the robot was not explicitly prepared for. In those cases, the robot draws on programmed ethical reasoning rather than a pre-scripted answer.

Finally, a designer could give the robot moral agency: the ability to learn ethical lessons after it's programmed and, in turn, to develop and evolve its own moral compass.

Anyone else thinking about Westworld?
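For the technically curious, here's a rough sketch in Python of how those three degrees might differ in code. None of it comes from Wallach's work or any real robotics system; the scenarios, rules, and harm scores are invented purely for illustration. Operational morality is a lookup table of pre-scripted responses, functional morality falls back on a programmed rule (here, a crude "minimize estimated harm" calculation), and moral agency lets the robot revise its own estimates from experience.

```python
# Purely illustrative sketch -- scenarios, rules, and scores are made up.

# Operational morality: the designer enumerates responses for every
# situation they can predict in advance (a simple lookup table).
OPERATIONAL_RESPONSES = {
    "pedestrians_in_road": "brake",
    "animal_in_road": "slow_down",
    "clear_road": "proceed",
}

def operational_decision(situation: str) -> str:
    """Return the pre-scripted response; fails on anything unforeseen."""
    return OPERATIONAL_RESPONSES[situation]  # raises KeyError for new situations


# Functional morality: no canned answer, so the robot applies a programmed
# ethical rule -- here, pick the option with the lowest estimated harm.
def functional_decision(options: dict[str, float]) -> str:
    """options maps each possible action to an estimated harm score."""
    return min(options, key=options.get)


# Moral agency: the robot updates its own harm estimates from feedback
# after deployment, gradually developing its own "moral compass."
class MoralAgent:
    def __init__(self) -> None:
        self.harm_estimates: dict[str, float] = {}

    def decide(self, options: list[str]) -> str:
        # Prefer the option it currently believes causes the least harm.
        return min(options, key=lambda o: self.harm_estimates.get(o, 0.5))

    def learn(self, option: str, observed_harm: float, rate: float = 0.1) -> None:
        # Nudge the stored estimate toward the harm actually observed.
        current = self.harm_estimates.get(option, 0.5)
        self.harm_estimates[option] = current + rate * (observed_harm - current)


if __name__ == "__main__":
    print(operational_decision("pedestrians_in_road"))              # brake
    print(functional_decision({"swerve": 0.7, "brake_hard": 0.2}))  # brake_hard
    agent = MoralAgent()
    agent.learn("swerve", observed_harm=0.9)
    print(agent.decide(["swerve", "brake_hard"]))                   # brake_hard
```

Run it and the first call returns a canned response, the second weighs options it was never explicitly scripted for, and the third shifts its choices as the agent "learns" from outcomes.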

In deciding whether or not to give robots moral agency, humans will need to determine whether empathy and sympathy are even programmable qualities. Both empathy and sympathy require the ability to interpret actions and perceive the motivations or feelings of others. Human empathy is a mixture of biological processes and shared experience, both of which are deeply rooted in what it means to be human. And even if we determine that we could program these qualities, does that mean we should do it?

Building A Robot Code of Ethics

Robotics expert Ronald Arkin believes in building a moral compass for robots because he thinks robotic soldiers have incredible moral potential. He even suggests the possibility that robots will outperform human soldiers. He states, “It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield, but I am convinced that they can perform more ethically than human soldiers are capable of.”

Can robots execute moral decisions more efficiently and objectively than humans? As moral members of society, should robots be subject to the same justice that governs human members of society? And what does this all mean for our future?

Learn more about the robotics industry and its potential impact on transportation, warfare, and society in the years to come.






