GUEST EDITORIAL

The Problem with Automated Ethics

According to the Association for Safe International Road Travel, nearly 1.3 million people die worldwide each year as a result of traffic accidents. Given the tragic prevalence of these events, it is unusual for any one incident to attract widespread attention. Yet, on May 7, 2016, one accident did just that. Joshua Brown was killed when his Tesla Model S plowed into a tractor-trailer in Florida while traveling at 65 miles per hour. At the time of the accident, Brown was using the car’s Autopilot feature, which did not activate the braking system because it failed to detect the trailer crossing Brown’s path against the background of a bright sky. Reports on Brown’s fatal crash called into question both the safety and the wisdom of self-driving cars, including the decision-making processes that must be programmed into these autonomous vehicles.

While Tesla defends the safety of its Autopilot feature, other car manufacturers are testing the waters of autonomous vehicles with some hesitation. Reports indicate that Audi and GM, for example, are approaching this new era of automobile technology in incremental ways to address a variety of connected human and technical issues. One of the most difficult issues concerns the ethical aspects of autonomous car design. Specifically, car manufacturers must decide the rules that guide the car when lives are at stake. In these situations, the car makes life-or-death choices, notably whether passengers or pedestrians are prioritized. The writer George Dvorsky aptly noted that we have entered an age when Philippa Foot’s classic trolley problem is no longer just a thought exercise in choosing between two evils.

A 2016 Science study by Bonnefon, Shariff, and Rahwan found that survey participants approved of autonomous vehicles that are programmed to sacrifice passengers in order to save others, but at the same time, they would not want to ride in such vehicles. This finding reveals the contrast between the utilitarian ethic (“save the most lives”) and the survival instinct (“but me first”). This study is a reminder that while there may be value in machines that are programmed to prioritize the greater good, our own human condition affects emerging technologies. Transportation is not the only arena where this impact is seen. Rather, as the world evolves to embrace automation, ethical programming will emerge as a concern in other contexts.

For example, the use of technology in the workplace is seen as an opportunity to improve efficiency, including in human resource management functions. Yet in the context of personnel selection, other priorities must be balanced against efficiency. The commitments to equal employment opportunity and nondiscrimination are important considerations when selecting applicants. Recent experiments, though, demonstrate the prevalence of biased decision-making. Research shows that an applicant’s gender, race, religious affiliation, and sexuality can affect the likelihood of being selected for employment. These findings remind us that even so-called objective processes involve subjectivity. To address this issue, automation of the process has been recommended.

Regrettably, computer programs designed to screen applicants show little promise in removing subjectivity. A 2015 study by Feldman, Friedler, Moeller, Scheidegger, and Venkatasubramanian found that automated approaches are not completely free from selection bias, largely because their programming is derived from actual selection data. The authors note that algorithms can “learn” and have the potential to make choices that mimic human ones. Computers programmed to be objective may adapt, for instance, to select candidates based on information that serves as a proxy for the individual’s race or gender, even if that protected information is not disclosed. Thus, the potential for discriminatory hiring remains.
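The proxy mechanism described above can be made concrete with a minimal sketch. This toy example is not drawn from the Feldman et al. study; the feature names, data, and threshold are all hypothetical. A screening rule is “trained” on historical hiring outcomes using only an ostensibly neutral feature (here, zip code), yet because that feature correlates with group membership in the history, the learned rule reproduces the disparity without ever seeing the protected attribute.

```python
# Hypothetical illustration of proxy bias: a screening rule trained on
# historical outcomes discriminates by group without using group data.
from collections import defaultdict

# Fabricated historical records: (zip_code, protected_group, was_hired).
# Zip 10001 is dominated by group A; zip 20002 by group B.
history = [
    ("10001", "A", True), ("10001", "A", True), ("10001", "A", False),
    ("20002", "B", False), ("20002", "B", False), ("20002", "B", True),
] * 10

# "Train" by computing the historical hire rate per zip code.
# Note that the protected_group column is never consulted.
hires, totals = defaultdict(int), defaultdict(int)
for zip_code, _, hired in history:
    totals[zip_code] += 1
    hires[zip_code] += hired

def screen(zip_code, threshold=0.5):
    """Advance an applicant if their zip code's historical hire rate clears the bar."""
    return hires[zip_code] / totals[zip_code] >= threshold

# The rule references only zip code, yet its outcomes split along group lines,
# because zip code encodes the bias present in the historical decisions.
print(screen("10001"))  # True: applicants from the group-A-dominated zip advance
print(screen("20002"))  # False: applicants from the group-B-dominated zip do not
```

The point of the sketch is that removing the protected column from the input does not remove the bias; any feature correlated with it can carry the same signal.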

In another illustrative and surprising news story from 2016, machines, for the first time, judged an international beauty contest. The goal was to improve objectivity in the process. Yet, when the system identified the contestants it deemed the most beautiful, people of color were glaringly missing. Analysis determined that the results were a product of the racially unbalanced database that the programmers used to establish the standard of attractiveness. Again, this illustrates how human influence impacts the goals of automation.

Given these examples, it is appropriate to ask about the future of automated decision-making in the workplace and beyond. In the applicant screening and beauty contest examples, the ideal of objective, equitable treatment was affected by human programming choices that resulted in unintentional bias. These lessons are reminders of the connections between people and technology as well as the limitations of automatic processing. In the context of ethical decision-making, prohibitive codes of conduct present an interesting parallel. Prohibitive codes offer a rubric for determining whether a particular action is acceptable or not. If a public official receives a gift, for instance, the code determines whether the official can accept it.
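The yes-no character of a prohibitive code can be sketched as a simple rule check. The gift limit and exemption below are invented for illustration and are not drawn from any actual code of conduct.

```python
# A hypothetical prohibitive-code check: a binary rule on gift acceptance.
# The threshold and exemption list are illustrative, not from any real code.
GIFT_LIMIT = 50.00           # maximum gift value an official may accept
EXEMPT_SOURCES = {"family"}  # sources the rule does not restrict

def may_accept_gift(value, source):
    """Return True only if the gift falls within the rule's bright lines."""
    if source in EXEMPT_SOURCES:
        return True
    return value <= GIFT_LIMIT

print(may_accept_gift(25.00, "vendor"))   # True: under the limit
print(may_accept_gift(100.00, "vendor"))  # False: over the limit
```

Every input maps to exactly one of two outcomes; the rule has no vocabulary for context, intent, or competing values, which is precisely the limitation the next paragraph takes up.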

While yes-no guidelines have utility in terms of respecting rules and regulations, aspirational goals, such as promoting ethical organizations and demonstrating personal integrity, represent broader ideals that are not always achieved through a series of either-or choices. Rather, reaching these ideals may mean balancing competing values in a way that reflects a both-and perspective. Striving for the aspirational requires thoughtful interpretation of the particular situation and appropriate action as determined by the individual. There is no automatic decision-making when different people in the same situation may make different choices.

As we enter an age when algorithms can make decisions with potentially significant consequences, it is necessary to ask whether binary systems, be they prohibitive codes of conduct or automated computer programs, can fully capture the nuances inherent to addressing the ethical challenges of today and tomorrow. To revisit the initial example from the world of transportation, both the desired destination and the roads to get there should be considered with care.
