Steering Toward a Future We May Not Be Ready For

Sep 23, 2022 by Barry Eisenberg

Imagine you are driving on a narrow, unlit, one-lane rural highway. It’s late at night and you are the only one on the road. It’s so dark that you’re not comfortable driving at the 50-mph speed limit, opting instead to cruise between 40 and 45 mph. As you approach a bend to the left, a four-foot-high brick median appears, separating your lane from that of the oncoming traffic.

Just as the road straightens, an on-ramp appears on the right. You glimpse a single headlight emerging from the darkness on the on-ramp. A motorcycle. Judging from your respective speeds, you are sure that you can pass it ahead of the merge point. Just to play it safe and create a cushion, you speed up. But much to your surprise, the motorcycle abruptly speeds up as well. Maybe the driver didn’t see you? Maybe he’s accelerating to get ahead of you?

The motorcycle enters the roadway a mere twenty feet ahead of you. You are going too fast and he’s not accelerating enough to prevent a catastrophe. Panic sets in. Your foot jams on the brake. Your heart races. You’re faced with a horrific choice, a potentially deadly choice – either veer into the brick wall or slam into the motorcycle from behind. You have a split second to decide – less, actually.

What would you do if you were in that situation?

I recently attended a conference on ethics in healthcare. One of the speakers was an ethicist who works on policy for Health & Human Services. She discussed the pervasiveness of technology in healthcare, emphasizing that even though technology is used in virtually every diagnostic and medical procedure, human beings remain in charge of decision-making.

Take robotic surgery, for example. It’s being practiced more and more. Even so, many people report that the term “robotic” makes them uneasy, that it conjures unnerving images of some futuristic sterile operating suite devoid of human oversight… Artoo-Detoos, gowned in surgical gear, whizzing about on hoverboard rollers, sterilizing their hand-like grippers in the scrub sink.  

In its effort to allay such anxiety, UCLA Medical Center describes robotic surgery in soothingly reassuring terms on its website: “The robot is never, ever making decisions or performing incisions. Rather, your surgeon is telling the robot what to do, and the robot allows for greater precision than the human hand on its own. The Robotic system cannot ‘think’ on its own. It only responds to your surgeon’s precise hand and finger movements. Your surgeon is in the operating room, directing the procedure the entire time.”

That’s the state of things today. But the ethicist observed that we may soon be at an inflection point at which decision-making shifts from a human overseer to the machine. Technology can perform calculations far more quickly than humans can and consider a much broader range of factors. That’s an advantage.

But, as an ethicist, her area of interest involves moral judgments. She posed this question: Can technology be programmed to accommodate the values, beliefs, and attitudes we humans bring to decision-making situations? This is not unique to healthcare; it’s happening in every industry. To illustrate, she presented the case of the imminent collision of the car and motorcycle on the highway and how technological advances in vehicles could change how we think about it.

I became intrigued by the idea of ethics and driving and decided to read more about it.

It is shocking, not to mention tragic, just how many car-related fatalities there are – in 2021 alone, there were over 42,000 in the U.S. How many of those involved even a tiny bit of decision-making about what to avoid and what to hit?

I debated with myself about how I would react if I were that driver about to collide with the motorcycle. It’s terrifying to contemplate. Suddenly faced with the very real possibility of inflicting fatal harm, either on yourself or on the driver of the motorcycle, what would you do?

Research shows that the answers people give depend on how they are asked. When they can click a box on an online survey, believing they are answering anonymously, a large majority – about 80% – say they would protect themselves over the other person. But in focus groups, which require us to state our choices in the presence of others, that percentage drops to around 60%.

These findings are not surprising. Social scientists tell us that we are inclined to respond quite differently in those two settings. When we state our intentions publicly, we’re apt to inflate our willingness to self-sacrifice.

Our conviction to protect ourselves softens when other variables are added to the dilemma. For example, suppose you saw a child riding on the back of the motorcycle. Would that change your response?  Or suppose you recognized the motorcycle and knew the rider?

Or consider this scenario: You don’t know the person directly, but you know something about them. For example, what if you were aware that the motorcyclist was racing to say good-bye to a dying friend? Or if you knew the driver was like you in ways you regard as important… shared religion, similar job, grew up in a similar town, or simply, and however insignificant it may seem, that she was a huge fan of the musical Hamilton, just as you are.

Would it make a difference if you knew the person was a scientist who was on the verge of discovering a cure for a rare disease? Or was a convicted felon who served jail time for robbery? Would it matter if the person was a parent with two young children? A single parent?

Such questions seem pointless to even ask because there would be no way of knowing in the moment. All you would know in that instant is that you are staring at a horrific choice: crash into a wall to avert a deadly collision or smash into a motorcycle with such impact as to make the driver’s death unavoidable. That’s the only information you have. Nothing else. Even if by some miracle you had some information on the driver, incorporating it into your decision-making would be nearly impossible given how little time you have.

Thus, the questions are meaningless because there’s no possible way to have that information. And even if you did, there’s no possible way to process it in the allotted time. Right?

Well, you might not be able to. But fast forward a few years and your self-driving car might be able to assess all these variables in a nanosecond.

The advances in automobile technology in recent years are breathtaking. Think of what a self-driving car – or autonomous vehicle, AV for short – can do. The array of sensors and computers that permits it to navigate a highway is mind-boggling, enabling it to anticipate the movement of vehicles all around it and continuously adjust its direction and speed. Its programming must be sufficiently sophisticated so that it can react to a ball rolling into a street, “see” a stop sign that might be hidden by a tree, and adapt its movement to a car whose brake lights are not working. Those developing AVs claim that since the programming substantially prioritizes safety over speed, a future with AVs would result in fewer crashes and fatalities.
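To make that concrete, here is a deliberately simplified sketch, in Python, of one slice of the kind of continuously repeated decision logic described above. Everything in it – the Obstacle type, the choose_speed rule, the distance thresholds – is a hypothetical stand-in for illustration, not any manufacturer’s actual system.

    # A hypothetical sketch of one slice of an AV's decision logic:
    # reduce speed as detected hazards get closer, prioritizing safety over speed.
    from dataclasses import dataclass

    @dataclass
    class Obstacle:
        kind: str          # e.g., "ball", "pedestrian", "vehicle"
        distance_m: float  # distance ahead, in meters

    def choose_speed(limit_mph: float, obstacles: list[Obstacle]) -> float:
        """Pick a target speed, cutting it as hazards get closer."""
        speed = limit_mph
        for ob in obstacles:
            if ob.distance_m < 10:   # imminent hazard: stop
                return 0.0
            if ob.distance_m < 40:   # nearby hazard: crawl
                speed = min(speed, 15.0)
        return speed

    # One tick of a loop that runs many times per second:
    # a ball rolls into the street 30 meters ahead.
    print(choose_speed(50.0, [Obstacle("ball", 30.0)]))  # -> 15.0

A real AV re-runs this sort of calculation constantly, fusing camera, radar, and lidar input; the point here is only how mechanical the safety-over-speed tradeoff becomes once it is written down.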

Are AVs way off in the future, separated from today by decades? By generations? Not at all. The basic technology has been developed and some is already in widespread use, for example, lane departure warnings and forward collision alerts. And “smart” cruise control actually regulates the car’s speed, relieving the driver of some responsibility for acceleration management.

But here is what is central to where we are today: we – people – still make the decisions. The technology helps us make them by giving us relevant information. But we are in charge. Even with “smart” cruise control, the driver can take over speed control at any moment.

Yet, at some point, the technology is likely to become so sophisticated that its effective use can occur only if it assumes some of, much of, or even all the decision-making. And because an AV will have the ability to assess a great range of variables related to safety, this overarching question will need to be addressed: can it – and ethically, should it – examine its choice spectrum through a lens reflecting the judgments we would make?

Undoubtedly, most of an AV’s programming will be boilerplate, conforming to highly regulated mandates ensuring that safety is central to its algorithmic infrastructure.

But what about the unthinkable choices we might be forced to make about whether to absorb the impact of a crash or inflict it on another? Or the gut-wrenching choice about which of two people to injure when avoiding both is impossible? Which way do we swerve if swerving is inevitable – toward the man on the right or the woman on the left?

In that worst of unimaginable moments, it would not be possible for the vehicle’s technology to provide us with all the information it has at its virtual fingertips, information that could, conceivably, influence our reaction. Thus, if a goal is to preserve the driver’s right to exercise some degree of choice, it would be necessary for us to give guidelines to the AV’s programmers so that they can enter them into the system before we ever step into the car. As bizarre as that is to contemplate – and dodging for the moment the ethical quandary of whether this should be permitted at all – providing such input in advance would be the only practical way of ensuring that one’s leanings would be carried out.
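As a thought experiment only, such advance input might look something like a small preference profile, filed before the car ever moves, that the AV’s programming could consult in an emergency. Every field name and value below is invented for illustration; no real AV accepts anything like this today.

    # Purely hypothetical: a driver's pre-filed "leanings," expressed as data.
    # Each field name and weight here is invented for illustration.
    driver_preferences = {
        "self_sacrifice_willingness": 0.4,  # 0 = always protect self, 1 = always protect others
        "protect_children_bonus": 0.9,      # extra weight when a child is at risk
        "consider_rule_violations": True,   # may weigh whether the other party broke the law
        "use_identity_data": False,         # refuse facial recognition or plate lookups
    }

Even this toy version makes the ethical problem vivid: each field is a bias we would be asking engineers to encode on our behalf.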

So, what are our predilections when it comes to the distribution of harm that would result from an impending collision? Interestingly, our choices are informed by where we live.  

Research shows that a country’s socio-economic and socio-cultural traditions shape how we would behave in emergency situations, assuming of course that we had the time and presence of mind to act on those intentions. For example, people raised in societies in which the distances between economic and social classes are small tend to see others as more like themselves. Those citizens are more predisposed to protect a random stranger’s welfare than are people in societies with big separations between rich and poor.

To illustrate, according to a comprehensive study reported in the journal Nature, with over two million people participating from around the world: “Scenarios that forced survey participants to choose whether to save a homeless person on one side of the road or an executive on the other revealed [that] the choices people made often correlated with the level of economic inequality in their culture. People from Finland — which has a relatively small gap between the rich and the poor — showed little preference for swerving one way or the other. But the average respondent from Colombia — a country with significant economic disparity — chose to kill the lower-status person.” (Perhaps the author could have spared us the grisly imagery, saying instead “chose to save the higher-status person,” as well as clarify that “status” is a function of perception, not a fixed state, but that’s a matter for another day.)

The findings also indicate that the more we value and abide by a common set of regulations and trust in the law, the less sympathy we may feel for those who suffer consequences when they violate those regulations. Such breaches may be interpreted as dishonoring the common bond that holds that society together. Again, as reported in Nature: “… people from countries with strong government institutions, such as Finland and Japan, more often chose to hit people who were crossing the road illegally than did respondents in nations with weaker institutions, such as Nigeria or Pakistan.”

Age is another important variable that differs by country. For example, respondents in North America and in several European countries expressed a stronger inclination to sacrifice older people in order to save younger people. In several Eastern and Middle Eastern countries, the opposite was found. This is not surprising given those countries’ traditions of revering the elderly and associating age with wisdom. Those countries have more firmly rooted practices of caring for the elderly. For example, it is more common in Eastern cultures for loved ones to provide direct caregiving to elderly relatives, including bringing them into their homes when they become infirm. It follows that those protective values would be reflected in choices about whom to save when multiple people are endangered and not all can be spared.

Other research indicates that our choices are influenced by the extent to which we view the other person in the collision as similar to us. But in what respect? Do we both have young children? Older parents who depend on us? What if we were similar in both those ways but had a different religion? And given the divisiveness of today’s political climate, what if we did share those family circumstances, but were active members of different political parties? Ultimately, it is a question of which similarities are most salient, most relevant, in that moment.

The AV may be able to access such data, for example, by simply “reading” the license plate or through facial recognition or from detection devices not yet invented. What’s more, if our preferences were plugged into the software – and, again, ethics aside – its instantaneous processing could enable the AV to act on the information, much like it can take measures to avoid hitting the ball that rolls into the street. What we lack in time and presence of mind, stifling our capability to act on our preferences, may not be a problem for the AV, at least not to the degree it is for us.

At this point, we know two things. First, people do have ideas about how they would apportion harm in an accident. Second, an AV will, unquestionably, have a greater capacity than humans to assess the emergent situation and act on the driver’s wishes if they were built into its processors.

Thus, the ethical dilemma looms: As a society, should individuals be permitted to record their choices in the AV’s programming? If yes, where do reasonable preferences give way to biases? Where does the slippery slope begin? Who decides what constitutes a “reasonable preference?”

If we knew that we could choose between the cancer researcher and the felon who lived a life of crime, should we be permitted to preprogram our choice into a machine? Today, in the event we lost control over our conventional car, we couldn’t know those individuals’ identities while we did our best to avoid hitting both. But if the technology in the car enabled us to know, what would we do? As a society, should we allow the technology to be designed with such capability at all?

Among the proposed criteria I have seen for how an AV should react to an impending accident is that it should calculate where the least harm will occur. Suppose you are driving on a highway and two pedestrians appear, seemingly out of nowhere. The AV can either strike them or veer into a guardrail, possibly killing you, the driver. Either two people get killed or one. The least harm in this case is to you. Would you purchase the AV under those conditions? Or would you insist that the AV do all it can to safeguard the driver and the other occupants?
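Reduced to a caricature, that least-harm criterion is just a comparison of expected casualties across the available maneuvers. The sketch below, with invented numbers standing in for what a real system would have to estimate from sensor data, shows how coldly mechanical the choice becomes.

    # A caricature of the "least harm" rule: score each available maneuver
    # by expected deaths and pick the minimum. The numbers are invented.
    maneuvers = {
        "strike_pedestrians": 2.0,   # two pedestrians likely killed
        "veer_into_guardrail": 1.0,  # the driver likely killed
    }

    least_harm = min(maneuvers, key=maneuvers.get)
    print(least_harm)  # -> veer_into_guardrail: the math chooses against the driver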

Or suppose the decision is based on which party is determined by the AV to be more responsible for causing the imminent accident. With AVs being programmed to place safety over all else, in all likelihood it will almost always be the other party. But what if the other party is a child who darted into the street, chasing after her runaway puppy? Would you risk moderate harm to yourself to prevent certain death for her?

There are no simple solutions to the conundrums emanating from these scenarios. Ethical questions invariably take longer to formulate, let alone resolve, than the technological developments that give rise to them. While Facebook and Twitter came about and achieved scale relatively quickly, today we are still grappling with the implications related to privacy and whether the platforms should exercise restrictions on content. We have discovered quite dramatically just how much of a price we pay for not invoking stronger guidelines on what is permissible to be posted.

On the other hand, freedom of speech has always been a cherished national value. Reconciling the inevitable tensions between suppression and expression of speech has been part of our national discourse since before the nation’s inception. But with Twitter and Facebook, it wasn’t until those social media platforms evolved from their infancy that we began a serious debate about content management and its incredibly consequential implications for society.

Ultimately, where should responsibility lie for decision-making in those cruelest of moments, when harm is inevitable and some control over whom it might be inflicted on can be exercised – a moment complicated exponentially when one of those parties is us? The mutually exclusive nature of selecting one outcome over the other – avoiding one person and causing another to be a victim – is torturous. Where should control over that selection lie? If any portion lies with us, it may come about only by funneling our inclinations into the AV’s brain in advance. But wow, how mind-numbing that is to think about. It would demand that we confront how our self-preservation and concern for others intersect, as well as how our biases influence our thought processes on the matter.

Powerful stakeholders will surely seek to have their interests represented in the criteria for AV performance in accidents. The AV industry will be motivated to sell cars as well as to minimize culpability in the event of an accident. Insurance companies crave the very predictability and defensibility that would be compromised if an AV’s programming were individualized to the driver’s preferences regarding accident outcomes.

Certainly, government, through relevant agencies, will have a central role. Given that, how can we gain confidence that the matter won’t be excessively politicized? For anyone who believes it won’t be, look no further than climate change. The science on climate change is clear. But how many of us believe that the best interests of the planet and those who inhabit it have been adequately reflected in our political discourse? With AVs, when the lobbyists and special interests push their agendas, will their best interests or those of the consumer prevail?

However it all proceeds with AVs, the social media industry is a fitting analogy. Looking back to the early, heady days of those platforms, it’s evident that we were woefully unprepared for the astoundingly complex ethical challenges they have come to pose. In that formative period, social media were presented as neutral vehicles for making connections… benign electronic bulletin boards allowing us to share our favorite movies and meet others, albeit on a scale theretofore unimaginable. The balance between the rights of and limitations on the user barely registered in society’s institutions charged with oversight. Today, our thinking about social media is in an entirely different place; consider that social media’s lack of control over content may have contributed to the outcome of a presidential election.

At the same time, infringement on one’s voice is antithetical to the country’s founding values. Should AV drivers be permitted some voice when their car swerves, and how much risk do we expose ourselves to if that voice is unchecked?

This morning, I was driving on a highway and wanted to change lanes. As I put on my left blinker, I saw the small red warning light on my sideview mirror and heard the corresponding beeps alerting me to a car in my blind spot. That little warning light was activated by sophisticated technology that we have come to take for granted. I didn’t have to glance over my left shoulder to know that a car was there. What’s more, the technology is more reliable than I am, since I might not have noticed the car even if I had glanced back. This small, seemingly ordinary convenience reinforced my realization that as such technology evolves, more and more of the driving functions will be performed by the car, and fewer by me. It may very well be that my grandchildren will never actually drive a car the way we do today.

That little warning light is surely a sign of what’s to come. And, I suspect, whatever does come will be with us sooner rather than later. All too often, though, we create first and assess later – the dazzle of the invention often obscures the philosopher’s caution. It doesn’t take more than a glimpse at social media to understand. Or the atom bomb, for that matter.

While we may not have easy answers today for the great challenges that extraordinary leaps in technology will bring, I do hope that we commit to raising as many questions as we can in hopes that our grandchildren are spared the price of our generation’s naïveté.