December 6, 2018

Moral Algorithms

On a Sunday evening earlier this year in Tempe, Arizona, 49-year-old Elaine Herzberg was struck and killed by a Volvo XC90 sport utility vehicle as she was walking her bicycle across Mill Avenue. The driver of the vehicle was, well, not driving the car. It may be more accurate to say that the driver was a computer, though the vehicle had a human backup driver who was unable to prevent the incident.

This was the first recorded pedestrian fatality caused by an autonomous vehicle (AV). Though it prompted a temporary, and appropriate, pause in testing, the development of AV transportation continues inexorably onward. And though only one in four Americans today say they would feel safe in an AV, researchers press on, pointing out that 94 percent of car accidents are caused by human error.

The future of road travel seems settled. Yet, as with any other venture into science and engineering, some of the problems that remain to be solved lie beyond mere technology. What, for example, should a driverless vehicle do in situations that call for judgment? If there is an instant choice between hitting three people crossing the street directly in front of the vehicle or swerving onto a sidewalk where a single person is standing, what should the computer have the vehicle do? Should a car make a sudden correction to avoid hitting an animal if that maneuver may in some way harm the humans in the vehicle itself?

These are the kinds of questions that AV developers are having to consider. Interestingly, these and lesser matters of judgment, exercised daily by traffic-bound humans, will need to be addressed in purely mathematical calculations. Researchers such as those at the Massachusetts Institute of Technology have concluded that “before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms.”

Here is an interesting expression: “moral algorithms.” What an ironic combination of two seemingly disparate concepts. “Moral,” of course, suggests the aspect of life touching on the heart; “algorithms” refers to that of the mind. How can these two concepts be brought together in an almost oxymoronic meld?

The expression “moral algorithms,” in fact, doesn’t necessarily denote an oppositional or contrasting relationship at all. It refers to an attempt to design a computer that would exercise discernment when faced with questions of judgment. But how can such questions of judgment be boiled down to mere numbers, to cold calculation? And from where would the values derive on which a response to a critical situation is based?
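To make concrete what boiling judgment down to “cold calculation” would actually mean, here is a deliberately crude sketch of a harm-minimizing decision rule applied to the three-versus-one swerve dilemma described earlier. The scenario, the harm scores, and the rule itself are hypothetical inventions for illustration only; no actual autonomous-vehicle system is claimed to work this way.

```python
# A crude, hypothetical sketch of moral judgment reduced to arithmetic.
# The weights and rule are invented for illustration, not drawn from
# any real autonomous-vehicle system.

def expected_harm(option):
    """Sum the assumed harm scores of everyone affected by an option."""
    return sum(person["harm"] for person in option["affected"])

def choose_maneuver(options):
    """Pick the maneuver with the lowest arithmetic 'cost' --
    the 'greater good' rendered as a min() over numbers."""
    return min(options, key=expected_harm)

# The swerve dilemma, reduced to numbers:
stay   = {"name": "stay",   "affected": [{"harm": 1.0}] * 3}  # three pedestrians ahead
swerve = {"name": "swerve", "affected": [{"harm": 1.0}]}      # one person on the sidewalk

print(choose_maneuver([stay, swerve])["name"])  # prints "swerve"
```

The unease the essay describes is visible even in this toy: the entire moral question has been smuggled into the harm scores, which the arithmetic itself cannot supply.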

An old expression has been invoked in many circumstances in which a difficult or problematic decision promised a good that would surpass the bad of the alternative. It’s called “the greater good.” Facetiously, it has been claimed by some that the first recorded instance of someone acting for the greater good was Noah’s refusal to allow Tyrannosaurus rex onto the ark, because of the obvious benefit to the rest of the animal kingdom after the Flood. The extinction of one species would be for the greater good of all other species? Approaching the concept from this viewpoint could actually suggest that by sending the Flood God Himself was intending the greater good: exterminating nearly the entire human race in order to save a human species that was clearly headed for self-extermination.

Some have asked: Why did God, omniscient Being that He is, knowing “the end from the beginning” (Isa. 46:10, KJV), go ahead and create humankind in the first place? Wouldn’t it have saved a lot of trouble just to lay that line of research aside and go on to something else—say, a creative project that would not include potentially dangerous knowledge of good and evil? Wouldn’t a moral algorithm have been useful at a time like that?

But He didn’t. Instead He, consulting the other members of the triune Godhead, purposed to “make man in our image, in our likeness” (Gen. 1:26, NIV). Apparently, this was not, even from the very beginning, intended as a mere cosmological experiment. From an omniscient viewpoint, there is no need to prove through experiment something that is already known.

The mystery of the why of Creation is not stated explicitly anywhere. But the answer is implicit throughout Scripture. It can be summarized by the assertion of the apostle John: “God . . . loved the world” (John 3:16, KJV) and sought relationships with those who live on it. This nearness—this relationship—with God was humankind’s only hope in the sinful state to which it had come.

“The tremendous thing,” writes William Barclay, “about [John 3:16] is that it shows us God acting not for his own sake, but for ours, not to satisfy his desire for power, not to bring a universe to heel, but to satisfy his love. . . . God is the Father who cannot be happy until his wandering children have come home. . . . He yearns over them and woos them into love.”

God showed this care for His Creation again and again, right from the very beginning, when He was “walking in the garden” (Gen. 3:8, NIV), apparently expecting to see them—else, why would they have hidden? During the exodus of God’s people out of Egypt, out there in that 40-year desolation of the wilderness, He expressed His desire to “‘live among them’” (Ex. 25:8, NLT).

“It was no messenger or angel but his presence that saved them” (Isa. 63:9, NRSV). That the nature of God includes relationship is one of the most eloquent expressions of the Trinity. From even before the beginning it was, “‘Let us . . .’”

It may well be that the development of the autonomous vehicle is inevitable. But it will never be truly autonomous if it is to be controlled by moral algorithms that have been put in place by its creators.

Gary B. Swanson is editor of Perspective Digest.