Just How Intelligent Are We?

In the face of our rapidly advancing technologies, it’s time to make some important decisions. Yes, we can do things never before possible. But should we?

Beyond the lawn mower programmed to work its way around the garden, or the Roomba that vacuums the carpet while we’re gone, much more sophisticated applications of artificial intelligence (AI) are set to revolutionize the whole of life. Voice-recognition technology can already spare the doctor’s administrative assistant the task of keying in multiple case histories, and robotic knee-replacement surgery is more precise and effective than the work of a human surgeon alone. And while many tasks that demand manual dexterity in small spaces (such as plumbing installation and repair) may not be automated in the foreseeable future, others (such as auditing complex businesses) have already become the domain of intelligent machines.

None of these are areas where people have serious concerns about the dangers of AI, because at the moment software robots cannot replace refined human judgment. But we all know that once something becomes possible, even if it remains ethically undesirable, we cannot prevent its eventual use by someone.

This is a concern in any number of fields of human endeavor, not just AI. The possibilities afforded by gene-editing technologies such as CRISPR-Cas9, for example, create some unique ethical dilemmas concerning not only which genes should be edited, but how and when. In terms of the human genome, many countries restrict such editing to somatic cells. For now, even “curing” a form of deafness through genetic alteration is highly controversial. Nevertheless, there is technically nothing to prevent the eventual modification of egg or sperm cells to enhance traits such as height or intelligence. Illegality and ethical concerns may continue to erect barriers for many practitioners, but personal, national or financial advantage may outweigh such concerns for others. CRISPR has already been used to alter human embryos; can the era of designer babies and human-animal chimeras be far off?

Likewise, researchers are exploring the “enormous potential to create AI-enabled ‘game changers’” for addressing environmental issues, including the mitigation of climate change, says the World Economic Forum in a 2018 report. Yet the authors acknowledge that “AI technology also has the potential to amplify and exacerbate many of the risks we face today,” among them security risks (such as cyber intrusion and privacy breaches stemming from human abuse); job displacement; and the risk of AI going rogue.

“There is going to be interest in creating machines with will, whose interests are not our own. . . . I think the notion of Frankensteinian AI, which turns on its creators, is something worth taking seriously.”

Journalist and author William Poundstone, in response to The Edge’s 2015 Annual Question: “What Do You Think About Machines That Think?”

In fact, advances in AI led renowned physicist Stephen Hawking to tell the BBC in 2014 that “the development of full artificial intelligence could spell the end of the human race.”

The point is that the refashioned world that our technologies make possible puts humanity on the brink of great advancement yet also on the cusp of existential disaster.

Is there a time to say a united “no” to some kinds of development? This is the first of two questions that come to mind from a biblical perspective. They are not “religious” in the sense that makes people recoil from anything tied to belief; they are reality-based, because they speak to our physical existence and survival.

This first question relates to a time when human beings had achieved much technologically through common purpose and a common language. In the biblical account of early urbanized Babylonian society, the building of a high tower that would symbolically challenge God’s domain ends with further development being forestalled, because “this is what they begin to do; now nothing that they propose to do will be withheld from them” (Genesis 11:6). They could have taken a different path and chosen not to “play God.” But as a result of their overreaching, some form of outside control had to be asserted; the people were scattered and their language confused.

How to determine right from wrong in human endeavor is the second question that comes to mind. Is there a universal standard by which we can distinguish right from wrong action? Today more than ever, we can readily admit that the problem with dual-use technologies—those that can serve both good and bad ends—arises from the selfish side of human nature.

An international group of concerned professionals met regularly from 2015 to 2017 to deliberate on the ethics of human germ-line gene-editing and to produce a statement on standards. They recognized, however, that guaranteeing global compliance is the problem. They wrote, “In some countries with inadequate ethics committee oversight or strong institutional review boards (IRBs), the potential for abuse exists.” In other words, without a common code and a system of enforcement to govern our technologies, chaos waits just around the corner.

Such a global code lies within the law of love, defined in the Bible as love toward God as Creator and toward fellow human beings as neighbors. Artificial intelligence can certainly augment human intelligence, but only the mind of God at work in humanity can provide us with spiritual “intelligence” so that we live ethically and at peace with all.