Incredible advances are taking place in computer science and information theory. Today’s personal computer has calculating power far exceeding that of all the computers of the 1950s put together. It is more complex than the electronic “brain” that guided the Apollo 11 astronauts to a landing on the moon.
Many pundits are predicting that this generation will see computers with more intelligence than the human mind. Is this possible, or are we nearing a technological barrier? If we do succeed in surmounting such a barrier, what will we have created? Should we fear being taken over by some computerized monster as depicted in the cult movie The Matrix?
Neurologists tell us that the human brain is the most complex structure in the known universe. Weighing about three pounds (less than one and a half kilograms), it is plumbed with myriad blood vessels to nourish it with oxygen and is studded with nearly a hundred billion neurons—tiny cells that function as triodes, altering, enabling, damping or enhancing electrochemical signals.
Back in the 19th century, when the study of the human mind was still in its infancy—and long before any electronic computer existed—British mathematician George Boole introduced a strict, formal grammar in which logical thinking could be performed. He attempted to show the practical laws that govern the human brain’s ability to think.
Subsequent generations of scientists applied Boole’s algebra more broadly. They began to ask, If the human brain works according to well-defined rules, can a machine be devised that would function as an artificial brain?
In the late 1930s, Claude Shannon of the Massachusetts Institute of Technology (MIT) combined Boolean algebra with his own understanding of electronics. He demonstrated that all of Boole’s laws of thought could be modeled in electronic circuitry. This not only made modern telephony possible but sparked the idea that Artificial Intelligence was indeed feasible.
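Shannon’s insight can be illustrated with a short sketch (not his original relay notation): each of Boole’s laws holds over just two values, true and false, which is exactly what a switching circuit provides—current flowing or blocked. The function names and circuit analogies below are illustrative, not drawn from Shannon’s paper.

```python
# A sketch of Shannon's mapping from Boole's laws to switching circuits.
# Each logical operation corresponds to a simple relay arrangement.

from itertools import product

def AND(a, b): return a and b   # two switches in series
def OR(a, b):  return a or b    # two switches in parallel
def NOT(a):    return not a     # a normally closed relay

# De Morgan's law: NOT(a AND b) == NOT(a) OR NOT(b)
for a, b in product([False, True], repeat=2):
    assert NOT(AND(a, b)) == OR(NOT(a), NOT(b))

# Distributive law: a AND (b OR c) == (a AND b) OR (a AND c)
for a, b, c in product([False, True], repeat=3):
    assert AND(a, OR(b, c)) == OR(AND(a, b), AND(a, c))
```

Because the laws can be verified exhaustively over the two truth values, any circuit built from these elements obeys them automatically—which is why Boole’s algebra transfers so directly to hardware.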
In the years immediately following the Second World War, many minds focused on bringing together military technologies and applying them to the revival of the peacetime economy. Gigantic engines that had aided the breaking of enemy encryptions were now employed to perform mundane business calculations with speed and accuracy.
As computers became more powerful almost day by day—yet paradoxically more compact and less expensive—researchers busied themselves with the task of modeling within them a kind of intelligence. At the same time there was a revolution in neurology, so the scientific community could finally begin to explain the actual processes of the brain.
We are now at the stage where a number of scientists expect to be able to reproduce intelligence in a computer. Computing power doubles every couple of years, and new technologies such as laser circuitry and quantum computing, with their promise of speed and power millions of times greater than exist today, loom on the horizon. Will the next few years see the development of a truly intelligent computer? What characteristics would it have? Are we on the verge of trumping ourselves and dooming our species to relegation? Will our own tools become our masters?
Always in Control
Neil Gershenfeld, a director at MIT’s Media Laboratory, is well known for his advice to large companies on how to capitalize on current and forthcoming technologies. In his book When Things Start to Think, he explains that we humans are in the driver’s seat. His thesis is that whatever happens, human beings will remain in charge of computer technology, no matter how powerful it becomes.
In support of this view he points to the fact that even the latest computer technology is cumbersome, and that whenever a person communicates with a machine, some form of ritual must be adopted (even if that only means pressing certain keys). He insists that this is good news, because the revolutionary new power of machines to think with the same (or greater) skill as humans will simply lie dormant until harnessed and applied by human beings.
The new machines, he explains, may be powerful, but they will remain utterly dependent on us. In fact, he foresees computers that are extensions of our own will.
Gershenfeld deftly paints a scene of the near future, when humanity will be more mobile and in control of the environment, and tasks that would mystify our own generation will be considered mundane—just as Michelangelo would be nonplussed by some of our routine activities, such as turning a dial to wash clothes or to cook a meal.
Not So Fast
Charles Jonscher runs a London-based investment firm, but his résumé also sports teaching appointments at Harvard. His book, Wired Life, pours scorn on the concept that a computer could truly “think” in any way similar to the human thought process. Jonscher delights in pointing out the specialization that is built into computers. He cites the example of Deep Blue, the computer that played and defeated chess champion Garry Kasparov in 1997.
“If, during one of the games in the New York match,” he writes, “the room had started filling with smoke from a raging fire, every adult and every child—even a bee with a pinprick-sized brain containing just 7,500 neurons—would have known to leave, but the computer would have gone on playing. Where in the room was the intelligence and where the dumbness?”
His argument is at least to some degree specious. Granted, flight from smoke is useful for survival, and Deep Blue does not possess that ability. Had IBM researchers thought it desirable, however, this powerful chess-playing computer could no doubt have been built with sufficient intelligence to find the nearest exit.
Where Gershenfeld maintains an optimistic outlook, anticipating that humankind will harness Artificial Intelligence in helpful, labor-saving ways, Jonscher is adamant that there is a great gulf fixed between the nature of intelligence and the workings of digital circuitry.
For decades now, mathematicians, philosophers and researchers have debated the limits of computer thinking and even the meaning of intelligence.
For instance, English computer pioneer Alan Turing insisted that the question is not so much “Can a machine think?” but “Can a machine be made to show behavior indistinguishable from thinking?” He then buttressed his own answer by arguing that every component of thinking can be formalized.
Author, entrepreneur and award-winning technologist Ray Kurzweil takes this view to an extreme, believing strongly that the computerized execution of simulated thinking is entirely equivalent to human thought.
When, in the 1970s, someone pointed out that it was beyond the ability of a machine to read printed words and speak them aloud, Kurzweil developed technology to do precisely that, dedicating it to the service of the blind. In case after case, this highly acclaimed inventor and engineer has taken objections from skeptics and invented ways for a computer to perform the supposedly impossible tasks. In 1990 he published a prize-winning and influential book titled The Age of Intelligent Machines. He has now followed it with The Age of Spiritual Machines, presenting a scenario that spans 1999 to 2099.
Kurzweil foresees the development of true intelligence in computers within the first two decades of the new millennium. By 2019 he expects that computers will have the memory capacity and computational ability of the human brain. With persuasive arguments he insists that when we pass this milestone, the destinies of computers and of humankind will be indistinguishable.
He is convinced that by 2099 nobody will give an idle thought to whether a machine is intelligent: each will be a true individual, endued with its own spiritual existence—a new species apart from humans. Could he be right?
Mary Shelley’s novel Frankenstein painted a picture of what might happen if someone invented another intelligent being, and countless stories in all tongues and throughout all dramatic genres have repeated this alarm. According to the thinking of many scientists, we will continue to ignore that klaxon and will, probably within a few decades, bring into existence machines—computers and robots—that achieve and even exceed human intelligence.
Denial, optimism, enthusiasm and fear are the contrasting reactions to the phenomenon we see before us—the incredible increase in the power of computers. Will they be our tools, our slaves, our nemesis, or even (as Kurzweil suggests) part of us—an amalgam of human and machine?
The mathematical treatment of logic and the development of calculating technology—both essential elements of computer design—rely on models based on how humans function. Our computers seem to work more and more like the human brain because they are modeled on our knowledge of the brain. But this raises an important question: How well do we understand ourselves?
In reality, not very well at all: conflicting ideas about human consciousness leave us confused about what we are.
Because of the powerful influence of Aristotle’s teachings, the Western mind easily falls into thinking that there are two universes—the real one, and an unreal one to which our mind alone has access. Aristotle called these two universes physical and spiritual. We often confuse this use of these terms with their very different biblical use. Kurzweil speaks of spiritual machines. Is there any biblical sense in which a computer can be spiritual?
A human is composed of both a physical and a spiritual element—but not in the Aristotelian sense. In biblical use, the two are not contrasted with each other, and this spiritual element is neither imaginary nor mystical.
The spirit in man is what defines humanity. It is the human essence. “What man knows the things of a man except the spirit of the man which is in him? Even so no one knows the things of God except the Spirit of God” (1 Corinthians 2:11). This scripture points out that humankind can only be understood in terms of the “spirit of man,” and in the same way, God can only be understood through the Spirit of God.
If a computer were ever designed to function to a considerable extent like a human, would spirit govern its behavior?
Computers are machines that we have created. Whether they ever achieve consciousness (however we define the term) or outstrip us in creativity—or begin to show altruism or vice—they can never have their own spirit. They will simply reflect their creation by humans.
To coin a phrase, computers are us. Whichever version of the future (as portrayed by the three books reviewed here) most reflects what will happen, there is, on the basis of Scripture, no reason to fear that we are on the threshold of creating some alien life-form. Whatever we create will reflect only a distinctively human origin.
But because computers are ineluctably stamped with the human imprint, we must bear the responsibility for the future of computer technology. It is possible, after all, to guide the development of a computerized intelligence for either good or for ill. Knowing the human propensity for evil, we must determine to build only the good into any future artificial intelligences we may create.