Prewired

Hardwired Behavior: What Neuroscience Reveals About Morality 

Laurence Tancredi. 2005. Cambridge University Press, New York. 240 pages.

The Ethical Brain

Michael S. Gazzaniga. 2005. Dana Press, Washington, D.C. 232 pages.

Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong 

Marc Hauser. 2006. HarperCollins, New York. 512 pages.

Birds do it. Bees do it. Termites and ants, while not so lyrical, also do it. They all live in social groups.

All cooperative creatures have an important similarity: their social structures do not operate arbitrarily. But in the familiar social species called Homo sapiens, social rules are usually considered more than mere animal behavior. In fact, the idea of a moral basis for behavior is uniquely human. Morality implies that we bring judgment to our decisions, and that indicates something beyond social structure.

Similar ideas of what is “right” and “wrong” (murder and incest are universal wrongs) are found in all of the world’s religions. Birds and bees do not know religion; they simply do what they do, no questions asked. But humans are different, or so we like to believe. We seem to have a constantly running inner conversation asking “Why?” then “Why not?” We struggle with our behavior because we consider future consequences; we have the capacity to “think it through” and usually stay within the moral ropes.

What is the source of this both vexing and comforting ability? According to social science as well as neuroscience, humans appear to be prewired with moral circuitry. How much control do we have over this circuitry? Maybe our conscious sense of this process is only an illusion. And how does this mental mechanism become calibrated? To put these questions and the new neuroscience of morality in perspective, it helps to revisit the sociobiological view of human moral development.

Culture and Genes

One longstanding explanation for our social behavior intertwines cultural history and genetics. The concept was proposed almost 30 years ago; in 1981, sociobiologists Charles J. Lumsden and Edward O. Wilson coined the term culturgens to summarize the interplay between our exterior social world and the interior biological world of genes. In Promethean Fire (1983), they argued that a “self-sustaining reaction” was ignited within the social crucible of ancient man, which “carried humanity beyond the previous limits of biology.” This reaction, they believed, formed an evolutionary feedback system; namely, that “culture affects genetic evolution, just as genes affect cultural evolution.”

It was in this context that social innovations, or what they called “mutations of culture,” could be tested. New ideas that enhanced survival, and the mental wiring that gave rise to those ideas (and therefore the genes that directed the wiring), were selected. A distinction between right and wrong behaviors then became embedded in both the culture and the genetics of humankind. Calling this ability to innovate “the fourth step” of evolution, they wrote, “It is as if some power reached in and plucked forth one lucky species out of a vast milling horde. . . . Some extraordinary set of circumstances—the prime movers of the origin of mind—must have existed to bring the early hominids across the Rubicon and into the irreversible march of cultural evolution.”

For the scientist, that prime mover is natural selection. “Moral judgment,” Lumsden and Wilson concluded, “is a physiological product of the brain,” which, however unique and unusual, is nevertheless a product of material evolution alone. Thus the ethical premises that we hold are not extraordinary edicts; they’re not “handed to us on stone tablets, but they can be changed at will.” From this perspective, what is wrong today is free to become right tomorrow as new social pressures bias our decisions. From a wholly evolutionary viewpoint, whatever behavior leads to more viable offspring is by definition right and must therefore be the source of our moral insight. “Only by penetrating the physical basis of moral thought,” they note, “will people have the power to control their own lives” (emphasis added).

The authors of three recent books carry the discussion further into the realm of the neurosciences as they try to shed light on the sources of that control. They are Laurence Tancredi, a lawyer and forensic psychiatric consultant; Michael Gazzaniga, a psychobiologist whose expertise is split-brain research; and Marc Hauser, an evolutionary biologist and psychologist.

Mind or Brain?

Tancredi’s Hardwired Behavior offers a general overview of what is known about the interrelationship between neurology and psychopathology. As one might expect from a book written by a lawyer, much of what is found here concerns culpability for personal behavior, especially criminal responsibility. At first concurring with the sociobiologists, Tancredi remarks that “changes in attitudes and mores about human conduct will bring about adaptation in us to conform to what is going on in the environment.”

While one would be hard pressed to disagree with such a basic observation of the human condition (a simple consideration of social peer pressure illustrates his point well), Tancredi’s overarching premise is problematic. What he attempts to do in this brief book is to move the locus of immoral behavior from the abstract mystery of mind (what he terms “mentalism”) to the modular functioning of the brain (“physicalism”). While it is accurate to say that “genes first, then early interaction with cultural experience, etch a pattern that influences thinking and behavior,” the wheels come off when he insists that such influences are unavoidable and their consequences unchangeable. Just as Lumsden and Wilson suggested that incest could be made a moral good if positive biological rewards could be derived from it, Tancredi offers a rogue’s gallery of behaviors that he argues should be accepted as defects of brain rather than condemned as failures of character. From lying (avoiding punishment), greed (a dopamine rush) and addiction (circuitry, not choice), to uber-psychopathy (drugs and poor parenting) and sexual predation (hormones), there is a hardwired disorder and brain-deficiency explanation for each.

Each chapter of unpleasant anecdotes and pathological cases concludes with what one must call a biological trump, a get-out-of-responsibility-free card that absolves individuals from any liability for the poor choices they make. Highlighting Texas multi-murderer Ricky Green, Tancredi notes, “His behavior was heavily influenced by biological factors [that] affect not only how the brain processes information, but the scope of one’s ability to think about moral considerations.”

The author’s reading of the neuroscience literature (almost 20 percent of the text consists of background notes) has led him to the commonly accepted conclusion that conscious thought is not actually conscious at all. By this reasoning, “my brain made me do it” is not an insanity plea; it’s really just the way we are. Our actions are the result of the labyrinthine systems of the physical brain working, so to speak, behind the scenes. If those systems are cross-circuited through abusive experience—whether socially or personally inflicted—or if one’s brain is simply congenitally predisposed to misfiring, such an individual deserves sympathy rather than the penitentiary.

Neuroplasticity, the brain’s ability to change its structure in response to our decisions and actions, is one of the most exciting findings in neuroscience. While Tancredi opens the door to its therapeutic potential, calling it “our best friend if we’ve gone wrong and want to reform,” he quickly shuts it. Strangely overlooking research that reveals the brain’s power to rewire thought and behavioral patterns through the owner’s conscious attention, he cites neuroplasticity mainly for its possible use in genetically engineering behavior.

Compounding this oversight, Tancredi believes that as the cultural environment shifts, we must be more tolerant of those who push the ever-stretching moral envelope. If the sociobiologists are correct, morality is just a cultural construct. Our social contracts expire and must be reconfigured: “What was totally unacceptable a mere hundred years ago in our society, for example, may today be treated with a ‘get over it’ attitude or by divorce.”

“The modifications in morality empowered by neuroscience will lead to hard choices on how we as a society want to handle these changes, how we want to deal with each other, and the untoward potential consequences of a biologically engineered morality.”

Laurence Tancredi, Hardwired Behavior: What Neuroscience Reveals About Morality

The conclusion Tancredi tries desperately to support is that brain science is giving credence to the get-over-it attitude while removing the stigma and persecution that society imposes on those who break the rules. In his view, neuroscience is pointing out “the importance of handling abnormalities through medicine rather than guilt, shame, and criminal sanctions.”

In this he succumbs to the conclusion that outside interventions may ultimately be necessary to control the hardwired brain. A society framed by a “biologically engineered morality,” he suggests, will be built upon pharmaceuticals or therapies that “alter specific elements of an individual so that he or she can function in society” (Tancredi’s italics). This is a prescription little better than the shock therapy or lobotomy of decades ago. It is stunning how far off track a little book can go.

Between the Lobes

A moral conscience is often cartooned as a devil on one shoulder and an angel on the other. While such an idea seems childish, bilateral aspects of brain structure do come into play in our mental conversation. The two hemispheres of our brain have different ways of experiencing the world and responding to it. Since 1962, Michael Gazzaniga, now professor of psychology and director of the Sage Center for the Study of Mind at the University of California–Santa Barbara, has investigated how these two become one as we create our perceptions and arrive at subsequent decisions.

In The Ethical Brain, Gazzaniga asks a simple question related to the problem of moral responsibility: “When we become consciously aware of making a decision, the brain has already made it happen. This raises the question, Are we out of the loop?” His conclusion is that we are not. Brains may be automatic, but people are “responsible agents, free to make their own decisions.”

While this may seem contradictory, Gazzaniga defines responsible in a clever way. He writes that “brains are determined,” wired in a certain way, and the rules of social structure “exist only in the relationships that exist when our automatic brains interact with other automatic brains.” Responsibility, then, “is a human construct that exists only in the social world, where there is more than one person. It is a socially constructed rule that exists only in the context of human interaction. . . . It does not exist in the neuronal structures of the brain.” Thus ethics exist only in relationships; the old saw that moral character is proven by what one does when alone is out the window. “What keeps us from being totally random is, I think, an internal core—a barometer of what is more right than wrong. We always appeal to that, and it influences the final course of action.”

“It appears that all of us share the same moral networks and systems, and we all respond in similar ways to similar issues. The only thing different, then, is not our behavior but our theories about why we respond the way we do.”

Michael S. Gazzaniga, The Ethical Brain

That appeal is based on memories and emotions that are routed through what he has termed the left-hemisphere interpreter (LHI)—a small bead of neurons above the corner of the left eye. The LHI creates the talking-to-yourself voice, such as “hearing” the words as they are read silently. The job of the LHI is to create a continuing story that “reconciles our past and present knowledge to come up with ideas about the world around us. . . . Ultimately, our sense of self and our worldview are in a constant state of synthesis.”

But because the system is adapted for speed, the LHI often relies on stereotyped memories, which may themselves be built from misfiled and misperceived events. Gazzaniga, who explored the deceptive nature of the LHI a decade ago in his book The Mind’s Past, explains that the LHI therefore “distorts incoming information to fit in with our current beliefs about the world.”

He illustrates this plasticity of interpretation by recalling a debate concerning human embryonic stem cell research that took place while he was on the President’s Council on Bioethics. By inventing the term clonote to replace the term embryo, one member convinced himself to accept human stem cell research. If biomedical cloning does not create an embryo but a clonote, then the process is okay. “It is a wonderful solution” to the research dilemma, writes Gazzaniga. “It shows how our interpreter gets us out of jams.”

With all of these individual stories continually running, where does the moral sense we have in common come from? Perhaps, Gazzaniga surmises, “the mind has a core set of reactions to life’s challenges, and . . . we attribute a morality to these reactions after the fact.” These reactions, he believes, engage the emotions: “When someone is willing to act on a moral belief, it is because the emotional part of his or her brain has become active when considering the moral question at hand.”

Just as Tancredi targeted the brain itself for moral therapy, so Gazzaniga has a whipping boy. His, however, is not the wiring of the brain but religious superstition. Baseless beliefs, he says, stir the emotions and entice the LHI to invent clashes of identity between different peoples and cultures. Our received wisdom, the driver of culture that gave order to life in the past, is simply the “first guesses” and stories that arose in the prescientific era. This needs to be replaced, says Gazzaniga, and he proposes that “it is the job of modern science to help figure out how that order should be characterized.”

Moral Algorithm

Interestingly, research by Harvard University’s Marc Hauser may be getting a bead on a moral system that operates before the emotional centers and the LHI kick in. Using an online tool called the Moral Sense Test (MST), Hauser believes he has found a moral grammar of unconscious principles that drive moral judgment. He reports his findings in Moral Minds, where he argues that “moral judgments are mediated by an unconscious process, a hidden moral grammar that evaluates the causes and consequences of our own and others’ actions. This account shifts the burden of evidence from a philosophy of morality to a science of morality.”

The MST requires volunteers to consider various moral dilemmas. In one example, a runaway trolley is about to kill five workers on the track. You have the option of pulling a switch and sending the trolley off on a siding where it will kill only one worker. What do you do? Fifty-two percent of respondents said they would pull the switch. When the option was changed to pushing one worker onto the tracks to stop the trolley and save the other five, only 10 percent were in favor. This result was found across both sexes and all age groups, races, and religious or nonreligious orientations.

Although the results were consistent, the explanations were not. “When people give explanations for their moral behavior, they may have little or nothing to do with the underlying principles,” says Hauser. “Their sense of conscious reasoning . . . is illusory.” In other words, we have no innate morals, no inborn code, but rather a system built to acquire a moral code within certain constraints.

As Hauser’s MST illustrates, interposing technology makes moral dilemmas easier to handle; it allows the inner voice to deem certain choices more acceptable. How we calibrate that inner voice, the left-hemisphere interpreter, is not trivial. What we must understand, Gazzaniga says, is that “the brain reacts to things on the basis of its hard-wiring to contextualize and debate the gut instincts that serve the greatest good—or the most logical solutions—given specific contexts.” Gazzaniga’s “gut instincts” are Hauser’s “moral grammar.”

All people, from the devoutly religious to the strictly atheistic, engage in a similar moral debate, Hauser concludes. And while all groups come to the same moral conclusions, they are all over the map when it comes to explaining the reasons for their decisions. This, Hauser believes, indicates that the explanation process is culturally based, just as language is. The tales we spin to justify ourselves are unique, but we all seem to be justifying the same thing.

The universal conclusions that Hauser draws from the MST may still be beyond the data, however. Although more than 150,000 people have participated, critics wonder whether the test-takers are a true cross section of the population. Nevertheless, the results are intriguing because they indicate that factors more important than feelings underwrite our behavior—that doing what feels good is too simplistic an answer because it is too individual. “The data we have thus far suggest that most moral judgments don’t recruit the classic emotional systems,” Hauser told Vision.

Emotions do not inform moral behavior, according to Hauser, but they do drive action and provide justification for our choices. The LHI is not a dry calculator; it is fired by emotion and becomes very situational in its ethics. The fact that technology makes the onlooker more amenable to killing another (pulling the track switch as opposed to actually pushing the bystander onto the track) seems a disturbing revelation, because the intervening technology short-circuits something that otherwise says “Don’t do it.”

For Gazzaniga, however, the moral algorithm is simple: “We have cognitive processes that allow us to make quick moral decisions that will increase our likelihood of survival. If we are wired to save a guy right in front of us, we all survive better. . . . Long distance altruism just isn’t necessary; out of sight, out of mind.”

Yet even when we are standing side by side, technology increasingly separates us. And today, unlike in the past, treating one’s neighbor as one would like to be treated has global implications. Even if the other guy is at the far end of the boat, if his end is sinking, so is yours.

A Moral Compass

As UCLA research psychiatrist Jeffrey Schwartz noted at the What Makes Us Human? conference in Los Angeles (cosponsored by Vision.org and the Oxford International Biomedical Centre) in April 2008, our minds are built for decision making, not preprogrammed or hardwired to behave a certain way. We have the capacity “to look inside,” Schwartz said, and “cognitively reframe and reappraise” our actions in ways far beyond simple “Skinnerisms.”

The ability to observe, then bring our perceptions to conscious focus, is certainly a key attribute of what makes us human. We are able to judge ourselves, evaluate our behavior and willfully change, to repent of one way and turn to another. This moral sense makes it possible to work together side by side and across the globe. “We recreate the world,” anthropologist Ian Tattersall noted at the same conference, “mediated by our individual mental processing.” Humans, he remarked, have jumped the “gulf of cognition” to reach “a peculiar consciousness.”

That leap leaves us each with both the capacity and the obligation to distinguish between good and evil, and to choose good. “Even if biology contributes something to our moral psychology,” Hauser notes finally, “only religious faith and legal guidelines can prevent moral decay. These two . . . must step up to the plate, knocking back those self-interested impulses.”

This acknowledgment comes to the heart of the moral dilemma. After all of the self-quizzing, after discussions of brain wiring and conversations with those who have given in to “self-interested impulses,” one must come to the conclusion that inside human beings, as Gazzaniga says, “there is a moral compass.” But “we have to be smart enough to figure out how it works.” Across the realm of human experience—personal, collective, historical, and now neuroscientific—it is abundantly clear that we have the capacity to consciously consider consequences and choose our actions. The Bible, too, is unequivocal in this (see, for example, Proverbs 3:31 and Job 34:2–4), adding that there is a spiritual factor responsible for imparting this ability to the human mind (Job 32:8–9). The mind is a physio-spiritual mechanism built for choice, but it must be given direction. We may be endowed with a moral compass, but it does not arrive with prewired direction. Moral calibration is required, and the Bible serves as the lodestone that sets our compass’s orientation and helps us establish our moral bearings.

As secularists, of course, these authors cannot be expected to pursue that avenue in their search for the source of moral standards, especially when, as Gazzaniga notes, so much of what constitutes religious faith is founded on superstition rather than on truth. And so, as researchers improve drug cocktails to ultimately manipulate and control the brain (as Tancredi believes they will), and as society haltingly accepts science as arbiter of good and evil (as Gazzaniga believes it must), it is not too farfetched to imagine that the moral grammar Hauser describes can be refashioned as well. In fact, if history provides any clue, it seems a done deal. The only question that remains is whether our ongoing recalibrations will be for the better or for the worse.