Ordering Chaos

Humanity’s Ongoing Quest to Make Sense of Things

Both Stephen Hawking and Martin Rees have expressed optimism about the role of science and technology in resolving humanity’s overarching questions and problems. Vision reviews their recent books.

Brief Answers to the Big Questions

Stephen Hawking. 2018. Random House, Bantam Books, New York. 256 pages.

On the Future: Prospects for Humanity

Martin Rees. 2018. Princeton University Press, Princeton and Oxford. 272 pages.

We’ve all had questions—big questions. Where did life come from? Is there a God? What does the future look like?

Of course, in the context of our busy lives, when most of us consider the future we’re likely thinking in terms of forthcoming holidays, dental appointments, the school calendar or the next set of bills to be paid. Some of us may extend our thoughts to retirement planning or leaving a legacy, perhaps making a cash donation to a worthy cause.

Sometimes, though, we probably worry about the future of humanity as a species, recognizing that our planet, and all life on it, faces an increasing range of threats to its very existence.

In recent years a number of academics in the fields of science, technology and philosophy have voiced concern about these existential threats, or “x-risks,” and offered possible solutions. A concerted effort is now under way to rally the general public, governments, and high-net-worth individuals and organizations around an urgent need for planning and threat avoidance.

Two books by leaders in their respective fields of science address a number of the big questions. Though their topics are wide-ranging, their primary focus is on the dangers we all face and on promoting solutions from within the world of science and technology.

Two Voices Crying in the Wilderness

Few academics who have turned their attention to existential threats are as famous as Martin Rees, the United Kingdom’s Astronomer Royal, and cosmologist Stephen Hawking.

Hawking’s Brief Answers to the Big Questions was in development at the time of his death in March 2018. It demands particular attention because it comes at the pinnacle of an incredible career. He offers his scientific view on questions such as Is there a God? Can we predict the future? What’s inside a black hole? Is time travel possible? More subjectively, Hawking also applies himself to questions such as Will we survive on Earth? Is there intelligent life in the universe? Should we colonize space? Will artificial intelligence outsmart us? The entire work is punctuated by the cosmologist’s trademark wit, as well as by an irrepressible spirit of life and endeavor made all the more profound now that he’s no longer with us.

Like Hawking’s book, Rees’s On the Future carries the authority of an eminent scientist. Throughout its pages the work offers a lucid perspective on the future, strengthened by precise and forceful diction. Rees’s book goes further than some treatments of the issues in that it attempts to grasp the detail and complexity at the nexus of our interrelated problems, showing how intervention A may impact the seemingly unrelated B. Despite the discouraging nature of some of its subject matter, it’s a book that ultimately wants to offer and inspire practical solutions to real problems.

On the Future is written from Rees’s triple perspective “as a scientist, as a citizen, and as a worried member of the human species.” It charts a course through our own Anthropocene (the current geological epoch in which humanity has become a prime influence on the environment), humanity’s future on Earth, humanity in a cosmic perspective, and the limits and future of science.

“New science offers huge opportunities, but its consequences could jeopardise our survival.”

Martin Rees, On the Future

Hawking, too, is clear from the outset: “I want to add my voice to those who demand immediate action on the key challenges for our global community.” Like Rees, who describes himself as “a techno-optimist,” Hawking hopes “that science and technology will provide the answers to these questions,” but with the caveat that it will take “people, human beings with knowledge and understanding, to implement these solutions.”

Hawking is under no illusions, however, recognizing that human beings have a poor track record for making important decisions. He suggests that “we can be an ignorant, unthinking lot.” Equally, Rees recognizes “a potential downside” to technological advancements and solutions, noting that “the global village will have its village idiots,” but with “global range.”

Hawking points to populism, the current societal trend of revolt against authority and experts, exemplified by Donald Trump’s presidency and Brexit. He laments that scientists, as experts in their respective fields, have been caught in the crosshairs. To those who sense that time is running out and who look to science and technology for solutions to the existential threats that bear down on us, this distrust of expertise is a serious obstacle. With a polluted and overpopulated planet as our legacy, Hawking says “there is no time to wait for Darwinian evolution to make us more intelligent and better natured.”

Rees remarks with regard to the environment, “Extinction rates are rising—we’re destroying the book of life before we’ve read it.” Hawking uses the same biblical allusion, but in a more positive light; he says we have “read ‘the book of life’” in the sense of mapping DNA, and that new possibilities are opening. For human beings, that might include human-computer interfacing, cybertech, biotech, robotics and artificial intelligence, but these are all ingredients in a potent recipe for both good and ill in human hands.

Hawking in particular believes that looking outward may ultimately be our only option—to seek out more-intelligent alien life or voyage into the depths of space to an as-yet-unknown place that humanity (or the superintelligent machines we might create) can reach before we ride the Earth to death.

“Not to leave planet Earth would be like castaways on a desert island not trying to escape. We need to explore the solar system to find out where humans could live.”

Stephen Hawking, Brief Answers to the Big Questions

Enter the Philosophers

Scientists are often reluctant to weigh in on the moral implications of our various options, perhaps preferring to leave the ethical ramifications to moral philosophers. Indeed, Rees spends few words on morality per se in his book, and Hawking none. While perhaps understandable, this would seem an unfortunate omission given the tremendous importance of ethics and morality in any discussion of these topics. It should also be noted that moral philosophy has a reputation for relativism, for avoiding hard-and-fast rules; and a sliding moral scale could itself have unintended consequences.

Swedish philosopher Nick Bostrom, one of Rees’s colleagues, has been a leading voice on the philosophical implications of existential risks. A founding director of Oxford University’s Future of Humanity Institute, Bostrom offers “a rule of thumb for moral action.” He calls it Maxipok: “Maximize the probability of an okay outcome, where an ‘okay outcome’ is any outcome that avoids existential disaster.” While Bostrom acknowledges that “there clearly are other moral objectives” than preventing terminal disasters, Maxipok carries with it a willingness to break some eggs in the making of omelets. The reason, in part, is to offset the tendency to focus on feel-good moral projects as long as existential threats haven’t yet wiped us out. From Bostrom’s perspective, the challenge is simply to ensure that we “play our cards right.”

To that end, he identifies “an offshoot moral project, namely to reshape the popular moral perception so as to give more credit and social approbation [or approval] to those who devote their time and resources to benefiting humankind via global safety compared to other philanthropies.” This reshaping of moral perception to favor those who are devoted to saving us is a seismic idea that should give us all pause for thought. The United Kingdom’s Guardian newspaper dubbed one such group, the bright minds at Cambridge University’s Centre for the Study of Existential Risk (where Rees serves as cofounder and Bostrom as adviser), “Guardians of the Galaxy.”

A critic of Bostrom’s approach is Warwick University philosopher-sociologist Steve Fuller. He contends that Bostrom and “his fellow doomsayers” take a “no pain, no gain” approach to solutions, “whereby an ultimately good end justifies the widest possible range of means deployed in its pursuit, which may well include the destruction or radical transformation of aspects of our world that we currently value.” He notes further that history labels such critical value decisions as right or wrong only with the benefit of hindsight.

Interestingly, Fuller acknowledges that “the maxipok principle could be embraced by both Catholics and Darwinists, who by rather different chains of causal reasoning reach largely the same conclusion—namely, that our survival as a species is dependent on our recognising and following a normative order implicitly given by nature. The difference is that for Catholics this order is permanent and underwritten by God, whereas for Darwinists it is transient and purposeless. The concept of existential risk would not appear so rhetorically compelling, were it not for this hidden convergence of world-views.”

Rees, who describes himself as “a practising but unbelieving Christian,” says that “the great religious faiths can be our allies.” He notes that he sits on the Pontifical Academy of Sciences, an ecumenical body representing “all faiths or none.” The advantages from Rees’s perspective are that “the Catholic Church transcends political divides” and that “there’s no gainsaying its global reach, its durability and long-term vision, or its focus on the world’s poor.” Rees suggests that “a high-level conference on sustainability and climate,” held at the Vatican in 2014, “offered a timely scientific impetus” for the 2015 papal encyclical Laudato Si’. The encyclical provided what he calls “a clear papal endorsement of the Franciscan view that humans have a duty to care for all of what Catholics believe is ‘God’s creation.’”

“Religious traditions, linking adherents with past generations, should strengthen our concern that we should not leave a degraded world for generations to come.”

Martin Rees, On the Future

Such cooperation between religion and science has the potential to redefine moral perception. It also provides access to institutional networks that can cut across political divides. This hidden convergence of worldviews, to use Fuller’s phrase, could emerge as an unlikely force to unify big business, high-net-worth investors, technologists, scientists and world religions.

The will to do so is certainly there, at least in some quarters. Liberal democratic governments are somewhat late to the table, however, and are at least partly bound to the will of the majority and to the extent to which existential threats register in the public psyche. Still, it would seem that only a unified world government could provide the scale of infrastructure required to really guard the galaxy and guarantee global safety—but government of what kind?

Looking to the Future

There is no doubt that, as a species, we are largely asleep at the wheel when it comes to limiting our negative impact on the world. Some, among them Rees and the late Hawking, have sighed and cried over the need to meet imminent threats head-on before it’s too late. Will their cry be loud enough to be heard across the seeming chasm that separates the many special-interest groups and ideologies around the globe? Even science and religion often view each other across a gulf—though as Rees and others have pointed out, it’s a gulf that on some terms is perhaps not so wide after all.

What, then, is the way forward? Hawking argues that “the Earth is becoming too small for us.” So “we will continue to explore our cosmic habitat, sending robots and humans into space. We cannot continue to look inwards at ourselves on a small and increasingly polluted planet. Through scientific endeavour and technological innovation, we must look outwards to the wider universe, while also striving to fix the problems on Earth.” Yet while advocating that we face and fix our problems here at home, a few pages earlier Hawking lists some of the most pressing of those problems and declares that “we can avoid this potential for Armageddon, and one of the best ways for us to do this is to move out into space and explore the potential for humans to live on other planets.”

Rees similarly devotes a section of his book to discussing “a post-human era,” when pioneer space explorers will “harness the super-powerful genetic and cyborg technologies that will be developed in coming decades” as a “first step towards divergence into a new species.” But beyond this he disagrees with Hawking, warning, “Don’t ever expect mass emigration from Earth. . . . It’s a dangerous delusion to think that space offers an escape from Earth’s problems. We’ve got to solve these problems here.”

The reality is that no spaceship can take us to a star beyond the reach of human nature’s more toxic aspects. Too often the way forward that seems right—seems our only option, even—ends in further problems for humanity and our world. In the longer term, it’s impossible for us to fix the problems on Earth (or any other planet) without first fixing the underlying cause of most of those problems: our own dark, self-centered and often violent nature.

Rees concludes his book by declaring that “now is the time for an optimistic vision of life’s destiny.” But in the same breath, he acknowledges the very things that humanity as a whole is failing to do: “We need to think globally, we need to think rationally, we need to think long-term—empowered by twenty-first-century technology but guided by values that science alone can’t provide” (emphasis added).

Until our all-too-human nature changes to universally embrace those values, no amount of optimism about the future will be enough to save us from ourselves.