Defusing Biological Bombs

Global threats to humanity’s future are termed “existential” because they put all of us in danger. The list is long and often includes disease agents, whether naturally occurring or manufactured, with the capacity to spread rapidly and sometimes widely.

Piers Millett is a senior research fellow at the University of Oxford’s Future of Humanity Institute. He spoke with Vision publisher David Hulme regarding potential global biological catastrophes stemming from such risks as bioterrorism, pandemic disease and even genetic manipulation.

Piers Millett of the Future of Humanity Institute

Piers Millett cofounded a consulting firm that works with government, industry and academia. He also consults for the World Health Organization and spent more than a decade working on the United Nations international treaty banning biological weapons.

DH We’ve had pandemics before, with catastrophic effects: the Black Death, smallpox, the Spanish flu. What’s the likelihood of a similar global catastrophic biological risk (GCBR) today?

PM The likelihood is very small. Such events happen very infrequently in nature; they could happen more often if humanity becomes involved.

DH If these kinds of threats are few and far between, why spend time and effort on them when there are many more important things to do?

PM When we think about risk, we think about both likelihood and consequences. Where I focus my research is not on the most likely events but those with the highest consequence. It’s those two factors when put together—likelihood and consequence—that give us some estimation of risk. Existential and catastrophic risks may be unlikely, but they have incredibly high consequences. So it’s wise to invest some of our time and resources thinking about events where the consequences could be very high even if the events themselves are very unlikely.

DH Pandemics are, perhaps, an example of risks we tend to downplay. We tend to think that once a pandemic runs its course, it’s over and done with. What about the intergenerational aspects of an outbreak?

PM It’s an interesting question. Perhaps flu is the best example here. A flu pandemic happens roughly every 50 years. In fact, flu scientists suggest that we’re long overdue. We’re beginning to understand some of the science behind that, which suggests that the immunity we get as a disease spreads around the world stays with us for life—that certain subtypes of pathogens don’t bother us for the rest of our lives. We’re beginning to see how we might unlock the power of that to prevent and treat natural disease. It hopefully means we’ll be able to deal with pandemics more effectively in the future.

DH Are we, though, in a unique time when it comes to the potential for pandemics?

PM We’re certainly in a very special time. We have only been aware for just over a century that pathogens cause disease. We’re in a time where we are unlocking the power of biology to make things and solve problems. And that means that we have a great deal of power to use biology. We have a great deal of power to apply it to the questions around disease. But at the moment we don’t necessarily know how to use that power, that technology, responsibly or safely. We don’t yet know what the long-term consequences will be. As I say, the time frame in which we’ve had this understanding is comparatively short.

“I believe in the power of that technology. I want to live in a future where we use biology to solve problems.”

Piers Millett

I want to live in a future where many people use those technologies to make the world a better place. So the question for me is how do we get from where we are now to that future in a way that unlocks those benefits?

DH What’s optimal virulence theory, and what does it contribute to the study of GCBRs?

PM The idea here is that when a pathogen causes disease, two characteristics are important. First of all, what’s its ability to cause disease? How sick is it going to make you? What’s the likelihood that you’re going to die from it? We call that pathogenicity or virulence. The second characteristic is transmissibility—how easily can it spread? This theory holds that those two things trade off against each other, so that if a pathogen becomes more pathogenic (in other words, more dangerous), it’s less likely to spread. Equally, if it increases its ability to spread, then it probably is going to be less pathogenic.

The question here is whether we as humans—as scientists, as engineers—could work our way around this and, as we understand the science behind it more, whether we could then build pathogens that combine both qualities and are worse than anything we’ve seen in nature. There is an ongoing stream of academic research focused on producing potentially pandemic pathogens that explores this space.

DH What’s the likelihood of a bioterror attack?

PM I think the likelihood is very small. Terrorist attacks in general are fairly infrequent. Those involving biological weapons are even less frequent. It is a real threat. There is a real risk. But we do need to think in a broader way about who would use this, why they would use it, and what the likely consequences of that event would be.

DH So you distinguish between biocrime, bioterror and biological warfare.

PM Absolutely. As I said, the use of biological weapons can vary vastly in its consequences—and in the motivation behind it. So traditionally we’ve used three spaces to define different types of weapons use.

Firstly, there’s crime. This would largely be an individual using biological agents or toxins to harm or poison somebody else. You might think about a husband killing a wife or a wife killing a husband, or indeed an academic getting revenge on colleagues for being passed over for promotion. In many cases this is when biology is the easiest or most accessible or most comfortable tool or weapon that could be used.

In the case of terrorism, we need to think about a larger level of impact—maybe tens or hundreds of people, normally with some sort of political motivation, probably with a few more resources, and with the intent to deliberately create chaos.

And then, moving up a step, states have definitely had offensive weapons programs in the past. They’ve deliberately made and used biological weapons. The use has ranged from assassinating individuals to wide-area dispersal, where the intent is to cause thousands or tens of thousands of casualties.

DH What are the dangers of dual-use technologies in the biological field?

PM It’s pretty well established by the scientific community itself that virtually all knowledge and technologies are dual-use. By that I mean they can be used to do great things; they’re going to be important for solving problems in agriculture, in food, in power; we’re going to use them to make our lives better. But at the same time, it’s possible that others might use them to cause deliberate harm. That’s what we call the dual-use dilemma.

“A lot of the knowledge and technologies can be used for both good and bad, depending on the intent of the user.”

Piers Millett

DH You’ve written, “We should expect that in the next hundred years there will be dangerous biotechnological breakthroughs that we can hardly imagine now.” What’s the basis for that warning?

PM A good example is something we talk about as a gene drive. Traditionally we’ve thought about biological risks in terms of a pathogen causing disease. We’re now beginning to understand a little bit more that, really, disease is about the disruption of the healthy functioning of a biological system, and there are many ways in which a healthy system can be pushed into an unhealthy state.

We’re beginning to see the development of technologies that are able to change biological systems in ways we haven’t necessarily thought about and that fall outside the traditional pathogen-disease scope. A gene drive is an excellent example of this. It’s a powerful technology that could be used to solve many problems around disease or invasive species. But it’s also a powerful technology that could be misused—potentially to cause harm.

That’s a good example of something that has cropped up over the last five years, that we hadn’t previously thought about, and that has led us to think differently about the things we worry about.

DH What do you see as next steps in mitigating the dangers posed by GCBRs?

PM In particular I want to see a change from framing the scientists and engineers—those who develop and use advanced biotechnology—as part of the problem (somebody to be regulated, to be kept at arm’s length) to seeing them as part of the solution. They’re uniquely placed to understand, to some extent, what the impacts of those technologies will be. They’re uniquely placed to spot behaviors and attitudes within their own community that they’re uncomfortable with or that don’t meet societal norms about how we unlock the power of technology. And so I really want to see us change the nature of our relationship with scientists and engineers in the biological space and empower them to help us make sure we reach that very positive future where these technologies are used to solve the world’s problems and not to cause harm.

The interesting thing to think about with global catastrophic biological risks is the way they may interact with other catastrophic risks. A body of literature is being built around cascading effects—where we could see risks combining. If anything genuinely caused societal disruption on a global scale, I would be really surprised if disease wasn’t a component.

I would encourage you and your readers and viewers to think about how different types of risks can fit together and how they might escalate and combine in ways that are very difficult to foresee.