Artificial Intelligence: Thinking Outside the Box

Part Two

Will AI make the world a safer or a more dangerous place? An expert shares his thoughts.

In Part One of Vision publisher David Hulme’s interview with Seán Ó hÉigeartaigh, the AI expert had much to say about the pros and cons of artificial intelligence. Part Two is a continuation of that discussion. Where is the developing technology leading us?

 

DH At the end of the Second World War, Albert Einstein wrote a famous editorial in the New York Times. He was clearly regretting that the nuclear genie was out of the bottle. Scientists sometimes get to this point. We did an interview with Jennifer Doudna recently.

SOH Yes, the CRISPR-Cas scientist.

DH During the interview she seemed not too worried, but when her book came out later, she expressed concern about what might now be possible as a result of her work. Where along the line does the scientist ask him- or herself whether something should be said now, before it becomes a big public issue?

SOH This is a topic that we discuss regularly. There’s clearly a role and a responsibility for scientists to think about where their research might lead and how it can be used in various ways. But imagine if we decided to stop scientific progress because of all the ways it could potentially be misused. Think about electricity: we would have developed electricity and thrown it away because of its myriad [negative] uses, even though its net impact on humanity has undoubtedly been good.

I think it’s probably a mistake to put all the responsibility on the scientific community. There are roles as well for scientific funders, publishers, regulators and policymakers—discussions at every level but also at different points in the development and deployment process.

DH You mentioned dual-use technologies earlier [see Part One]. Can you explain that a bit?

SOH Dual-use technologies are those that are developed for a beneficial or generally benign civilian purpose but that lend themselves to being repurposed with malicious intent.

An example might be research on the influenza virus: we might make changes to the virus to understand exactly how it mutates in the wild and thus help us develop better vaccines. The hypothetical concern is that if we demonstrate how to make the virus more dangerous or more transmissible, somebody could use that for a biological attack. In the case of artificial intelligence, fundamental advances enable many of the good things—whether that’s tagging your friends in a photo or a self-driving car finding its way around the city so that an elderly person can get to their family’s house. Yet the same advances potentially allow a battlefield robot to find its way around a battlefield with many obstacles, and perhaps to identify by facial recognition the target it wants to take out.

“Even if our purpose in developing the technology is to help our communities in various benign ways, we need to think about whether there are ways in which people will apply it in a very different context, which we may be less sanguine about.”

Seán Ó hÉigeartaigh

DH The “Malicious Use” report notes that AI creates “anonymity” and “psychological distance.” What does this mean, and why is it of concern?

SOH There are two reasons in particular. One is that when you have to harm somebody with a knife, you have to really get up close and personal. It’s a psychologically difficult thing to do, and perhaps this plays a role in making sure it doesn’t happen so often. Similarly, if you have to see somebody face-to-face and fire a gun at them, that requires overcoming all sorts of barriers. On the other hand, if all you have to do is send a flying robot out to kill somebody you’ll never have to see face-to-face, that might make it easier psychologically to take on certain activities.

Another reason is the problem of attribution. If we’re to have safeguards in place, we need to be able to catch people and assign blame, whether it’s a robot that explodes and harms somebody, or a cyber attack that takes down a power grid and indirectly causes deaths. If we put enough distance between the attacker and the targets—by, for example, having the system deployed and then acting autonomously—it may become much harder to trace the attack back to the person who originally undertook it, which means it would be much harder to enforce rules around these technologies.

DH Another concern expressed in the “Malicious Use” report is political security. You say that surveillance and deception are especially problematic in authoritarian states. Even the truthfulness of public debates in democracies is in danger. How does AI empower propaganda?

SOH There are a number of ways, and I think we will only keep discovering more. One is that it will be easier for me to influence you with my political message or political propaganda if I can tailor it to a certain extent to your personality profile, your hopes and fears.

We know this works quite well in marketing already; by analyzing, for example, Facebook posts, advertisers can come up with some sort of crude idea of whether somebody is more likely to be extroverted or introverted, anxious or secure, and so on. If you show somebody the same ad for the same hotel with two different messages—one for the extrovert, showing a big party by the pool, and one for the introvert or anxious person, showing them sitting by the pool with a book—the click-through rate goes up an awful lot.
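[To make this mechanism concrete, here is a minimal, hypothetical Python sketch of trait-based message selection. The keyword lists, classification rule and ad copy are all invented for illustration; real targeting systems use far richer models and data.]

```python
# Hypothetical sketch of personality-targeted ad selection.
# Keyword lists, classification rule, and ad copy are invented
# for illustration; real systems use far richer models and data.

EXTROVERT_WORDS = {"party", "friends", "crowd", "festival", "night"}
INTROVERT_WORDS = {"book", "quiet", "reading", "alone", "garden"}

def infer_trait(posts):
    """Crudely guess 'extrovert' or 'introvert' from a user's post text."""
    words = {w.strip(".,!?").lower() for post in posts for w in post.split()}
    extro = len(words & EXTROVERT_WORDS)
    intro = len(words & INTROVERT_WORDS)
    return "extrovert" if extro >= intro else "introvert"

AD_VARIANTS = {
    "extrovert": "Big pool party every night at our hotel!",
    "introvert": "Unwind by the pool with a good book at our hotel.",
}

def pick_ad(posts):
    """Serve the ad variant matched to the inferred personality profile."""
    return AD_VARIANTS[infer_trait(posts)]

print(pick_ad(["Great party with friends last night!"]))          # extrovert copy
print(pick_ad(["Spent a quiet evening reading in the garden."]))  # introvert copy
```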

You can imagine taking the same approaches and applying them to political messaging. If you’re particularly anxious, maybe the message is about hordes of immigrants coming over the border. If you’re optimistic, maybe it’s something about “this party will create lots of new jobs.” So a message could be crafted that roughly fits your profile and is more likely to influence you. In some ways this is an evolution from normal advertising, but one concern is whether it in some way subverts our conscious decision-making.

There are more direct concerns as well. Techniques are being developed using artificial intelligence that might make it possible to create a fake video of a political leader saying something that you find completely abhorrent. If we flood information channels with these fake videos, it may become much more difficult to tell what’s true and what’s not. Now, there is work under way to enable automatic identification of fake video, but can you get a refutation out in time?

DH Is there any aspect of life in which AI makes us safer? Where are the threats decreasing?

SOH In nearly every aspect of life there are ways in which artificial intelligence may make the world safer. We’re in the process of developing self-driving cars; there are still some technical and legal challenges to overcome, but I think it’s most likely that self-driving cars, once the technology is fully mature, will cause considerably fewer crashes than human beings do. They will not drive when drunk, they will not drive when tired, and they will not get distracted by what the kids are doing in the back seat.

Another example is health care. In many diseases, early identification is key to avoiding a bad outcome. But a lot of the processes involved in, for example, cancer identification and treatment are very time-consuming and require a consultant with decades of experience and training. If we can automate a lot of those processes, then suddenly these treatments become available more cheaply and more quickly to more people, cutting down waiting times and saving many lives.

I think artificial intelligence will help us do more sophisticated climate modeling. It will allow us to make our energy grids more efficient, thus cutting down on our energy usage. Google applied some of their most advanced AI to their server farms just two years ago and cut the energy used for data-center cooling by 40 percent. That’s the energy use of a small city.

Artificial intelligence is also being applied in disaster areas, scanning through images to help responders prioritize. In time we will have robots that are able to go into risky places, such as disaster zones, where you don’t want to risk a human life.

“In nearly every domain of human life, there is a way in which artificial intelligence can make it safer.”

Seán Ó hÉigeartaigh

DH According to psychologist Kirk Schneider, “high tech fulfills many needs; most of them physical, informational and commercial. What it tends not to fulfill are ‘existential needs’—purpose, connection, awe for life.” What’s your reaction?

SOH I think that this statement is true and not true at the same time. Technology is just technology. A lot of the trick is in how we use it. I don’t think technology is ever going to replace the human touch and spending time with one’s family, or going down to the pub and meeting somebody. Anyone who’s done a Skype meeting or sent an e-mail knows that.

On the other hand, if it automates some processes that make somebody’s life easier and allows them to spend more time with their loved ones, then perhaps it makes it easier for that existential need to be met. Even when you don’t have the face-to-face connection, technology allows us to put a human being in our minds when we think of people in different parts of the world. Many of my close collaborators are in the US or India or Australia; in a previous lifetime they would have been a fact in a book or a statistic to me. Now they’re real people; if a disaster were to befall India, they would be real people to me.

So I think that technology, while it can distance us from each other and fail to address our human existential needs, can also enable them. It’s all about how we use it.

DH Stephen Morse, professor of law and psychology at the University of Pennsylvania Law School, remarked at a 2005 neuroscience conference, “We have no idea how the brain enables the mind. We know a lot about the localization of function, we know a lot about neurophysiological processes, but how the brain produces mental states—how it produces conscious, rational intentionality, we don’t have a clue. When we do it will revolutionize the biological sciences.” How do you view the human brain? Is it just another kind of machine for you? Is it different from the mind?

SOH If the brain is a machine, it’s the most remarkable machine that exists in our world by a long shot, far beyond anything we ourselves have created. I do believe that someday we will understand the human brain and how it creates the human mind, at least to a much greater extent than we do right now. But we’re a long way from that.

Understanding more about the brain will allow us to revolutionize biological science and also cognitive science, and will provide incredible insights into artificial intelligence. However, I’m not sure we need to understand every aspect of the human brain and how it enables the human mind before we create really transformative levels of artificial intelligence. In the past we have found various ways to solve the same problem. We gained insights from how birds fly in order to develop the plane, but a plane is in no way the same (in terms of the mechanics) as a bird flying. And I would argue that we were able to create manned flight long before we had a deep and proper understanding of how even a bumblebee flies.

DH The program director for AI at the Alan Turing Institute recently said he didn’t feel he had to understand everything that went into an AI decision. Just as long as it was reliable at something like the 95 percent level, he would be satisfied. Is that a view you share?

SOH I’m not sure I share this. Let’s say it’s a medical diagnosis system. One viewpoint would say, “Well, if it’s 99.9 percent accurate and the human expert is 99.2, then you don’t need to understand the system, because it’s just better.” I think there is considerable value in knowing why that 0.1 percent exists for the AI system, and in what circumstances it’s wrong, because they may be different from the circumstances in which the human expert would be wrong.

This will only become more important as we develop more capable systems that are embedded in more and more of our processes. When a human expert is wrong, we can often point to a pretty good reason: maybe they haven’t been trained enough on the particular tasks they have to perform; maybe they’ve been forced to work for 20 hours without a break; maybe they’ve gone too long without a meal. Things become dangerous if we have absolutely no idea why something’s right and why something’s wrong, even if the overall accuracy is higher.
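[A minimal sketch of the error analysis he is arguing for, on an invented, labeled test set: the question is not only which decision-maker is more accurate, but where their mistakes do and do not overlap.]

```python
# Hypothetical error analysis: a system that is right more often than
# the expert can still fail in different places. Case data is invented.

cases = [
    # (case_id, true_label, ai_prediction, expert_prediction)
    ("a", 1, 1, 1),
    ("b", 0, 0, 1),
    ("c", 1, 0, 1),
    ("d", 0, 0, 0),
    ("e", 1, 1, 0),
]

ai_errors     = {cid for cid, truth, ai, _ in cases if ai != truth}
expert_errors = {cid for cid, truth, _, doc in cases if doc != truth}

print("AI accuracy:    ", 1 - len(ai_errors) / len(cases))
print("Expert accuracy:", 1 - len(expert_errors) / len(cases))
# The sets that matter for safety: mistakes unique to each decision-maker.
print("AI-only errors:    ", ai_errors - expert_errors)
print("Expert-only errors:", expert_errors - ai_errors)
print("Shared errors:     ", ai_errors & expert_errors)
```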

DH One of the dangers inherent in the application of AI to decision-making is that, while it may be objective, it may not be as capable of fine discrimination as a human being would be—say in deciding whether a person should go to jail or not. Would AI be able to deliver a decision that demonstrated mercy?

“I think AI will find good uses, but we do need to deploy it with appropriate care and consideration.”

Seán Ó hÉigeartaigh

SOH These systems will inevitably be trained on a lot of historical data that we give them. If that data reflects the reality we live in and want to live in, that’s fine; but it’s likely to reflect historical biases. For example, an article published last year discussed the application of artificial intelligence to predicting whether criminal defendants released on bail might reoffend or flee. It turned out that the system was much more likely to approve bail for people who were from a well-off background and of a certain skin color.

If there are biases in the data, we might exacerbate them or lock them in, which might prevent us from moving to the fairer society in which we want to live. One concern is that you might train an AI system on, for example, health-care results in the UK, and it’s pretty good at giving a good decision on people in the UK because that’s where it’s been trained. But then you apply it somewhere else—let’s say in India or Brazil. Suddenly your system, which can recognize whether a cancer is malignant or benign on a pasty UK skin tone, has a lot more difficulty with somebody of darker skin, and as a result it gives more false positives or more false negatives. It basically gives people the wrong diagnoses.
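[The failure mode described here is often called distribution shift. A simple safeguard is to compare a system’s error rates across populations before deployment, as in this toy Python sketch; all numbers are invented for illustration.]

```python
# Hypothetical distribution-shift check: compare false-positive and
# false-negative rates of a fixed classifier across two populations.

def rates(examples, predict):
    """Return (false_positive_rate, false_negative_rate) on labeled examples."""
    fp = fn = pos = neg = 0
    for features, truth in examples:
        pred = predict(features)
        if truth:
            pos += 1
            fn += pred == 0
        else:
            neg += 1
            fp += pred == 1
    return fp / max(neg, 1), fn / max(pos, 1)

# Toy classifier thresholding a single lesion-contrast feature that was
# tuned on the training population only.
predict = lambda contrast: 1 if contrast > 0.5 else 0

uk_test    = [(0.7, 1), (0.6, 1), (0.2, 0), (0.3, 0)]  # behaves as trained
india_test = [(0.4, 1), (0.3, 1), (0.2, 0), (0.6, 0)]  # feature shifted

print("UK    FPR/FNR:", rates(uk_test, predict))     # (0.0, 0.0)
print("India FPR/FNR:", rates(india_test, predict))  # (0.5, 1.0): misses malignancies
```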

In some ways this is a human error; it’s us taking a system that was trained in one context and putting it into another. But as we bring AI into decision-making contexts, we need to be very mindful of these things. Are the environments in which we’re deploying systems the same as those we trained them in? Is the data that we’re providing them fair and balanced, or does it reflect historical biases that we don’t want to see reflected in their decisions?

Now, humans are not perfect decision-makers either. Humans are also biased, some more so than others. One optimistic viewpoint is that perhaps, with careful deployment, we might get the best of both worlds: AI systems might help to uncover biases that we’d not really thought of before, and also help us overcome human irrationalities, such as bad decision-making when we’re tired or haven’t had lunch, or bias that we’re unaware of but nonetheless display in our decisions.

DH Are you optimistic or pessimistic overall? Will we make it to the end of the 21st century?

SOH I’m inherently an optimistic person. You need to be if you’re thinking about these things all day long, because otherwise it would be hard to come in to work. There are legitimate things that we need to be concerned about. On the other hand, we are also helping to improve the lot of everyone on the planet. There are fewer people in absolute poverty than there have been in the past. We’re extending the principles of human rights to more people than ever before, and we’re learning to recognize our fellow human beings in every part of the world as people just like us, who have lives like ours and the same right to look to the future that we do.

It’s this shared global understanding that we need to develop, and it will allow us to live sustainably, both on our planet and with the very powerful technologies we will be developing in the coming decades.