Surveillance Capitalism and the Road to Evil
Sharing personal information through apps, Internet browsers and messaging services is, one might say, a necessary element of engaging with the digital world. But when we become aware of the full reach of “surveillance capitalism,” it’s difficult to deny that the use of our data is insidious and widespread. It wasn’t meant to be this way.
Google was in its infancy at the turn of the century. It might have taken any number of routes to establish itself, but—at least initially—it chose an unusual option. Paul Buchheit, who would later create Gmail, took part in codifying the company’s values. It was he, along with colleague Amit Patel, who proposed a striking phrase: “Don’t be evil.”
“I was sitting there trying to think of something that would be really different and not one of these usual ‘strive for excellence’ type of statements,” he told investor Jessica Livingston, who interviewed him for Founders at Work: Stories of Startups’ Early Days. “I also wanted something that, once you put it in there, would be hard to take out.”
It became the very first phrase in Google’s Code of Conduct, standing out amidst the usual vacuity of corporate speak. “Strive for excellence” might denote fuzzy aspirations, or a cloak for profit-seeking. “Don’t be evil,” by contrast, suggests self-restraint and benevolence. And that seemed to be the intent. The Code explained what “Don’t be evil” meant for them: “providing our users unbiased access to information, focusing on their needs and . . . doing the right thing more generally—following the law, acting honorably, and treating co-workers with courtesy and respect.”
Furthermore, Buchheit saw this moral stance as a competitive advantage. He told Livingston it was “a bit of a jab at a lot of the other companies, especially our competitors, who at the time, in our opinion, were kind of exploiting the users to some extent.” It’s significant to note that “exploiting users” was a key element of what, to him, constituted “evil.”
Paved With Good Intentions
The phrase recurred in many documents over ensuing years, including a 2004 letter from their founders: “Don’t be evil. We believe strongly that in the long term, we will be better served—as shareholders and in all other ways—by a company that does good things for the world even if we forgo some short term gains.”
“Google is not a conventional company. We do not intend to become one. . . . Serving our end users is at the heart of what we do and remains our number one priority.”
They presented themselves as a benevolent company based on firm principles, focused on making the world a better place. It was a remarkable moral stance, considering it came at a point in history that arguably marks an apex of confidence in secular Western materialism.
As time went on, “Don’t be evil” became an informal motto for Google, though the intent of the phrase has echoed differently throughout the company’s history. In relation to artificial intelligence (AI), a 2018 blog post by CEO Sundar Pichai pledged that Google wouldn’t pursue technologies that are “likely to cause overall harm” or whose purpose is “to cause or directly facilitate injury to people.” As a result, it pulled out of AI development related to weaponry.
Google wasn’t alone among fledgling tech companies presenting themselves as socially beneficial. The early 2000s witnessed a rise in online interpersonal networks. Foremost among them was Facebook. Founder Mark Zuckerberg later stated that “Facebook was not originally created to be a company. It was built to accomplish a social mission—to make the world more open and connected.” The moral intent was central from the start, he claimed—its focus on putting people first: “Personal relationships are the fundamental unit of our society. Relationships are how we discover new ideas, understand our world and ultimately derive long-term happiness. . . . We are extending people’s capacity to build and maintain relationships.”
Certainly, once Facebook launched as an international network (ignoring its somewhat shadier origins), it did seem possible that it could transform our social world. Many of us reconnected with school friends or distant family members, congratulated people on birthdays we’d never otherwise remember, and easily kept in touch with acquaintances worldwide. Zuckerberg went on to affirm, “We’ve always cared primarily about our social mission, the services we’re building and the people who use them.”
Apple, for its part, has consistently focused more on product and innovation, but they also promote a moral stance. Their values statement currently claims, “We are committed to demonstrating that business can and should be a force for good. . . . It also means leading with our values in the technology we make, the way we make it, and how we treat people and the planet we share. We’re always working to leave the world better than we found it, and to create powerful tools that empower others to do the same.”
“We believe that business, at its best, serves the public good, empowers people around the world, and binds us together as never before.”
It might be easy to dismiss all this as hollow corporate speak or press-release sound bites. But such words have had an impact. Since the turn of the century, there has been a general perception that Big Tech companies (including Google, Facebook [now Meta] and Apple) have acted with the customer primarily at heart. As part of a wider digital revolution, they declared themselves on a mission to make the world better and safer, improve relationships, and empower others to do the same. Their tools have boosted many businesses worldwide. The clean minimalist design of the Apple iPhone, the global interpersonal reach of Facebook, and the array of free services provided by Google all seemed to promise a better world.
It might surprise many that all was not as it seemed, but history was replete with warning signs, if only we could have seen them.

Same Old Road
In the 19th-century United States, particularly in Midwestern manufacturing cities, saloons began to offer free lunches to their patrons. The origin of the practice is hazy, but the deal was (on the face of it) simple: If you bought a drink, you got a free lunch. It was an ingenious—and popular—means of drawing people into the saloon. Saloons then did everything they could to keep patrons there (and to keep them drinking), from providing entertainment to offering drinks tokens. Often the lunches comprised salt-heavy foods that would induce thirst, and hence further drinks purchases.
From this practice came the saying “There’s no such thing as a free lunch.”
The same might be applied to the rise of the tech giants. Their gifts contained a catch: “a Pandora’s box whose contents we are only beginning to understand,” according to social psychologist (and popularizer of the term surveillance capitalism) Shoshana Zuboff.
The generally positive public feeling toward Google, Apple and Facebook quickly established their products as important tools for billions of people. We used them to make our lives more efficient and enjoyable. But Big Tech corporations soon realized that all this usage created an interesting by-product: data. From photos to contact details to search queries, from GPS locations to web histories to family networks, we supplied Google, Apple and Facebook (among others) with a mountain of personal information.
This data presented a solution to a critical problem. Not long after its founding and despite its early momentum, Google faced an existential threat. Venture capitalist Michael Moritz notes that “the first 12 months of Google were not a cakewalk. . . . Cash was going out of the window at a feral rate during the first six, seven months.”
They found the solution to their financial plight in the mountain of personal data they were collating. In an interview with The Harvard Gazette, Zuboff attests that “it was Google that first learned how to capture surplus behavioral data, more than what they needed for services, and used it to compute prediction products that they could sell to their business customers, in this case advertisers.” Harvesting personal and behavioral data for profit is what Zuboff dubbed surveillance capitalism. The concept was what initially powered Google’s advertising model, but the practice quickly expanded.
“With Google’s unique access to behavioral data, it would now be possible to know what a particular individual in a particular time and place was thinking, feeling, and doing.”
The idea is that this mountain of information—whether it’s what you like to eat on holiday, where you were last Tuesday at lunchtime, or the color of your child’s hair—is incredibly valuable to companies who would like to sell their products to you. It’s more comprehensive, more detailed and more accurate than any customer survey. It’s also more personalized, meaning that the next time you pass that cafe on your Tuesday lunch break, they can pepper you with offers to entice you to visit. It’s why you find ads on your browser for items you’ve recently searched or perhaps merely talked about. As always with advertising, the profit motive is foundational. Via such freely available tools as Google Search and Facebook’s Timeline, companies have the information they need to persuade you to spend more. There truly is no such thing as a free lunch.
But, you might wonder, wouldn’t this clandestine use of data come under the umbrella of “exploiting users”? Of making profit from our personal information without permission? Zuboff says it was the risk of imminent failure that led Google to change its principles. “Exceptional threats to their financial and social status appear to have awakened a survival instinct,” she writes. “The Google founders’ response . . . effectively declared a ‘state of exception’ in which it was judged necessary to suspend the values and principles that had guided Google’s founding and early practices.”
It was a familiar human reaction. The desire for profit and stability trumped their founding moral principles.
Who Pays the Toll?
In terms of finances, capitalizing on user data has certainly paid off—extraordinarily well. By the end of 2002, as Google began refining these techniques (and just two years after inaugurating their “Don’t be evil” Code of Conduct), their net revenue had increased by 400 percent and they turned their first profit. Zuboff notes that by 2004, when the company went public, the discovery of “behavioral surplus” had produced more than a 3,500 percent increase in reported revenue.
At first glance, you might find this practice reasonable—or at least benign. Companies need to find ways to survive, and what Google did was ingenious. You might even welcome it. Maybe you always wanted to try that cafe for lunch on a Tuesday, and a targeted discount might be exactly the prompt you need. But what should make us pause is the way Big Tech companies have gone about it. The use of this material has far greater and more sinister potential than we could imagine. Though its consequences may seem innocuous, the clandestine usage of our data is so unrestrained and pervasive that it’s already shaping our lives in ways we may not realize.
It was clear to Google that collection of this data had to be done secretly. Zuboff writes: “Right from the start at Google it was understood that users were unlikely to agree to this unilateral claiming of their experience and its translation into behavioral data. It was understood that these methods had to be undetectable.”
They expected people to object to the practice, and they were right. The revelation that they were scanning personal emails to supply data to advertisers created such a furor that in 2017 they promised to stop (though their promise was limited, to say the least). The 2012–2014 introduction of Google Glass, their innovative “smart” glasses, met huge resistance when people realized the product was designed to gather data from location, audio, video and photos—not just of the user, but of those around them. It was even suggested that the device could be illegal in countries that prohibit covert surveillance.
That such companies felt the need to hide their activities is surely a red flag. “The rhetoric of the pioneering surveillance capitalists, and just about everyone who has followed, has been a textbook of misdirection, euphemism, and obfuscation,” Zuboff asserts. As users, we may be unaware of these practices and the rampant exploitation of what we might naturally presume belongs to us alone (name, phone number, Internet activities, voice, facial characteristics, etc.).
“It is important to acknowledge that in this context, ‘smart’ is a euphemism for rendition: intelligence that is designed to render some tiny corner of lived experience as behavioral data.”
Google has often succeeded in claiming territory by simply presuming ownership of it. As Zuboff observes, the audacity of this is quite astonishing. It begins by taking whatever is not yet legally defended: your computer, your phone, your face, your daily habits, photos of your kids, your voice. In the case of Google’s Street View, it presumes that images, videos and data relating to open spaces (whether public or private) are free to take. It then moves at speed to collate it, confident that any initial resistance will give way to habituation, or at least general resignation.

Mapping a Lucrative Future
Street View—and its relatives Google Maps and Google Earth—did exactly that, roaming the world collecting data without permission. The data they extracted wasn’t just images of houses and buildings. As later investigations proved, they were also harvesting unencrypted personal information from home Wi-Fi networks, including names, credit information, telephone numbers, messages, emails, records of online dating, browsing history, medical information, and video and audio files. They denied they used the data in any Google products, but this was impossible to confirm; federal investigators reported that the company “willfully and repeatedly violated Commission orders to produce certain information and documents that the Commission required for its investigation.”
In the face of legal challenges, Big Tech companies often present their activities as “inevitable” in the inexorable march of technological progress (and who wouldn’t want that?), and then begrudgingly offer adaptations that pull them just out of the range of judicial attack. Tech giants presume, with good reason, that they can weather any legal or governmental challenge that may arise. The law moves slowly. It cannot keep up with the digital world—and these corporations know it, exploiting a multitude of areas the law has never before had reason to legislate. The result is that the landscape changes irrevocably in Big Tech’s favor. Street View, Google Maps and Google Earth reshaped legal precedent, creating huge income streams for their founders.
To further preclude legal resistance, Big Tech uses lengthy Terms of Service (TOS) agreements, thick with legalese, to manipulate users into agreeing to their data collection. We see them constantly, and most of us rarely read them before clicking “I Accept.” The agreements often have no opt-out possibility and are presented in small type. It’s a deliberate—and successful—ploy to compel our unthinking or resigned consent. A 2008 report suggested that a reasonable reading of all the privacy policies we encounter in a year would require 25 full workdays. Given the rise in Internet usage since, that figure is now unquestionably higher. Zuboff reports that in a 2018 study, 74 percent of participants opted to “quick join,” bypassing the need to read the contracts at all. Those who did read spent an average of 14 seconds doing so. Researchers estimated it would take about 45 minutes to comprehend what they were signing up to.
To make matters worse, TOS wording may legally be altered by the firm at any time, without user consent. The contract may also implicate other companies without stating their responsibilities or commitments, or what the user has unwittingly agreed to with them. And if you don’t agree? Well, then your product or app probably won’t function, or will do so in such a limited way that it won’t be of much use. This is now the guiding principle of Big Tech everywhere.
Surveillance capitalism permeates the entire digital sphere. Your sleep app, your virtual assistant device (e.g., Amazon’s Alexa), your “smart” vacuum cleaner, and even your car are likely gathering data from you. It might be details you have uploaded yourself, or information the device gathers while it is not in use or even switched off (for instance, recordings of your voice). This data is packaged and sold to insurers, product developers, hospitality companies, AI developers and many others.
“We must prevent our corporations and ourselves from acting like psychopaths, because we have been seduced by the simplicity of reducing complex issues to money.”
The Internet has changed our lives, whether through filter bubbles, social media or AI. The impact of social media on our mental health and on cultural and political divisions has been widely documented. Surveillance capitalism changes us too, and in ways we mostly cannot see. As Zuboff notes, “the apparatus learns to interrupt the flow of personal experience in order to influence, modify, and direct our behavior,” all in the interest of making money for the company. The Internet search results that you may have presumed were driven by unbiased algorithms are instead fueled by business interests and sponsored advertising. Your personal details and search histories will shape and modulate those results; generalizations based on your age, location and gender will give you different results than your neighbor sees.
On the Verge of Evil
If you (like many of us) rely on the Internet as a source of information, this should be worrying. Armed with details they’ve never had access to before, car insurers may go beyond research-based risk factors to adjust your premium if they feel (in their subjective judgment) that your driving suggests risk of a claim. Lenders may prevent your car from starting (and then confiscate it) if you’re late with a payment. Health insurers may use data from your personal fitness or sleep app to determine your viability for coverage. You may think this sounds fair; and it might be, if the underlying motivation were fairness. But it isn’t; it’s all about maximizing profit. While profit is king—and every indication suggests that it rules in Big Tech—there is no incentive to care about fairness to consumers.
It marks an enormous shift in power from the consumer to big business. Tech companies now own enormous databases of information, which they jealously guard and which are opaque to the consumer. If and when you dispute your new insurance premium, the answer is likely to be simple: The computer says “No.” Our lives and views of the world are increasingly shaped by systems founded on desire for profit—and that is a frightening prospect. Such changes are never for our benefit, except by accident; surveillance capitalism is essentially profit motivated.
This reality seems a far cry from the early idealism of “Don’t be evil.” Buchheit wanted something that would be “hard to take out.” Yet it seems they’ve worked very hard to do just that. Once the first line in the Code of Conduct, it’s now the last, and Google’s promise to avoid AI research that would cause “harm” or “injury” was recently retracted. They are not alone in this; other AI developers also recognize the profit potential of their programs for military weaponry.
Our data has already been exposed to bad actors—those who would use it to crack passwords and logins to bank accounts or hold companies to ransom by hacking their systems. Big Tech makes minimal promises about what happens to our data once it’s harvested.
Past the Crossroads?
One might suppose it’s already too late to do anything about this. The tech giants hold extraordinary power, and our lives are so deeply entwined with their systems that it’s hard to see how to effectively disengage. Big Tech behaves as if it’s subject to no one, a stance that evidence supports. Zuboff asks critical questions about all this: “What does a smart product know, and whom does it tell? Who knows? Who decides? Who decides who decides?” We might conventionally say that judicial systems, or national and international governments, ought to have this power. Instead, Big Tech has its own answer: They have the power, and they decide.
“Examples of products determined to render, monitor, record, and communicate behavioral data proliferate, from smart vodka bottles to internet-enabled rectal thermometers, and quite literally everything in between.”
The consequences of surveillance capitalism are hard to fully grasp, because the extent of its usage is not fully understood. Zuboff rightly points out its threat to democratic institutions, to fundamental principles of privacy and consent, and to international power balances. Big Tech’s behavior doesn’t square with its initial promises, a tendency that’s not specific to the digital world (or even big corporations) but is instead very human.
Despite tech giants’ claims of innovation and novelty, the course they’ve taken is wearily familiar. It was, nonetheless, hard to anticipate. Everyone will have their own perspective on this. Certainly, many of us were initially very much drawn to Google’s promises and free, user-friendly tools. We signed up to Facebook, cherished our smartphones—perhaps even trusted them. Looking back, it may seem inevitable that any stated social mission for improving the world by these technologies was spurious; but at the time, for many, it didn’t feel that way. Perhaps we knew, in theory, that there was no such thing as a free lunch; but we ate it anyway.
The failure of human endeavor to live up to moral principles or hopeful promises is a well-trodden path, but it’s something we often only recognize in hindsight. The historical evidence is clear. Whether it’s grand new schemes, political promises, or simple hope in human ingenuity, disappointment is a frequent theme: Many marched to the Western Front in 1914 singing of the glory of war but were soon disillusioned. Many in the early years of the Soviet Union believed the promises of communism but were soon disillusioned. Many were encouraged by the nationalist pride of the promised Third Reich in 1933 but were soon disillusioned. Many cheered the democratic “end of history” heralded by the end of the Cold War but were soon disillusioned. Many believed in the optimism of the tech-led Arab Spring of 2010–2013 but were soon disillusioned.
Those who trusted in the promise that a digital world would make life better seem well on their way to disillusionment as well, if they’re not there already.
“We desperately need to change corporate culture, to introduce questions not just about what we can do, or how much money we can make, but what we should do.”
Without overstating the case, the pattern is clear. What seems true and virtuous and inexorable frequently reveals itself as self-serving propaganda. Google, Facebook, Apple—to say nothing of Microsoft, Amazon and many others—have long abandoned the high moral principles they claimed at the outset. The promise of change is oft-repeated and oft-disappointed. Human history is a cycle of disappointing or inadequate solutions to bad situations.
It could have been different if Big Tech had followed the law—or worked with the law to create a judicial landscape that’s fair and generous to everyone. If they had stuck to the principle of not exploiting people and worked to provide and monitor the supply of unbiased information. If they had acted honorably. If they had resisted the appeal of short-term gains. If the needs of others had been their prime concern, rather than a by-product of a money-making operation. If they had rejected the appeal of profit in favor of generosity, outgoing concern and social benefit.
If they had decided not to be evil after all.
The major tech companies have become unfathomably rich via surveillance capitalism and have exploited many millions in doing so. They have dramatically altered the world in innumerable ways, sometimes positively but often negatively. With the big picture in mind, it’s worth asking: Was it worthwhile? Is the enrichment of a small minority worth harm to billions of others? Ultimately we must admit there are ways to act morally—with generosity and honesty and kindness—and they offer far greater benefits to many more people than any profit-and-loss spreadsheet. Big Tech knew this, or at least claimed to. But, in dispiritingly familiar fashion, they quickly suspended those admirable principles in pursuit of short-term profit. The reputational damage they’ve incurred cannot easily be healed.
It’s an example that can perhaps be useful to each of us. The legacy and influence of Big Tech is clear, but how they’ll proceed in years to come has yet to be seen. In the meantime, who will pause to consider the indelible and enduring value—greater than any profit margin—of not being evil?