These highlights are from the Kindle version of LikeWar: The Weaponization of Social Media by P.W. Singer and Emerson T. Brooking.

The prized target for ISIS was Mosul, a 3,000-year-old multicultural metropolis of 1.8 million. As the ISIS vanguard approached and #AllEyesOnISIS went viral, the city was consumed with fear. Sunni, Shia, and Kurdish neighbors eyed each other with suspicion. Were these high-definition beheadings and executions real? Would the same things happen here? Then young Sunni men, inspired by the images of the indomitable black horde, threw themselves into acts of terror, doing the invaders’ work for them.

Defenders began to slip away, and then the trickle became a flood. Thousands of soldiers streamed from the city, many leaving their weapons and vehicles behind. Most of the city’s police followed. Among Mosul’s citizens, the same swirling rumors drove mass panic. Nearly half a million civilians fled. When the invading force of 1,500 ISIS fighters finally reached the city’s outskirts, they were astounded by their good fortune. Only a handful of brave (or confused) soldiers and police remained behind. They were easily overwhelmed.

How had it gone so wrong? This was the question that haunted Iraqi officials ensconced in the capital, U.S. military officers now working marathon shifts in the Pentagon, and the hundreds of thousands of refugees forced to abandon their homes. It wasn’t just that entire cities had been lost to a ragtag army of millennials, but that four whole Iraqi army divisions—trained and armed by the most powerful nation in the world—had essentially evaporated into thin air.

The true power of the German blitzkrieg was speed: a pace of advance so relentless that French defenders were consumed with an unease that turned swiftly to panic. The weapon that made all this possible was the humble radio. Radio allowed armored formations to move in swift harmony. Radio spread reports of their attacks—sometimes real, sometimes not—which spread confusion across the entire French army. Radio also let the Germans bombard the French civilian leaders and populace with an endless stream of propaganda, sowing fear and doubt among what soon became a captive audience.

Where the Germans had harnessed radio and armored vehicles, ISIS pioneered a different sort of blitzkrieg, one that used the internet itself as a weapon. The same Toyota pickup trucks and secondhand weapons of countless guerrilla groups past had taken on a new power when combined with the right Instagram filter, especially when shared hundreds of thousands of times by adoring fans and automated accounts that mimicked them. With careful editing, an indecisive firefight could be recast as a heroic battlefield victory. A few countering voices might claim otherwise, but how could they prove it? These videos and images moved faster than the truth.

The abrupt fall of Mosul showed that there was another side to computerized war. The Islamic State, which had no real cyberwar capabilities to speak of, had just run a military offensive like a viral marketing campaign and won a victory that shouldn’t have been possible. It hadn’t hacked the network; it had hacked the information on it.

In the months that followed, ISIS’s improbable momentum continued. The group recruited over 30,000 foreigners from nearly a hundred countries to join the fight in its self-declared “caliphate.” The export of its message proved equally successful. Like a demonic McDonald’s, ISIS opened more than a dozen new franchises, everywhere from Libya and Afghanistan to Nigeria and Bangladesh. Where franchises were not possible, ISIS propaganda spurred “lone wolves” to strike, inspiring scores of terrorist attacks from Paris and Sydney to Orlando and San Bernardino.

More people were killed by gang violence in Chicago in 2017 than U.S. special operations forces lost across a decade’s worth of fighting in Iraq and then Syria. At the center of the strife is social media. “Most of the gang disputes have nothing to do with drug sales, or gang territory, and everything to do with settling personal scores,” explains Chicago alderman Joe Moore, who witnessed one of the shootings of Young Pappy. “Insults that are hurled on the social media.”

Much of this violence starts with gangs’ use of social media to “cybertag” and “cyberbang.” Tagging is an update of the old-school practice of spray-painting graffiti to mark territory or insult a rival. The “cyber” version is used to promote your gang or to start a flame war by including another gang’s name in a post or mentioning a street within a rival gang’s territory. These online skirmishes escalate quickly.

Digital sociologists describe how social media creates a new reality “no longer limited to the perceptual horizon,” in which an online feud can seem just as real as a face-to-face argument. The difference in being online, however, is that now seemingly the whole world is witnessing whether you accept the challenge or not. This phenomenon plays out at every level, and not just in killings; 80 percent of the fights that break out in Chicago schools are now instigated online.

The decentralized technology thus allows any individual to ignite this cycle of violence. Yet by throwing down the gauntlet in such a public way, online threats have to be backed up not just by the individual, but by the group as well. If someone is fronted and doesn’t reply, it’s not just the gang member but the gang as a whole that loses status. The outcome is that anyone can start a feud online, but everyone has a collective responsibility to make sure it gets consummated in the real world.

Wherever young men gather and clash, social media now alters the calculus of violence. It is no longer enough for Mexican drug cartel members to kill rivals and seize turf. They must also show their success. They edit graphic executions into shareable music videos and battle in dueling Instagram posts (gold-plated AK-47s are a favorite). In turn, Salvadoran street gangs—notably Mara Salvatrucha (MS-13)—have embraced the same franchise model as ISIS, rising in global prominence and power as groups in other countries link up and then claim affiliation in order to raise their own social media cachet.

For the first time, entire populations have been thrown into direct and often volatile contact with each other. Indians and Pakistanis have formed dueling “Facebook militias” to incite violence and stoke national pride. In times of elevated tensions between the nuclear-armed powers, these voices only grow louder, clamoring for violence and putting new pressure on leaders to take action. In turn, Chinese internet users have made a habit of launching online “expeditions” against any foreign neighbors who seem insufficiently awed by China’s power. Notably, these netizens also rally against any perceived weakness by their own governments, constantly pushing their leaders to use military force. Attending a U.S. military–sponsored war game of a potential U.S.-China naval confrontation in the contested Senkaku Islands, we learned that it wasn’t enough to know what actions the Chinese admirals were planning; we also had to track the online sentiment of China’s 600 million social media users. If mishandled in a crisis, their angry reactions could bubble up into a potent political force, thus limiting leaders’ options. Even in authoritarian states, war has never been so democratic.

Clausewitz’s (and his wife’s) theories of warfare have since become required reading for militaries around the world and have shaped the planning of every war fought over the past two centuries. Concepts like the “fog of war,” the inherent confusion of conflict, and “friction,” the way plans never work out exactly as expected when facing a thinking foe, all draw on his monumental work.

The internet, once a light and airy place of personal connection, has since morphed into the nervous system of modern commerce. It has also become a battlefield where information itself is weaponized.

Social networks reward not veracity but virality. Online battles and their real-world results are therefore propelled by the financial and psychological drivers of the “attention economy,” as well as by the arbitrary, but firmly determinative, power of social media algorithms.

Our research took us around the world and into the infinite reaches of the internet. Yet we continually found ourselves circling back to five core principles, which form the foundations of this book.

First, the internet has left adolescence. Over decades of growth, the internet has become the preeminent medium of global communication, commerce, and politics. It has empowered not just new leaders and groups, but a new corporate order that works constantly to expand it.

Second, the internet has become a battlefield. As integral as the internet has become to business and social life, it is now equally indispensable to militaries and governments, authoritarians and activists, and spies and soldiers.

Third, this battlefield changes how conflicts are fought. Social media has rendered secrets of any consequence essentially impossible to keep. Yet because virality can overwhelm truth, what is known can be reshaped. “Power” on this battlefield is thus measured not by physical strength or high-tech hardware, but by the command of attention.

Fourth, this battle changes what “war” means. Winning these online battles doesn’t just win the web, but wins the world. Each ephemeral victory drives events in the physical realm, from seemingly inconsequential celebrity feuds to history-changing elections.

Fifth, and finally, we’re all part of this war. If you are online, your attention is like a piece of contested territory, being fought over in conflicts that you may or may not realize are unfolding around you. Everything you watch, like, or share represents a tiny ripple on the information battlefield, privileging one side at the expense of others. Your online attention and actions are thus both targets and ammunition in an unending series of skirmishes.

The printing press revolution began in Europe around 1438, thanks to a former goldsmith, Johannes Gutenberg, who began experimenting with movable type. By 1450, he was peddling his mass-produced Bibles across Germany and France. Predictably, the powers of the day tried to control this disruptive new technology.

In 1517, a geopolitical match was struck when a German monk named Martin Luther wrote a letter laying out 95 problems in the Catholic Church. Where Luther’s arguments might once have been ignored, the printing press allowed his ideas to reach well beyond the bishop to whom he penned the letter. By the time the pope heard about the troublesome monk and sought to excommunicate him, Luther had reproduced his 95 complaints in 30 different pamphlets and sold 300,000 copies. The result was the Protestant Reformation, which would fuel two centuries of war and reshape the map of Europe.

In ancient Greece, the warrior Pheidippides famously ran 25 miles from Marathon to Athens to deliver word of the Greeks’ victory over the Persian army. (The 26.2-mile distance of the modern “marathon” dates from the 1908 Olympics, where the British royal family insisted on extending the route to meet their viewing stands.) The run to share the news would literally kill him.

While figures like J.C.R. Licklider and Robert W. Taylor had conceived ARPANET, Vint Cerf is rightfully known as the “father of the internet.”

Recognizing that the problem of compatibility would keep computerized communication from ever scaling, Cerf set out to find a solution. Working with his friend Robert Kahn, he designed TCP/IP (Transmission Control Protocol/Internet Protocol), an adaptable framework that could track and regulate the transmission of data across an exponentially expanding network. Essentially, it is what allowed the original ARPANET to bind together all the mini-networks at universities around the world. It remains the backbone of the internet to this day.
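As a hedged illustration (my own sketch, not the book’s), Python’s standard socket module still speaks the TCP/IP that Cerf and Kahn designed: TCP handles connection setup, ordering, and retransmission, so an application sees only a reliable stream of bytes, regardless of what networks sit in between.

```python
import socket
import threading

ready = threading.Event()
PORT = 50007  # arbitrary local port for the demonstration

def echo_server() -> None:
    """Accept one TCP connection on localhost and echo back whatever arrives."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", PORT))
        srv.listen(1)
        ready.set()                      # signal that the server is listening
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)       # TCP delivers the bytes in order, or not at all
            conn.sendall(data)

threading.Thread(target=echo_server, daemon=True).start()
ready.wait()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", PORT))     # the TCP three-way handshake happens here
    cli.sendall(b"hello, ARPANET")
    print(cli.recv(1024))                # b'hello, ARPANET'
```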

Back in 1980, the British physicist Tim Berners-Lee had developed a prototype of something called “hypertext.” This was a long-theorized system of “hyperlinks” that could bind digital information together in unprecedented ways. Called ENQUIRE, the system was a massive database where items were indexed based on their relationships to each other. It resembled a very early version of Wikipedia.

In 1990, there were 3 million computers connected to the internet. Five years later, there were 16 million. That number reached 360 million by the turn of the millennium.

“Today, Apple is reinventing the phone!” Jobs gleefully announced. Although nobody knew it at the time, the introduction of the iPhone also marked a moment of destruction. Family dinners, vacations, awkward elevator conversations, and even basic notions of privacy—all would soon be endangered by the glossy black rectangle Jobs held triumphantly in his hand.

Tim Berners-Lee has written, “The web that many connected to years ago is not what new users will find today. What was once a rich selection of blogs and websites has been compressed under the powerful weight of a few dominant platforms. This concentration of power creates a new set of gatekeepers, allowing a handful of platforms to control which ideas and opinions are seen and shared… What’s more, the fact that power is concentrated among so few companies has made it possible to weaponize the web at scale.”

There is one more difference between this and earlier tech revolutions: not all of these new kings live in the West. WeChat, a truly remarkable social media model, arose in 2011, unnoticed by many Westerners. Engineered to meet the unique requirements of the enormous but largely isolated Chinese internet, WeChat may be a model for the wider internet’s future. Known as a “super app,” it is a combination of social media and marketplace, the equivalent of companies like Facebook, Twitter, Amazon, Yelp, Uber, and eBay all fused into one, sustaining and steering a network of nearly a billion users.

When the internet first began to boom in the 1990s, internet theorists proclaimed that the networked world would lead to a wave of what they called “disintermediation.” They described how, by removing the need for “in-between” services, the internet would disrupt all sorts of longstanding industries. Disintermediation soon remade realms ranging from retail stores (courtesy of Amazon) and taxi companies (courtesy of Uber) to dating (courtesy of Tinder).

Any information put online comes with “metadata,” akin to digital stamps that provide underlying details of the point of origin and movement of any online data. Each tweet posted on Twitter, for instance, carries with it more than sixty-five different elements of metadata.
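To make the idea concrete, here is an illustrative sketch (field names follow Twitter’s classic v1.1 JSON format; all values are invented): a short post drags a long tail of metadata behind it.

```python
# A hedged illustration, not from the book: a handful of the metadata fields a tweet
# has historically carried in Twitter's public API, alongside the visible text.
tweet = {
    "created_at": "Wed Jun 11 14:02:11 +0000 2014",   # precise timestamp
    "id_str": "476789123456789012",                    # globally unique ID (invented value)
    "text": "#AllEyesOnISIS",
    "source": "Twitter for Android",                   # client application used to post
    "lang": "ar",                                      # detected language
    "coordinates": {"type": "Point", "coordinates": [43.13, 36.34]},  # geotag, if enabled
    "user": {
        "id_str": "1122334455",
        "location": "Mosul",
        "followers_count": 5120,
        "created_at": "Sat Mar 01 09:15:00 +0000 2014",
    },
    "retweet_count": 3041,
}

# The "invisible" half of a post is often the more revealing half.
print(tweet["user"]["location"], tweet["coordinates"], tweet["source"])
```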

The amount of data being gathered about the world around us and then put online is astounding. In a minute, Facebook sees the creation of 500,000 new comments, 293,000 new statuses, and 450,000 new photos; YouTube the uploading of more than 400 hours of video; and Twitter the posting of more than 300,000 tweets. And behind this lie billions more dots of added data and metadata, such as a friend tagging who appeared in that Facebook photo or the system marking which cellphone tower the message was transmitted through. In the United States, the size of this “digital universe” doubles roughly every three years.

In 2017, General Mark Milley, chief of staff of the U.S. Army, summed up what this means for the military: “For the first time in human history, it is near impossible to be unobserved.” Consider that in preparation for D-Day in June 1944, the Allies amassed 2 million soldiers and tens of thousands of tanks, cannons, jeeps, trucks, and airplanes in the British Isles. Although German intelligence knew that the Allied forces were there, they never figured out where or when they would strike. That information came only when the first Americans stormed Utah Beach.

In the fighting that began in Ukraine in 2014, Russian military intelligence pinpointed the smartphones of Ukrainian soldiers arriving on the front lines. Just as Ashley Madison uses geographically targeted data to fire off web ads to potentially philandering travelers, the Russians used it to send messages like “They’ll find your bodies when the snow melts.” Then their artillery began firing at the Ukrainians.

At the current pace, the average American millennial will take around 26,000 selfies in their lifetime. Fighter pilots take selfies during combat missions. Refugees take selfies to celebrate making it to safety. In 2016, one victim of an airplane hijacking scored the ultimate millennial coup: taking a selfie with his hijacker.

Exhaustive though it is, Trump’s online dossier encompasses only a single decade and started only when he was in his 60s. Nearly every future politician or general or soldier or voter will have a much bigger dataset, from much earlier in life. Indeed, this inescapable record may well change the prospects of those who wish to become leaders in the future. As Barack Obama said after he left office, “If you had pictures of everything I’d done in high school, I probably wouldn’t have been President of the United States.”

The ten killers snuck into Mumbai’s port aboard inflatable boats on November 26, 2008. Once ashore, they split up, fading into the megacity of some 18 million people. The attacks started soon after: a staccato of massacres at a railway station, a tourist café, a luxury hotel, and a synagogue. Over the next three days, 164 civilians and police officers would be killed. Another 300 would be injured. The tragedy would mark the deadliest Indian terror attack in a generation. It also signaled a radical change in how the news was both parsed and spread.

Mumbai’s online community kicked into gear, sharing riveting stories that quickly spread across the digital ecosystem. One brave resident took to the streets, snapping dozens of pictures. He posted them to the image-sharing service Flickr, originally created for video gamers. In a reversal of journalistic practice, these amateur photographs filled the front pages of newspapers the next day.

In a move that would soon become the norm, the Mumbai attacks got their own Wikipedia page—roughly four hours after the first shots had been fired. Dozens of volunteer editors pitched in, debating everything from serious allegations of external support (rumors of Pakistani government involvement were already swirling) to tricky issues of phrasing (were the attackers “Muslim militants” or “Muslim terrorists”?). All told, before the last terrorist had been cornered and shot, the Wikipedia entry was edited more than 1,800 times.

Hundreds of witnesses—some on-site, some from afar—had generated a volume of information that might previously have taken months of diligent reporting to gather. By stitching these individual accounts together, the online community had woven seemingly disparate bits of data into a cohesive whole. It was like watching the growing synaptic connections of a giant electric brain.

There is a word for this: “crowdsourcing.” An idea that had danced excitedly on the lips of Silicon Valley evangelists for years, crowdsourcing had been originally conceived as a new way to outsource software programming jobs, the internet bringing people together to work collectively, more quickly and cheaply than ever before. As social media use had skyrocketed, the promise of crowdsourcing had extended into a space beyond business. Mumbai proved an early, powerful demonstration of the concept in action.

A generation ago, Al Qaeda was started by the son of a Saudi billionaire. By the time of the Syrian civil war and the rise of ISIS, the internet was the “preferred arena for fundraising” for terrorism, for the same reasons it has proven so effective for startup companies, nonprofits, and political campaigns. It doesn’t just allow wide geographic reach. It expands the circle of fundraisers, seemingly linking even the smallest donor with their gift target on a personal level. As The Economist explained, this was, in fact, one of the key factors that fueled the years-long Syrian civil war. Fighters sourced needed funds by learning “to crowdfund their war using Instagram, Facebook and YouTube. In exchange for a sense of what the war was really like, the fighters asked for donations via PayPal. In effect, they sold their war online.”

Less and less is there a discrete “news cycle.” Now there is only the news, surrounding everyone like the Force in Star Wars, omnipresent and connected to all. The best way to describe the feeling that results is a term from the field of philosophy: “presentism.” In presentism, the past and future are pinched away, replaced by an incomprehensibly vast now. If you’ve ever found yourself paralyzed as you gaze at a continually updating Twitter feed or Facebook timeline, you know exactly what presentism feels like. Serious reflection on the past is hijacked by the urgency of the current moment; serious planning for the future is derailed by never-ending distraction. Media theorist Douglas Rushkoff has described this as “present shock.”

From favela life to cartel bloodlettings to civil wars, social media has erased the distinction between citizen, journalist, activist, and resistance fighter. Anyone with an internet connection can move seamlessly between these roles. Often, they can play them all at once.

Through their stubborn and breathtakingly focused investigation of the MH17 case, Eliot Higgins and Bellingcat displayed the remarkable new power of what is known as “open-source intelligence” (OSINT). With today’s OSINT, anyone can gather and process intelligence in a way that would have been difficult or impossible for even the CIA or KGB a generation ago.

At its most promising, the OSINT revolution doesn’t just help people parse secrets from publicly accessible information; it may also help them predict the future. Predata is a small company founded by James Shinn, a former CIA agent. Shinn modeled his unique service on sabermetrics, the statistics-driven baseball analysis method popularized by Michael Lewis’s book Moneyball. “By carefully gathering lots and lots of statistics on their past performance from all corners of the internet, we are predicting how a large number of players on a team will bat or pitch in the future,” Shinn explains. In this case, however, the statistics his firm mines come from tens of millions of social media feeds around the world. But instead of predicting hits and strikeouts, it predicts events like riots and wars.
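A toy illustration of the general approach (my own sketch with invented numbers, not Predata’s actual models): treat the volume of relevant chatter as a time series and flag the days that spike far above the recent baseline.

```python
from statistics import mean, stdev

# Invented daily counts of protest-related mentions; a real pipeline would pull these
# from millions of social media feeds.
daily_mentions = [120, 135, 128, 140, 131, 126, 138, 133, 480, 610]

window = daily_mentions[:-2]                    # the "normal" baseline period
mu, sigma = mean(window), stdev(window)
for day, count in enumerate(daily_mentions, 1):
    z = (count - mu) / sigma                    # how unusual today's chatter volume is
    if z > 3:
        print(f"Day {day}: elevated unrest risk (z-score {z:.1f})")
```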

For years, analysts had labored to maintain a sprawling, updated encyclopedia on the regions of the Soviet Union. Now there was Wikipedia. However, a few forward-thinking intelligence officers dared to take the next big cognitive leap. What if OSINT wasn’t losing its value, they asked, but was instead becoming the new coin of the realm? The question was painful because it required setting aside decades of training and established thinking. It meant envisioning a future in which the most valued secrets wouldn’t come from cracking intricate codes or the whispers of human spies behind enemy lines—the sort of information that only the government could gather. Instead, they would be mined from a vast web of open-source data, to which everyone else in the world had access. If this was true, it meant changing nearly every aspect of every intelligence agency, from shifting budget priorities and programs to altering the very way one looked at the world.

Before the rise of social media, Flynn explained, 90 percent of useful intelligence had come from secret sources. Now it was the exact opposite, with 90 percent coming from open sources that anyone could tap. Flynn sought to steer the agency in a new direction, boosting OSINT capabilities and prioritizing the hiring of computational analysts, who could put the data gushing from the digital fire hose to good use.

He didn’t realize the shake-up would prove a bridge too far. Flynn’s aggressive moves alarmed the DIA’s bureaucracy—not least because they threatened its jobs. The agency was soon mired in chaos. Flynn’s leadership was also questioned, his grand vision undermined by poor management. Just a year and a half after his term began, Flynn was informed that he was being replaced. He was forced into retirement, leaving the Army after thirty-three years of service.

In 2010, Mohamed Bouazizi, a 26-year-old Tunisian, touched off the next outbreak of web-powered freedom. Each morning for ten years, he had pushed a cart to the city marketplace, selling fruit to support his widowed mother and five siblings. Every so often, he had to navigate a shakedown from the police—the kind of petty corruption that had festered under the two-decade-long rule of dictator Zine el-Abidine Ben Ali. But on December 17, 2010, something inside Bouazizi snapped. After police confiscated his wares and he was denied a hearing to plead his case, Bouazizi sat down outside the local government building, doused his clothes with paint thinner, and lit a match.

Word of the young man’s self-immolation spread quickly through the social media accounts of Tunisians. His frustration with corruption was something almost every Tunisian had experienced. Dissidents began to organize online, planning protests and massive strikes. Ben Ali responded with slaughter, deploying snipers who shot citizens from rooftops. Rather than retreat, however, some protesters whipped out their smartphones. They captured grisly videos of death and martyrdom. These were shared tens of thousands of times on Facebook and YouTube. The protests transformed into a mass uprising. On January 14, 2011, Ben Ali fled the country.

As it turned out, the Arab Spring didn’t signal the first steps of a global, internet-enabled democratic movement. Rather, it represented a high-water mark. The much-celebrated revolutions began to fizzle and collapse. In Libya and Syria, digital activists would soon turn their talents to waging internecine civil wars. In Egypt, the baby named Facebook would grow up in a country that quickly turned back to authoritarian government, the new regime even more repressive than Mubarak’s.

VKontakte is the most popular social network in Russia. After anti-Putin protesters used VK in the wake of the Arab Spring, the regime began to take a greater interest in it and the company’s young, progressive-minded founder, Pavel Durov. When the man once known as “the Mark Zuckerberg of Russia” balked at sharing user data about his customers, armed men showed up at his apartment. He was then falsely accused of driving his Mercedes over a traffic cop’s foot, a ruse to imprison him. Getting the message, Durov sold his shares in the company to a Putin crony and fled the country.

In 1998, China formally launched its Golden Shield Project, a feat of digital engineering on a par with mighty physical creations like the Three Gorges Dam. The intent was to transform the Chinese internet into the largest surveillance network in history—a database with records of every citizen, an army of censors and internet police, and automated systems to track and control every piece of information transmitted over the web. The project cost billions of dollars and employed tens of thousands of workers. Its development continues to this day. Notably, the design and construction of some of the key components of this internal internet were outsourced to American companies—particularly Sun Microsystems and Cisco—which provided the experience gained from building vast, closed networks for major businesses.

The most prominent part of the Golden Shield Project is its system of keyword filtering. Should a word or phrase be added to the list of banned terms, it effectively ceases to be. As Chinese internet users leapt from early, static websites to early-2000s blogging platforms to the rise of massive “microblogging” social media services starting in 2009, this system kept pace.

So ubiquitous is the filter that it has spawned a wave of surreal wordplay to try to get around it. For years, Chinese internet users referred to “censorship” as “harmony”—a coy reference to Hu Jintao’s “harmonious society.” To censor a term, they’d say, was to “harmonize” it. Eventually, the censors caught on and banned the use of the word “harmony.” As it happens, however, the Chinese word for “harmony” sounds similar to the word for “river crab.” When a word had been censored, savvy Chinese internet users then took to calling it “river crab’d.”
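The mechanism itself is simple. A minimal sketch (an assumption on my part, not a description of the real Golden Shield code) shows why euphemisms work until the censors add them to the list.

```python
# The core of keyword filtering is just a banned-term list checked against every post.
BANNED = {"harmony", "harmonize"}   # the list grows as each new euphemism catches on

def is_blocked(post, banned=BANNED):
    """Return True if the post contains any banned term."""
    text = post.lower()
    return any(term in text for term in banned)

print(is_blocked("They decided to harmonize his blog"))   # True: the term is on the list
print(is_blocked("His blog has been river crab'd"))        # False: the euphemism slips through
```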

From the first days of the Chinese internet, authorities have ruled that websites and social media services bear the legal responsibility to squelch any “subversive” content hosted on their networks. The definition of this term can shift suddenly. Following a spate of corruption scandals in 2016, for instance, the government simply banned all online news reporting that did not originate with state media. It became the duty of individual websites to eliminate such stories or suffer the consequences.

Although China saw the emergence of an independent blogging community in the early 2000s, the situation abruptly reversed in 2013 with the ascendancy of President Xi Jinping. That year, China’s top court ruled that individuals could be charged with defamation (and risk a three-year prison sentence) if they spread “online rumors” seen by 5,000 internet users or shared more than 500 times.

The government soon took an even harder line. Charles Xue, a popular Chinese American blogger and venture capitalist, was arrested under suspicious circumstances. He appeared on state television in handcuffs shortly afterward, denouncing his blogging past and arguing for state control of the internet.

Since Xi came to power, tens of thousands of Chinese citizens have been charged with “cybercrimes,” whose definition has expanded from hacking to pretty much anything digital that authorities don’t like. In 2017, for instance, Chinese regulators determined that the creator of a WeChat discussion group wasn’t responsible just for their own speech, but also for the speech of each group member.

By 2008, the 50-Cent Army had swelled to roughly 280,000 members. Today, there are as many as 2 million members, churning out at least 500 million social media postings each year.

In the restive Muslim-minority region of Xinjiang, residents have been forced to install the Jingwang (web-cleansing) app on their smartphones. The app not only allows their messages to be tracked or blocked, but it also comes with a remote-control feature, allowing authorities direct access to residents’ phones and home networks. To ensure that people were installing these “electronic handcuffs,” the police have set up roving checkpoints in the streets to inspect people’s phones for the app.

From its birth, the Soviet Union relied on the clever manipulation and weaponization of falsehood (called dezinformatsiya), both to wage ideological battles abroad and to control its population at home. One story tells how, when a forerunner of the KGB set up an office in 1923 to harness the power of dezinformatsiya, it invented a new word—“disinformation”—to make it sound of French origin instead. In this way, even the origin of the term was buried in half-truths.

In article 29 of its newly democratic constitution, the Russian Federation sought to close the door on the era of state-controlled media and shadowy propaganda campaigns. “Everyone shall have the right to freely look for, receive, transmit, produce and distribute information by any legal way,” the document declared.

Unlike the Soviet Union of the past, or how China and many other regimes operate today, Russia doesn’t prevent political opposition. Indeed, opposition makes things more interesting—just so long as it abides by the unspoken rules of the game. A good opponent for the government is a man like Vladimir Zhirinovsky, an army colonel who premised his political movement on free vodka for men and better underwear for women. He once proposed beating the bird flu epidemic by shooting all the birds from the sky. Zhirinovsky was entertaining, but he also made Putin seem more sensible in comparison. By contrast, Boris Nemtsov was not a “good” opponent. He argued for government reform, investigated charges of corruption, and organized mass protests. In 2015, he was murdered, shot four times in the back as he crossed a bridge. The government prefers caricatures to real threats. Nemtsov was one of at least thirty-eight prominent opponents of Putin who died under dubious circumstances between 2014 and 2017 alone, from radioactive poisonings to tumbling down an elevator shaft.

The outcome has been an illusion of free speech within a newfangled Potemkin village. “The Kremlin’s idea is to own all forms of political discourse, to not let any independent movements develop outside its walls,” writes Peter Pomerantsev, author of Nothing Is True and Everything Is Possible. “Moscow can feel like an oligarchy in the morning and a democracy in the afternoon, a monarchy for dinner and a totalitarian state by bedtime.”

The aim of Russia’s new strategy, and its military essence, was best articulated by Valery Gerasimov, the country’s top-ranking general at the time. He channeled Clausewitz, declaring in a speech reprinted in the Russian military’s newspaper that “the role of nonmilitary means of achieving political and strategic goals has grown. In many cases, they have exceeded the power of force of weapons in their effectiveness.”

These observations, popularly known as the Gerasimov Doctrine, have been enshrined in Russian military theory, even formally written into the nation’s military strategy in 2014.

RT was originally launched with a Russian government budget of $30 million per year in 2005. By 2015, the budget had jumped to approximately $400 million, an investment more in line with the Russian view of the outlet as a “weapons system” of influence. That support, and the fact that its long-serving editor in chief, Margarita Simonyan, simultaneously worked on Putin’s election team, belies any claims of RT’s independence from the Russian government.

The strategy also works to blunt the impact of any news that is harmful to Russia, spinning up false and salacious headlines to crowd out the genuine ones.

Recall how Eliot Higgins and Bellingcat pierced the fog of war surrounding the crash of flight MH17, compiling open-source data to show—beyond a reasonable doubt—that Russia had supplied and manned the surface-to-air missile launcher that stole 298 lives. The first response from Russia was a blanket denial of any role in the tragedy, accompanied by an all-out assault on the Wikipedia page that had been created for the MH17 investigation, seeking to erase any mention of Russia. Then came a series of alternative explanations pushed out by the official media network, echoed by allies across the internet. First the Ukrainian government was to blame. Then the Malaysian airline was at fault. (“Questions over Why Malaysia Plane Flew over Ukrainian Warzone,” one headline read, even though the plane flew on an internationally approved route.) And then it was time to play the victim, claiming Russia was being targeted by a Western smear campaign. Mounting evidence of Russia’s involvement in the shootdown proved little deterrent.

Shortly after the release of the Bellingcat exposé showing who had shot the missiles, Russian media breathlessly announced that, actually, a newfound satellite image showed the final seconds of MH17. Furthermore, it could be trusted, as the image had both originated with the Russian Union of Engineers and been confirmed by an independent expert.

The photo was indeed remarkable, showing a Ukrainian fighter jet in the act of firing at the doomed airliner. It was a literal smoking gun. It was also a clear forgery. The photo’s background revealed it had been stitched together from multiple satellite images. It also pictured the wrong type of attack aircraft, while the airliner said to be MH17 was just a bad photoshop job. Then it turned out the engineering expert validating it did not actually have an engineering degree. The head of the Russian Union of Engineers, meanwhile, explained where he’d found it: “It came from the internet.”

All told, Russian media and proxies spun at least a half dozen theories regarding the MH17 tragedy. It hardly mattered that these narratives often invalidated each other. (In addition to the fake fighter jet photos, another set of doctored satellite images and videos claimed to show it hadn’t been a Russian, but rather a Ukrainian, surface-to-air missile launcher in the vicinity of the shootdown, meaning now the airliner had somehow been shot down from both above and below.) The point of this barrage was to instill doubt—to make people wonder how, with so many conflicting stories, one could be more “right” than any other.

Yet, for all the noise generated by Russia’s global media network of digital disinformation sites, there’s an even more effective, parallel effort that lurks in the shadows. Known as “web brigades,” this effort entails a vast online army of paid commenters (among them our charming philosophy major) who push the campaign through individual social media accounts. Unlike the 50-Cent Army of China, however, the Russian version isn’t tasked with spreading positivity. In the words of our philosophy student’s boss, his job was to sow “civil unrest” among Russia’s foes.

Each day, our hapless Russian philosophy major and hundreds of other young hipsters would arrive for work at organizations like the innocuously named Internet Research Agency, located in an ugly neo-Stalinist building in St. Petersburg’s Primorsky District. They’d settle into their cramped cubicles and get down to business, assuming a series of fake identities known as “sockpuppets.” The job was writing hundreds of social media posts per day, with the goal of hijacking conversations and spreading lies, all to the benefit of the Russian government.

The hard work of a sockpuppet takes three forms, best illustrated by how they operated during the 2016 U.S. election. One is to pose as the organizer of a trusted group. @Ten_GOP called itself the “unofficial Twitter account of Tennessee Republicans” and was followed by over 136,000 people (ten times as many as the official Tennessee Republican Party account). Its 3,107 messages were retweeted 1,213,506 times.

The second sockpuppet tactic is to pose as a trusted news source. With a cover photo of the U.S. Constitution, @tpartynews presented itself as a hub for conservative fans of the Tea Party to track the latest headlines.

Finally, sockpuppets pose as seemingly trustworthy individuals: a grandmother, a blue-collar worker from the Midwest, a decorated veteran, providing their own heartfelt take on current events (and who to vote for).

Perhaps the most pernicious effect of these strategies, however, is how they warp our view of the world around us. It is a latter-day incarnation of the phenomenon explored in Gaslight, a 1938 play that was subsequently turned into a movie. In the story, a husband seeks to convince his new wife that she’s going mad (intending to get her committed to an asylum and steal her hidden jewels). He makes small changes to her surroundings—moving a painting or walking in the attic—then tells her that the things she is seeing and hearing didn’t actually occur. The play’s title comes from the house’s gas lighting, which dims and brightens as he prowls the house late at night. Slowly but surely, he shatters his wife’s sense of reality.

When the social media revolution began in earnest, Silicon Valley evangelists enthused about the possibilities that would result from giving everyone “access to their own printing press.” It would break down barriers and let all opinions be heard. These starry-eyed engineers should have read up on their political philosophy. Nearly two centuries earlier, the French scholar of democracy Alexis de Tocqueville—one of the first foreigners to travel extensively in the new United States of America—pondered the same question. “It is an axiom of political science in the United States,” he concluded, “that the only way to neutralize the influence of newspapers is to multiply their number.” The greater the number of newspapers, he reasoned, the harder it would be to reach public consensus about a set of facts.

When you decide to share a particular piece of content, you are not only influencing the future information environment, you are also being influenced by any information that has passed your way already. In an exhaustive series of experiments, Yale University researchers found that people were significantly more likely to believe a headline (“Pope Francis Shocks World, Endorses Donald Trump for President”) if they had seen a similar headline before. It didn’t matter if the story was untrue; it didn’t even matter if the story was preceded by a warning that it might be fake. What counted most was familiarity. The more often you hear a claim, the less likely you are to assess it critically. And the longer you linger in a particular community, the more its claims will be repeated until they become truisms—even if they remain the opposite of the truth.

In California, the percentage of parents applying a “personal belief exception” to avoid vaccinating their kindergartners quadrupled between 2000 and 2013, and disease transmittal rates among kids soared as a result. Cases of childhood illnesses like whooping cough reached a sixty-year high, while the Disneyland resort was rocked by an outbreak of measles that sickened 147 children. Fighting an infectious army of digital conspiracy theorists, the State of California eventually gave up arguing and passed a law requiring kindergarten vaccinations, which only provided more conspiracy theory fodder.

Social media transports users to a world in which their every view seems widely shared. It helps them find others just like them. After a group is formed, the power of homophily then knits it ever closer together. U.S. Army colonel turned historian Robert Bateman summarizes it pointedly: “Once, every village had an idiot. It took the internet to bring them all together.”

Fact, after all, is a matter of consensus. Eliminate that consensus, and fact becomes a matter of opinion. Learn how to command and manipulate that opinion, and you are entitled to reshape the fabric of the world.

Rather than a free-for-all among millions of people, the battle for attention is actually dominated by a handful of key nodes in the network. Whenever they click “share,” these “super-spreaders” (a term drawn from studies of biologic contagion) are essentially firing a Death Star laser that can redirect the attention of huge swaths of the internet. This even happens in the relatively controlled parts of the web. A study of 330 million Chinese Weibo users, for instance, found a wild skew in influence: fewer than 200,000 users had more than 100,000 followers; only about 3,000 accounts had more than 1 million. When researchers looked more closely at how conversations started, they found that the opinions of these hundreds of millions of voices were guided by a mere 300 accounts.
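As a rough illustration of how lopsided that distribution is (a toy simulation with invented numbers, not the Weibo study’s data), a heavy-tailed follower distribution concentrates most of the potential reach in a sliver of accounts.

```python
import random

random.seed(0)
# Pareto-distributed follower counts give the kind of heavy tail the study describes.
followers = sorted((int(random.paretovariate(1.2)) for _ in range(100_000)), reverse=True)

total = sum(followers)
top_share = sum(followers[:300]) / total
print(f"Top 300 of 100,000 simulated accounts hold {top_share:.1%} of all follower links")
```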

Our bodies are programmed to consume fats and sugars because they’re rare in nature… In the same way, we’re biologically programmed to be attentive to things that stimulate: content that is gross, violent, or sexual and gossip which is humiliating, embarrassing, or offensive. If we’re not careful, we’re going to develop the psychological equivalent of obesity. We’ll find ourselves consuming content that is least beneficial for ourselves or society as a whole.

In 2016, researchers were stunned to discover that 59 percent of all links posted on social media had never been clicked on by the person who shared them.

Across the board, just one-tenth of professional media coverage focused on the 2016 presidential candidates’ actual policy positions. From the start of the year to the last week before the vote, the nightly news broadcasts of the “big three” networks (ABC, CBS, and NBC) devoted a total of just thirty-two minutes to examining the actual policy issues to be decided in the 2016 election!

The human brain was never equipped to operate in an information environment that moves at the speed of light.

Often, these fake followers are easy to track. In 2016, internet users had a collective chuckle when People’s Daily, the main Chinese propaganda outlet, launched a Facebook page that swiftly attracted 18 million “likes,” despite Facebook being banned in China. This included more than a million “fans” in Myanmar (out of the then 7 million Facebook users in that country), who instantly decided to “like” China.

The attention economy may have been built by humans, but it is now ruled by algorithms—some with agendas all their own.

On the one hand, ISIS was a religious cult that subscribed to a medieval, apocalyptic interpretation of the Quran. It was led by a scholar with a PhD in Islamic theology, its units commanded by men who had been jihadists since the 1980s. But on the other hand, ISIS was largely composed of young millennials. Its tens of thousands of eager recruits, most drawn from Syria, Iraq, and Tunisia, had grown up with smartphones and Facebook. The result was a terrorist group with a seventh-century view of the world that, nonetheless, could only be understood as a creature of the new internet.

“Terrorism is theater,” declared RAND Corporation analyst Brian Jenkins in a 1974 report that became one of terrorism’s foundational studies. Command enough attention and it didn’t matter how weak or strong you were: you could bend populations to your will and cow the most powerful adversaries into submission.

Content that can be readily perceived as quirky or contradictory will gain a disproportionate amount of attention. A single image of an ISIS fighter posing with a jar of Nutella, for instance, was enough to launch dozens of copycat news articles. These three traits—simplicity, resonance, and novelty—determine which narratives stick and which fall flat.

Amusement, shock, and outrage determine how quickly and how far a given piece of information will spread through a social network. Or, in simpler terms, content that can be labeled “LOL,” “OMG,” or “WTF.”

Although the word “troll” conjures images of beasts lurking under bridges and dates back to Scandinavian folklore, its modern internet use actually has its roots in the Vietnam War. American F-4 Phantom fighter jets would linger near North Vietnamese strongholds, taunting them. If eager, inexperienced enemy pilots took the bait and moved to attack, the Americans’ superior engines would suddenly roar into action, and the aces would turn to shoot down their foes. American pilots called this deception “trolling for MiGs.” Early online discussion boards copied both the term and the technique.

The lesson for BuzzFeed, and for all aspiring social media warriors, was to make many small bets, knowing that some of them would pay off big.

Since 2003, the Chinese military has followed an information policy built on the “three warfares”: psychological warfare (manipulation of perception and beliefs), legal warfare (manipulation of treaties and international law), and public opinion warfare (manipulation of both Chinese and foreign populations). Where China is strong, its strengths must be amplified even further in the public imagination; where China is weak, attention must be diverted. China must be seen as a peaceful nation, bullied by powerful adversaries and “reluctantly” responding by building its armies and laying claim to new lands.

Pepe the Frog was born in 2005 to the San Francisco–based artist Matt Furie. One of four teenage monsters in Furie’s Boy’s Club comic series, Pepe was just a cartoon layabout who spent his days “drinkin’, stinkin’, and never thinkin’.”

In a sense, Pepe became the ideal online phenomenon—popular and endlessly adaptable, while remaining too weird and unattractive to ever go fully mainstream.

On Inauguration Day in Washington, DC, buttons and printouts of Pepe were visible in the crowd. Online vendors began selling a hat printed in the same style as those worn by military veterans of Vietnam, Korea, and World War II. It proudly pronounced its wearer as a “Meme War Veteran.”

Had Pepe really been racist? The answer is yes. Had Pepe been an innocent, silly joke? Also, yes. In truth, Pepe was a prism, a symbol continually reinterpreted and repurposed by internet pranksters, Trump supporters, liberal activists, ultranationalists, and everyone who just happened to glimpse him in passing.

“The computers in which memes live are human brains,” Dawkins wrote. Memes are born from human culture and shaped and transmitted by language. Over time, a meme might become increasingly self-referential and complex, spawning clusters of new memes. A meme is “alive” only so long as it exists in the human mind. For a meme to be forgotten means that it goes extinct, the same as a species that can no longer pass on its genetic code.

Making something go viral is hard; co-opting or poisoning something that’s already viral can be remarkably easy.

These two worlds—the leading edge of military theory and the dark kingdom of internet trolls—unsurprisingly found each other online. The union came in the form of Jeff Giesea, a tech consultant who worked as an early and avid organizer for Trump. He was one of the cofounders of MAGA3X, a meme-generating hub for the Trump online army, which described itself as “Freedom’s Secret Weapon.” Giesea felt that the election’s relentless creation and co-option of memes echoed a larger shift in global affairs—one that had caught the United States and most democracies off guard. So he put his thoughts to paper in an article titled “It’s Time to Embrace Memetic Warfare.” The document wasn’t published on a Trump fan site, however, but in the journal of NATO’s Strategic Communications Centre of Excellence.

By 2017, Israel’s hasbara offensive had its first smartphone app, trumpeted by its creators as the “Iron Dome of Truth.” The web ad for the app showed two scantily clad young women crooning in a man’s ear, “You are going to tell the whole world the real truth about Israel!” The app worked by pairing users with different online “missions” and rewarding them with points and badges. In one case, the app urged users to write positive things on comedian Conan O’Brien’s Instagram page during his visit to Israel. In another, it prompted them to “report” a Facebook image that had superimposed the Israeli flag over a picture of a cockroach. It offered a glimpse of war’s future: organized but crowdsourced, directed but distributed.

In early 2014, a policy paper began circulating in the Kremlin, outlining what steps Russia should take if President Viktor Yanukovych, the pro-Russian autocrat who controlled Ukraine, was toppled from power.

Just two weeks later, amidst the mounting protests known as the Euromaidan, the unpopular Yanukovych fled the country. As proof of the emerging power of social media, the name of this Ukrainian revolt was taken from a Twitter hashtag (it combined “Europe,” for the demonstrators’ desire to partner with Europe instead of Russia, and “Maidan Nezalezhnosti,” the square in Kiev where they gathered).

“I began to understand that I was caught up in two wars: one fought on the ground with tanks and artillery, and an information war fought… through social media,” he wrote. “And, perhaps counterintuitively, it mattered more who won the war of words and narratives than who had the most potent weaponry.” The result was a violent, confusing, paralyzing mess—precisely as Russia intended.

Because virality is incompatible with complexity, as content trends, any context and details are quickly stripped away. All that remains is the controversy itself, spread unwittingly by people who feel the need to “weigh in” on how fake or nonsensical it sounds.

The door is being slowly opened to a bizarre but not impossible future where the world’s great powers might fall to bloodshed due—in part—to matters getting out of hand online. In this dynamic, one is reminded of how the First World War began. As war clouds gathered over Europe in 1914, the advisors to both the German kaiser and the Russian tsar came to the same curious conclusion. Confiding in their diaries at the time, they wrote that they feared more the anger of their populace if they didn’t go to war than the consequences if they did. They had used the new communications technology of the day to stoke the fires of nationalism for their own purposes, but then found that these forces had moved beyond their control. Fearing that not going to war would cost them their thrones, the monarchs started the war that… cost them their thrones.

If there was a moment that signified the end of Silicon Valley as an explicitly American institution, it came in 2013, when a young defense contractor named Edward Snowden boarded a Hong Kong–bound plane with tens of thousands of top-secret digitized documents. The “Snowden Files,” which would be broadcast through social media, revealed an expansive U.S. spy operation that harvested the metadata of every major social media platform except Twitter. For primarily U.S.-based engineers, this was an extraordinarily invasive breach of trust. As a result, Google, Facebook, and Twitter began publishing “transparency reports” that detailed the number of censorship and surveillance requests from every nation, including the United States. “After Snowden,” explained Scott Carpenter, director of Google’s internal think tank, “Google does not think of itself all the time as an American company, but a global company.”

To paraphrase Winston Churchill, never before has so much, posted by so many, been moderated by so few. When WhatsApp was being used by ISIS to coordinate the first battle for Mosul, the company had just 55 employees for its 900 million users.

Every minute, Facebook users post 500,000 new comments, 293,000 new statuses, and 450,000 new photos; YouTube users, more than 400 hours of video; and Twitter users, more than 300,000 tweets.

Neural networks are a new kind of computing system: a calculating machine that hardly resembles a “machine” at all. Although such networks were theorized as far back as the 1940s, they’ve only matured during this decade as cloud processing has begun to make them practical. Instead of rule-based programming that relies on formal logic (“If A = yes, run process B; if A = no, run process C”), neural networks resemble living brains. They’re composed of millions of artificial neurons, each of which draws connections to thousands of other neurons via “synapses.”

These networks function by means of pattern recognition. They sift through massive amounts of data, spying commonalities and making inferences about what might belong where. With enough neurons, it becomes possible to split the network into multiple “layers,” each discovering a new pattern by starting with the findings of the previous layer.

Neural networks are trained via a process known as “deep learning.” Originally, this process was supervised. A flesh-and-blood human engineer fed the network a mountain of data (10 million images or a library of English literature) and slowly guided the network to find what the engineer was looking for (a “car” or a “compliment”).
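A hedged, toy-scale sketch of that supervised process (my own illustration in plain numpy, nowhere near the scale described): a two-layer network of artificial neurons learns a tiny labeled task, the XOR function, by nudging its synapse weights whenever its guesses miss the labels.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # human-supplied labels (XOR)

def with_bias(a):
    """Append a constant-1 column so each layer of neurons gets a bias term."""
    return np.hstack([a, np.ones((a.shape[0], 1))])

W1 = rng.normal(size=(3, 8))   # synapses: input (+bias) -> 8 hidden neurons
W2 = rng.normal(size=(9, 1))   # synapses: hidden (+bias) -> 1 output neuron
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20_000):
    h = sigmoid(with_bias(X) @ W1)                 # hidden layer finds intermediate patterns
    out = sigmoid(with_bias(h) @ W2)               # output layer combines them into a guess
    grad_out = (out - y) * out * (1 - out)         # error signal at the output
    grad_h = (grad_out @ W2[:-1].T) * h * (1 - h)  # error pushed back through the synapses
    W2 -= 0.5 * with_bias(h).T @ grad_out          # adjust the weights a little...
    W1 -= 0.5 * with_bias(X).T @ grad_h            # ...and repeat, thousands of times

print(out.round(2).ravel())   # should approach [0, 1, 1, 0] as training converges
```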

In 2012, engineers with the Google Brain project published a groundbreaking study that documented how they had fed a nine-layer neural network 10 million different screenshots from random YouTube videos, leaving it to play with the data on its own. As it sifted through the screenshots, the neural network—just like many human YouTube users—developed a fascination with pictures of cats. By discovering and isolating a set of cat-related qualities, it taught itself to be an effective cat detector. “We never told it during the training, ‘This is a cat,’” explained one of the Google engineers. “It basically invented the concept of a cat.”

Yet this was really no different from the thought process of a human brain. Nobody is programmed from birth with a set, metaphysical definition of a cat. Instead, we learn a set of catlike qualities that we measure against each thing we perceive.

Feed the network enough voice audio recordings, and it will learn to recognize speech. Feed it the traffic density of a city, and it will tell you where to put the traffic lights. Feed it 100 million Facebook likes and purchase histories, and it will predict, quite accurately, what any one person might want to buy or even whom they might vote for.
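
A hedged sketch of that last case, using an invented matrix of likes and a simple logistic-regression-style fit rather than a full neural network; the point is only that a pattern hidden in the likes becomes a prediction about a new person.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data: rows are people, columns are pages they have liked (1) or not (0).
likes = rng.integers(0, 2, size=(1000, 50)).astype(float)

# Hypothetical outcome (bought the product, say), secretly driven by a few of the likes.
bought = (likes[:, :5].sum(axis=1) + rng.normal(0, 0.5, 1000) > 2.5).astype(float)

# Learn which likes predict the outcome.
w = np.zeros(50)
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(likes @ w)))
    w -= 0.1 * likes.T @ (p - bought) / len(bought)

new_person = rng.integers(0, 2, size=50).astype(float)
print(1.0 / (1.0 + np.exp(-(new_person @ w))))  # predicted probability for a new profile
```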

Neural network–trained chatbots—also known as machine-driven communications tools, or MADCOMs—have no script at all, just the speech patterns deciphered by studying millions or billions of conversations. Instead of contemplating how MADCOMs might be used, it’s easier to ask what one might not accomplish with intelligent, adaptive algorithms that mirror human speech patterns.
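
Real MADCOMs rely on large neural sequence models, but the scriptless idea can be sketched with a word-level Markov chain: the “bot” below has no canned replies, only the word-to-word patterns it observed in a tiny, invented conversation corpus.

```python
import random
from collections import defaultdict

# A tiny invented corpus; a real MADCOM would learn from millions of conversations.
corpus = [
    "i love this new phone",
    "i love this new song",
    "this new song is great",
    "the game last night was great",
]

# Learn which word tends to follow which -- no script, only observed patterns.
follows = defaultdict(list)
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)

def reply(seed_word, length=6):
    """Generate a reply by repeatedly sampling a plausible next word."""
    out = [seed_word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(reply("i"))   # e.g. "i love this new song is great"
```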

In 2016, Microsoft launched Tay, a neural network–powered chatbot that adopted the speech patterns of a teenage girl. Anyone could speak to Tay and contribute to her dataset; she was also given a Twitter account. Trolls swarmed Tay immediately, and she was as happy to learn from them as from anyone else. Tay’s bubbly personality soon veered into racism, sexism, and Holocaust denial. “RACE WAR NOW,” she tweeted, later adding, “Bush did 9/11.” After less than a day, Tay was unceremoniously put to sleep, her fevered artificial brain left to dream of electric frogs.

Just as they can study recorded speech to infer meaning, these networks can also study a database of words and sounds to infer the components of speech—pitch, cadence, intonation—and learn to mimic a speaker’s voice almost perfectly.
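
One of those components, pitch, can be estimated from raw audio with a few lines of signal processing; the sketch below uses a synthetic 200 Hz tone in place of recorded speech, and a real voice-cloning system learns such features from data rather than computing them by hand.

```python
import numpy as np

def estimate_pitch(frame, sample_rate, fmin=75.0, fmax=400.0):
    """Rough fundamental-frequency estimate for one audio frame, via autocorrelation."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sample_rate / fmax), int(sample_rate / fmin)
    lag = lo + int(np.argmax(corr[lo:hi]))
    return sample_rate / lag

# A synthetic 200 Hz tone stands in for a short snippet of recorded speech.
sr = 16000
t = np.arange(0, 0.03, 1 / sr)
print(round(estimate_pitch(np.sin(2 * np.pi * 200 * t), sr)))   # ~200
```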

One such “speech synthesis” startup, called Lyrebird, shocked the world in 2017 when it released recordings of an eerily accurate, entirely fake conversation between Barack Obama, Hillary Clinton, and Donald Trump. Another company unveiled an editing tool that it described as “Photoshop for audio,” showing how one can tweak or add new bits of speech to an audio file as easily as one might touch up an image.

Matthew Chessen, a senior technology policy advisor at the U.S. State Department, doesn’t mince words about the inevitable MADCOM ascendancy. It will “determine the fate of the internet, our society, and our democracy,” he writes. No longer will humans be reliably in charge of the machines. Instead, as machines steer our ideas and culture in an automated, evolutionary process that we no longer understand, they will “start programming us.”

For generations, science fiction writers have been obsessed with the prospect of an AI Armageddon: a Terminator-style takeover in which the robots scour puny human cities, flamethrowers and beam cannons at the ready. Yet the more likely takeover will take place on social media. If machines come to manipulate all we see and how we think online, they’ll already control the world. Having won their most important conquest—the human mind—the machines may never need to revolt at all.

The Joint Readiness Training Center at Fort Polk holds a special place in military history. It was created as part of the Louisiana Maneuvers, a series of massive training exercises held just before the United States entered World War II. When Hitler and his blitzkrieg rolled over Europe, the U.S. Army realized warfare was operating by a new set of rules. It had to figure out how to transition from a world of horses and telegraphs to one of mechanized tanks and trucks, guided by wireless communications. It was at Fort Polk that American soldiers, including such legendary figures as Dwight D. Eisenhower and George S. Patton, learned how to fight in a way that would preserve the free world.

Since then, Fort Polk has served as a continuous field laboratory where the U.S. Army trains for tomorrow’s battles. During the Cold War, it was used to prepare for feared clashes with the Soviet Red Army and then to acclimatize troops to the jungles of Vietnam. After 9/11, the 72,000-acre site was transformed into the fake province of Kirsham, replete with twelve plywood villages, an opposing force of simulated insurgents, and scores of full-time actors playing civilians caught in the middle: in short, everything the Army thought it needed to simulate how war was changing.

Sean Parker created one of the first file-sharing social networks, Napster, and then became Facebook’s first president. However, he has since become a social media “conscientious objector,” leaving the world that he helped make. Parker laments not just what social media has already done to us, but what it bodes for the next generation. “God only knows what it’s doing to our children’s brains,” he said in 2017.

Dangerous speech falls into one or more of five categories: dehumanizing language (comparing people to animals, or describing them as “disgusting” or subhuman in some way); coded language (using coy historical references, loaded memes, or terms popular among hate groups); suggestions of impurity (that a target is unworthy of equal rights, somehow “poisoning” society as a whole); opportunistic claims of attacks on women, made by people with no concern for women’s rights (which allows the group to claim a valorous reason for its hate); and accusation in a mirror (a reversal of reality, in which a group is falsely told it is under attack, as a means to justify preemptive violence against the target).

If we want to stop being manipulated, we must change how we navigate the new media environment. In our daily lives, all of us must recognize that the intent of most online content is to subtly influence and manipulate. In response, we should practice a technique called “lateral reading.” In a study of information consumption patterns, Stanford University researchers gauged three groups—college undergraduates, history PhDs, and professional fact-checkers—on how they evaluated the accuracy of online information. Surprisingly, both the undergraduates and the PhDs scored low. While certainly intelligent, they approached the information “vertically.” They stayed within a single worldview, parsing the content of only one source. As a result, they were “easily manipulated.”

By contrast, the fact-checkers didn’t just recognize online manipulation more often, they also detected it far more rapidly. The reason was that they approached the task “laterally,” leaping across multiple other websites as they made a determination of accuracy. As the Stanford team wrote, the fact-checkers “understood the Web as a maze filled with trap doors and blind alleys, where things are not always what they seem.” So they constantly linked to other locales and sources, “seeking context and perspective.” In short, they networked out to find the truth. The best way to navigate the internet is one that reflects the very structure of the internet itself.

When in doubt, seek a second opinion—then a third, then a fourth. If you’re not in doubt, then you’re likely part of the problem!

Plato’s Republic, written around 380 BCE, is one of the foundational works of Western philosophy and politics. One of its most important insights is conveyed through “The Allegory of the Cave.” It tells the story of prisoners in a cave, who watch shadows dance across the wall. Knowing only that world, they think the shadows are reality, when actually they are just the reflections of a light they cannot see. (Note this ancient parallel to Zuckerberg’s fundamental notion that Facebook was “a mirror of what existed in real life.”)

The true lesson comes, though, when one prisoner escapes the cave. He sees real light for the first time, finally understanding the nature of his reality. Yet the prisoners inside the cave refuse to believe him. They are thus prisoners not just of their chains but also of their beliefs. They hold fast to the manufactured reality instead of opening up to the truth.

One of the underlying themes of Plato’s cave is that power turns on perception and choice. It shows that if people are unwilling to contemplate the world around them in its actuality, they can be easily manipulated. Yet they have only themselves to blame. They, rather than the “ruler,” possess the real power—the power to decide what to believe and what to tell others. So, too, in The Matrix, every person has a choice. You can pick a red pill (itself now an internet meme) that offers the truth. Or you can pick a blue pill, which allows you to “believe whatever you want to believe.”

In this new world, the same basic law applies to us all: You are now what you share. And through what you choose, you share who you truly are.