These highlights are from the Kindle version of The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future by Kevin Kelly.
The enduring consequences of computation did not begin until the early 1980s, that moment when computers married phones and melded into a robust hybrid.
By its nature, digital network technology rattles international borders because it is borderless. There will be heartbreak, conflict, and confusion in addition to incredible benefits.
Our first impulse when we confront extreme technology surging forward in this digital sphere may be to push back. To stop it, prohibit it, deny it, or at least make it hard to use. (As one example, when the internet made it easy to copy music and movies, Hollywood and the music industry did everything they could to stop the copying. To no avail. They succeeded only in making enemies of their customers.) Banning the inevitable usually backfires. Prohibition is at best temporary, and in the long run counterproductive.
The highest mountains are slowly wearing away under our feet, while every animal and plant species on the planet is morphing into something different in ultra slow motion. Even the eternal shining sun is fading on an astronomical schedule, though we will be long gone when it does. Human culture, and biology too, are part of this imperceptible slide toward something new.
Our greatest invention in the past 200 years was not a particular gadget or tool but the invention of the scientific process itself. Once we invented the scientific method, we could immediately create thousands of other amazing things we could have never discovered any other way. This methodical process of constant change and improvement was a million times better than inventing any particular product, because the process generated a million new products over the centuries since we invented it. Get the ongoing process right and it will keep generating ongoing benefits. In our new era, processes trump products.
These forces are trajectories, not destinies. They offer no predictions of where we end up. They tell us simply that in the near future we are headed inevitably in these directions.
The natural inclination toward change is inescapable, even for the most abstract entities we know of: bits.
I now see upgrading as a type of hygiene: You do it regularly to keep your tech healthy. Continual upgrades are so critical for technological systems that they are now automatic for the major personal computer operating systems and some software apps. Behind the scenes, the machines will upgrade themselves, slowly changing their features over time. This happens gradually, so we don’t notice they are “becoming.”
None of us have to worry about these utopia paradoxes, because utopias never work. Every utopian scenario contains self-corrupting flaws. My aversion to utopias goes even deeper. I have not met a speculative utopia I would want to live in. I’d be bored in utopia. Dystopias, their dark opposites, are a lot more entertaining.
Real dystopias are more like the old Soviet Union rather than Mad Max: They are stiflingly bureaucratic rather than lawless. Ruled by fear, their society is hobbled except for the benefit of a few, but, like the sea pirates two centuries ago, there is far more law and order than appears. In fact, in real broken societies, the outrageous outlawry we associate with dystopias is not permitted. The big bandits keep the small bandits and dystopian chaos to a minimum.
Computing pioneer Vannevar Bush outlined the web’s core idea—hyperlinked pages—way back in 1945, but the first person to try to build out the concept was a freethinker named Ted Nelson, who envisioned his own scheme in 1965.
At the suggestion of a computer-savvy friend, I got in touch with Nelson in 1984, a decade before the first websites. We met in a dark dockside bar in Sausalito, California. He was renting a houseboat nearby and had the air of someone with time on his hands. Folded notes erupted from his pockets and long strips of paper slipped from overstuffed notebooks. Wearing a ballpoint pen on a string around his neck, he told me—way too earnestly for a bar at four o’clock in the afternoon—about his scheme for organizing all the knowledge of humanity.
The total number of web pages, including those that are dynamically created upon request, exceeds 60 trillion. That’s almost 10,000 pages per person alive. And this entire cornucopia has been created in less than 8,000 days.
This view is spookily godlike. You can switch your gaze on a spot in the world from map to satellite to 3-D just by clicking. Recall the past? It’s there. Or listen to the daily complaints and pleas of almost anyone who tweets or posts. (And doesn’t everyone?) I doubt angels have a better view of humanity.
Everyone knew writing and reading were dead; music was too much trouble to make when you could sit back and listen; video production was simply out of reach of amateurs in terms of cost and expertise. User-generated creations would never happen at a large scale, or if they happened they would not draw an audience, or if they drew an audience they would not matter. What a shock, then, to witness the near instantaneous rise of 50 million blogs in the early 2000s, with two new blogs appearing every second. And then, a few years later, the explosion of user-created videos: by 2015, 65,000 were being posted to YouTube every day, or 300 hours of video every minute.
One study a few years ago found that only 40 percent of the web is commercially manufactured. The rest is fueled by duty or passion.
The web will more and more resemble a presence that you relate to rather than a place—the famous cyberspace of the 1980s—that you journey to. It will be a low-level constant presence like electricity: always around us, always on, and subterranean. By 2050 we’ll come to think of the web as an ever-present type of conversation.
There has never been a better day in the whole history of the world to invent something. There has never been a better time with more opportunities, more openings, lower barriers, higher benefit/risk ratios, better returns, greater upside than now. Right now, this minute.
The advantages gained from cognifying inert things would be hundreds of times more disruptive to our lives than the transformations gained by industrialization.
“I believe something like Watson will soon be the world’s best diagnostician—whether machine or human,” says Alan Greene, chief medical officer of Scanadu, a startup that is building a diagnostic device inspired by the Star Trek medical tricorder and powered by a medical AI. “At the rate AI technology is improving, a kid born today will rarely need to see a doctor to get a diagnosis by the time they are an adult.”
One of the early stage AI companies Google purchased is DeepMind, based in London. In 2015 researchers at DeepMind published a paper in Nature describing how they taught an AI to learn to play 1980s-era arcade video games, like Video Pinball. They did not teach it how to play the games, but how to learn to play the games—a profound difference. They simply turned their cloud-based AI loose on an Atari game such as Breakout, a variant of Pong, and it learned on its own how to keep increasing its score. A video of the AI’s progress is stunning. At first, the AI plays nearly randomly, but it gradually improves. After half an hour it misses only once in every four tries. By its 300th game, an hour into it, it never misses. It keeps learning so fast that in the second hour it figures out a loophole in the Breakout game that none of the millions of previous human players had discovered. This hack allowed it to win by tunneling around a wall in a way that even the game’s creators had never imagined. After only a few hours of playing a game for the first time, with no coaching from the DeepMind creators, the algorithms, called deep reinforcement machine learning, could beat humans in half of the 49 Atari video games they mastered. AIs like this one are getting smarter every month, unlike human players.
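The learning loop Kelly describes can be sketched in miniature. The toy below is plain tabular Q-learning on a made-up five-position environment, not the deep neural network or Atari emulator DeepMind actually used; the point is only that the agent is given nothing but a score to maximize and still discovers a winning policy by trial and error.

```python
# Toy illustration of reward-driven learning in the spirit of DeepMind's approach:
# the agent is told nothing about the "game" except the score it receives.
# This is tabular Q-learning on a tiny invented environment, not deep RL on Atari.
import random

N_STATES = 5          # positions 0..4; reaching position 4 scores a point
ACTIONS = [-1, +1]    # move left or right

def step(state, action):
    """Hypothetical environment: reward only for reaching the rightmost state."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    done = nxt == N_STATES - 1
    return nxt, reward, done

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3   # learning rate, discount, exploration rate

for episode in range(200):
    state, done = 0, False
    while not done:
        # explore occasionally, otherwise exploit what has been learned so far
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        # Q-learning update: nudge the estimate toward reward + discounted future value
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
# Should print {0: 1, 1: 1, 2: 1, 3: 1}: the learned policy moves toward the reward.
```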
There is almost nothing we can think of that cannot be made new, different, or more valuable by infusing it with some extra IQ. In fact, the business plans of the next 10,000 startups are easy to forecast: Take X and add AI. Find something that can be made better by adding online smartness to it.
Cognified investments? Already happening with companies such as Betterment or Wealthfront. They add artificial intelligence to managed stock indexes in order to optimize tax strategies or balance holdings between portfolios. These are the kinds of things a professional money manager might do once a year, but the AI will do every day, or every hour.
Around 2002 I attended a private party for Google—before its IPO, when it was a small company focused only on search. I struck up a conversation with Larry Page, Google’s brilliant cofounder. “Larry, I still don’t get it. There are so many search companies. Web search, for free? Where does that get you?” My unimaginative blindness is solid evidence that predicting is hard, especially about the future, but in my defense this was before Google had ramped up its ad auction scheme to generate real income, long before YouTube or any other major acquisitions. I was not the only avid user of its search site who thought it would not last long. But Page’s reply has always stuck with me: “Oh, we’re really making an AI.”
“AI is akin to building a rocket ship. You need a huge engine and a lot of fuel. The rocket engine is the learning algorithms but the fuel is the huge amounts of data we can feed to these algorithms.”
Cloud computing empowers the law of increasing returns, sometimes called the network effect, which holds that the value of a network increases much faster as it grows bigger. The bigger the network, the more attractive it is to new users, which makes it even bigger and thus more attractive, and so on. A cloud that serves AI will obey the same law. The more people who use an AI, the smarter it gets. The smarter it gets, the more people who use it. The more people who use it, the smarter it gets. And so on. Once a company enters this virtuous cycle, it tends to grow so big so fast that it overwhelms any upstart competitors. As a result, our AI future is likely to be ruled by an oligarchy of two or three large, general-purpose cloud-based commercial intelligences.
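Kelly gives no formula for the law of increasing returns, but one common (and debated) formalization, Metcalfe's law, captures the shape of the claim: a network's value tracks the number of possible pairwise connections, which grows roughly with the square of the user count. A rough sketch:

```python
# One common (and contested) formalization of the network effect is Metcalfe's
# law: value scales with the number of possible pairwise connections, n*(n-1)/2.
# The book does not commit to a formula; this only shows the super-linear growth.
def pairwise_connections(n_users: int) -> int:
    return n_users * (n_users - 1) // 2

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} users -> {pairwise_connections(n):>12,} possible connections")
# Users grow 1,000x (10 -> 10,000) while possible connections grow about a million-fold.
```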
Our most important mechanical inventions are not machines that do what humans do better, but machines that can do things we can’t do at all. Our most important thinking machines will not be machines that can think what we think faster, better, but those that think what we can’t think.
AI could just as well stand for “alien intelligence.” We have no certainty we’ll contact extraterrestrial beings from one of the billion earthlike planets in the sky in the next 200 years, but we have almost 100 percent certainty that we’ll manufacture an alien intelligence by then. When we face these synthetic aliens, we’ll encounter the same benefits and challenges that we expect from contact with ET. They will force us to reevaluate our roles, our beliefs, our goals, our identity. What are humans for? I believe our first answer will be: Humans are for inventing new kinds of intelligences that biology could not evolve. Our job is to make machines that think different—to create alien intelligences.
It’s hard to believe you’d have an economy at all if you gave pink slips to more than half the labor force. But that—in slow motion—is what the industrial revolution did to the workforce of the early 19th century. Two hundred years ago, 70 percent of American workers lived on the farm. Today automation has eliminated all but 1 percent of their jobs, replacing them (and their work animals) with machines. But the displaced workers did not sit idle. Instead, automation created hundreds of millions of jobs in entirely new fields.
While the displacement of formerly human jobs gets all the headlines, the greatest benefits bestowed by robots and automation come from their occupation of jobs we are unable to do. We don’t have the attention span to inspect every square millimeter of every CAT scan looking for cancer cells. We don’t have the millisecond reflexes needed to inflate molten glass into the shape of a bottle. We don’t have an infallible memory to keep track of every pitch in Major League baseball and calculate the probability of the next pitch in real time.
We need to let robots take over. Many of the jobs that politicians are fighting to keep away from robots are jobs that no one wakes up in the morning really wanting to do. Robots will do jobs we have been doing, and do them much better than we can. They will do jobs we can’t do at all. They will do jobs we never imagined even needed to be done. And they will help us discover new jobs for ourselves, new tasks that expand who we are. They will let us focus on becoming more human than we were.
A universal law of economics says the moment something becomes free and ubiquitous, its position in the economic equation suddenly inverts. When nighttime electrical lighting was new and scarce, it was the poor who burned common candles. Later, when electricity became easily accessible and practically free, our preference flipped and candles at dinner became a sign of luxury.
As the old joke goes: “Software, free. User manual, $10,000.” But it’s no joke.
Right now getting a full copy of all your DNA is very expensive ($10,000), but soon it won’t be. The price is dropping so fast, it will be $100 soon, and then the next year insurance companies will offer to sequence you for free. When a copy of your sequence costs nothing, the interpretation of what it means, what you can do about it, and how to use it—the manual for your genes, so to speak—will be expensive. This generative dynamic can be applied to many other complex services, such as travel and health care.
Deep down, avid audiences and fans want to pay creators. Fans love to reward artists, musicians, authors, actors, and other creators with the tokens of their appreciation, because it allows them to connect with people they admire. But they will pay only under four conditions that are not often met: 1) It must be extremely easy to do; 2) The amount must be reasonable; 3) There’s clear benefit to them for paying; and 4) It’s clear the money will directly benefit the creators.
Spotify is a cloud containing 30 million tracks of music. I can search that ocean of music to locate the most specific, weirdest, most esoteric song possible. While it plays I click a button and find the song’s lyrics displayed. It will make a virtual personal radio station for me from a small selection of my favorite music. I can tweak the station’s playlist by skipping songs or downvoting ones I don’t want to hear again. This degree of interacting with music would have astounded fans a generation ago.
Compare this splendid liquidity of options with the few fixed choices available to me just decades ago. No wonder the fans stampeded to the “free” despite the music industry’s threat to arrest them.
It seems a stretch right now that the most solid and fixed apparatus in our manufactured environment would be transformed into ethereal forces, but the soft will trump the hard. Knowledge will rule atoms. Generative intangibles will rise above the free. Think of the world flowing.
We were People of the Word. Then, about 500 years ago, orality was overthrown by technology. Gutenberg’s 1450 invention of metallic movable type elevated writing into a central position in the culture.
Printing instilled in society a reverence for precision (of black ink on white paper), an appreciation for linear logic (in a string of sentences), a passion for objectivity (of printed fact), and an allegiance to authority (via authors), whose truth was as fixed and final as a book.
America’s roots spring from documents—the Constitution, the Declaration of Independence, and, indirectly, the Bible. The country’s success depended on high levels of literacy, a robust free press, allegiance to the rule of law (found in books), and a common language across a continent. American prosperity and liberty grew out of a culture of reading and writing. We became People of the Book.
But today most of us have become People of the Screen. People of the Screen tend to ignore the classic logic of books or the reverence for copies; they prefer the dynamic flux of pixels.
Screen culture is a world of constant flux, of endless sound bites, quick cuts, and half-baked ideas. It is a flow of tweets, headlines, instagrams, casual texts, and floating first impressions. Notions don’t stand alone but are massively interlinked to everything else; truth is not delivered by authors and authorities but is assembled in real time piece by piece by the audience themselves. People of the Screen make their own content and construct their own truth.
Screens were blamed for an amazing list of societal ills. But of course we all kept watching. And for a while it did seem as if nobody wrote, or could write, and reading scores trended down for decades. But to everyone’s surprise, the cool, interconnected, ultrathin screens on monitors, the new TVs, and tablets at the beginning of the 21st century launched an epidemic of writing that continues to swell. The amount of time people spend reading has almost tripled since 1980. By 2015 more than 60 trillion pages had been added to the World Wide Web, and that total is growing by several billion a day. Each of these pages was written by somebody. Right now ordinary citizens compose 80 million blog posts per day.
Some scholars of literature claim that a book is really that virtual place your mind goes to when you are reading. It is a conceptual state of imagination that one might call “literature space.” According to these scholars, when you are engaged in this reading space, your brain works differently than when you are screening. Neurological studies show that learning to read changes the brain’s circuitry. Instead of skipping around distractedly gathering bits, when you read you are transported, focused, immersed.
Think of Wikipedia as one very large book—a single encyclopedia—which of course it is. Most of its 34 million pages are crammed with words underlined in blue, indicating those words are hyperlinked to concepts elsewhere in the encyclopedia. This tangle of relationships is precisely what gives Wikipedia—and the web—its immense force. Wikipedia is the first networked book. In the goodness of time, each Wikipedia page will become saturated with blue links as every statement is cross-referenced. In the goodness of time, as all books become fully digital, every one of them will accumulate the equivalent of blue underlined passages as each literary reference is networked within that book out to all other books. Each page in a book will discover other pages and other books. Thus books will seep out of their bindings and weave themselves together into one large metabook, the universal library.
We’ll come to understand that no work, no idea stands alone, but that all good, true, and beautiful things are ecosystems of intertwined parts and related entities, past and present.
A reporter for TechCrunch recently observed, “Uber, the world’s largest taxi company, owns no vehicles. Facebook, the world’s most popular media owner, creates no content. Alibaba, the most valuable retailer, has no inventory. And Airbnb, the world’s largest accommodation provider, owns no real estate. Something interesting is happening.”
Possession is not as important as it once was. Accessing is more important than ever.
Bitcoin is a fully decentralized, distributed currency that does not need a central bank for its accuracy, enforcement, or regulation. Since its launch in 2009, the currency has grown to $3 billion in circulation, with 100,000 vendors accepting the coins as payment. Bitcoin may be most famous for its anonymity and the black markets it fueled. But forget the anonymity; it’s a distraction. The most important innovation in Bitcoin is its “blockchain,” the mathematical technology that powers it. The blockchain is a radical invention that can decentralize many other systems beyond money.
When I send you one U.S. dollar via a credit card or PayPal account, a central bank has to verify that transaction; at the very least it must confirm I had a dollar to send you. When I send you one bitcoin, no central intermediary is involved. Our transaction is posted in a public ledger—called a blockchain—that is distributed to all other bitcoin owners in the world. This shared database contains a long “chain” of the transaction history of all existing bitcoins and who owns them. Every transaction is open to inspection by anyone. That completeness is pretty crazy; it’s like every person with a dollar having the complete history of all dollar bills as they move around the world. Six times an hour this open distributed database of coins is updated with all the new transactions of bitcoins; a new transaction like ours must be mathematically confirmed by multiple other owners before it is accepted as legitimate. In this way a blockchain creates trust by relying on mutual peer-to-peer accounting. The system itself—which is running on tens of thousands of citizen computers—secures the coin. Proponents like to say that with bitcoin you trust math instead of governments.
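The "chain" in blockchain can be illustrated in a few lines. The sketch below shows only the core trick, each block committing to the hash of the block before it, so that any tampering with past transactions is detectable; it deliberately omits the mining, digital signatures, and peer-to-peer gossip that real Bitcoin relies on, and the names in it are made up.

```python
# Minimal sketch of the "chain" in a blockchain: each block stores the hash of
# the block before it, so altering any past transaction breaks every later
# link. Real Bitcoin adds proof-of-work mining, signatures, and a peer-to-peer
# network; none of that is modeled here.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain: list) -> bool:
    """Recompute every link; any edited block invalidates the chain after it."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

ledger = []
add_block(ledger, [{"from": "kevin", "to": "you", "amount": 1}])
add_block(ledger, [{"from": "you", "to": "alice", "amount": 1}])
print(verify(ledger))                            # True
ledger[0]["transactions"][0]["amount"] = 100     # tamper with history
print(verify(ledger))                            # False: the chain no longer checks out
```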
Ecosystems are governed by coevolution, which is a type of biological codependence, a mixture of competition and cooperation. In true ecological fashion, supporting vendors who cooperate in one dimension may also compete in others. For instance, Amazon sells both brand-new books from publishers and, via its ecosystem of used-book stores, cheaper used versions. Used-book vendors compete with one another and with the publishers. The platform’s job is to make sure it makes money (and adds value!) whether the parts cooperate or compete.
The web is hyperlinked documents; the cloud is hyperlinked data. Ultimately the chief reason to put things onto the cloud is to share their data deeply. Woven together, the bits are made much smarter and more powerful than they could possibly be alone. There is no single architecture for clouds, so their traits are still rapidly evolving. But in general they are huge. They are so large that the substrate of one cloud can encompass multiple football field–size warehouses full of computers located in scores of cities thousands of miles apart. Clouds are also elastic, meaning they can be enlarged or shrunk almost in real time by adding or dropping computers to their network. And because of their inherent redundant and distributed nature, clouds are among the most reliable machines in existence.
While the enormous clouds of Amazon, Facebook, and Google are distributed, they are not decentralized. The machines are run by enormous companies, not by a funky network of computers run by your funky peers. But there are ways to make clouds that run on decentralized hardware. We know a decentralized cloud can work, because one did during the student protests in Hong Kong in 2014. To escape the obsessive surveillance the Chinese government pours on its citizens’ communications, the Hong Kong students devised a way to communicate without sending their messages to a central cell phone tower or through the company servers of Weibo (the Chinese Twitter) or WeChat (their Facebook) or email. Instead they loaded a tiny app onto their phones called FireChat. Two FireChat-enabled phones could speak to each other directly, via wifi radio, without jumping up to a cell tower. More important, either of the two phones could forward a message to a third FireChat-enabled phone. Keep adding FireChat’d phones and you soon have a full network of phones without towers. Messages that are not meant for one phone are relayed to another phone until they reach their intended recipient. This intensely peer-to-peer variety of network (called a mesh) is not efficient, but it works. That cumbersome forwarding is exactly how the internet operates at one level, and why it is so robust. The result of the FireChat mesh was that the students created a radio cloud that no one owned (and was therefore hard to squelch). Relying entirely on a mesh of their own personal devices, they ran a communications system that held back the Chinese government for months. The same architecture could be scaled up to run any kind of cloud.
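The relay behavior described above is easy to simulate. The toy below is a generic flood-and-forward mesh, not FireChat's actual protocol; the phone names and radio ranges are invented for illustration.

```python
# Toy simulation of the mesh relay described above: phones pass a message to
# their wifi neighbors, which forward it on, until it reaches the recipient.
# Generic flood-and-forward sketch only, not FireChat's real protocol.
from collections import deque

# Hypothetical network: who is within radio range of whom (no cell towers).
neighbors = {
    "ana":  ["ben"],
    "ben":  ["ana", "chen"],
    "chen": ["ben", "dee"],
    "dee":  ["chen", "emil"],
    "emil": ["dee"],
}

def deliver(source: str, target: str) -> list:
    """Breadth-first relay; returns the chain of phones the message hopped through."""
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        phone = path[-1]
        if phone == target:
            return path
        for peer in neighbors[phone]:
            if peer not in seen:      # each phone forwards a given message only once
                seen.add(peer)
                queue.append(path + [peer])
    return []

print(" -> ".join(deliver("ana", "emil")))   # ana -> ben -> chen -> dee -> emil
```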
Community sharing can unleash astonishing power. Sites like Reddit and Twitter, which let users vote up or retweet the most important items (news bits, web links, comments), can steer public conversation as much as, and maybe more than, newspapers or TV networks. Dedicated contributors keep contributing in part because of the wider cultural influence these instruments wield. The community’s collective influence is far out of proportion to the number of contributors. That is the whole point of social institutions: The sum outperforms the parts. Traditional socialism ramped up this dynamic via the nation-state. Now digital sharing is decoupled from government and operates at an international scale.
So far, the biggest online collaboration efforts are open source projects, and the largest of them, such as Apache, manage several hundred contributors—about the size of a village. One study estimates that 60,000 person-years of work have poured into the release of Fedora Linux 9, so we have proof that self-assembly and the dynamics of sharing can govern a project on the scale of a town.
The coercive, soul-smashing system that controls North Korea is dead (outside of North Korea); the future is a hybrid that takes cues from both Wikipedia and the moderate socialism of, say, Sweden. There will be a severe backlash against this drift from the usual suspects, but increased sharing is inevitable. There is an honest argument over what to call it, but the technologies of sharing have only begun. On my imaginary Sharing Meter Index we are still at 2 out of 10. There is a whole list of subjects that experts once believed we modern humans would not share—our finances, our health challenges, our sex lives, our innermost fears—but it turns out that with the right technology and the right benefits in the right conditions, we’ll share everything.
The earliest version of Google overtook the leading search engines of its time by employing the links made by amateur creators of web pages. Each time an ordinary person made a hyperlink on the web, Google calculated that link as a vote of confidence for the linked page and used this vote to give a weight to links throughout the web. So a particular page would get ranked higher for reliability in Google’s search results if the pages that linked to it were also linked to pages that other reliable pages linked to. This weirdly circular evidence was not created by Google but was instead derived from the public links shared by millions of web pages. Google was the first to extract value from the shared search results that customers clicked on. Each click by an ordinary user represented a vote for the usefulness of that page. So merely by using Google, the fans themselves made Google better and more economically valuable.
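The "weirdly circular" weighting Kelly describes has a standard textbook form, PageRank, in which a page's score is fed by the scores of the pages linking to it and the calculation is iterated until it settles. Here is a minimal sketch on an invented four-page link graph; Google's production ranking uses far more signals than this.

```python
# Minimal PageRank-style power iteration: a page's weight comes from the weights
# of the pages linking to it, which is exactly the circular logic described above.
# The link graph is made up for illustration.
links = {                       # page -> pages it links to
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
pages = list(links)
damping = 0.85
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):             # iterate until the scores settle
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share   # each outgoing link acts as a "vote"
    rank = new_rank

print({p: round(r, 3) for p in sorted(rank, key=rank.get, reverse=True)})
# "c" ranks highest: it collects the most incoming link weight.
```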
As innovation expert Larry Keeley once observed: “No one is as smart as everyone.”
A close examination of the governing kernel of, say, Wikipedia, Linux, or OpenOffice shows that these efforts are a bit further from the collectivist nirvana than appears from the outside. While millions of writers contribute to Wikipedia, a smaller number of editors (around 1,500) are responsible for the majority of the editing. Ditto for collectives that write code. A vast army of contributions is managed by a much smaller group of coordinators. As Mitch Kapor, founding chair of the Mozilla open source code factory, observed, “Inside every working anarchy, there’s an old-boy network.”
Organizations built to create products rather than platforms often need strong leaders and hierarchies arranged around timescales: Lower-level work focuses on hourly needs; the next level on jobs that need to be done today. Higher levels focus on weekly or monthly chores, and levels above (often in the CEO suite) need to look out ahead at the next five years. The dream of many companies is to graduate from making products to creating a platform. But when they do succeed (like Facebook), they are often not ready for the required transformation in their role; they have to act more like governments than companies in keeping opportunities “flat” and equitable, and hierarchy to a minimum.
The exhilarating frontier today is the myriad ways in which we can mix large doses of out-of-controlness with small elements of top-down control. Until this era, technology was primarily all control, all top down. Now it can contain both control and messiness. Never before have we been able to make systems with as much messy quasi-control in them. We are rushing into an expanding possibility space of decentralization and sharing that was never accessible before because it was not technically possible. Before the internet there was simply no way to coordinate a million people in real time or to get a hundred thousand workers collaborating on one project for a week. Now we can, so we are quickly exploring all the ways in which we can combine control and the crowd in innumerable permutations.
We live in a golden age now. The volume of creative work in the next decade will dwarf the volume of the last 50 years. More artists, authors, and musicians are working than ever before, and they are creating significantly more books, songs, films, documentaries, photographs, artworks, operas, and albums every year. Books have never been cheaper, or more available, than today. Ditto for music, movies, games, and every kind of creative content that can be digitally copied. The volume and variety of creative works available have skyrocketed. More and more of civilization’s past works—in all languages—are no longer hidden in rare-book rooms or locked up in archives, but are available a click away no matter where you live. The technologies of recommendation and search have made it super easy to locate the most obscure work. If you want 6,000-year-old Babylonian chants accompanied by the lyre, there they are.
Occasionally, unexpectedly popular fan-financed Kickstarter projects may pile on an additional $1 million above the goal. The highest grossing Kickstarter campaign raised $20 million for a digital watch from its future fans. Approximately 40 percent of all projects succeed in reaching their funding goal.
But by far the most potent future role for crowdsharing is in fan base equity. Rather than investing in a product, supporters invest in a company. The idea is to allow fans of a company to purchase shares in the company. This is exactly what you do when you buy shares of stock on the stock market. You are part of a crowdsourced ownership. Each of your shares is some tiny fraction of the whole enterprise, and the collected money raised by public shares is used to grow the business. Ideally, the company is raising money from its own customers, although in reality big pension and hedge funds are the bulk buyers. Heavy regulation and intense government oversight of public companies offer some guarantee to the average stock buyer, making it so anyone with a bank account can buy stock. But risky startups, solo creators, crazy artists, or a duo in their garage would not withstand the kind of paperwork and layers of financial bureaucracy ordinarily applied to public companies. Every year a precious few well-funded companies will attempt an initial public offering (IPO), but only after highly paid lawyers and accountants scour the business in an expensive due diligence scrub. An open peer-to-peer scheme that enabled anyone to offer to the public ownership shares in their company (with some regulation) would revolutionize business. Just as we have seen tens of thousands of new products that would not have existed except by crowdfunding techniques, the new methods of equity sharing would unleash tens of thousands of innovative businesses that could not be born otherwise. The sharing economy would now include ownership sharing.
The largest, fastest growing, most profitable companies in 2050 will be companies that will have figured out how to harness aspects of sharing that are invisible and unappreciated today. Anything that can be shared—thoughts, emotions, money, health, time—will be shared in the right conditions, with the right benefits. Anything that can be shared can be shared better, faster, easier, longer, and in a million more ways than we currently realize. At this point in our history, sharing something that has not been shared before, or in a new way, is the surest way to increase its value.
Scientists are required to share their negative results. I have learned that in collaborative work when you share earlier in the process, the learning and successes come earlier as well. These days I live constantly connected. The bulk of what I share, and what is shared with me, is incremental—constant microupdates, tiny improved versions, minor tweaks—but those steady steps forward feed me. There is no turning the sharing off for long. Even the silence will be shared.
There has never been a better time to be a reader, a watcher, a listener, or a participant in human expression. An exhilarating avalanche of new stuff is created every year. Every 12 months we produce 8 million new songs, 2 million new books, 16,000 new films, 30 billion blog posts, 182 billion tweets, 400,000 new products. With little effort today, hardly more than a flick of the wrist, an average person can summon the Library of Everything. You could, if so inclined, read more Greek texts in the original Greek than the most prestigious Greek nobleman of classical times. The same regal ease applies to ancient Chinese scrolls; there are more available to you at home than to emperors of China past. Or Renaissance etchings, or live Mozart concertos, so rare to witness in their time, so accessible now. In every dimension, media today is at an all-time peak of glorious plenitude.
Using standard MP3 compression, the total volume of recorded music for humans would fit onto one 20-terabyte hard disk. Today a 20-terabyte hard disk sells for $2,000. In five years it will sell for $60 and fit into your pocket. Very soon you’ll be able to carry around all the music of humankind in your pants.
It is 10 times easier today to make a simple video than 10 years ago. It is a hundred times easier to create a small mechanical part and make it real than a century ago. It is a thousand times easier today to write and publish a book than a thousand years ago.
My day in the near future will entail routines like this: I have a pill-making machine in my kitchen, a bit smaller than a toaster. It stores dozens of tiny bottles inside, each containing a prescribed medicine or supplement in powdered form. Every day the machine mixes the right doses of all the powders and stuffs them all into a single personalized pill (or two), which I take. During the day my biological vitals are tracked with wearable sensors so that the effect of the medicine is measured hourly and then sent to the cloud for analysis. The next day the dosage of the medicines is adjusted based on the past 24-hour results and a new personalized pill produced. Repeat every day thereafter. This appliance, manufactured in the millions, produces mass personalized medicine.
Way back in 1971 Herbert Simon, a Nobel Prize–winning social scientist, observed, “In an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention.” Simon’s insight is often reduced to “In a world of abundance, the only scarcity is human attention.”
Since it is the last scarcity, wherever attention flows, money will follow.
The average hardcover book takes 4.3 hours to read and $23 to buy. Therefore the average consumer cost for that reading duration is $5.34 per hour. A music CD is, on average, listened to dozens of times over its lifetime, so its retail price is divided by its total listening time to arrive at its hourly rate. A two-hour movie in a theater is seen only once, so its per hour rate is half the ticket price. These rates can be thought of as mirroring how much we, as the audience, value our attention.
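The per-hour figures are simple division: price paid over hours of attention received. A quick worked version follows, using the book's hardcover numbers; the CD and movie prices below are illustrative assumptions, not figures from the text.

```python
# Cost of attention = price paid / hours of attention received.
# The hardcover inputs come from the text; the CD and movie inputs are
# illustrative assumptions.
def cost_per_hour(price: float, hours_of_attention: float) -> float:
    return price / hours_of_attention

print(round(cost_per_hour(23.00, 4.3), 2))      # hardcover: ~$5.35/hr with these rounded inputs (book quotes $5.34)
print(round(cost_per_hour(15.00, 36 * 1.0), 2)) # $15 CD played 36 times, ~1 hr each: ~$0.42/hr
print(round(cost_per_hour(12.00, 2.0), 2))      # $12 ticket, 2-hour film seen once: $6.00/hr (half the ticket price)
```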
TV news was once an ephemeral stream of stuff that was never meant to be recorded or analyzed—merely inhaled. Now it is rewindable. When we scroll back news, we can compare its veracity, its motives, its assumptions. We can share it, fact-check it, and mix it. Because the crowd can rewind what was said earlier, this changes the posture of politicians, of pundits, of anyone making a claim.
Remixing—the rearrangement and reuse of existing pieces—plays havoc with traditional notions of property and ownership. If a melody is a piece of property you own, like your house, then my right to use it without permission or compensation is very limited. But digital bits are notoriously nontangible and nonrival, as explained earlier. Bits are closer to ideas than to real estate. As far back as 1813, Thomas Jefferson understood that ideas were not really property, or if they were property they differed from real estate. He wrote, “He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me.” If Jefferson gave you his house at Monticello, you’d have his house and he wouldn’t. But if he gave you an idea, you’d have the idea and he’d still have the idea. That weirdness is the source of our uncertainty about intellectual property today.
The dumbest objects we can imagine today can be vastly improved by outfitting them with sensors and making them interactive. We had an old standard thermostat running the furnace in our home. During a remodel we upgraded to a Nest smart thermostat, designed by a team of ex-Apple execs and recently bought by Google. The Nest is aware of our presence. It senses when we are home, awake or asleep, or on vacation. Its brain, connected to the cloud, anticipates our routines, and over time builds up a pattern of our lives so it can warm up the house (or cool it down) just a few minutes before we arrive home from work, turn it down after we leave, except on vacations or on weekends, when it adapts to our schedule. If it senses we are unexpectedly home, it adjusts itself. All this watching of us and interaction optimizes our fuel bill.
Recently I joined some drone hobbyists who meet in a nearby park on Sundays to race their small quadcopters. With flags and foam arches they map out a course over the grass for their drones to race around. The only way to fly drones at this speed is to get inside them. The hobbyists mount tiny eyes at the front of their drones and wear VR goggles to peer through them for what is called a first-person view (FPV). They are now the drone. As a visitor I don an extra set of goggles that piggyback on their camera signals, and so I find myself sitting in the same pilot’s seat, seeing what each pilot sees. The drones dart in, out, and around the course obstacles, chasing each other’s tails, bumping into other drones, in scenes reminiscent of a Star Wars pod race. One young guy who’s been flying radio control model airplanes since he was a boy said that being able to immerse himself into the drone and fly from inside was the most sensual experience of his life. He said there was almost nothing more pleasurable than actually, really free flying. There was no virtuality. The flying experience was real.
The convergence of maximum interaction plus maximum presence is found these days in free-range video games. For the past several years I’ve been watching my teenage son play console video games. I am not twitchy enough myself to survive more than four minutes in a game’s alterworld, but I find I can spend an hour just watching the big screen as my son encounters dangers, shoots at bad guys, or explores unknown territories and dark buildings. Like a lot of kids his age, he’s played the classic shooter games like Call of Duty, Halo, and Uncharted 2, which have scripted scenes of engagement. However, my favorite game as a voyeur is the now dated game Red Dead Redemption. This is set in the vast empty country of the cowboy West. Its virtual world is so huge that players spend a lot of time on their horses exploring the canyons and settlements, searching for clues, and wandering the land on vague errands. I’m happy to ride alongside as we pass through frontier towns in pursuit of his quests. It’s a movie you can roam in. The game’s open-ended architecture is similar to the very popular Grand Theft Auto, but it’s a lot less violent. Neither of us knows what will happen or how things will play out.
There are no prohibitions about where you can go in this virtual place. Want to ride to the river? Fine. Want to chase a train down the tracks? Fine. How about ride up alongside the train and then hop on and ride inside the train? OK! Or bushwhack across sagebrush wilderness from one town to the next? You can ride away from a woman yelling for help or—your choice—stop to help her. Each act has consequences. She may need help or she may be bait for a bandit. One reviewer speaking of the interacting free will in the game said: “I’m sincerely and pleasantly surprised that I can shoot my own horse in the back of the head while I’m riding him, and even skin him afterward.” The freedom to move in any direction in a seamless virtual world rendered with the same degree of fidelity as a Hollywood blockbuster is intoxicating.
It’s all interactive details. Dawns in the territory of Red Dead Redemption are glorious, as the horizon glows and heats up. Weather forces itself on the land, which you sense. The sandy yellow soil darkens with appropriate wet splotches as the rain blows down in bursts. Mist sometimes drifts in to cover a town with realistic veiling, obscuring shadowy figures. The pink tint of each mesa fades with the clock. Textures pile up. The scorched wood, the dry brush, the shaggy bark—every pebble or twig—is rendered in exquisite minutiae at all scales, casting perfect overlapping shadows that make a little painting. These nonessential finishes are surprisingly satisfying. The wholesale extravagance is compelling.
The game lives in a big world. A typical player might take around 15 or so hours to zoom through once, while a power player intent on achieving all the game rewards would need 40 to 50 hours to complete it. At every step you can choose any direction to take the next step, and the next, and next, and yet the grass under your feet is perfectly formed and every blade detailed, as if its authors anticipated you would tread on this microscopic bit of the map. At any of a billion spots you can inspect the details closely and be rewarded, but most of this beauty will never be seen. This warm bath of freely given abundance triggers a strong conviction that this is “natural,” that this world has always been, and that it is good. The overall feeling inside one of these immaculately detailed, stunningly interactive worlds stretching to the horizons is of being immersed in completeness. Your logic knows this can’t be true, but as on the plank over the pit, the rest of you believes it. This realism is just waiting for the full immersion of VR interaction.
The most valuable asset that Facebook owns is not its software platform but the fact that it controls the “true name” identities of a billion people, which are verified from references of the true identities of friends and colleagues. That monopoly of a persistent identity is the real engine of Facebook’s remarkable success.
In the spring of 2007 I was hiking with Alan Greene, a doctor friend of mine, in the overgrown hills behind my house in northern California. As we slowly climbed up the dirt path to the summit, we discussed a recent innovation: a tiny electronic pedometer that slipped into the laces of a shoe to record each step, then saved the data to an iPod for later analysis. We could use this tiny device to count the calories as we climbed and to track our exercise patterns over time. We began to catalog other available ways to measure our activities. A week later, I took the same hike with Gary Wolf, a writer for Wired magazine, who was curious about the social implications of these emerging self-tracking devices. There were only a dozen existing ones, but we both could see clearly that tracking technology would explode as sensors steadily got smaller. What to call this cultural drift? Gary pointed out that by relying on numbers instead of words we were constructing a “quantified self.” So in June 2007 Gary and I announced on the internets that we would host a “Quantified Self” Meetup, open to absolutely anyone who thought they were quantifying themselves. We left the definition wide open to see who would show up. More than 20 people arrived at my studio in Pacifica, California, for this first event.
The diversity of what they were tracking astounded us: They measured their diet, fitness, sleep patterns, moods, blood factors, genes, location, and so on in quantifiable units. Some were making their own devices. One guy had been self-tracking for five years in order to maximize his strength, stamina, concentration, and productivity. He was using self-tracking in ways we had not imagined. Today there are 200 Quantified Self Meetup groups around the world, with 50,000 members.
We didn’t evolve to sense our blood pressure or glucose levels. But our technology can. For instance, a new self-tracking device, the Scout from Scanadu, is the size of an old-timey stopwatch. By touching it to your forehead, it will measure your blood pressure, heart rate variability, heart performance (ECG), oxygen level, temperature, and skin conductance all in a single instant. Someday it will also measure your glucose levels.
The point of lifelogging is to create total recall. If a lifelog records everything in your life, then it could recover anything you experienced even if your meaty mind may have forgotten it. It would be like being able to google your life, if in fact your life were being indexed and fully saved. Our biological memories are so spotty that any compensation would be a huge win.
Imagine how public health would change if we continuously monitored blood glucose in real time. Imagine how your behavior would change if you could, in near real time, detect the presence or absence of biochemicals or toxins in your blood picked up from your environment. (You might conclude: “I’m not going back there!”)
As Gary Wolf said, “Recording in a diary is considered admirable. Recording in a spreadsheet is considered creepy.”
Those who embrace the internet’s tendency to copy and seek value that can’t be easily copied (through personalization, embodiment, authentication, etc.) tend to prosper, while those who deny, prohibit, and try to thwart the network’s eagerness to copy are left behind to catch up later.
Consumers say they don’t want to be tracked, but in fact they keep feeding the machine with their data, because they want to claim their benefits.
Two economists at UC Berkeley tallied up the total global production of information and calculated that new information is growing at 66 percent per year. This rate hardly seems astronomical compared with the 600 percent increase in iPods shipped in 2005. But that kind of burst is short-lived and not sustainable over decades (iPod production tanked in 2009). The growth of information has been steadily increasing at an insane rate for at least a century. It is no coincidence that 66 percent per year is the same as doubling every 18 months, which is the rate of Moore’s Law. Five years ago humanity stored several hundred exabytes of information. That is the equivalent of each person on the planet having 80 Libraries of Alexandria. Today we average 320 libraries each.
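The link between a steady growth rate and a doubling time is simple arithmetic (doubling time = ln 2 / ln(1 + r)), and a quick check shows the book's two figures are in the same ballpark rather than exactly equal:

```python
# Doubling time for a steady annual growth rate r is ln(2) / ln(1 + r).
# A rough check of the book's equivalence between 66% annual growth and the
# 18-month doubling usually quoted for Moore's Law.
import math

def doubling_time_years(annual_growth_rate: float) -> float:
    return math.log(2) / math.log(1 + annual_growth_rate)

print(round(doubling_time_years(0.66) * 12, 1))   # 66%/yr doubles in ~16.4 months
print(round((2 ** (12 / 18) - 1) * 100, 1))       # an 18-month doubling is ~58.7%/yr
# Same order of magnitude, which is the book's point.
```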
Every second of every day we globally manufacture 6,000 square meters of information storage material—disks, chips, DVDs, paper, film—which we promptly fill up with data. That rate—6,000 square meters per second—is the approximate velocity of the shock wave radiating from an atomic explosion. Information is expanding at the rate of a nuclear explosion, but unlike a real atomic explosion, which lasts only seconds, this information explosion is perpetual, a nuclear blast lasting many decades.
If today’s social media has taught us anything about ourselves as a species, it is that the human impulse to share overwhelms the human impulse for privacy.
While anonymity can be used to protect heroes, it is far more commonly used as a way to escape responsibility. That’s why most of the brutal harassment on Twitter, Yik Yak, Reddit, and other sites is delivered anonymously. A lack of responsibility unleashes the worst in us.
A zillion neurons give you a smartness a million won’t. A zillion data points will give you insight that a mere hundred thousand don’t. A zillion chips connected to the internet create a pulsating, vibrating unity that 10 million chips can’t. A zillion hyperlinks will give you information and behavior you could never expect from a hundred thousand links. The social web runs in the land of zillionics. Artificial intelligence, robotics, and virtual realities all require mastery of zillionics. But the skills needed to manage zillionics are daunting.
Navigating zillions of bits, in real time, will require entire new fields of mathematics, completely new categories of software algorithms, and radically innovative hardware. What wide-open opportunities!
Entirely new industries have sprung up in the last two decades based on the idea of unbundling. The music industry was overturned by technological startups that enabled melodies to be unbundled from songs and songs unbundled from albums. Revolutionary iTunes sold single songs, not albums. Once distilled and extracted from their former mixture, musical elements could be reordered into new compounds, such as shareable playlists. Big general-interest newspapers were unbundled into classifieds (Craigslist), stock quotes (Yahoo!), gossip (BuzzFeed), restaurant reviews (Yelp), and stories (the web) that stood and grew on their own. These new elements can be rearranged—remixed—into new text compounds, such as news updates tweeted by your friend. The next step is to unbundle classifieds, stories, and updates into even more elemental particles that can be rearranged in unexpected and unimaginable ways. Sort of like smashing information into ever smaller subparticles that can be recombined into a new chemistry. Over the next 30 years, the great work will be parsing all the information we track and create—all the information of business, education, entertainment, science, sport, and social relations—into their most primeval elements. The scale of this undertaking requires massive cycles of cognition. Data scientists call this stage “machine readable” information, because it is AIs and not humans who will do this work in the zillions. When you hear a term like “big data,” this is what it is about.
I am looking forward to having my mind changed a lot in the coming years. I think we’ll be surprised by how many of the things we assumed were “natural” for humans are not really natural at all. It might be fairer to say that what is natural for a tribe of mildly connected humans will not be natural for a planet of intensely connected humans. “Everyone knows” that humans are warlike, but I would guess organized war will become less attractive, or useful, over time as new means of social conflict resolution arise at a global level.
Certainty itself is no longer as certain as it once was. When I am connected to the Screen of All Knowledge, to that billion-eyed hive of humanity woven together and mirrored on a billion pieces of glass, truth is harder to find. For every accepted piece of knowledge I come across, there is, within easy reach, a challenge to the fact. Every fact has its antifact. The internet’s extreme hyperlinking will highlight those antifacts as brightly as the facts. Some antifacts are silly, some borderline, and some valid. This is the curse of the screen: You can’t rely on experts to sort them out because for every expert there is an equal and opposite anti-expert. Thus anything I learn is subject to erosion by these ubiquitous antifacts.
Knowledge, which is related, but not identical, to information, is exploding at the same rate as information, doubling every two years. The number of scientific articles published each year has been accelerating even faster than this for decades. Over the last century the annual number of patent applications worldwide has risen in an exponential curve.
Previous discoveries have recently helped us realize that 96 percent of all matter and energy in our universe is outside of our vision. The universe is not made of the atoms and heat we discovered last century; instead it is primarily composed of two unknown entities we label “dark”: dark energy and dark matter. “Dark” is a euphemism for ignorance. We really have no idea what the bulk of the universe is made of. We find a similar proportion of ignorance if we probe deeply into the cell, or the brain. We don’t know nothin’ relative to what could be known. Our inventions allow us to spy into our ignorance.
IBM’s Watson proved that for most kinds of factual reference questions, an AI can find answers fast and accurately. Part of the increasing ease in providing answers lies in the fact that past questions answered correctly increase the likelihood of another question. At the same time, past correct answers increase the ease of creating the next answer, and increase the value of the corpus of answers as a whole. Each question we ask a search engine and each answer we accept as correct refines the intelligence of the process, increasing the engine’s value for future questions. As we cognify more books and movies and the internet of things, answers become ubiquitous. We are headed to a future where we will ask several hundred questions per day.
A good question is like the one Albert Einstein asked himself as a small boy—“What would you see if you were traveling on a beam of light?” That question launched the theory of relativity, E=mc², and the atomic age.
A good question is not concerned with a correct answer. A good question cannot be answered immediately. A good question challenges existing answers. A good question is one you badly want answered once you hear it, but had no inkling you cared before it was asked. A good question creates new territory of thinking. A good question reframes its own answers. A good question is the seed of innovation in science, technology, art, politics, and business. A good question is a probe, a what-if scenario. A good question skirts on the edge of what is known and not known, neither silly nor obvious. A good question cannot be predicted. A good question will be the sign of an educated mind. A good question is one that generates many other good questions. A good question may be the last job a machine will learn to do. A good question is what humans are for.
Our society is moving away from the rigid order of hierarchy toward the fluidity of decentralization. It is moving from nouns to verbs, from tangible products to intangible becomings. From fixed media to messy remixed media. From stores to flows. And the value engine is moving from the certainties of answers to the uncertainties of questions.
Thousands of years from now, when historians review the past, our ancient time here at the beginning of the third millennium will be seen as an amazing moment. This is the time when inhabitants of this planet first linked themselves together into one very large thing. Later the very large thing would become even larger, but you and I are alive at that moment when it first awoke. Future people will envy us, wishing they could have witnessed the birth we saw.
What to call this very large masterpiece? Is it more alive than machine? At its core 7 billion humans, soon to be 9 billion, are quickly cloaking themselves with an always-on layer of connectivity that comes close to directly linking their brains to each other. A hundred years ago H. G. Wells imagined this large thing as the world brain. Teilhard de Chardin named it the noosphere, the sphere of thought. Some call it a global mind, others liken it to a global superorganism since it includes billions of manufactured silicon neurons. For simple convenience and to keep it short, I’m calling this planetary layer the holos. By holos I include the collective intelligence of all humans combined with the collective behavior of all machines, plus the intelligence of nature, plus whatever behavior emerges from this whole. This whole equals holos. The scale of what we are becoming is simply hard to absorb. It is the largest thing we have made. Let’s take just the hardware, for example. Today there are 4 billion mobile phones and 2 billion computers linked together into a seamless cortex around the globe. Add to them all the billions of peripheral chips and affiliated devices from cameras to cars to satellites. Already in 2015 a grand total of 15 billion devices have been wired up into one large circuit. Each of these devices itself contains 1 billion to 4 billion transistors, so in total the holos operates with a sextillion transistors (a 1 followed by 21 zeros). These transistors can be thought of as the neurons in a vast brain. The human brain has roughly 86 billion neurons, or about ten billion times fewer than the holos. In terms of magnitude, the holos already significantly exceeds our brains in complexity. And our brains are not doubling in size every few years. The holos mind is.
Look at a satellite photograph of the earth at night to get a glimpse of this very large organism. Brilliant clusters of throbbing city lights trace out organic patterns on the dark land. The cities gradually dim at their edges to form thin long lighted highways connecting other distant city clusters. The routes of lights outward are dendritic, treelike patterns. The image is deeply familiar. The cities are ganglions of nerve cells; the lighted highways are the axons of nerves, reaching to a synaptic connection. Cities are the neurons of the holos. We live inside this thing.
A “singularity” is a term borrowed from physics to describe a frontier beyond which nothing can be known. There are two versions in pop culture: a hard singularity and a soft singularity. The hard version is a future brought about by the triumph of a superintelligence. When we create an AI that is capable of making an intelligence smarter than itself, it can in theory make generations of ever smarter AIs.
A soft singularity is more likely. In this future scenario AIs don’t get so smart that they enslave us (like evil versions of smart humans); rather AI and robots and filtering and tracking and all the technologies I outline in this book converge—humans plus machines—and together we move to a complex interdependence.