These highlights are from the Kindle version of Rise of the Robots: Technology and the Threat of a Jobless Future by Martin Ford.

As of 2013, a typical production or nonsupervisory worker earned about 13 percent less than in 1973 (after adjusting for inflation), even as productivity rose by 107 percent and the costs of big-ticket items like housing, education, and health care soared.

The lost decade of the 2000s is especially astonishing when you consider that the US economy needs to create roughly a million jobs per year just to keep up with growth in the size of the workforce.

Income inequality has since soared to levels not seen since 1929, and it has become clear that the productivity increases that went into workers’ pockets back in the 1950s are now being retained almost entirely by business owners and investors.

It is an era that will be defined by a fundamental shift in the relationship between workers and machines. That shift will ultimately challenge one of our most basic assumptions about technology: that machines are tools that increase the productivity of workers. Instead, machines themselves are turning into workers, and the line between the capability of labor and capital is blurring as never before.

While lower-skill occupations will no doubt continue to be affected, a great many college-educated, white-collar workers are going to discover that their jobs, too, are squarely in the sights as software automation and predictive algorithms advance rapidly in capability.

The upshot of all this is that acquiring more education and skills will not necessarily offer effective protection against job automation in the future. As an example, consider radiologists, medical doctors who specialize in the interpretation of medical images. Radiologists require a tremendous amount of training, typically a minimum of thirteen years beyond high school. Yet, computers are rapidly getting better at analyzing images. It’s quite easy to imagine that someday, in the not too distant future, radiology will be a job performed almost exclusively by machines.

Virtually every industry in existence is likely to become less labor-intensive as new technology is assimilated into business models—and that transition could happen quite rapidly. At the same time, the new industries that emerge will nearly always incorporate powerful labor-saving technology right from their inception. Companies like Google and Facebook, for example, have succeeded in becoming household names and achieving massive market valuations while hiring only a tiny number of people relative to their size and influence. There’s every reason to expect that a similar scenario will play out with respect to nearly all the new industries created in the future.

The virtuous feedback loop between productivity, rising wages, and increasing consumer spending will collapse.

Two sectors in particular—higher education and health care—have, so far, been highly resistant to the kind of disruption that is already becoming evident in the broader economy. The irony is that the failure of technology to transform these sectors could amplify its negative consequences elsewhere, as the costs of health care and education become ever more burdensome.

In Silicon Valley the phrase “disruptive technology” is tossed around on a casual basis. No one doubts that technology has the power to devastate entire industries and upend specific sectors of the economy and job market. The question I will ask in this book is bigger: Can accelerating technology disrupt our entire system to the point where a fundamental restructuring may be required if prosperity is to continue?

Electric-car company Tesla’s new plant in Fremont, California, uses 160 highly flexible industrial robots to assemble about 400 cars per week.

According to the International Federation of Robotics, global shipments of industrial robots increased by more than 60 percent between 2000 and 2012, with total sales of about $28 billion in 2012. By far the fastest-growing market is China, where robot installations grew at about 25 percent per year between 2005 and 2012.

The technology that powers the Industrial Perception robot’s ability to see in three dimensions offers a case study in the ways that cross-fertilization can drive bursts of innovation in unexpected areas. It might be argued that the robot’s eyes can trace their origin to November 2006, when Nintendo introduced its Wii video game console. Nintendo’s machine included an entirely new type of game controller: a wireless wand that incorporated an inexpensive device called an accelerometer. The accelerometer was able to detect motion in three dimensions and then output a data stream that could be interpreted by the game console. Video games could now be controlled through body movements and gestures. The result was a dramatically different game experience. Nintendo’s innovation smashed the stereotype of the nerdy kid glued to a monitor and a joystick, and opened a new frontier for games as active exercise.

Microsoft, however, aimed to leapfrog Nintendo and come up with something entirely new. The Kinect add-on to the Xbox 360 game console eliminated the need for a controller wand entirely. To accomplish this, Microsoft built a webcam-like device that incorporates three-dimensional machine vision capability based in part on imaging technology created at a small Israeli company called PrimeSense. The Kinect sees in three dimensions by using what is, in essence, sonar at the speed of light: it shoots an infrared beam at the people and objects in a room and then calculates their distance by measuring the time required for the reflected light to reach its infrared sensor. Players could now interact with the Xbox game console simply by gesturing and moving in view of the Kinect’s camera.
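A minimal sketch of the time-of-flight arithmetic the passage describes: the one-way distance is recovered from the round-trip travel time of the reflected infrared light. The nanosecond figure below is purely hypothetical, chosen only to show the scale involved.

```python
# Sketch of the time-of-flight principle described above: an emitted pulse
# travels to an object and back, so the one-way distance is half the round trip.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_round_trip(seconds: float) -> float:
    """Return one-way distance in meters for a measured round-trip time."""
    return SPEED_OF_LIGHT * seconds / 2

# Hypothetical example: a 20-nanosecond round trip corresponds to ~3 meters.
print(distance_from_round_trip(20e-9))  # ~3.0 meters
```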

The history of computing shows pretty clearly that once a standard operating system, together with inexpensive and easy-to-use programming tools, becomes available, an explosion of application software is likely to follow. This has been the case with personal computer software and, more recently, with iPhone, iPad, and Android apps. Indeed, these platforms are now so saturated with application software that it can be genuinely difficult to conceive of an idea that hasn’t already been implemented. It’s a good bet that the field of robotics is poised to follow a similar path; we are, in all likelihood, at the leading edge of an explosive wave of innovation that will ultimately produce robots geared toward nearly every conceivable commercial, industrial, and consumer task. That explosion will be powered by the availability of standardized software and hardware building blocks that will make it a relatively simple matter to assemble new designs without the need to reinvent the wheel.

In a September 2013 article, Stephanie Clifford of the New York Times told the story of Parkdale Mills, a textile factory in Gaffney, South Carolina. The Parkdale plant employs about 140 people. In 1980, the same level of production would have required more than 2,000 factory workers. Within the Parkdale plant, “only infrequently does a person interrupt the automation, mainly because certain tasks are still cheaper if performed by hand.”

The US textile industry was decimated in the 1990s as production moved to low-wage countries, especially China, India, and Mexico. About 1.2 million jobs—more than three-quarters of domestic employment in the textile sector—vanished between 1990 and 2012. The last few years, however, have seen a dramatic rebound in production. Between 2009 and 2012, US textile and apparel exports rose by 37 percent to a total of nearly $23 billion. The turnaround is being driven by automation technology so efficient that it is competitive with even the lowest-wage offshore workers.

Indeed, there is now a significant “reshoring” trend under way, and this is being driven both by the availability of new technology and by rising offshore labor costs, especially in China, where typical factory workers saw their pay increase by nearly 20 percent per year between 2005 and 2010. In April 2012, the Boston Consulting Group surveyed American manufacturing executives and found that nearly half of companies with sales exceeding $10 billion were either actively pursuing or considering bringing factories back to the United States.

Manufacturing jobs in the United States currently account for well under 10 percent of total employment. As a result, manufacturing robots and reshoring are likely to have a fairly marginal impact on the overall job market.

The story will be very different in developing countries like China, where employment is far more focused in the manufacturing sector. In fact, advancing technology has already had a dramatic impact on Chinese factory jobs; between 1995 and 2002 China lost about 15 percent of its manufacturing workforce, or about 16 million jobs. There is strong evidence to suggest that this trend is poised to accelerate. In 2012, Foxconn—the primary contract manufacturer of Apple devices—announced plans to eventually introduce up to a million robots in its factories.

Increased automation is also likely to be driven by the fact that the interest rates paid by large companies in China are kept artificially low as a result of government policy. Loans are often rolled over continuously, so that the principal is never repaid. This makes capital investment extremely attractive even when labor costs are low and has been one of the primary reasons that investment now accounts for nearly half of China’s GDP. Many analysts believe that this artificially low cost of capital has caused a great deal of mal-investment throughout China, perhaps most famously the construction of “ghost cities” that appear to be largely unoccupied.

In June 2013, athletic-shoe manufacturer Nike announced that rising wages in Indonesia had negatively impacted its quarterly financial numbers. According to the company’s chief financial officer, the long-term solution to that problem is going to be “engineering the labor out of the product.” Increased automation is also seen as a way to deflect criticism regarding the sweatshop-like environments that often exist in third-world garment factories.

San Francisco start-up company Momentum Machines, Inc., has set out to fully automate the production of gourmet-quality hamburgers. Whereas a fast food worker might toss a frozen patty onto the grill, Momentum Machines’ device shapes burgers from freshly ground meat and then grills them to order—including even the ability to add just the right amount of char while retaining all the juices. The machine, which is capable of producing about 360 hamburgers per hour, also toasts the bun and then slices and adds fresh ingredients like tomatoes, onions, and pickles only after the order is placed. Burgers arrive assembled and ready to serve on a conveyor belt. While most robotics companies take great care to spin a positive tale when it comes to the potential impact on employment, Momentum Machines co-founder Alexandros Vardakostas is very forthright about the company’s objective: “Our device isn’t meant to make employees more efficient,” he said. “It’s meant to completely obviate them.” The company estimates that the average fast food restaurant spends about $135,000 per year on wages for employees who produce hamburgers and that the total labor cost for burger production for the US economy is about $9 billion annually. Momentum Machines believes its device will pay for itself in less than a year, and it plans to target not just restaurants but also convenience stores, food trucks, and perhaps even vending machines. The company argues that eliminating labor costs and reducing the amount of space required in kitchens will allow restaurants to spend more on high-quality ingredients, enabling them to offer gourmet hamburgers at fast food prices.
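As a rough illustration of the payback logic described above: the $135,000 annual labor figure comes from the passage, but the machine’s price is not given, so the number below is a placeholder assumption.

```python
# Back-of-the-envelope payback estimate for a burger-making machine.
# The $135,000 annual labor cost is from the text; the machine price
# is a hypothetical placeholder, not a figure from the book.
annual_labor_cost = 135_000        # per restaurant, from the text
assumed_machine_price = 100_000    # hypothetical

payback_years = assumed_machine_price / annual_labor_cost
print(f"Payback period: about {payback_years:.1f} years")  # ~0.7 years
```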

McDonald’s alone employs about 1.8 million workers in 34,000 restaurants worldwide.

In 2011, McDonald’s launched a high-profile initiative to hire 50,000 new workers in a single day and received over a million applications—a ratio that made landing a McJob more of a statistical long shot than getting accepted at Harvard. While fast food employment was once dominated by young people looking for a part-time income while in school, the industry now employs far more mature workers who rely on the jobs as their primary income. Nearly 90 percent of fast food workers are twenty or older, and the average age is thirty-five. Many of these older workers have to support families—a nearly impossible task at a median wage of just $8.69 per hour.

Japan’s Kura sushi restaurant chain has already successfully pioneered an automation strategy. In the chain’s 262 restaurants, robots help make the sushi while conveyor belts replace waiters. To ensure freshness, the system keeps track of how long individual sushi plates have been circulating and automatically removes those that reach their expiration time. Customers order using touch panel screens, and when they are finished dining they place the empty dishes in a slot near their table. The system automatically tabulates the bill and then cleans the plates and whisks them back to the kitchen. Rather than employing store managers at each location, Kura uses centralized facilities where managers are able to remotely monitor nearly every aspect of restaurant operations. Kura’s automation-based business model allows it to price sushi plates at just 100 yen (about $1), significantly undercutting its competitors.

Vending machines make it possible to dramatically reduce three of the most significant costs incurred in the retail business: real estate, labor, and theft by customers and employees.

In 2010, David Dunning was the regional operations supervisor responsible for overseeing the maintenance and restocking of 189 Redbox movie rental kiosks in the Chicago area. Redbox has over 42,000 kiosks in the United States and Canada, typically located at convenience stores and supermarkets, and rents about 2 million videos per day. Dunning managed the Chicago-area kiosks with a staff of just seven. Restocking the machines is highly automated; in fact, the most labor-intensive aspect of the job is swapping the translucent movie advertisements displayed on the kiosk—a process that typically takes less than two minutes for each machine. Dunning and his staff divide their time between the warehouse, where new movies arrive, and their cars and homes, where they are able to access and manage the machines via the Internet. The kiosks are designed from the ground up for remote maintenance. For example, if a machine jams it will report this immediately, and a technician can log in with his or her laptop computer, jiggle the mechanism, and fix the problem without the need to visit the site. New movies are typically released on Tuesdays, but the machines can be restocked at any time prior to that; the kiosk will automatically make the movies available for rental at the right time. That allows technicians to schedule restocking visits to avoid traffic.

Cloud robotics is sure to be a significant driver of progress in building more capable robots, but it also raises important concerns, especially in the area of security. Aside from its uncomfortable similarity to “Skynet,” the controlling machine intelligence in the Terminator movies starring Arnold Schwarzenegger, there is the much more practical and immediate issue of susceptibility to hacking or cyber attack. This will be an especially significant concern if cloud robotics someday takes on an important role in our transportation infrastructure. For example, if automated trucks and trains eventually move food and other critical supplies under centralized control, such a system might create extreme vulnerabilities. There is already great concern about the vulnerability of industrial machinery, and of vital infrastructure like the electrical grid, to cyber attack. That vulnerability was demonstrated by the Stuxnet worm that was created by the US and Israeli governments in 2010 to attack the centrifuges used in Iran’s nuclear program. If, someday, important infrastructure components are dependent on centralized machine intelligence, those concerns could be raised to an entirely new level.

In the late nineteenth century, nearly half of all US workers were employed on farms; by 2000 that fraction had fallen below 2 percent.

The Australian Centre for Field Robotics (ACFR) at the University of Sydney is focused on employing advanced agricultural robotics to help position Australia as a primary supplier of food for Asia’s exploding population—in spite of the country’s relative paucity of arable land and fresh water. ACFR envisions robots that continuously prowl fields taking soil samples around individual plants and then injecting just the right amount of water or fertilizer. Precision application of fertilizer or pesticides to individual plants, or even to specific fruits growing on a tree, could potentially reduce the use of these chemicals by up to 80 percent, thereby dramatically decreasing the amount of toxic runoff that ultimately ends up fouling rivers, streams, and other bodies of water.

On the morning of Sunday, March 31, 1968, the Reverend Martin Luther King, Jr., stood in the elaborately carved limestone pulpit at Washington National Cathedral. The building—one of the largest churches in the world and over twice the size of London’s Westminster Abbey—was filled to capacity with thousands of people packed into the nave and transept, looking down from the choir loft, and squeezed into doorways.

There can be no gainsaying of the fact that a great revolution is taking place in the world today. In a sense it is a triple revolution: that is, a technological revolution, with the impact of automation and cybernation; then there is a revolution in weaponry, with the emergence of atomic and nuclear weapons of warfare; then there is a human rights revolution, with the freedom explosion that is taking place all over the world.

The phrase “triple revolution” referred to a report written by a group of prominent academics, journalists, and technologists that called itself the Ad Hoc Committee on the Triple Revolution.

The report predicted that “cybernation” (or automation) would soon result in an economy where “potentially unlimited output can be achieved by systems of machines which will require little cooperation from human beings.” The result would be massive unemployment, soaring inequality, and, ultimately, falling demand for goods and services as consumers increasingly lacked the purchasing power necessary to continue driving economic growth. The Ad Hoc Committee went on to propose a radical solution: the eventual implementation of a guaranteed minimum income made possible by the “economy of abundance” such widespread automation could create, and which would “take the place of the patchwork of welfare measures” that were then in place to address poverty.

In 1949, at the request of the New York Times, Norbert Wiener, an internationally renowned mathematician at the Massachusetts Institute of Technology, wrote an article describing his vision for the future of computers and automation. Wiener had been a child prodigy who entered college at age eleven and completed his PhD when he was just seventeen; he went on to establish the field of cybernetics and made substantial contributions in applied mathematics and to the foundations of computer science, robotics, and computer-controlled automation. In his article—written just three years after the first true general-purpose electronic computer was built at the University of Pennsylvania*—Wiener argued that “if we can do anything in a clear and intelligible way, we can do it by machine” and warned that this could ultimately lead to “an industrial revolution of unmitigated cruelty” powered by machines capable of “reducing the economic value of the routine factory employee to a point at which he is not worth hiring at any price.”

The nearly perfect historical correlation between increasing productivity and rising incomes broke down: wages for most Americans stagnated and, for many workers, even declined; income inequality soared to levels not seen since the eve of the 1929 stock market crash; and a new phrase—“jobless recovery”—found a prominent place in our vocabulary. In all, we can enumerate at least seven economic trends that, taken together, suggest a transformative role for advancing information technology.

The year 1973 was an eventful one in the history of the United States.

For that was the year a typical American worker’s pay reached its peak. Measured in 2013 dollars, a typical worker—that is, production and nonsupervisory workers in the private sector, representing well over half the American workforce—earned about $767 per week in 1973. The following year, real average wages began a precipitous decline from which they would never fully recover. A full four decades later, a similar worker earns just $664, a decline of about 13 percent.

The story is modestly better if we look at median household incomes. Between 1949 and 1973, US median household incomes roughly doubled, from about $25,000 to $50,000. Growth in median incomes during this period tracked nearly perfectly with per capita GDP. Three decades later, median household income had increased to about $61,000, an increase of just 22 percent. That growth, however, was driven largely by the entry of women into the workforce. If incomes had moved in lockstep with economic growth—as was the case prior to 1973—the median household would today be earning well in excess of $90,000, over 50 percent more than the $61,000 they do earn.
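The percentages in the two passages above can be checked with simple arithmetic; all of the figures below come straight from the text.

```python
# Verifying the wage and household-income figures quoted above.
wage_1973, wage_2013 = 767, 664            # weekly pay in 2013 dollars
decline = (wage_1973 - wage_2013) / wage_1973
print(f"Wage decline: {decline:.1%}")       # ~13.4 percent

median_1973, median_now = 50_000, 61_000   # household income, rounded
growth = (median_now - median_1973) / median_1973
print(f"Median income growth: {growth:.0%}")  # ~22 percent

# If incomes had kept tracking economic growth, the text says the median
# household would earn over 50 percent more than it actually does.
print(f"Counterfactual median: over ${median_now * 1.5:,.0f}")  # > $91,500
```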

The decline in labor force participation has been accompanied by an explosion in applications for the Social Security disability program, which is intended to provide a safety net for workers who suffer debilitating injuries. Between 2000 and 2011, the number of applications more than doubled, from about 1.2 million per year to nearly 3 million per year. As there is no evidence of an epidemic of workplace injuries beginning around the turn of the century, many analysts suspect that the disability program is being misused as a kind of last-resort—and permanent—unemployment insurance program.

Between 1993 and 2010 over half of the increase in US national income went to households in the top 1 percent of the income distribution. Since then, things have only gotten worse.

According to the Central Intelligence Agency’s analysis, income inequality in America is roughly on a par with that of the Philippines and significantly exceeds that of Egypt, Yemen, and Tunisia. Studies have also found that economic mobility, a measure of the likelihood that the children of the poor will succeed in moving up the income scale, is significantly lower in the United States than in nearly all European nations. In other words, one of the most fundamental ideas woven into the American ethos—the belief that anyone can get ahead through hard work and perseverance—really has little basis in statistical reality.

In the United States, to a greater degree than in any other advanced democracy, politics is driven almost entirely by money. Wealthy individuals and the organizations they control can mold government policy through political contributions and lobbying, often producing outcomes that are clearly at odds with what the public actually wants.

A four-year college degree has come to be almost universally viewed as an essential credential for entry into the middle class. As of 2012, average hourly wages for college graduates were more than 80 percent higher than the wages of high school graduates. The college wage premium is a reflection of what economists call “skill biased technological change” (SBTC).* The general idea behind SBTC is that information technology has automated or deskilled much of the work handled by less educated workers, while simultaneously increasing the relative value of the more cognitively complex tasks typically performed by college graduates. Graduate and professional degrees convey still higher incomes, and in fact, since the turn of the century, things are looking quite a bit less rosy for young college graduates who don’t also have an advanced degree. According to one analysis, incomes for young workers with only a bachelor’s degree declined nearly 15 percent between 2000 and 2010, and the plunge began well before the onset of the 2008 financial crisis.

As of July 2013, fewer than half of American workers who were between the ages of twenty and twenty-four and not enrolled in school had full-time jobs. Among non-students aged sixteen to nineteen only about 15 percent were working full-time.

The propensity for the economy to wipe out solid middle-skill, middle-class jobs, and then to replace them with a combination of low-wage service jobs and high-skill, professional jobs that are generally unattainable for most of the workforce, has been dubbed “job market polarization.” Occupational polarization has resulted in an hourglass-shaped job market where workers who are unable to land one of the desirable jobs at the top end up at the bottom.

The golden era from 1947 to 1973 was characterized by significant technological progress and strong productivity growth. This was before the age of information technology; the innovations during this period were primarily in areas like mechanical, chemical, and aerospace engineering. Think, for example, of how airplanes evolved from employing internal combustion engines driving propellers to much more reliable and better-performing jet engines. This period exemplified what is written in all those economics textbooks: innovation and soaring productivity made workers more valuable—and allowed them to command higher wages.

Although it may appear that virtually everything sold at Walmart is made in China, most American consumer spending stays in the United States. A 2011 analysis by Galina Hale and Bart Hobijn, two economists at the Federal Reserve Bank of San Francisco, found that 82 percent of the goods and services Americans purchase are produced entirely in the United States; this is largely because we spend the vast majority of our money on nontradable services. The total value of imports from China amounted to less than 3 percent of US consumer spending.

In 1950, the US financial sector represented about 2.8 percent of the overall economy. By 2011 finance-related activity had grown more than threefold to about 8.7 percent of GDP. The compensation paid to workers in the financial sector has also exploded over the past three decades, and is now about 70 percent more than the average for other industries. The assets held by banks have ballooned from about 55 percent of GDP in 1980 to 95 percent in 2000, while the profits generated in the financial sector have more than doubled from an average of about 13 percent of all corporate profits in the 1978–1997 timeframe to 30 percent in the period between 1998 and 2007.

The primary complaint leveled against the financialization of the economy is that much of this activity is geared toward rent seeking. In other words, the financial sector is not creating real value or adding to the overall welfare of society; it is simply finding ever more creative ways to siphon profits and wealth from elsewhere in the economy. Perhaps the most colorful articulation of this accusation came from Rolling Stone’s Matt Taibbi in his July 2009 takedown of Goldman Sachs that famously labeled the Wall Street firm “a great vampire squid wrapped around the face of humanity, relentlessly jamming its blood funnel into anything that smells like money.”

Automated trading algorithms are now responsible for nearly two-thirds of stock market trades, and Wall Street firms have built huge computing centers in close physical proximity to exchanges in order to gain trading advantages measured in tiny fractions of a second. Between 2005 and 2012, the average time to execute a trade dropped from about 10 seconds to just 0.0008 seconds, and robotic, high-speed trading was heavily implicated in the May 2010 “flash crash” in which the Dow Jones Industrial Average plunged nearly a thousand points and then recovered for a net gain, all within the space of just a few minutes.

In a recent analysis, Martin Grötschel of the Zuse Institute in Berlin found that, using the computers and software that existed in 1982, it would have taken a full eighty-two years to solve a particularly complex production planning problem. As of 2003, the same problem could be solved in about a minute—an improvement by a factor of around 43 million. Computer hardware became about 1,000 times faster over the same period, which means that improvements in the algorithms used accounted for approximately a 43,000-fold increase in performance.
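The decomposition Grötschel describes is just a ratio: the overall speedup factors into a hardware part and an algorithmic part.

```python
# Splitting the overall ~43-million-fold speedup (82 years down to
# about one minute) into hardware and algorithmic contributions.
minutes_1982 = 82 * 365.25 * 24 * 60   # ~43 million minutes
minutes_2003 = 1

total_speedup = minutes_1982 / minutes_2003
hardware_speedup = 1_000               # figure from the text
algorithm_speedup = total_speedup / hardware_speedup

print(f"Total speedup:     ~{total_speedup:,.0f}x")      # ~43,000,000x
print(f"Algorithmic share: ~{algorithm_speedup:,.0f}x")  # ~43,000x
```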

Computers are getting dramatically better at performing specialized, routine, and predictable tasks, and it seems very likely that they will soon be poised to outperform many of the people now employed to do these things.

Many experts would say that, in terms of general intelligence, today’s best technology barely outperforms an insect. And yet, insects do not make a habit of landing jet aircraft, booking dinner reservations, or trading on Wall Street. Computers now do all these things, and they will soon begin to aggressively encroach in a great many other areas.

The main idea behind comparative advantage is that you should always be able to find a job, provided you specialize in the thing at which you are “least bad” relative to other people. By doing so, you offer others the chance to also specialize and thereby earn a higher income. In Tom’s case, least bad meant cooking. Jane is luckier (and a lot richer) because her least bad gig is something she is truly great at, and that talent happens to have a very high market value. Throughout economic history, comparative advantage has been the primary driver of ever more specialization and trade between individuals and nations.
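A tiny worked example of comparative advantage, using made-up numbers (the Tom and Jane specifics come from the book's fuller discussion, so the hourly outputs below are hypothetical): even if one person is better at everything, each still gains by specializing where their relative disadvantage is smallest.

```python
# Hypothetical productivity figures (output per hour) illustrating
# comparative advantage: Jane is better at both tasks, but Tom's
# relative disadvantage is smallest in cooking, so he should cook.
output = {
    "Jane": {"surgery": 10.0, "cooking": 4.0},
    "Tom":  {"surgery": 1.0,  "cooking": 3.0},
}

for person, tasks in output.items():
    # Opportunity cost of an hour of cooking, measured in surgery forgone.
    cost_of_cooking = tasks["surgery"] / tasks["cooking"]
    print(f"{person}: 1 hour of cooking costs {cost_of_cooking:.2f} hours' worth of surgery")

# Tom's opportunity cost of cooking (0.33) is far lower than Jane's (2.50),
# so Tom specializes in cooking even though Jane cooks faster.
```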

The Internet has spawned enormously profitable and influential corporations with startlingly diminutive workforces. In 2012, Google, for example, generated a profit of nearly $14 billion while employing fewer than 38,000 people. Contrast that with the automotive industry. At peak employment in 1979, General Motors alone had nearly 840,000 workers but earned only about $11 billion—20 percent less than what Google raked in. And, yes, that’s after adjusting for inflation.

The evidence shows pretty clearly that the income realized from online activities nearly always tends to follow a winner-take-all distribution. While the Internet may, in theory, equalize opportunity and demolish entry barriers, the actual outcomes it produces are almost invariably highly unequal.

Mobile phones have indeed been shown to improve living standards, but this has been documented primarily in developing countries that lack other communications infrastructure. By far the most celebrated success story involves sardine fishermen in Kerala, a region along the southwest coast of India. In a 2007 research paper, economist Robert Jensen described how mobile phones allowed the fishermen to determine which villages offered the best markets for their fish. Before the advent of wireless technology, targeting a particular village was a guess that often resulted in a mismatch between supply and demand. However, with their new phones, the fishermen knew exactly where the buyers were, and this has resulted in a better functioning market with more stable prices and far less waste.

It should be kept in mind, as well, that much of the basic research that enabled progress in the IT sector was funded by American taxpayers. The Defense Advanced Research Projects Agency (DARPA) created and funded the computer network that ultimately evolved into the Internet.* Moore’s Law has come about, in part, because of university-led research funded by the National Science Foundation. The Semiconductor Industry Association, the industry’s political action committee, actively lobbies for increased federal research dollars. Today’s computer technology exists in some measure because millions of middle-class taxpayers supported federal funding for basic research in the decades following World War II. We can be reasonably certain that those taxpayers offered their support in the expectation that the fruits of that research would create a more prosperous future for their children and grandchildren. Yet, the trends we looked at in the last chapter suggest we are headed toward a very different outcome.

In 2010, the Northwestern University researchers who oversaw the team of computer science and journalism students who worked on StatsMonkey raised venture capital and founded a new company, Narrative Science, Inc., to commercialize the technology. The company hired a team of top computer scientists and engineers; then it tossed out the original StatsMonkey computer code and built a far more powerful and comprehensive artificial intelligence engine that it named “Quill.” Narrative Science’s technology is used by top media outlets, including Forbes, to produce automated articles in a variety of areas, including sports, business, and politics. The company’s software generates a news story approximately every thirty seconds, and many of these are published on widely known websites that prefer not to acknowledge their use of the service. At a 2011 industry conference, Wired writer Steven Levy prodded Narrative Science co-founder Kristian Hammond into predicting the percentage of news articles that would be written algorithmically within fifteen years. His answer: over 90 percent.

The insights gleaned from big data typically arise entirely from correlation and say nothing about the causes of the phenomenon being studied. An algorithm may find that if A is true, B is likely also true. But it cannot say whether A causes B or vice versa—or if perhaps both A and B are caused by some external factor.
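A short sketch of the point being made: in the synthetic data below, A and B are strongly correlated only because both are driven by a hidden factor C, yet the correlation measure alone cannot reveal that. All names and numbers here are illustrative.

```python
# Synthetic illustration: A and B are both driven by a hidden factor C,
# so they correlate strongly even though neither causes the other.
import random
import statistics

random.seed(0)
c = [random.gauss(0, 1) for _ in range(10_000)]   # hidden common cause
a = [x + random.gauss(0, 0.3) for x in c]          # A depends on C
b = [x + random.gauss(0, 0.3) for x in c]          # B depends on C

print(f"correlation(A, B) = {statistics.correlation(a, b):.2f}")  # ~0.9
# The correlation says nothing about whether A causes B, B causes A,
# or (as here) both are effects of something else entirely.
```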

Google’s system is not yet competitive with the efforts of skilled human translators, but it offers bidirectional translation between more than five hundred language pairs. That represents a genuinely disruptive advance in communication capability: for the first time in human history, nearly anyone can freely and instantly obtain a rough translation of virtually any document in any language.

Artificial neural networks were first conceived and experimented with in the late 1940s and have long been used to recognize patterns. However, the last few years have seen a number of dramatic breakthroughs that have resulted in significant advances in performance, especially when multiple layers of neurons are employed—a technology that has come to be called “deep learning.” Deep learning systems already power the speech recognition capability in Apple’s Siri and are poised to accelerate progress in a broad range of applications that rely on pattern analysis and recognition.
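A minimal sketch of the “multiple layers of neurons” idea, assuming nothing beyond NumPy: each layer is a weighted sum followed by a nonlinearity, and stacking layers is what makes a network “deep.” This is purely illustrative and untrained, not a description of any production system such as Siri’s.

```python
# Minimal forward pass through a small multi-layer ("deep") network.
# Weights are random and untrained; the point is only the layered structure.
import numpy as np

rng = np.random.default_rng(42)

def layer(x, in_dim, out_dim):
    """One layer: linear transform followed by a ReLU nonlinearity."""
    w = rng.normal(size=(in_dim, out_dim))
    b = np.zeros(out_dim)
    return np.maximum(0, x @ w + b)

x = rng.normal(size=(1, 16))         # a single 16-dimensional input vector
h1 = layer(x, 16, 32)                # first hidden layer
h2 = layer(h1, 32, 32)               # second hidden layer
out = h2 @ rng.normal(size=(32, 3))  # raw scores for 3 hypothetical classes

print(out.shape)  # (1, 3)
```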

The predictions that can be extracted from data will increasingly be used to substitute for human qualities such as experience and judgment. As top managers increasingly employ data-driven decision making powered by automated tools, there will be an ever-shrinking need for an extensive human analytic and management infrastructure. Whereas today there is a team of knowledge workers who collect information and present analysis to multiple levels of management, eventually there may be a single manager and a powerful algorithm. Organizations are likely to flatten. Layers of middle management will evaporate.

In November 2013, IBM announced that its Watson system would move from the specialized computers that hosted the system for the Jeopardy! matches to the cloud. In other words, Watson would now reside in massive collections of servers connected to the Internet. Developers would be able to link directly to the system and incorporate IBM’s revolutionary cognitive computing technology into custom software applications and mobile apps. This latest version of Watson was also more than twice as fast as its Jeopardy!-playing predecessor.

Cycle Computing, a small company that specializes in large-scale computing, used tens of thousands of the computers that power Amazon’s cloud service to solve, in just 18 hours, a complex problem that would have taken over 260 years on a single computer. The company estimates that prior to the advent of cloud computing, it would have cost as much as $68 million to build a supercomputer capable of taking on the problem. In contrast, it’s possible to rent 10,000 servers in the Amazon cloud for about $90 per hour.
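The scale of that parallel run can be sanity-checked with simple arithmetic using the figures quoted in the passage; the per-block cost line assumes the quoted $90-per-hour rate applies to one 10,000-server block.

```python
# Rough check of the Cycle Computing figures quoted above.
single_machine_hours = 260 * 365.25 * 24   # ~2.28 million hours of work
cloud_hours = 18

effective_parallelism = single_machine_hours / cloud_hours
print(f"Work compressed by a factor of ~{effective_parallelism:,.0f}")  # ~126,000

# Rental cost at the quoted rate of about $90/hour per 10,000 servers:
print(f"Cost of the 18-hour run: ~${18 * 90:,} per 10,000-server block")
```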

The massive facilities that host cloud computing services benefit from enormous economies of scale, and the administrative functions that once kept armies of skilled IT workers busy are now highly automated. Facebook, for example, employs a smart software application called “Cyborg” that continuously monitors tens of thousands of servers, detects problems, and in many cases can perform repairs completely autonomously. A Facebook executive noted in November 2013 that the Cyborg system routinely solves thousands of problems that would otherwise have to be addressed manually, and that the technology allows a single technician to manage as many as 20,000 computers.

In 2011, the Washington Post’s Michael Rosenwald reported that a colossal, billion-dollar data center built by Apple, Inc., in the town of Maiden, North Carolina, had created only fifty full-time positions.

As Netscape co-founder and venture capitalist Marc Andreessen famously said, “Software is eating the world.” More often than not, that software will be hosted in the cloud. From that vantage point it will eventually be poised to invade virtually every workplace and swallow up nearly any white-collar job that involves sitting in front of a computer manipulating information.

Most of us quite naturally tend to associate the concept of creativity exclusively with the human brain, but it’s worth remembering that the brain itself—by far the most sophisticated invention in existence—is the product of evolution. Given this, perhaps it should come as no surprise that attempts to build creative machines very often incorporate genetic programming techniques. Genetic programming essentially allows computer algorithms to design themselves through a process of Darwinian natural selection. Computer code is initially generated randomly and then repeatedly shuffled using techniques that emulate sexual reproduction.
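A compact sketch of the evolutionary loop described here, applied to a toy problem rather than to program code: candidates are generated randomly, recombined (crossover), occasionally mutated, and selected by fitness. This illustrates the mechanism only; real genetic programming evolves executable program structures.

```python
# Toy genetic algorithm: evolve a bit string that matches a target pattern.
# Random initialization, crossover, mutation, and fitness-based selection
# mirror the Darwinian process described in the passage.
import random

random.seed(1)
TARGET = [1] * 20                      # the "ideal" individual
POP_SIZE, GENERATIONS, MUTATION = 50, 60, 0.02

def fitness(ind):
    return sum(1 for a, b in zip(ind, TARGET) if a == b)

def crossover(p1, p2):
    cut = random.randrange(1, len(TARGET))     # emulate sexual reproduction
    return p1[:cut] + p2[cut:]

def mutate(ind):
    return [1 - g if random.random() < MUTATION else g for g in ind]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[: POP_SIZE // 2]      # selection: keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(f"Best fitness after {gen + 1} generations: {fitness(population[0])}/{len(TARGET)}")
```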

Highly educated and skilled professionals such as lawyers, radiologists, and especially computer programmers and information technology workers have already felt a significant impact. In India, for example, there are armies of call center workers and IT professionals, as well as tax preparers versed in the US tax code and attorneys specifically trained not in their own country’s legal system but in American law, and standing ready to perform low-cost legal research for US firms engaged in domestic litigation. While the offshoring phenomenon may seem completely unrelated to the jobs lost to computers and algorithms, the precise opposite is true: offshoring is very often a precursor of automation, and the jobs it creates in low-wage nations may prove to be short-lived as technology advances. What’s more, advances in artificial intelligence may make it even easier to offshore jobs that can’t yet be fully automated.

As we’ve seen, one of the tenets of the big data approach to management is that insights gleaned from algorithmic analysis can increasingly substitute for human judgment and experience. Even before advancing artificial intelligence applications reach the stage where full automation is possible, they will become powerful tools that encapsulate ever more of the analytic intelligence and institutional knowledge that give a business its competitive advantage. A smart young offshore worker wielding such tools might soon be competitive with far more experienced professionals in developed countries who command very high salaries.

In 2013, researchers at the University of Oxford’s Martin School conducted a detailed study of over seven hundred US job types and came to the conclusion that nearly 50 percent of jobs will ultimately be susceptible to full machine automation.

In mid-2013, Chinese authorities acknowledged that only about half of the country’s current crop of college graduates had been able to find jobs, while more than 20 percent of the previous year’s graduates remained unemployed—and those figures are inflated when temporary and freelance work, as well as enrollment in graduate school and government-mandated “make work” positions, are regarded as full employment.

Yet another observation is that, in many cases, those workers who seek a machine collaboration job may well be in for a “be careful what you wish for” epiphany. As one example, consider the current trends in legal discovery. When corporations engage in litigation, it becomes necessary to sift through enormous numbers of internal documents and decide which ones are potentially relevant to the case at hand. The rules require these to be provided to the opposing side, and there can be substantial legal penalties for failing to produce anything that might be pertinent. One of the paradoxes of the paperless office is that the sheer number of such documents, especially in the form of emails, has grown dramatically since the days of typewriters and paper. To deal with this overwhelming volume, law firms are employing new techniques.

The first approach involves full automation. So-called e-Discovery software is based on powerful algorithms that can analyze millions of electronic documents and automatically tease out the relevant ones. These algorithms go far beyond simple keyword searches and often incorporate machine learning techniques that can isolate relevant concepts even when specific phrases are not present.
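As a rough illustration of the kind of learning-based relevance classification described here (not the actual e-Discovery products, whose internals are proprietary), a minimal sketch using scikit-learn might look like this; the tiny labeled corpus below is entirely made up.

```python
# Illustrative relevance classifier: learn from a few labeled documents,
# then rank new documents by predicted relevance. The training examples
# are fabricated placeholders, not real litigation data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_docs = [
    "Q3 revenue recognition for the disputed contract",
    "schedule for the office holiday party",
    "draft settlement terms discussed with opposing counsel",
    "cafeteria menu for next week",
]
labels = [1, 0, 1, 0]   # 1 = potentially relevant, 0 = not relevant

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_docs, labels)

new_docs = ["revised contract revenue figures", "parking garage closure notice"]
for doc, score in zip(new_docs, model.predict_proba(new_docs)[:, 1]):
    print(f"{score:.2f}  {doc}")   # higher score = more likely relevant
```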

Between 2003 and 2012, the median income of US college graduates with bachelor’s degrees fell from nearly $52,000 to just over $46,000, measured in 2012 dollars. During the same period, total student loan debt tripled from about $300 billion to $900 billion.

An April 2013 analysis by the Economic Policy Institute found that at colleges in the United States, the number of new graduates with engineering and computer science degrees exceeds the number of graduates who actually find jobs in these fields by 50 percent. The study concludes that “the supply of graduates is substantially larger than the demand for them in industry.” It is becoming increasingly clear that a great many people will do all the right things in terms of pursuing an advanced education, but nonetheless fail to find a foothold in the economy of the future.

Algorithmic grading, despite the controversy that attaches to it, is virtually certain to become more prevalent as schools continue to seek ways to cut costs. In situations where a large number of essays need to be graded, the approach has obvious advantages. Aside from speed and lower cost, an algorithmic approach offers objectivity and consistency in cases where multiple human graders would otherwise be required.

EdX, a consortium of elite universities founded to offer free online courses, announced in early 2013 that it would make its essay-grading software freely available to any educational institutions that want to use it.

In 2013, edX—the MOOC consortium founded by Harvard and MIT—began offering ID-verified certificates to students who pay an additional fee and take the class under the watchful eye of a webcam. Such certificates can be presented to potential employers but generally cannot be used for academic credit.

There may even be an opportunity for a venture-backed firm to step into the testing and credential issuing role while completely bypassing the messy and expensive business of offering classes. Self-motivated students would be free to use any available resources—including MOOCs, self-study, or more traditional classes—to achieve competency, and then could pass an assessment test administered by the firm for credit. Such tests might be quite rigorous, in effect creating a filter roughly comparable to the admissions processes at more selective colleges. If such a start-up company were able to build a solid reputation for granting credentials only to highly competent graduates, and if—perhaps most critically—it could build strong relationships with high-profile employers so that its graduates were sought after, it would have a clear potential to upend the higher-education industry.

If the MOOC disruption is yet to unfold, it will slam into an industry that brings in nearly half a trillion dollars in annual revenue and employs over three and a half million people. In the years between 1985 and 2013, college costs soared by 538 percent, while the general consumer price index increased only 121 percent. Even medical costs lagged far behind higher education, increasing about 286 percent over the same period. Much of that cost is being funded with student loans, which now amount to at least $1.2 trillion in the United States. About 70 percent of US college students borrow, and the average debt at graduation is just under $30,000. Keep in mind that only about 60 percent of college students in bachelor’s degree programs graduate within six years, leaving the remainder to pay off any accumulated debt without the benefit of a degree.

The United States has over 2,000 four-year colleges and universities. If you include institutions that grant two-year degrees, the number grows to over 4,000. Of these, perhaps 200–300 might be characterized as selective. The number of schools with national reputations, or that might be considered truly elite, is, of course, far smaller. Imagine a future where college students can attend free online courses taught by Harvard or Stanford professors and subsequently receive a credential that would be acceptable to employers or graduate schools. Who, then, would be willing to go into debt in order to pay the tuition at a third- or fourth-tier institution?

The very fact that schools like Harvard and Stanford are willing to give that education away for free is evidence that these institutions are primarily in the business of conveying credentials rather than knowledge. Elite credentials do not scale in the same way as, say, a digital music file; they are more like limited-edition art prints or paper money created by a central bank. Give away too many and their value falls. For this reason, I suspect that truly top-tier colleges will remain quite wary of providing meaningful credentials.

Higher education is one of two major US industries that have, so far, been relatively immune to the impact of accelerating digital technology. Nonetheless, innovations like MOOCs, automated grading algorithms, and adaptive learning systems offer a relatively promising path toward eventual disruption.

In 1960, health care represented less than 6 percent of the US economy. By 2013 that share had roughly tripled, to nearly 18 percent, and per capita health care spending in the United States had soared to roughly double the level of most other industrialized countries.

Automated systems can also provide a viable second opinion. A very effective—but expensive—way to increase cancer detection rates is to have two radiologists read every mammogram image separately and then reach a consensus on any potential anomalies identified by either doctor. This “double reading” strategy results in significantly improved cancer detection and also dramatically reduces the number of patients who have to be recalled for further testing. A 2008 study published in the New England Journal of Medicine found that a machine can step into the role of the second doctor. When a radiologist is paired with a computer-aided detection system, the results are just as good as having two doctors separately interpret the images.

The pharmacy at the University of California Medical Center in San Francisco prepares about 10,000 individual doses of medication every day, and yet a pharmacist never touches a pill or a medicine bottle. A massive automated system manages thousands of different drugs and handles everything from storing and retrieving bulk pharmaceutical supplies to dispensing and packaging individual tablets. A robotic arm continuously picks pills from an array of bins and places them in small plastic bags. Every dose goes into a separate bag and is labeled with a barcode that identifies both the medication and the patient who should receive it. The machine then arranges each patient’s daily meds in the order that they need to be taken and binds them together. Later, the nurse who administers the medication will scan the barcodes on both the dosage bag and the patient’s wrist band. If they don’t match, or if the medication is being given at the wrong time, an alarm sounds. Three other specialized robots automate the preparation of injectable medicines; one of these robots deals exclusively with highly toxic chemotherapy drugs. The system virtually eliminates the possibility of human error by cutting humans almost entirely out of the loop.
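The safety check at the bedside amounts to a simple comparison of the two barcodes plus a time window. A minimal sketch of that logic, with all field names and values invented for illustration:

```python
# Illustrative bedside check: the dose's barcode must match the patient's
# wristband and the scheduled administration window. All data is hypothetical.
from datetime import datetime, timedelta

dose = {"patient_id": "P-1042", "medication": "drug-A 10mg",
        "scheduled": datetime(2014, 1, 15, 8, 0)}
wristband_patient_id = "P-1042"
scan_time = datetime(2014, 1, 15, 8, 20)
WINDOW = timedelta(minutes=30)

def check_dose(dose, wristband_id, now):
    if dose["patient_id"] != wristband_id:
        return "ALARM: medication is labeled for a different patient"
    if abs(now - dose["scheduled"]) > WINDOW:
        return "ALARM: dose is being given outside its scheduled window"
    return "OK to administer"

print(check_dose(dose, wristband_patient_id, scan_time))  # OK to administer
```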

In 1963, the Nobel laureate economist Kenneth Arrow wrote a paper detailing the ways in which medical care stands apart from other goods and services. Among other things, Arrow’s paper highlighted the fact that medical costs are extremely unpredictable and often very high, so that consumers can neither pay for them out of ongoing income nor effectively plan ahead as they might for other major purchases. Medical care can’t be tested before you buy it; it’s not like visiting the wireless store and trying out all the smart phones. In emergencies, of course, the patient may be unconscious or about to die. And, in any case, the whole business is so complex and requires so much specialized knowledge that a normal person can’t reasonably be expected to make such decisions. Health care providers and patients simply don’t come to the table as anything approaching equals, and as Arrow pointed out, “both parties are aware of this informational inequality, and their relation is colored by this knowledge.” The bottom line is that the high cost, unpredictability, and complexity of major medical and hospitalization services make some kind of insurance model essential for the health care industry.

It is also critical to understand that health care spending is highly concentrated among a tiny number of very sick people. A 2012 report by the National Institute for Health Care Management found that just 1 percent of the population—the very sickest people—accounted for over 20 percent of total national health care spending. Nearly half of all spending, about $623 billion in 2009, went to the sickest 5 percent of the population. In fact, health care spending is subject to the same kind of inequality as income in the United States.

The fact that Medicare is relatively effective at controlling most patient-related costs, while spending far less than private insurers on administration and overhead, underlies the argument for simply expanding the program to include everyone and, in effect, creating a single-payer system. This has been the path followed by a number of other advanced countries—all of which spend far less on health care than the United States and typically have better outcomes according to metrics like life expectancy and infant mortality. While a single-payer system, managed by the government, has both logic and evidence to support it, there is no escaping the reality that in the United States the whole idea is ideologically toxic to roughly half the population. Putting such a system in place would also presumably result in the demise of nearly the entire private health insurance sector; that does not seem likely given the enormous political influence wielded by the industry.

“Most pharmacists are employed only because the law says that there has to be a pharmacist present to dispense drugs.” That, at least for the moment, is probably something of an exaggeration. Job prospects for newly minted pharmacists have worsened significantly over the past decade, and things may well get worse. A 2012 analysis identifies a “looming joblessness crisis for new pharmacy graduates” and suggests that the unemployment rate could reach 20 percent.

YouTube was founded in 2005 by three people. Less than two years later, the company was purchased by Google for about $1.65 billion. At the time of its acquisition, YouTube employed a mere sixty-five people, the majority of them highly skilled engineers. That works out to a valuation of over $25 million per employee. In April 2012, Facebook acquired photo-sharing start-up Instagram for $1 billion. The company employed thirteen people. That’s roughly $77 million per worker. Fast-forward another two years to February 2014 and Facebook once again stepped up to the plate, this time purchasing mobile messaging company WhatsApp for $19 billion. WhatsApp had a workforce of fifty-five—giving it a valuation of a staggering $345 million per employee.
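The per-employee figures follow directly from dividing each acquisition price by headcount:

```python
# Acquisition price divided by headcount, using the figures in the passage.
deals = {
    "YouTube (2006)":   (1.65e9, 65),
    "Instagram (2012)": (1.0e9, 13),
    "WhatsApp (2014)":  (19.0e9, 55),
}
for name, (price, employees) in deals.items():
    print(f"{name}: ~${price / employees / 1e6:,.0f} million per employee")
# ~$25M, ~$77M, and ~$345M per employee, respectively.
```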

Soaring per-employee valuations are a vivid demonstration of the way accelerating information and communications technology can leverage the efforts of a tiny workforce into enormous investment value and revenue.

In 2009, there were about 11 million automobile accidents in the United States, and about 34,000 people were killed in collisions. Globally, about one and a quarter million people are killed on roads each year. The National Transportation Safety Board estimates that 90 percent of accidents occur primarily because of human error.

Excepting perhaps electricity, there is no other single innovation that has been more central to the development of the American middle class—and the established fabric of society in nearly all developed countries—than the automobile. The true driverless vehicle has the potential to completely upend the way we think about and interact with cars. It could also vaporize millions of solid middle-class jobs and destroy untold thousands of businesses.

To visualize the most extreme possible implications of Reuther’s warning, consider a thought experiment. Imagine that Earth is suddenly invaded by a strange extraterrestrial species.

A single very wealthy person may buy a very nice car, or perhaps even a dozen such cars. But he or she is not going to buy thousands of automobiles. The same is true for mobile phones, laptop computers, restaurant meals, cable TV subscriptions, mortgages, toothpaste, dental checkups, or any other consumer good or service you might imagine. In a mass-market economy, the distribution of purchasing power among consumers matters a great deal.

In 1992, the top 5 percent of US households in terms of income were responsible for about 27 percent of total consumer spending. By 2012, that percentage had risen to 38 percent. Over the same two decades, the share of spending attributed to the bottom 80 percent of American consumers fell from about 47 percent to 39 percent.

John Maynard Keynes may have said it best, writing nearly eighty years ago in The General Theory of Employment, Interest and Money, the book that arguably founded modern macroeconomics: “Too large a proportion of recent ‘mathematical’ economics are merely concoctions, as imprecise as the initial assumptions they rest on, which allow the author to lose sight of the complexities and interdependencies of the real world in a maze of pretentious and unhelpful symbols.”

If advancing technology (or some other factor) causes wages to stagnate or even fall, then from management’s perspective labor will—at least for a time—become more attractive relative to machines. Consider the fast food industry. In Chapter 1, I speculated that this sector may soon be ripe for disruption as advanced robotic technology is introduced. But this suggests a basic question: Why hasn’t the industry already incorporated more automation? After all, putting together hamburgers and tacos hardly seems to be on the forefront of precision manufacturing. The answer, at least in part, is that technology has indeed already had a dramatic impact. While machines have not yet completely substituted for fast food workers on a large scale, technology has deskilled the jobs and made the workers largely interchangeable. Fast food workers are integrated into a mechanized assembly-line process with little training required.* This is why the industry is able to tolerate high turnover rates and workers with minimal skill levels. The effect has been to keep these jobs firmly anchored in the minimum-wage category.

In an April 2011 report, economists Andrew G. Berg and Jonathan D. Ostry of the International Monetary Fund studied a variety of advanced and emerging economies and came to the conclusion that income inequality is a vital factor affecting the sustainability of economic growth.21 Berg and Ostry point out that economies rarely see steady growth that continues for decades. Instead, “periods of rapid growth are punctuated by collapses and sometimes stagnation—the hills, valleys, and plateaus of growth.” The thing that sets successful economies apart is the duration of the growth spells. The economists found that higher inequality was strongly correlated with shorter periods of economic growth. Indeed, a 10-percentage-point decrease in inequality was associated with growth spells that lasted 50 percent longer. Writing on the IMF’s blog, the economists warned that extreme income inequality in the United States has clear implications for the country’s future growth prospects: “Some dismiss inequality and focus instead on overall growth—arguing, in effect, that a rising tide lifts all boats.” However, “when a handful of yachts become ocean liners while the rest remain lowly canoes, something is seriously amiss.”

The costs of land, housing, and insurance, for example, are tied to general asset values, which are in turn dependent on the overall standard of living. This is the reason that developing countries like Thailand don’t allow foreigners to buy land; doing so might result in prices being bid up to the point where housing would become unaffordable for the country’s citizens.

The 2008 financial crisis was precipitated when borrowers who had taken out subprime loans began to default en masse in 2007. While the number of subprime loans soared during the period from 2000 to 2007, at their peak they still constituted only about 13.5 percent of the new mortgages issued in the United States.23 The impact of those defaults was, of course, dramatically amplified by the banks’ use of complex financial derivatives.

The most frightening long-term scenario of all might be if the global economic system eventually manages to adapt to the new reality. In a perverse process of creative destruction, the mass-market industries that currently power our economy would be replaced by new industries producing high-value products and services geared exclusively toward a super-wealthy elite. The vast majority of humanity would effectively be disenfranchised. Economic mobility would become nonexistent. The plutocracy would shut itself away in gated communities or in elite cities, perhaps guarded by autonomous military robots and drones.

The 2013 movie Elysium, in which the plutocrats migrate to an Eden-like artificial world in Earth orbit, does a pretty good job of bringing this dystopian vision of the future to life.

As of January 2014, the youth unemployment rates in two of Europe’s most rapidly graying countries, Italy and Spain, were both at catastrophic levels: 42 percent in Italy and a stunning 58 percent in Spain.

In an analysis published in February 2014, MIT economist James Poterba found that a remarkable 50 percent of American households aged sixty-five to sixty-nine have retirement account balances of $5,000 or less.

The rise of capitalism in China resulted in the demise of the “iron rice bowl,” under which state-owned industries provided pensions. Retirees now have to fend largely for themselves or rely on their children, but the collapsing fertility rate has led to the infamous “1-2-4” problem in which a single working-age adult will eventually have to help support two parents and four grandparents.

The lack of a social safety net for older citizens is probably one important driver of China’s astonishingly high savings rate, which has been estimated to be as much as 40 percent.

Personal consumption amounts to only about 35 percent of China’s economy—roughly half the level in the United States. Instead, Chinese economic growth has been powered primarily by manufacturing exports together with an astonishingly high level of investment. In 2013, the share of China’s GDP attributable to investment in things like factories, equipment, housing, and other physical infrastructure surged to 54 percent, up from about 48 percent a year earlier.34 Nearly everyone agrees that this is fundamentally unsustainable. After all, investments have to eventually pay for themselves, and that happens as a result of consumption: factories have to produce goods that are profitably sold, new housing has to be rented, and so forth. The need for China to restructure its economy in favor of domestic spending has been acknowledged by the government and widely discussed for years, and yet virtually no tangible progress has been made.

It’s often remarked that China faces the danger of growing old before it grows rich, but what I think is less generally acknowledged is that China is in a race not just with demographics but also with technology. As we saw in Chapter 1, Chinese factories are already moving aggressively to introduce robots and automation. Some manufacturing is moving back to advanced countries or shifting to even lower-wage countries like Vietnam.

As incomes rise, households typically spend a larger fraction of their incomes on services, thereby helping to create jobs outside the factory sector. The United States had the luxury of building a strong middle class during its “Goldilocks” period following World War II, when technology was progressing rapidly, but still fell far short of substituting completely for workers. China is faced with performing a similar feat in the robotic age—when machines and software will increasingly threaten jobs not just in manufacturing but also in the service sector itself.

According to one study, about 22 million factory jobs disappeared worldwide between 1995 and 2002. Over the same seven-year period, manufacturing output increased 30 percent.

In May 2014, Cambridge University physicist Stephen Hawking penned an article that set out to sound the alarm about the dangers of rapidly advancing artificial intelligence. Hawking, writing in the UK’s The Independent along with co-authors who included Max Tegmark and Nobel laureate Frank Wilczek, both physicists at MIT, as well as computer scientist Stuart Russell of the University of California, Berkeley, warned that the creation of a true thinking machine “would be the biggest event in human history.” A computer that exceeded human-level intelligence might be capable of “outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand.” Dismissing all this as science fiction might well turn out to be “potentially our worst mistake in history.”

The rise of companies like Google, Facebook, and Amazon has propelled a great deal of progress. Never before have such deep-pocketed corporations viewed artificial intelligence as absolutely central to their business models—and never before has AI research been positioned so close to the nexus of competition between such powerful entities. A similar competitive dynamic is unfolding among nations. AI is becoming indispensable to militaries, intelligence agencies, and the surveillance apparatus in authoritarian states.* Indeed, an all-out AI arms race might well be looming in the near future.

The first application of the term “singularity” to a future technology-driven event is usually credited to computer pioneer John von Neumann, who reportedly said sometime in the 1950s that “ever accelerating progress . . . gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”5 The theme was fleshed out in 1993 by San Diego State University mathematician Vernor Vinge, who wrote a paper entitled “The Coming Technological Singularity.” Vinge, who is not given to understatement, began his paper by writing that “within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.”

In astrophysics, a singularity refers to the point within a black hole where the normal laws of physics break down. Within the black hole’s boundary, or event horizon, gravitational force is so intense that light itself is unable to escape its grasp. Vinge viewed the technological singularity in similar terms: it represents a discontinuity in human progress that would be fundamentally opaque until it occurred. Attempting to predict the future beyond the Singularity would be like an astronomer trying to see inside a black hole.

One of the obvious implications of a potential intelligence explosion is that there would be an overwhelming first-mover advantage. In other words, whoever gets there first will be effectively uncatchable. This is one of the primary reasons to fear the prospect of a coming AI arms race. The magnitude of that first-mover advantage also makes it very likely that any emergent AI would quickly be pushed toward self-improvement—if not by the system itself, then by its human creators. In this sense, the intelligence explosion might well be a self-fulfilling prophecy. Given this, I think it seems wise to apply something like Dick Cheney’s famous “1 percent doctrine” to the specter of advanced artificial intelligence: the odds of its occurrence, at least in the foreseeable future, may be very low—but the implications are so dramatic that it should be taken seriously.

The fundamental ideas that underlie nanotechnology trace their origin back at least to December 1959, when the legendary Nobel laureate physicist Richard Feynman addressed an audience at the California Institute of Technology. Feynman’s lecture was entitled “There’s Plenty of Room at the Bottom” and in it he set out to expound on “the problem of manipulating and controlling things on a small scale.” And by “small” he meant really small. Feynman declared that he was “not afraid to consider the final question as to whether, ultimately—in the great future—we can arrange the atoms the way we want; the very atoms, all the way down!” Feynman clearly envisioned a kind of mechanized approach to chemistry, arguing that nearly any substance could be synthesized simply by putting “the atoms down where the chemist says, and so you make the substance.”

In the late 1970s, K. Eric Drexler, then an undergraduate at the Massachusetts Institute of Technology, picked up Feynman’s baton and carried it, if not to the finish line, then at least through the next lap. Drexler imagined a world in which nano-scale molecular machines were able to rapidly rearrange atoms, almost instantly transforming cheap and abundant raw material into nearly anything we might want to produce. He coined the term “nanotechnology” and wrote two books on the subject. The first, Engines of Creation: The Coming Era of Nanotechnology, published in 1986, achieved popular success and was the primary force that thrust nanotechnology into the public sphere.

The very idea of molecular machines may seem completely far-fetched until you consider that such devices exist and, in fact, are integral to the chemistry of life. The most prominent example is the ribosome—essentially a molecular factory contained within cells that reads the information encoded in DNA and then assembles the thousands of different protein molecules that form the structural and functional building blocks of all biological organisms.

Among some techno-optimists, the prospect of molecular manufacturing is associated strongly with the concept of an eventual “post-scarcity” economy in which nearly all material goods are abundant and virtually free. Services are likewise assumed to be provided by advanced AI. In this technological utopia, resource and environmental constraints would be eliminated by universal, molecular recycling and abundant clean energy. The market economy might cease to exist, and (as on Star Trek) there would be no need for money.

An abundance of evidence suggests that many of the students now attending American colleges are academically unprepared for or, in some cases, simply ill-suited to college-level work. Of these, a large share will fail to graduate but very often will nonetheless walk away with daunting student loan burdens. Of those who do graduate, as many as half will fail to land a job that actually requires a college degree, whatever the job description might say. Overall, about 20 percent of US college graduates are considered overeducated for their current occupation, and average incomes for new college graduates have been in decline for more than a decade. In Europe, where many countries provide students with college educations that are free or nearly so, roughly 30 percent of graduates are overqualified for their jobs.2 In Canada, the number is about 27 percent.3 In China, a remarkable 43 percent of the workforce is overeducated.

The reality is that awarding more college degrees does not increase the fraction of the workforce engaged in the professional, technical, and managerial jobs that most graduates would like to land. Instead, the result very often is credential inflation; many occupations that once required only a high school diploma are now open only to those with a four-year college degree, the master’s becomes the new bachelor’s, and degrees from nonelite schools are devalued. We are running up against a fundamental limit, both in the capabilities of the people being herded into colleges and in the number of high-skill jobs that will be available for them if they manage to graduate. The problem is that the skills ladder is not really a ladder at all: it is a pyramid, and there is only so much room at the top.

Historically, the job market has always looked like a pyramid in terms of worker skills and capabilities. At the top, a relatively small number of highly skilled professionals and entrepreneurs have been responsible for most creativity and innovation. The vast majority of the workforce has always been engaged in work that is, on some level, relatively routine and repetitive. As various sectors of the economy have mechanized or automated, workers have transitioned from routine jobs in one sector to routine jobs in another. The person who would have worked on a farm in 1900, or in a factory in 1950, is today scanning bar codes or stocking shelves at Walmart.

While a basic income has been embraced by economists and intellectuals on both sides of the political spectrum, the idea has been advocated especially forcefully by conservatives and libertarians. Friedrich Hayek, who has become an iconic figure among today’s conservatives, was a strong proponent of the idea. In his three-volume work Law, Legislation and Liberty, published between 1973 and 1979, Hayek suggested that a guaranteed income would be a legitimate government policy designed to provide insurance against adversity, and that the need for this type of safety net is the direct result of the transition to a more open and mobile society where many individuals can no longer rely on traditional support systems.

There is, however, yet another class of common risks with regard to which the need for government action has until recently not been generally admitted. . . . The problem here is chiefly the fate of those who for various reasons cannot make their living in the market . . . that is, all people suffering from adverse conditions which may affect anyone and against which most individuals cannot alone make adequate protection but in which a society that has reached a certain level of wealth can afford to provide for all. The assurance of a certain minimum income for everyone, or a sort of floor below which nobody need fall even when he is unable to provide for himself, appears not only to be a wholly legitimate protection against a risk common to all, but a necessary part of the Great Society in which the individual no longer has specific claims on the members of the particular small group into which he was born.

A proposal for a guaranteed income would today almost certainly be attacked as a liberal mechanism for attempting to bring about “equal outcomes.” Hayek himself explicitly rejected this, however, writing that “it is unfortunate that the endeavor to secure a uniform minimum for all who cannot provide for themselves has become connected with the wholly different aims of securing a ‘just’ distribution of incomes.” For Hayek, a guaranteed income had nothing to do with equality or “just distribution”—it was about insurance against adversity as well as efficient social and economic function.

Rather than having government intrude into personal economic decisions, or get into the business of directly providing products and services, the idea is to give everyone the means to go out and participate in the market. It is fundamentally a market-oriented approach to providing a minimal safety net, and its implementation would make other less efficient mechanisms—the minimum wage, food stamps, welfare, and housing assistance—unnecessary.

The United States has a staggering 2.4 million people locked up in jails and prisons—a per capita incarceration rate far higher than that of any other developed country and more than ten times that of advanced nations like Denmark, Finland, and Japan. As of 2008, about 60 percent of these people were nonviolent offenders, and the annual per capita cost of housing them was about $26,000.

The income provided should be relatively minimal: enough to get by, but not enough to be especially comfortable. There is also a strong argument for initially setting the income level even lower than this and then gradually increasing it over time after studying the impact of the program on the workforce.

Conservative social scientist Charles Murray’s 2006 book In Our Hands: A Plan to Replace the Welfare State argues that a guaranteed income would be likely to make non-college-educated men more attractive marriage partners. This group has been the hardest hit by the impact of both technology and factory offshoring on the job market.

To visualize the problem, I find it useful to think of markets as renewable resources. Imagine a consumer market as a lake full of fish. When a business sells products or services into the market, it catches fish. When it pays wages to its employees, it tosses fish back into the lake. As automation progresses and jobs disappear, fewer fish get returned to the lake. Again, keep in mind that nearly all major industries are dependent on catching large numbers of moderately sized fish. Increasing inequality will result in a small number of very large fish, but from the point of view of most mass-market industries these aren’t worth a whole lot more than normal-sized fish. (The billionaire is not going to buy a thousand smart phones, cars, or restaurant meals.)

In the case of our consumer market, we don’t want to limit the number of virtual fish that businesses can catch. Instead, we want to make sure the fish get replenished. A guaranteed income is one very effective way to do this. The income gets purchasing power directly into the hands of lower- and middle-income consumers.
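
The fish-pond analogy can be made concrete with a toy simulation. The sketch below is not from the book; the pool size, catch rate, wage share, and rate of automation-driven decline are arbitrary illustrative assumptions. It simply shows how returning a steadily smaller share of revenue as wages drains the pool of purchasing power, and how a fixed transfer tops it back up.

# Toy model of the "lake of fish" analogy: firms "catch" revenue from a pool of
# consumer purchasing power and "toss back" part of it as wages. As automation
# erodes the wage share, the pool shrinks; a guaranteed income replenishes it.
# All parameters below are illustrative assumptions, not figures from the book.

def simulate(years=20, pool=100.0, catch_rate=0.5, wage_share=0.8,
             wage_share_decline=0.02, basic_income=0.0):
    """Return the size of the purchasing-power pool after each year."""
    history = []
    for _ in range(years):
        revenue = pool * catch_rate            # what businesses sell into the market
        wages = revenue * wage_share           # what flows back to consumers as pay
        pool = pool - revenue + wages + basic_income
        wage_share = max(0.0, wage_share - wage_share_decline)  # automation erodes the wage share
        history.append(round(pool, 1))
    return history

if __name__ == "__main__":
    print("No transfer:      ", simulate())
    print("With basic income:", simulate(basic_income=5.0))

Running the two cases side by side shows the pool shrinking year after year in the first and roughly stabilizing in the second, which is the whole point of the replenishment argument.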

A term often used in place of “guaranteed income” is “citizen’s dividend,” which I think effectively captures the argument that everyone should have at least a minimal claim on a nation’s overall economic prosperity.

In 1975, the University of Chicago economist Sam Peltzman published a study showing that regulations designed to improve automobile safety had failed to result in a significant reduction in highway fatalities. The reason, he argued, was that drivers simply compensated for the perceived increase in safety by taking more risks.

This “Peltzman effect” has since been demonstrated in a wide range of areas. Children’s playgrounds, for example, have become much safer. Steep slides and high climbing structures have been removed and cushioned surfaces have been installed. Yet, studies have shown that there has been no meaningful reduction in playground-related emergency room visits or broken bones.15 Other observers have noted the same phenomenon with respect to skydiving: the equipment has gotten dramatically better and safer, but the fatality rate remains roughly the same as skydivers compensate with riskier behavior.

The Peltzman effect is typically invoked by conservative economists in support of an argument against increased government regulation. However, I think there is every reason to believe that this risk compensation behavior extends into the economic arena. People who have a safety net will be willing to take on more economic risk. If you have a good idea for a new business, it seems very likely that you would be more willing to quit a secure job and make the leap into entrepreneurship if you knew you had access to a guaranteed income.

The state of Alaska has operated an oil-revenue-funded permanent fund since 1976 and has paid residents a modest annual dividend from it since 1982; in recent years, the payments have typically been between $1,000 and $2,000 per person. Both adults and children are eligible, so the amount can be significant for families.

One of the greatest political and psychological barriers to the implementation of a guaranteed income would be simple acceptance of the fact that some fraction of recipients will inevitably take the money and drop out of the workforce. Some people will choose to play video games all day—or, worse, spend the money on alcohol or drugs. Some recipients might pool their incomes, crowding into housing or perhaps even forming “slacker communes.” As long as the income is fairly minimal and the incentives are designed correctly, the percentage of people making such choices would likely be very low. In absolute numbers, however, they could be quite significant—and quite visible. All of this, of course, would be very hard to reconcile with the general narrative of the Protestant work ethic. Those opposed to the idea of a guaranteed income would likely have little trouble finding disturbing anecdotes that would serve to undermine public support for the policy.

While our value system is geared toward celebrating production, it’s important to keep in mind that consumption is also an essential economic function. The person who takes the income and drops out will become a paying customer for the hardworking entrepreneur who sets up a small business in the same neighborhood. And that businessperson will, of course, receive the same basic income.

A guaranteed income, unlike a job, would be mobile. Some people would be very likely to take their income and move away from expensive areas in search of a lower cost of living. There might be an influx of new residents into declining cities like Detroit. Others would choose to leave cities altogether. A basic income program might help revitalize many of the small towns and rural areas that are losing population because jobs have evaporated. Indeed, I think the potentially positive economic impact on rural areas might be one factor that could help make a guaranteed income policy attractive to conservatives in the United States.

If the United States were to give every adult between the ages of twenty-one and sixty-five, as well as those over sixty-five who are not receiving Social Security or a pension, an unconditional annual income of $10,000, the total cost would be somewhere in the vicinity of $2 trillion.

The total cost would then be offset by reducing or eliminating numerous federal and state anti-poverty programs, including food stamps, welfare, housing assistance, and the Earned Income Tax Credit. (The EITC is discussed in further detail below.) These programs add up to as much as $1 trillion per year.

In other words, a $10,000 annual basic income would probably require around $1 trillion in new revenue, or perhaps significantly less if we instead chose some type of guaranteed minimum income. That number would be further reduced, however, by increased tax revenues resulting from the plan. The basic income itself would be taxable, and it would likely push many households out of Mitt Romney’s infamous “47 percent” (the fraction of the population who currently pay no federal income tax). Most lower-income households would spend most of their basic income, and that would result directly in more taxable economic activity. Given that advancing technology is likely to drive us toward higher levels of inequality while undermining broad-based consumption, a guaranteed income might well result in a significantly higher rate of economic growth over the long run—and that, of course, would mean much higher tax revenue.
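
To make the arithmetic in the preceding passages easy to check, here is a minimal back-of-envelope sketch. The eligible-population figure and the tax-recapture rate are assumptions of mine chosen to be roughly consistent with the totals quoted above, not numbers from the book.

# Back-of-envelope sketch of the basic income cost arithmetic described above.
# Population and recapture figures are rough assumptions consistent with the text
# ("somewhere in the vicinity of $2 trillion"; offsets of "as much as $1 trillion").

ELIGIBLE_ADULTS = 200e6    # assumed: adults 21-65 plus seniors without Social Security or pensions
BASIC_INCOME = 10_000      # annual payment per person, from the text

gross_cost = ELIGIBLE_ADULTS * BASIC_INCOME            # ~$2.0 trillion
program_offsets = 1.0e12   # food stamps, welfare, housing aid, EITC (text: "as much as $1 trillion")
net_before_tax_effects = gross_cost - program_offsets  # ~$1.0 trillion in new revenue needed

# The text also notes clawbacks: the income itself is taxable, and recipients'
# spending generates additional taxable activity. Treat these as a rough fraction.
assumed_recapture_rate = 0.15   # purely illustrative
net_cost = net_before_tax_effects * (1 - assumed_recapture_rate)

print(f"Gross cost:        ${gross_cost / 1e12:.1f} trillion")
print(f"After offsets:     ${net_before_tax_effects / 1e12:.1f} trillion")
print(f"After tax effects: ${net_cost / 1e12:.2f} trillion (assuming {assumed_recapture_rate:.0%} recapture)")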

It seems inevitable that personal income taxes would also have to increase, and one of the best ways to do this is to make the system more progressive. One of the implications of increasing inequality is that ever more taxable income is rising to the very top. Our taxation scheme should be restructured to mirror the income distribution. Rather than simply raising taxes across the board or on the highest existing tax bracket, a better strategy would be to introduce several new higher tax brackets designed to capture more revenue from those taxpayers with very high incomes—perhaps a million or more dollars per year.

While the establishment of a guaranteed income will probably remain politically unfeasible for the foreseeable future, there are a number of other things that might prove helpful in the nearer term.

Foremost among these policies is the critical need for the United States to invest in public infrastructure. There is an enormous pent-up requirement to repair and refurbish things like roads, bridges, schools, and airports. This maintenance will have to be performed eventually; there is no getting around it, and the longer we wait the more it will ultimately cost. The federal government can currently borrow money at interest rates remarkably close to zero, while unemployment among construction workers remains at double-digit rates.

We eventually will have to move away from the idea that workers support retirees and pay for social programs, and instead adopt the premise that our overall economy supports these things. Economic growth, after all, has significantly outpaced the rate at which new jobs have been created and wages have been rising.

The political environment in the United States has become so toxic and divisive that agreement on even the most conventional economic policies seems virtually impossible. Given this, it’s easy to dismiss any talk of more radical interventions like a guaranteed income as completely pointless. There is an understandable temptation to focus exclusively on smaller, possibly more feasible, policies that might nibble at the margins of our problems, while leaving any discussion of the larger challenges for some indeterminate point in the future. This is dangerous because we are now so far along on the arc of information technology’s progress. We are getting onto the steep part of the exponential curve. Things will move faster, and the future may arrive long before we are ready.

The premise that even modestly higher marginal tax rates on top incomes will somehow destroy the impetus for entrepreneurship and investment is simply unsupportable. The fact that both Apple and Microsoft were founded in the mid-1970s—a period when the top tax bracket stood at 70 percent—offers pretty good evidence that entrepreneurs don’t spend a lot of time worrying about top tax rates. Likewise, at the bottom, the motivation to work certainly matters, but in a country as wealthy as the United States, perhaps that incentive does not need to be so extreme as to elicit the specters of homelessness and destitution. Our fear that we will end up with too many people riding in the economic wagon, and too few pulling it, ought to be reassessed as machines prove increasingly capable of doing the pulling.

In May 2014, payroll employment in the United States finally returned to its pre-recession peak, bringing to an end an epic jobless recovery that spanned more than six years. Even as total employment recovered, however, there was general agreement that the quality of those jobs was significantly diminished. The crisis had wiped out millions of middle-class jobs, while the positions created over the course of the recovery were disproportionately in low-wage service industries. A great many were in fast food and retail occupations—areas that, as we have seen, seem very likely to eventually be impacted by advances in robotics and self-service automation. Both long-term unemployment and the number of people unable to find full-time work remain at elevated levels.

Lurking behind the headline employment figure was another number that carried with it an ominous warning for the future. In the years since the onset of the financial crisis, the population of working-age adults in the United States had increased by about 15 million people.19 For all those millions of entrants into the workforce, the economy had created no new opportunities at all. As John Kennedy said, “To even stand still we have to move very fast.” That was possible in 1963. In our time, it may ultimately prove unachievable.

In 1998, workers in the US business sector put in a total of 194 billion hours of labor. A decade and a half later, in 2013, the value of the goods and services produced by American businesses had grown by about $3.5 trillion after adjusting for inflation—a 42 percent increase in output. The total amount of human labor required to accomplish that was . . . 194 billion hours. Shawn Sprague, the BLS economist who prepared the report, noted that “this means that there was ultimately no growth at all in the number of hours worked over this 15-year period, despite the fact that the US population gained over 40 million people during that time, and despite the fact that there were thousands of new businesses established during that time.”
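
The implication of those BLS figures is straightforward arithmetic: with total hours flat, all of the 42 percent growth in output shows up as growth in output per hour. A quick sketch using only the numbers quoted above; the 1998 output level is simply backed out from the stated $3.5 trillion (42 percent) increase.

# Quick check of the arithmetic implied by the BLS figures quoted above.
hours_1998 = 194e9        # total US business-sector labor hours, 1998
hours_2013 = 194e9        # unchanged fifteen years later
output_growth = 0.42      # 42 percent increase in real output, per the text
output_increase = 3.5e12  # ~$3.5 trillion increase (inflation-adjusted)

implied_1998_output = output_increase / output_growth                # ~$8.3 trillion
productivity_growth = (1 + output_growth) * hours_1998 / hours_2013 - 1

print(f"Implied 1998 business-sector output: ~${implied_1998_output / 1e12:.1f} trillion")
print(f"Growth in output per hour worked:    {productivity_growth:.0%}")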