Just a year ago, proposing a concept like universal basic income could practically get me laughed off the stage at a tech industry conference. The idea that everyone should be guaranteed a minimum subsidy from the government seemed to go against every fundamental tenet of creative destruction: Don't reward the obsolete! Force people to evolve! If workers lose their jobs to automation, retrain them for new ones!
From the perspective of Silicon Valley's executives, only a hippie or communist would suggest that people be given a livable wage simply for being alive. But to me, having just published a book about the lopsided returns of the digital economy, universal basic income seemed an obvious solution to a problem first posed in the 1950s by the inventor of cybernetics, Norbert Wiener: What would happen when robots could till the fields, rendering human labor obsolete? Would humans seize the opportunity to lie down in beach chairs and sip lemonade? Or would our economy be thrown into chaos, with humans perpetually competing for work against their tireless mechanical peers?
In a highly automated environment, a guaranteed minimum income for basics like food, housing and healthcare would provide for those incapable of finding jobs. What's more, study after study has shown that a universal basic income doesn't lead to laziness. Rather, the financial security it affords leads people to take greater creative and entrepreneurial risks.
So I should have been glad last spring when the developers at Uber began to ask me about universal basic income, or UBI. I had just delivered a talk in which I blamed the company for extracting all the value out of the taxicab market with no real intent of making it sustainable. In my view, they are using the cab market as a beachhead in a much larger bid to monopolize the transportation industry, just as Amazon used books as a foothold into retail with little regard for the effect on authors and publishers. To my surprise, these developers acknowledged the deleterious effects of their company — then raised UBI as a possible solution. "Wouldn't that let us keep going?" one employee asked.
I've since learned from similar audiences at Facebook and Google that many of the workers and leaders at Silicon Valley's biggest firms have jumped aboard the UBI bandwagon, and with equally self-serving ambitions. By which I mean, they understand the basic math undermining their long-term business plans: If they automate all the jobs, who will be left to buy their services? Even the data that companies such as Google mine from our otherwise free online activities would be worthless if we had no money to spend. The penniless have no consumer behavior to exploit.
While it's gratifying to hear a multibillionaire like Facebook founder Mark Zuckerberg echo the words in my books as he calls on Harvard's graduating class to explore UBI strategies, in light of the rest of Facebook's priorities and behavior, his suggestion comes off as utterly clueless, and more than a little late. Much like his vow to donate 99% of his shares to charity, Zuckerberg's interest in UBI seems less the result of a comprehensive economic vision than a guilt-inspired effort to compensate for the social impact of his business. (If you have to donate 99% of your winnings, perhaps you took too much to begin with?)
I'd have an easier time accepting Zuckerberg's proposal at face value if his company weren't trying so hard to avoid paying taxes on its massive profits. Where is UBI supposed to come from, after all, if not the profits that Silicon Valley companies have made by cutting out human labor in the first place?
Likewise, the Uber employees I recently spoke with sounded concerned about the many drivers they hoped to replace with robots. They were aware, at least, of the irony of Uber drivers being used to train the very algorithms that would soon drive cars without them, and hoped that UBI could somehow solve the joblessness problem they were creating.
But underlying these second thoughts and compensatory strategies remains a short-sighted faith in the inevitability of technology's conquest over mankind. By focusing on the efficiency of code and algorithms, the technophiles have engendered a business culture that values speed over all else. It's only in this kind of environment that robots win out over humans.
Instead of exploring ways that digital technology might allow for a more thoughtful application of labor and resources, we are doubling down on the industrial values of growth and efficiency. These values have always depended on externalizing the true costs of productivity. Computers may be less expensive and more efficient than humans, but that's only because the materials for their components are mined in Africa, assembled in China, and later disposed of in toxic waste dumps in impoverished countries.
Industrial efficiency has never been good for working people, nor is it the best North Star for a human society and economy transitioning from an industrial age to a digital one. We need not accept massive inequality as an inevitable outcome and compensate for it after the fact. We should optimize our digital technologies toward human ends, rather than the end of humans.
Douglas Rushkoff is the author of 15 books, most recently "Throwing Rocks at the Google Bus."