Data are the most important resource of the 21st century. The capacity to transform raw data into decision-making recommendations is changing the world in ways that rival the Industrial Revolution.
Much of the data being created and shared are about our personal lives: where we live, where we work, where we go; who we love, who we don’t and who we spend our time with; what we ate for lunch, how much we exercise and which medicines we take; what appliances we use in our homes and which stories grab our attention.
And the companies that collect and analyze these data are generating billions in revenue. According to a report by Oracle and the MIT Technology Review, the most successful data companies, including Amazon, Google and Uber, treat data as an asset — something that can be bartered, sold and monetized.
If companies are banking on your personal data for their business, shouldn’t you get a piece of their profits? To many observers, that seems only fair.
Microsoft Research scientist Jaron Lanier and others argue that users should receive “nanopayments” for the data they create. The technology critic Jathan Sadowski has even said that a lot of data collection is a “form of theft,” an appropriation “without consent and compensation.”
But it’s pointless to chase nanopayments, and not just because the big technology companies might not like the concept.
Many of the products and services provided by “data refineries” — including Amazon’s recommendations, Google’s traffic predictions and Uber’s surge pricing — depend on information from millions of individuals. One person’s contribution isn’t worth much. If, for instance, Facebook divvied up every cent of its profits among its users, each would receive about $5 for 2016.
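That per-user figure is simple back-of-envelope arithmetic. A minimal sketch, assuming Facebook's approximate publicly reported 2016 numbers (roughly $10 billion in net income and roughly 1.9 billion monthly active users; both are approximations used only for illustration):

```python
# Back-of-envelope estimate: Facebook's 2016 profit divided evenly among users.
# Both figures are approximate public numbers, used only for illustration.
net_income_2016 = 10.2e9      # ~$10.2 billion net income (approximate)
monthly_users_2016 = 1.86e9   # ~1.86 billion monthly active users (approximate)

payout_per_user = net_income_2016 / monthly_users_2016
print(f"${payout_per_user:.2f} per user")
```

Even if the inputs shift by a few hundred million dollars or users, the result stays in the neighborhood of five dollars per person per year.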
Was having unlimited access to a communication and networking platform — one that is constantly evolving, with new features including livestreaming and encryption in Messenger — worth more to you than a venti caramel macchiato? If so, you’ve already been “paid” for your data.
Of course, the nanopayments crowd is quick to point out that not all users are equal; some people create more valuable content than others. If only the truly deserving received payment, that might add up to more sizable figures for them.
The question then becomes: Who’s truly deserving?
Let’s say you take a photo of a group of friends at a birthday party and post it on Facebook. Many people in your social network then repost it. The value of the post to Facebook comes from the traffic that the photo inspires as well as the data about relationships and interests embedded in people’s interactions with it. Should you alone get a kickback from Facebook? Or should you split it with everyone tagged in the photo? How about with those who comment, tag or like it? How much does a comment or like depreciate in value over time?
Not only would it be costly to implement algorithms to make these calculations, but it's also often unclear what would be fair.
Besides, focusing on the value of our data ignores the benefits we receive. I know a man — let’s call him Joe — who decided to try out Facebook but didn’t want to share data about himself. He signed up under a false name and didn’t make friends. Unsurprisingly, Joe didn’t find anything very useful on Facebook. Because there were no data for Facebook to analyze, its algorithms couldn’t deliver a personalized News Feed.
The value of data comes from improving the decisions we make. You have to give data to get this benefit.
But while Joe missed out on the joys of Facebook, Facebook missed out on essentially nothing. Since most people are not like Joe — most people share their data — Facebook has plenty of information to work with as it refines its services. Again, any one individual’s data have a dollar value of close to zero.
Rather than focusing on how much our information is worth and asking for a paltry financial handout in return, we should demand something far more valuable: the right to experiment with our data and the settings that determine what data refineries show us. This “seat at the controls” will ensure that we can make the best decisions for ourselves, while still benefiting from these companies’ recommendations.
Andreas Weigend was the chief scientist at Amazon. He is now a lecturer at UC Berkeley and the director of the Social Data Lab. He is the author of “Data for the People.”