Editorial: Facebook says it’s open to some privacy regulation. Here’s where to start
The clearest lesson from Facebook Chief Executive Mark Zuckerberg’s 10 hours of testimony on Capitol Hill this week is that members of Congress don’t have much of a grasp on what the privacy problems are online, let alone how to fix them. The recent revelations about Facebook and Cambridge Analytica, however, should make at least one thing abundantly clear to lawmakers: Consumers need more control over how their personal information is used and shared online.
Making his first appearances before Congress, Zuckerberg testified at three Senate and House committee hearings about a variety of hot topics, from Russian influence on the 2016 presidential election to alleged partisan bias by Facebook’s content moderators. He probably would not have made the trip, though, if not for the news that the political data firm Cambridge Analytica had improperly obtained information about 87 million Facebook users from the developer of a popular Facebook app.
Facebook learned about the unauthorized disclosure late in 2015 but did not tell its users about it until this year. That’s problem No. 1. Congress has dragged its feet for years on legislation to compel timely disclosures of data leaks, effectively preventing the market (that is, consumers) from punishing companies that don’t safeguard the personal information they collect. Mandating those disclosures is long overdue.
More broadly, the incident showed how seemingly innocuous personal data could be used in unexpected and non-innocuous ways. Cambridge Analytica developed “psychographic” profiles of millions of individuals, by name, in an effort to help its clients — including Donald Trump’s presidential campaign — sway the results of elections. The company sought to take advantage of users’ susceptibilities to craft political messages and influence their votes.
It’s safe to say that few of the 87 million people whose data Cambridge Analytica obtained understood that using Facebook to share anecdotes, messages, photos and news items with friends would shape how a political candidate would try to win their support. That’s problem No. 2. Sites and services online will tell you — often in tiny type and legalistic language — how they plan to use your data themselves. And if they intend to share or sell your data, they’ll generally tell you that too. But they won’t tell you how those other sites and services will use your information, which means you have no control over it.
Facebook actually may pose somewhat less of a privacy threat than other online companies, given that it insists that it doesn’t sell its users’ data. Instead, it sells advertising space, and it uses what it knows about you to target advertisers’ messages for a fee. More problematic are the data brokers that gather data, then sell it to others to use for … whatever. Ultimately, the threads of information captured about us on sites across the internet get woven into remarkably complete profiles, which can be used to shape what we see online, the prices we pay, the secrets we reveal, the opportunities offered to or withheld from us — in short, as Nuala O’Connor of the Center for Democracy and Technology put it, “our very place in the world.”
Some brokers build their databases with data collected by other companies. Others, such as Evite, a site for sending invitations online to events, and Edmunds, a car review site, sell data collected from their own visitors to marketers for use elsewhere — a twist that few of those sending invites to their kids’ birthday parties or shopping for used cars may fully appreciate.
Even Facebook, though, assembles and digests information about internet users in unexpected ways. Like Google, it spreads a data dragnet far beyond its own site and apps such as Instagram and Messenger, collecting personal data even from people who aren’t on its social network. And until recently, it acquired data from other companies to make the profiles it offered to advertisers even more precise — intrusively so.
Federal law offers protections for medical data collected by doctors and hospitals, and for financial data and information collected from minors. But those safeguards have truck-sized loopholes. For example, the medical information you might enter into a health-focused website isn’t covered by those privacy protections because the law doesn’t apply to such sites. Nor does the prohibition on collecting data from minors stop sites from sharing some types of data about minors that they collect from adults.
What’s missing is a baseline set of rules to ensure that all online sites, services and apps reveal what they’re collecting and why, and give people a meaningful say over whether and with whom their data is shared — in plain English. That sort of transparency and control isn’t a threat to the advertiser-supported internet; users will no doubt continue to trade away some element of their privacy for content and services they value. But that exchange should be fair and fully informed, rather than simply an engine for the multibillion-dollar data-brokering business.
Concerns about privacy cross the political spectrum, and the Cambridge Analytica scandal presents Congress with an opportunity to act. If it doesn’t, it can expect to see more states take up measures like the Consumer Right to Privacy Act, a measure that proponents are trying to put on the California ballot in November. Unless Congress wants a patchwork quilt of privacy laws for the internet, it needs to get moving.