This coming year may finally mark a turning point in protecting people’s privacy, and you have Facebook to thank for that.
Not because the social media giant is leading the way to much-needed safeguards.
Rather, because the company, after repeated security lapses and scandals, has utterly failed in its self-declared mission of doing right by users — and lawmakers at last are signaling a willingness to act.
“Facebook’s conduct and the response to it this year provided a stark example of a deep disconnect,” said Andrea Matwyshyn, co-director of Northeastern University’s Center for Law, Innovation and Creativity.
She told me data-heavy companies such as Facebook see themselves as little more than neutral aggregators of information with limited responsibility for security.
Consumers, meanwhile, view them as “failing to live up to basic ethical duties as stewards of the information they hold, repeatedly and callously violating privacy and trust in the name of short-term corporate profits,” Matwyshyn said.
Facebook has been in full damage-control mode since the New York Times reported recently that the company shared vast amounts of user data with business partners. It even gave some of those partners access to people’s private messages without users’ knowledge or consent, the report said.
Facebook’s troubles in 2018 were the highest-profile privacy violations in a year that saw millions of Americans once again exposed to fraud, identity theft and other hazards as a result of corporate America’s profit-minded reluctance to take better care of people’s personal information.
Other noteworthy security breaches included the Marriott hotel chain, which this month said information about 500 million customers was exposed, as well as digital break-ins at Google, T-Mobile, Saks Fifth Avenue, Under Armour, Panera Bread and Orbitz.
Ironically, the year began with Facebook Chief Executive Mark Zuckerberg declaring that his company “has a lot of work to do” to improve the site’s integrity. He was referring to Facebook’s growing role as a venue for hate, misinformation and political manipulation.
“We won’t prevent all mistakes or abuse, but we currently make too many errors enforcing our policies and preventing misuse of our tools,” Zuckerberg wrote in January. “If we’re successful this year then we’ll end 2018 on a much better trajectory.”
In March came the Cambridge Analytica mess. We learned that Facebook had made the personal data of 87 million users available, without permission, to a researcher who used the information for political purposes.
In September, the company revealed that hackers had made off with information on 29 million Facebook users, including birth dates and phone numbers.
It was reported in November that Zuckerberg and Facebook Chief Operating Officer Sheryl Sandberg had mapped out an aggressive plan to manage PR crises through deception, misdirection and attacks on critics.
This month, Facebook said the private photos of nearly 7 million users were improperly shared with third-party app developers.
It’s like “The Neverending Story” with these guys.
Businesses that treat people’s personal information as a commodity have long had a credibility problem.
On the one hand, they loudly insist that privacy is a paramount concern. On the other, they do everything possible to exploit the contents of their databases for financial gain. And they typically fail to adequately secure their data, such as by encrypting information under their control.
According to the Privacy Rights Clearinghouse, a San Diego advocacy group, more than 11.5 billion records have been accessed by hackers in nearly 9,000 known security breaches since 2005.
In October, representatives from Amazon, Apple, AT&T, Charter Communications, Google and Twitter appeared before the Senate Commerce Committee to say they support the idea of a federal privacy law that would protect users.
Google’s chief executive, Sundar Pichai, reiterated that stance when he appeared before lawmakers a couple of weeks ago.
It’s a smoke screen.
What the tech and telecom giants are really after is a watered-down federal law that would preempt tougher state laws, such as the one that takes effect in California at the beginning of 2020.
The California Consumer Privacy Act — AB 375 — will allow state residents to find out what kinds of information a business has collected and what it plans to do with it. It also allows consumers to request that a company delete any personal information it holds, and to opt out of the sale of such information.
The teeth: California’s law gives people the right to sue companies that fail to maintain reasonable security practices and suffer data breaches as a result.
Big Data wants nothing to do with any of these provisions, and that’s why companies are calling for a federal law that stops well short of the California statute.
“Any preemption of state consumer privacy laws by Congress could hurt Californians in particular, because our laws are stronger here than in most other states, so we hope the leadership from California will protect our state’s interests,” said Emily Rusch, executive director of the California Public Interest Research Group.
In fact, lawmakers should explicitly declare in any federal privacy legislation that states are free to enact stricter measures. If companies are as committed to protecting people’s information as they say, what do they have to fear?
That question applies as well to legislation that Sen. Ron Wyden (D-Ore.) plans to introduce early in the new year. His Consumer Data Protection Act would impose $5-million fines and up to 20 years in prison for executives who knowingly mislead federal authorities about their security efforts.
It also would strengthen the Federal Trade Commission’s ability to crack down on privacy violations and give consumers more power to control how their personal information is used.
“Big companies are vacuuming up people’s personal information, just scooping it up,” Wyden told me after recently releasing a draft of his bill. “Everything you read, everywhere you go, everything you buy is sucked up in a corporation’s database.”
He’s proposing that any company with revenue topping $1 billion a year, or that stores data on more than 50 million consumers or consumer devices, would have to submit an annual “data protection report” to the FTC detailing all activities related to keeping people’s data under wraps.
A 4%-of-revenue penalty would be imposed on companies found to have deliberately misled the FTC in the report. Scofflaw executives would be slapped with separate $5-million fines.
If an executive enjoys a particularly fat compensation package, the bill says, the $5-million fine would be replaced with 25% of “the largest amount of annual compensation the person received during the previous three-year period.”
Hefty fines and prison sentences would go a long way toward encouraging business leaders to take privacy more seriously. Most experts, however, say such draconian measures have no chance of becoming law, especially with the Senate and White House in business-friendly Republican hands.
Still, the mere fact such a bill could be introduced underlines the severity of the problem — and a willingness among some lawmakers to finally draw the line on privacy violations.
It also shows a growing impatience with Big Data leaders who seem to think words speak louder than actions — and who have little accountability to corporate overseers.
Facebook’s Zuckerberg, for example, is both CEO and chairman of his company, as well as its most powerful shareholder. A tiered stock system grants his shares 10 votes for every one by a typical shareholder.
He controls 60% of Facebook shares’ voting power — so he wins every vote, every time.
After the Cambridge Analytica fiasco came to light, Zuckerberg brazenly announced that “we have a responsibility to protect your data, and if we can’t then we don’t deserve to serve you.”
Millions of hacked Facebook accounts later, he still does, and he apparently can’t, and they don’t.