Op-Ed: We’re being stigmatized by ‘big data’ scores we don’t even know about

The White House recently announced two major privacy initiatives. A proposed Personal Data Notification and Protection Act would force companies to notify customers about data breaches. A Student Data Privacy Act would, if passed, prevent a growing ed-tech sector from using student data for ads.

These are commendable initiatives. Corporations shouldn’t be able to hide their data security failures. Students’ privacy is important, too: They are particularly vulnerable to slick marketing efforts.

But it’s time for policymakers to aim higher. “Big data” creates problems that go well beyond traditional privacy concerns. For example, colleges are now using data to warn professors about at-risk students. Some students arrive in the classroom with a “red light” designation — which they don’t know about, and which is based on calculations they can’t access.

A better student privacy act would focus not only on uses of data outside the education sector, but inside it as well. Students should not be ranked and rated by mysterious computer formulas. They should know when they’ve been marked for special treatment. Algorithms — step-by-step patterns of computation that can sometimes include thousands of variables — can enormously advance our understanding. But when they are deployed to judge people, those affected need a chance to understand exactly what is going on.

It’s not just students who are affected by big data and the algorithms powered by it. Employers love new evaluative software, too. “Sociometric badges” can now monitor an employee’s every conversation and combine that information with performance data. Cutting-edge human resources departments are sorting people the same way colleges are: red (poor candidate), yellow (middling) or green (hire away).

Even more worrying, these evaluations may be based on data from well outside the workplace. There are now thousands of scoring services available for businesses to tap into — and very little regulation, transparency or quality control. Use “inappropriate language” on social media? You may be blackballed at promotion time.

Data-driven decision making is usually framed as a way of rewarding high performers and shaming shirkers. But it’s not so simple. Most of us don’t know that we’re being profiled, or, if we do, how the profiling works. We can’t anticipate, for instance, when an apparently innocuous action — such as joining the wrong group on Facebook — will trigger a red flag on some background checker that renders us in effect unemployable. We’ll likely never know what that action was, either, because we aren’t allowed to see our records.

It’s only complaints, investigations and leaks that give us occasional peeks into these black boxes of data mining. But what has emerged is terrifying. Data brokers can use public records — of marriage, divorce, home purchases or voting — to draw inferences about any of us. And they can sell their conclusions to anyone who wants them.

Naturally, just as we’ve lost control of data, a plethora of new services are offering “credit repair” and “reputation optimization.” But can they really help? Credit scoring algorithms are secret, so it’s hard to know whether today’s “fix” will be tomorrow’s total fail. And no private company can save us from the thousands of other firms intent on mashing up whatever data is at hand to score and pigeonhole us. New approaches are needed.

What might those look like? I take some inspiration from a Virginia law that bars auto insurers from requiring their customers to release event-recorder data from their cars, or from raising their rates if they refuse. That is forward-thinking regulation, getting ahead of algorithmic monitoring rather than belatedly reacting to it. Some states have banned employers from demanding their employees’ Facebook passwords. They could go further, requiring employers to share with applicants and current workers the types of outside intelligence they use when making decisions about them.

In general, we need what technology law and policy researcher Meg Leta Jones calls “fair automation practices” to complement the “fair data practices” that President Obama is proposing. We can’t hope to prevent the collection or creation of inappropriate or inaccurate databases. But we can ensure that the use of that data by employers, insurers and other decision makers is made clear to us when we are affected by it. Without such notification, we may be stigmatized by secret digital judgments.

Law can prevent the worst uses of data as well. Even if new federal legislation is hard to imagine in an era of congressional gridlock, states have the authority to regulate the terms of hiring, employment and insurance. They ought to step in and strictly limit the use of black-box algorithms in those contexts.

We need the right to keep certain intrusive, powerful players — be they our bosses, banks or insurers — from monitoring every aspect of our lives, or basing decisions on unvetted or secret sources. That is reputation protection we can count on. Without it, we’re consigned to a black-box society — one in which we are monitored ever more closely but have no chance of inspecting, challenging or correcting the supposed data that is collected.

Frank Pasquale is a professor of law at the University of Maryland and author of “The Black Box Society: The Secret Algorithms That Control Money and Information.”
