Talking with Meredith Broussard about ‘Artificial Unintelligence’

Science-fiction writer Arthur C. Clarke famously stated 45 years ago that “any sufficiently advanced technology is indistinguishable from magic.” Today, advanced technology is commonplace. But while some tech innovations might appear as inscrutable as magic, they tend not to work nearly as well, says Meredith Broussard, whose new book, “Artificial Unintelligence: How Computers Misunderstand the World” (MIT Press, $24.95), warns against the blind optimism toward technology — an attitude she calls “technochauvinism” — that she argues has dominated our culture for far too long.

Broussard, an assistant professor at New York University’s Arthur L. Carter Journalism Institute, began her career as a software developer at AT&T Bell Labs before transitioning to data journalism. “Journalists are taught to be skeptical,” she writes in the book’s introduction. “I started to question the promises of tech culture.”

In the aftermath of the Facebook-Cambridge Analytica scandal and the first pedestrian death caused by a self-driving car, this line of questioning, it seems, is now on everybody’s mind.

I spoke with Broussard in New York City. Our conversation has been edited.


How did you come up with the term “technochauvinism”?

I needed a term to encompass the kind of bias that says the technological solution is always better than every other solution. What I’ve realized after being a software developer and being in the high-tech world for many years is that we can’t automatically assume the technological solution is the best one. There’s more nuance.

For example, when you’re teaching a kid to read, a board book is very helpful, because kids don’t have the manual dexterity to use a mouse until elementary school. In order to use a mouse, you need to be able to tell your right from your left, which is something kids are not great at. A board book satisfies the need very well. It also never runs out of power. And you can drool on it.

There wasn’t a word that captured this particular kind of bias, so I invented one.

Does the word have a gendered connotation?

Absolutely, yes, there is a gender bias aspect to technochauvinism. As a woman of color in the high-tech industry, I had a very different experience than my white male colleagues. I tend to see different things. I’m working in a small but growing tradition of women of color critically examining technological systems. In the book, I talk about Latanya Sweeney, a Harvard professor who pointed out the racial bias in online ads. If you had a black-sounding name, you were getting ads and search results for things like bail bonds, whereas if you had a name like Christine, you were getting something neutral. Latanya Sweeney noticed this because her name is Latanya. It’s not Christine.

In the chapter about schools, you write about gaining appreciation for the Gates Foundation’s support for the Common Core State Standards. But you go on to cite the controversy over Common Core as “an example of what happens when engineering solutions are applied to social problems.” Was this an example of technochauvinism?

The whole Common Core situation is a mess for a lot of complicated reasons, not just that there was an element of technochauvinism in some of the microdecisions. When you read the standards, they’re absolutely impenetrable. I’m a college professor, and I can’t really understand what these standards mean. Things that are written to be impenetrable are not actually useful for teachers in the classroom. If I can’t understand it, there are a lot of other people out there who also can’t understand it and are probably not going on the record to say, “I can’t understand this.” If you’re a classroom teacher, and you don’t understand the Common Core standards, who are you going to talk to? Your principal is not going to be able to change anything. You’re trapped. It’s a Kafkaesque situation.

So I don’t know that the Common Core initiative is an example of technochauvinism. I do think there’s a lot of technochauvinism in education. Something like One Laptop per Child — that initiative is definitely technochauvinism. And every effort to give all the students laptops or iPads and replace something — that is usually technochauvinism.

It’s a great idea to make sure that all of your students have the same technology and that all your students have technology to use in school and at home. I’m a big admirer of the CSforALL [Computer Science for All] initiative. So I am pro-technology, and I am pro-technology in schools. But I do not think that portable technology is a total replacement for anything else. It’s an add-on, not a substitute.

Why specifically is One Laptop per Child technochauvinism?

In the One Laptop per Child initiative, they didn’t think about all of the things that go along with implementing the technology. In order to have one laptop per child, you need stuff like electricity. They found out very quickly that in rural schools, kids don’t necessarily have electricity at home. They dealt with that challenge by creating laptops you could charge with a crank. Great. But there were other challenges, like teachers who weren’t sure how to integrate computer-based education into their lessons. Either there weren’t lesson plans, or the One Laptop per Child people were not shipping lesson plans in local languages. They were assuming that everybody would just take a laptop and figure it out. So what happened? Well, the same thing that always happens. People used the laptops to watch movies, watch TV and watch porn.

There also wasn’t enough individualized tech support, and there’s never enough Wi-Fi in schools. American school buildings tend to be these massive cinder block things. In newer schools that have wallboard, it’s easier for the signal to get through. But if it’s a cinder block wall, it’s harder. You also need to have a lot of tech support, so that when a student’s computer goes down in the middle of class, there is somebody who will come over immediately to fix the problem, so that the whole lesson doesn’t come crashing to a halt. And teachers are not trained in tech support.

All of these logistical issues that technochauvinists did not think through contributed to the failure of One Laptop per Child.

In the book, you say that data journalists are often the ones who do “algorithmic accountability” reporting: stories that investigate the social consequences of using technology to make decisions. Who do you think is doing particularly good work?

Julia Angwin [formerly of ProPublica] is an inspiration. Michael Keller is leading an algorithmic investigations team at the New York Times now. I’m excited to see what they come up with. The Follower Factory project was a great project led by Mark Hansen, who runs the Brown Institute for Media Innovation. That was a collaborative project that came out of Mark Hansen’s computational journalism class. ICIJ [the International Consortium of Investigative Journalists] had the Panama Papers and the Paradise Papers.

Do you think it will become more of what journalists do in this day and age?

I hope so. [Note: A day after this interview, Angwin announced she was starting a news organization focused on tech investigations with her ProPublica colleague Jeff Larson. “If ever there was a time when journalists needed to work harder to analyze the impacts of technology on society, this is it,” she tweeted.]

One of the things about data journalism is that it’s actually far more expensive and time-consuming than anybody expects. Doing a very high-level computational investigation takes 10 times as long as a regular investigation, and it takes far more people. If you look at something like the Follower Factory project [at the New York Times] or one of Julia Angwin’s projects, there are a lot of people who work on those investigations. A lot. Upward of a dozen people even. These types of investigations are expensive and time-consuming. They’re extremely valuable, and they’re high-impact. But they’re not easy to do.

In “Democracy’s Detectives,” James T. Hamilton lays out the economic case for data journalism. He also looks at what these kinds of projects cost. It could be upward of a million dollars.

If you look at the starting salary for AI researchers, it’s around $300,000 [a year]. The starting salary. I don’t know if there are any journalists making that much money. There’s a huge war for AI talent. News organizations are simply not going to pay that. So it’s going to be really hard for journalism organizations to compete or to catch up.

You write that “Computer systems are proxies for the people who made them.” A lot of these people, you say, were naive about how their technological creations would be used. Like Mark Zuckerberg, who told the New York Times he never imagined he’d be dealing with campaigns interfering in national elections when he first started Facebook in his Harvard dorm room.

I mean, he should have imagined it. I went through the same education, and I certainly thought of it. When you have a system that is based on popularity like Facebook, of course you’re going to have people who cheat. And they’re going to cheat in every conceivable way. People are cheating Facebook and Twitter and Amazon and what have you in ways that we have not even thought of yet. But the people who made these systems fundamentally don’t really care. That’s a problem.

Are you referencing the Facebook memo when you say they don’t care?

Oh, the Boz thing. [In the leaked internal message, Facebook Vice President Andrew “Boz” Bosworth provocatively defended the company’s growth, no matter the cost.] That goes back to the political beliefs of the people who made these systems. There’s a techno-libertarian thing that is really popular in Silicon Valley that has become the default political position [in tech]. That political position derives from the communalists [who advocated living on communes]. When the communes failed, they thought, “We have this new thing called cyberspace, and we’re going to take the same ideology and do that in cyberspace.” It didn’t work there, either. It also willfully ignores the rule of law.

What is the role of politicians in all of this?

I think you have to look at the people who are making these systems: their total disdain for the rule of law and their aversion to regulation. I think at this point, we do need more regulation. The whole self-regulation thing has not worked out very well. We’ve tried it. We tried to run cyberspace like a commune. And it failed just as badly as the communes did.

Zhang is a writer and data analyst in New York City.