Can racist tweets help predict hate crimes? L.A. is about to find out

Can police prevent hate crimes by monitoring racist banter on social media?

Researchers will be testing this concept over the next three years in Los Angeles, marking a new frontier in efforts by law enforcement to predict and prevent crimes.

In the experiment, British researchers working with the Santa Monica-based Rand Corp. will monitor millions of tweets related to the L.A. area, looking for patterns and markers that indicate, in real time, that prejudice-motivated violence is about to occur.

The researchers will then compare the data against records of reported violent acts. The U.S. Department of Justice is investing $600,000 in the research by Cardiff University's Social Data Science Lab, which has been at the forefront of predictive social media models.

Cardiff University professor Matthew Williams said the research is designed to eventually enable authorities to predict when and where hate crime is likely to occur and deploy law enforcement resources to prevent it.

“The insights provided by our work will help U.S. localities to design policies to address specific hate crime issues unique to their jurisdiction and allow service providers to tailor their services to the needs of victims, especially if those victims are members of an emerging category of hate crime targets,” he said.

His lab’s previous research in the United Kingdom found that Twitter data can be used to identify areas where hate speech is occurring but where no hate crimes have been committed. This can be useful, researchers said, in neighborhoods with many new immigrants, who are unlikely to report the crime because of fear of deportation.

In 2012, an estimated 293,800 nonfatal violent and property hate crimes occurred in the United States, according to the Bureau of Justice Statistics. About 60% of those were not reported, the Justice Department found.

Of course, there is a big difference between someone spouting off on Twitter or Snapchat and an actual hate crime. 

“It is a great idea in the abstract. But it is not the panacea you might think,” said Brian Levin, executive director of Cal State San Bernardino’s Center for the Study of Hate and Extremism. “The problem is the correlation and reliability. … There are many different forms of social media.”

Levin, who has tracked both Middle Eastern terror groups and local neo-Nazi organizations, also noted that some hate groups don’t advertise their work on social media.

“Local tensions may arise on the fly and be absent from social media,” he said. “Some segments of the community shun social media … so examining social media as a predictor can be a bit like having one screwdriver and sometimes it doesn’t work.”

Predictive policing already is in use at the Los Angeles Police Department and other agencies. The LAPD uses a predictive policing algorithm to deploy officers to locations where prior crime patterns strongly suggest similar crimes may occur. As crime has dropped dramatically across the nation and in Los Angeles over the last two decades, police commanders are increasingly looking for any edge they can get in cutting crime.

L.A. County is a particularly useful testbed because its huge volume of social media activity produces massive data sets, which increase the accuracy of predictive models over traditional crime analysis and trend-chasing, said Pete Burnap of Cardiff University’s School of Computer Science and Informatics.

“Predictive policing is a proactive law enforcement model that has become more common partially due to the advent of advanced analytics such as data mining and machine-learning methods,” he said.

Traditional predictive police modeling has paired historical crime records with geographic locations and then calculated probabilities to predict future crimes. Twitter and other social media-based models, by contrast, work in real time, using what people are talking about now. The algorithms look for particular language that is likely to indicate the imminent occurrence of a crime.
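The contrast can be sketched in a few lines of Python. The grid cells and "risk terms" below are invented for illustration; they are not the researchers' actual features or models:

```python
from collections import Counter

# Traditional approach: rank map grid cells by historical crime counts,
# so patrols can be sent to the cells with the most past incidents.
def hotspot_ranking(past_incidents):
    """past_incidents: list of grid-cell IDs where crimes occurred."""
    return [cell for cell, _ in Counter(past_incidents).most_common()]

# Real-time approach: flag individual messages whose language suggests
# imminent risk. This keyword list is purely illustrative.
RISK_TERMS = {"attack", "payback", "get them"}

def flag_message(text):
    text = text.lower()
    return any(term in text for term in RISK_TERMS)

# Usage (hypothetical data):
incidents = ["cell_12", "cell_7", "cell_12", "cell_3", "cell_12"]
top_cell = hotspot_ranking(incidents)[0]        # -> "cell_12"
flagged = flag_message("Time for payback tonight")  # -> True
```

The first function looks only backward at recorded incidents; the second reacts to language as it appears, which is the shift the Cardiff researchers describe.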

British researchers began looking at cyber-hate in the aftermath of the killing of British Army soldier Lee Rigby at the hands of Islamic extremists on a London street in 2013. Analysts collected Twitter data and tested a text classifier that distinguished between hateful and antagonistic responses focusing on race, ethnicity and religion.
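The article does not detail how the Cardiff classifier works. As a rough illustration of what distinguishing between categories of text involves, here is a minimal Naive Bayes sketch using only the standard library; the training examples and labels are invented for illustration:

```python
import math
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (text, label) pairs. Learns per-label word counts."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the label whose word statistics best fit the text."""
    total = sum(label_counts.values())
    vocab = {w for counts in word_counts.values() for w in counts}
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total)  # prior
        n = sum(word_counts[label].values())
        for word in text.lower().split():
            # Laplace smoothing so unseen words don't zero out the score.
            score += math.log((word_counts[label][word] + 1) / (n + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

A real research classifier would use far richer features and training data, but the core idea is the same: learn which language patterns are associated with each category, then score new messages against them.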

The British researchers are building a completely new hate speech algorithm designed specifically for Los Angeles. They said that’s necessary because of the linguistic and cultural differences between L.A. and London.

"We will also gain access to 12 months of LAPD-recorded hate crime data," Williams said.

The idea, he added, is to see whether “an increase in hate speech in a given area is also statistically linked to an increase in recorded hate crimes on the streets in the same area."

In addition to potentially predicting crimes, the researchers hope their work might shed light on hate crimes that currently go unreported.

"We know that official reports of hate crime from police probably underestimate how common hate crime really is — but we don’t really know by how much, or which types of hate crimes are most seriously underreported," said Meagan Cahill, a senior researcher at Rand Corp. "Using Twitter data from Los Angeles County as a test case, this research will help create better knowledge about hate crime. And, we hope, it will ultimately contribute to more hate crime prevention by police and other agencies.”

richard.winton@latimes.com



Copyright © 2017, Los Angeles Times