# As doctored photos flood the Internet, human vision struggles to keep up

Experts estimate that humans take more than 1 trillion photos a year, and that we upload them to Facebook alone — never mind the rest of the Internet — at a rate of 4,000 per second.

How many of these images have been altered, doctored or outright faked? We’ll probably never know, new research suggests.

Not only is the human visual system poorly equipped to recognize when a photo has been manipulated, but there may not be much we can do to make it work better, the new study concludes.

Don’t believe it? Take a look at the photo at the top of this story, of a man holding a fish. See if you can tell whether it’s an original or if it’s been changed — and if so, how. (The answer is at the end of the story.)

A team from the University of Warwick in England found pictures like this one on the Internet and altered them in various ways. Sometimes they added something to the scene (was the man really wearing a watch?). Sometimes they took something out (did there used to be a boat on the water behind him?). In some cases, shadows were altered or shapes of objects were changed. The researchers also engaged in good old-fashioned airbrushing.

## Faking photos in the name of science

Then they showed their pictures to 707 people ages 14 to 82 who volunteered to test their ability to spot a fake. Subjects were presented with 10 photos and asked whether they believed each image had been digitally altered. If they answered yes, they were then asked to click on the region of the photo that had been changed. (Half of the photos each person saw were originals and half were altered.)

Volunteers contemplated each photo for just under 44 seconds, on average. When they thought they had spotted a fake, it took an average of 10.5 seconds to identify the place where they thought the photo had been changed.

When it came to detecting fakes, there were only two possible answers: yes or no. That means volunteers guessing randomly would have been right 50% of the time.

But they didn’t seem to guess randomly.

They correctly classified photos as either original or altered 66% of the time, on average. They did a better job spotting originals (72% correctly identified) than the images that had been changed (60% correctly identified).

## ‘Far from perfect’

But the researchers were not exactly impressed by the volunteers’ performance.

“Although subjects’ ability to detect manipulated images was above chance, it was still far from perfect,” they wrote. “Furthermore, even when subjects correctly indicated that a photo had been manipulated, they could not necessarily locate the manipulation.” Indeed, only 45% of the changes were correctly located, on average.

The volunteers were better at noticing that something was amiss when the alterations were “physically implausible,” such as when an object appeared to cast a shadow in the wrong direction. But the precise locations of these implausible alterations were just as hard to pinpoint as changes that were more subtle.

The researchers then repeated their experiment with another 659 volunteers. This time, instead of using pictures that were already online and compressed into the JPEG format, they took (and modified) their own photos and kept them in the higher-resolution PNG format.

In this second experiment, they also asked volunteers to say where a photo had been altered, even when they thought the photo was authentic.

This time, subjects spent an average of nearly 58 seconds deciding whether a photo had been faked, and an average of 11 seconds deciding where the change had been made.

Though they spent more time considering the photos, they did a slightly worse job determining which pictures were originals and which were not.

Overall, they classified the photos correctly 62% of the time, on average — worse than the 66% in the first experiment but still better than the 50% that random guessing would have produced. This time around, the subjects were better at identifying manipulated photos (65% correctly identified) than the originals (58% correctly identified).

The second group of volunteers outperformed the first when it came to finding the alterations — on average, they got these right 56% of the time.

In 18% of the trials, volunteers correctly said that a photo had been changed, but they weren’t able to say where. On the flip side, in 10% of cases volunteers incorrectly said a photo was unaltered but then went on to guess the correct location of the alteration.

Unlike in the first experiment, the volunteers in the second experiment were no better at spotting implausible fakes than plausible ones.

One thing that was constant, however, was that the more an image had been altered, the more likely the subjects were to notice it. This was particularly surprising, the researchers wrote, since subjects presented with a manipulated image never saw the original version of the same picture and couldn’t make a direct comparison.

## Are we doomed?

“People’s ability to detect manipulated photos of real-world scenes is extremely limited,” the researchers concluded. “Considering the prevalence of manipulated images in the media, on social networking sites, and in other domains, our findings warrant concern about the extent to which people may be frequently fooled in their daily lives.”

If that seems a little depressing, just wait — it gets worse.

“Future research might also investigate potential ways to improve people’s ability to spot manipulated photos,” the team wrote. “However, our findings suggest that this is not going to be a straightforward task. We did not find any strong evidence to suggest there are individual factors that improve people’s ability to detect or locate manipulations.”

The study was published Tuesday in the journal Cognitive Research: Principles and Implications.

If you’d like to see if you could do better than the study participants, take the test here.

And if you’re still wondering about the photo at the top of the story, it’s a fake. The shadows among the trees were digitally erased.

karen.kaplan@latimes.com