This is what happens when you scatter 225 cameras around the Serengeti

'Camera traps' in Tanzania's Serengeti National Park capture wild animals off-guard

Hundreds of thousands of Serengeti “selfies” are giving researchers a candid, and often amusing, picture of what life is like on the African plain.

Zebras munch on dried grass, lion cubs play with their mother, elephants go for a stroll, and birds hitch a ride on the back of a warthog.


Serengeti study: In the June 13 Section A, a photo caption that accompanied an article about a wildlife study at Serengeti National Park in Tanzania incorrectly identified an animal photographed in that study. It was a Cape buffalo, or African buffalo, not a water buffalo.

The images were captured by 225 “camera traps” set up throughout Tanzania’s Serengeti National Park. The cameras took a picture when they detected motion nearby. Often, the photos were snapped with the animals staring directly into the lens.

Ecologist Alexandra Swanson deployed the cameras in 2010, when she was a graduate student at the University of Minnesota and wanted to observe how the Serengeti’s predators interacted with other species. She spent years driving from camera to camera, changing their memory cards and batteries every two months.

“When we first started getting these photos, they were just breathtaking,” said Swanson, who is now a research fellow at the University of Oxford in England. “You’re seeing these animals at their most authentic.”

Those animals include some of the Serengeti’s most famous fauna, such as lions, cheetahs, baboons and hippopotamuses. The cameras also spied lesser-seen species such as honey badgers, zorillas and aardwolves.

Camera traps allow researchers to observe wildlife in remote locations and to monitor species as they move across vast areas. The results are often as amusing as they are enlightening.

“You just see things you’d never otherwise see — these animals making ridiculous faces, peering into the camera, running toward the camera,” Swanson said.

From a research standpoint, the automatically triggered cameras are appealing because they’re cheap and not very invasive to wildlife (though every once in a while, an elephant might smash one or a hyena might eat one). However, they produce a massive volume of images, making it difficult to process the results.

“A lot of people who use camera traps are often overwhelmed by the number of photos they have to go through,” Swanson said.

In three years, Swanson’s cameras amassed 1.2 million sets of images, which were published and described this week in the journal Scientific Data.

Combing through and analyzing such a large number of images was too much for one person. Swanson initially recruited a dozen undergraduates to help, but still they couldn’t keep up.

Swanson asked fellow ecologist Margaret Kosmala, whose background is in computer science, if there was a way for a computer to analyze the photos.

“I said no,” said Kosmala, who is now a postdoctoral fellow at Harvard University. “Computer vision research isn't actually there yet in terms of identifying animals in pictures.”

But then Kosmala saw the images and was blown away. She thought maybe the researchers could enlist volunteers — hundreds of them — to help.

So the pair teamed up with Zooniverse, a website that hosts citizen-science projects, to create Snapshot Serengeti.

Since its launch in 2012, 30,000 people have logged on to Snapshot Serengeti to help make 10.8 million classifications. The volunteers found animals in more than 300,000 images and identified 40 species.

“We literally couldn't have gone through all of the pictures without the volunteers,” Kosmala said. “This is science that couldn't have happened without them.”

Zooniverse began with an astronomy project called Galaxy Zoo, which asked volunteers to help identify the shapes of galaxies. The site has since expanded to 42 projects across a variety of disciplines, including Penguin Watch and Plankton Portal.

Anyone can participate.

“The platform is designed in a way that whether or not they’re an expert, anyone can make a contribution,” Swanson said. “It doesn’t matter if you have no idea what you’re looking at.”

Snapshot Serengeti presents users with a photo and asks them to choose from 54 types of animals, including birds, reptiles, insects and even humans, to classify the subject. Sometimes there’s nothing in the image.

The user can narrow the search by characteristic, such as color, pattern or horn shape. The site also asks about other details, such as how many animals there are and what they're doing.

To help verify the volunteers’ identifications, the site shows the same image to multiple people.

“If 10 people say it's a zebra, then we know it’s a zebra,” Kosmala said. “If many people say it's a zebra and one says it's a giraffe, then we still know it's a zebra.”
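The consensus approach Kosmala describes can be sketched as a simple majority vote over volunteer labels. This is an illustrative simplification, not the project's actual pipeline; the function name and structure here are hypothetical.

```python
from collections import Counter

def consensus_label(votes):
    """Return the label most volunteers agreed on.

    A minimal sketch of majority-vote consensus: the most common
    classification wins, so a stray "giraffe" among many "zebra"
    votes does not change the outcome. (Snapshot Serengeti's real
    aggregation is more sophisticated than this.)
    """
    counts = Counter(votes)
    label, _count = counts.most_common(1)[0]
    return label

# Ten volunteers say "zebra", one says "giraffe": still a zebra.
print(consensus_label(["zebra"] * 10 + ["giraffe"]))
```

Showing each image to several people and taking the majority is what lets untrained volunteers collectively reach expert-level accuracy.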

When Swanson and Kosmala compared the citizen scientists’ results with an expert’s identifications, they found their volunteers were correct 97% of the time. That’s impressive, Swanson said, especially when you consider that even experts make mistakes.

Swanson said she hoped the images could be used for further ecological research and education. Scientists can view the images for their own studies on the Serengeti or to compare the region’s wildlife with that in other areas.

The project is also contributing to computer vision research by helping train computers to recognize animals. That requires large sets of images with the subjects already identified.

Some image sets like this already exist, but most of those pictures are well-composed and well-lit — two things that are hard to achieve with automatically triggered cameras. Camera traps often catch critters when they’re out of focus, only partly in the frame or with other animals, and computer algorithms must be trained to account for all of these factors, researchers said.


Copyright © 2016, Los Angeles Times


June 11, 10:33 p.m.: This article has been revised throughout for additional details and for clarity. 

The first version of this article posted June 10 at 5:59 p.m.