How to track poverty from space



You can get a pretty good idea of a country’s wealth by seeing how much it shines at night — just compare the intense brightness of China and South Korea to the dark mass of North Korea that’s sandwiched between them.

But nighttime lights don’t tell you which neighborhoods or villages within a large region are merely poor and which are home to people living in abject poverty. That’s the level of detail policymakers need when they decide where to deploy their economic development programs.

You could get that detail by sending legions of survey-takers into crowded slums and sparsely populated rural areas. But that would be hugely time-consuming and cost tens of millions of dollars or more.


So researchers at Stanford came up with a way to get computers and satellites to do the work for them.

Their computer model, described Thursday in the journal Science, isn’t perfect. But it predicts poverty at least as well as methods that rely on outdated survey data.

The Stanford approach requires a few key ingredients.

First, you need to have some kind of data that covers every single place where people live. You get bonus points if that data is in the public domain.

You also need a smaller amount of data that you know is pretty accurate.

Finally, you need a powerful computer that can calibrate the trove of “noisy” data to the smaller amount of reliable data.

The Stanford researchers tested their system with five African countries: Nigeria, Tanzania, Uganda, Malawi and Rwanda. They started with nighttime images captured as part of the U.S. Air Force Defense Meteorological Satellite Program. Places that were brighter at night were presumed to be more economically developed than places that were dim.

Then they had their computer program compare the nighttime images to higher-resolution daytime images available via Google Static Maps. The program was able to recognize certain shapes in the daytime pictures that were correlated with economic development.
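This day-to-night comparison is a form of transfer learning: a model learns which daytime image features predict nighttime brightness, and those features turn out to encode development-related structures. The study trained a deep convolutional network on real imagery; the sketch below is a much simpler stand-in, using a tiny softmax classifier on made-up "daytime features" to show the basic idea.

```python
import numpy as np

# Hedged sketch: predict a nighttime-brightness class from daytime features.
# The features, labels, and classifier here are all invented for
# illustration; the study used a convolutional network on satellite images.
rng = np.random.default_rng(1)
n, d, k = 300, 4, 3          # image patches, daytime features, brightness bins

# Synthetic daytime features (stand-ins for road density, built-up area, etc.).
X = rng.normal(size=(n, d))
true_W = rng.normal(size=(d, k))
y = np.argmax(X @ true_W, axis=1)   # the brightness bin each patch belongs to

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Fit by gradient descent on the cross-entropy loss.
W = np.zeros((d, k))
onehot = np.eye(k)[y]
for _ in range(500):
    p = softmax(X @ W)
    W -= 0.1 * X.T @ (p - onehot) / n

accuracy = (np.argmax(X @ W, axis=1) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Once a model like this can predict brightness from daytime imagery, its internal features (here, the columns of `W`) capture the visual patterns that track development, without anyone labeling roads or rooftops by hand.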


“Without being told what to look for, our machine learning algorithm learned to pick out of the imagery many things that are easily recognizable to humans — things like roads, urban areas and farmland,” study lead author Neal Jean, a computer science graduate student at Stanford’s School of Engineering, said in a statement.

Other recognizable features included waterways and buildings. The computer even learned to distinguish metal rooftops from those made of grass, thatch or mud, according to the study.

To bring it all together, the Stanford team used statistical methods to determine how the presence (or absence) of items identified in the daytime pictures related to income data collected in surveys. The type of roofing material on a building was directly related to income, for instance. So was a location’s distance from an urban area.
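That final calibration step is, at its core, a regression: image-derived features on one side, survey-measured income on the other. The study regressed thousands of learned network features against survey data; the sketch below uses two hypothetical hand-picked features (roof material and distance to an urban area) and simulated survey numbers purely to show the mechanics.

```python
import numpy as np

# Illustration only: the features, coefficients, and "survey" outcomes
# below are invented, not taken from the study.
rng = np.random.default_rng(0)
n = 200
metal_roof_frac = rng.uniform(0, 1, n)    # share of metal rooftops in a patch
dist_to_urban_km = rng.uniform(0, 50, n)  # distance to nearest urban area

# Simulated survey outcome: spending rises with metal roofs and
# falls with distance from urban areas, plus noise.
spending = 2.0 + 3.0 * metal_roof_frac - 0.05 * dist_to_urban_km \
           + rng.normal(0, 0.1, n)

# Design matrix with an intercept column.
X = np.column_stack([np.ones(n), metal_roof_frac, dist_to_urban_km])

# Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y.
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ spending)

print(w)  # roughly [2.0, 3.0, -0.05]: the simulated relationships recovered
```

With the coefficients fitted against the small set of reliable survey locations, the same formula can then be applied to every place the satellites can see, which is what makes the method cheap to scale.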

The final computer model was “strongly predictive” of two important measures of poverty — average spending by households and average household wealth. In Rwanda, for instance, the model predicted average household wealth more accurately than data from cellphone records, according to the study. (Another problem with cellphone records: They’re proprietary, and companies aren’t always willing to share them.)

When a computer program churned through satellite data from just one of the five countries, the resulting model worked best in that country. But in some cases, it did a pretty good job of making predictions in other countries as well. That should make it a valuable tool, the study authors wrote, since the method “is straightforward and nearly costless to scale across countries.”

Jean and his colleagues aren’t the only ones excited about the prospect of using satellites and computers to fight poverty.


In an essay that accompanies the study, Joshua Blumenstock of UC Berkeley’s Data Science and Analytics Lab said that making use of daytime satellite data — which contains far more information than nighttime images — can “make it possible to differentiate between poor and ultrapoor regions.” This, in turn, “can help to ensure that resources get to those with the greatest need.”


