Computer Vision / Video Analytics

Deep Learning and Satellite Data Helping Map Poverty

Stanford scientists developed a solution combining high-resolution satellite imagery with deep learning to accurately predict poverty levels at the village level.
Using TITAN X and Tesla K40 GPUs with the cuDNN-accelerated Caffe and TensorFlow deep learning frameworks to train their convolutional neural networks, the researchers used "nightlight" satellite data as a training signal to identify features in higher-resolution daytime imagery that correlate with economic development, such as roads, urban areas, waterways and farmland. They then used those features to predict village-level wealth as measured by surveys in five African countries.
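The two-step approach described above — fine-tuning a CNN to predict nightlight intensity from daytime tiles, then regressing surveyed wealth on the learned image features — can be sketched roughly as follows. This is an illustrative outline only, not the researchers' actual code: the ResNet50 backbone, the three-way nightlight binning, and the Ridge regressor are assumptions standing in for whatever architecture and estimator the team used, and the data arrays are placeholders.

```python
# Hypothetical sketch of a nightlight-proxy transfer-learning pipeline.
# Assumptions: daytime image tiles and binned nightlight-intensity labels are
# available as NumPy arrays, and a village's wealth is predicted from the
# average CNN features of the tiles covering it.
import numpy as np
import tensorflow as tf
from sklearn.linear_model import Ridge

# Step 1: fine-tune a pretrained CNN to predict nightlight intensity
# (binned into low/medium/high) from daytime satellite tiles. Nightlights
# act as a cheap proxy label, so no survey data is needed at this stage.
base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                      pooling="avg", input_shape=(224, 224, 3))
features = base.output
outputs = tf.keras.layers.Dense(3, activation="softmax")(features)
model = tf.keras.Model(inputs=base.input, outputs=outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# daytime_tiles: (N, 224, 224, 3) float32; nightlight_bins: (N,) ints in {0, 1, 2}
# model.fit(daytime_tiles, nightlight_bins, epochs=5, batch_size=32)

# Step 2: reuse the fine-tuned CNN as a feature extractor, then fit a simple
# ridge regression from per-village averaged features to surveyed wealth.
extractor = tf.keras.Model(inputs=base.input, outputs=features)

def village_features(tiles):
    """Average CNN features over all daytime tiles covering one village."""
    return extractor.predict(tiles, verbose=0).mean(axis=0)

# X: (num_villages, 2048) stacked village_features; y: (num_villages,) wealth index
# ridge = Ridge(alpha=1.0).fit(X, y)
# predicted_wealth = ridge.predict(X_new)
```

The key design point is that the expensive supervised step uses freely available nightlight data rather than scarce survey labels; the surveys are only needed to fit the final, lightweight regression.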
“We have a limited number of surveys conducted in scattered villages across the African continent, but otherwise we have very little local-level information on poverty,” said study co-author Marshall Burke, an assistant professor of Earth system science at Stanford. “At the same time, we collect all sorts of other data in these areas – like satellite imagery – constantly.”

The team found that their solution outperformed existing approaches and could help aid organizations and policymakers distribute funds more efficiently and enact and evaluate policies more effectively.
“Our paper demonstrates the power of machine learning in this context,” said study co-author Stefano Ermon, assistant professor of computer science and a fellow by courtesy at Stanford Woods Institute for the Environment. “And since it’s cheap and scalable – requiring only satellite images – it could be used to map poverty around the world in a very low-cost way.”
