From hurricanes to wildfires, 2017 brought the world a number of natural disasters, as well as some tech to deal with them. Unmanned aerial vehicles (UAVs) and sophisticated satellites can now capture aerial images of disaster zones, giving us more information than ever, but we are still working out how to process that data so it is useful for relief efforts. That's where deep learning comes in, say the World Bank, WeRobotics, and OpenAerialMap.
On Jan. 10, 2018, the World Bank issued an artificial intelligence (AI) challenge to explore how deep learning could be used in the wake of natural disasters. Deep learning lets AI recognize patterns in images, sounds, and other data using neural networks loosely modeled on our own grey matter. It is the technology that helps Alexa recognize speech, Google Translate interpret entire sentences, and Facebook's AI labs automatically identify and tag users in uploaded photographs.
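To make "recognizing patterns" concrete, here is a toy neural network, written from scratch in NumPy, that learns the classic XOR pattern from four examples. It is purely illustrative and bears no relation to the systems mentioned above; the architecture, learning rate, and iteration count are arbitrary choices for this sketch.

```python
import numpy as np

# Four input pairs and their XOR labels: the "pattern" to learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer, 8 units
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: inputs -> hidden activations -> output probability.
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the cross-entropy loss.
    d_out = out - y
    dW2 = h.T @ d_out; db2 = d_out.sum(0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ d_h; db1 = d_h.sum(0)
    # Gradient-descent update.
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= 0.1 * g

preds = (out > 0.5).astype(int).ravel()
print(preds)
```

The same loop structure (forward pass, loss gradient, weight update) scales up to the deep networks used for speech and image recognition, just with far more layers, parameters, and training data.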
AI could be used to catalog aerial images in the critical periods following disasters and help first responders and humanitarian aid agencies aggregate information. Sorting images quickly en masse would make it easier to assess which areas need immediate assistance, what the clearest paths in and out of a disaster site are, and where the most infrastructure damage is.
The AI challenge announcement by WeRobotics founder Patrick Meier focuses on Pacific Island countries, which are vulnerable to earthquakes, tsunamis, storm surges, volcanic eruptions, landslides, and droughts. In the last decade alone, major cyclones have caused millions of dollars of damage on hundreds of islands, including Fiji and Samoa, Meier wrote.
The World Bank’s UAVs for Disaster Resilience Program captured about 80 square kilometers (31 square miles) of high-resolution aerial imagery over the islands of Tonga. Now, the World Bank is challenging participants to develop machine learning algorithms that can analyze this imagery without human assistance. In the future, that learning will be “applied to new imagery to speed up baseline analysis and damage assessments,” according to the announcement.
In particular, developers should focus on trees and roads. The algorithms need to identify all coconut, banana, papaya, and mango trees and their locations with at least 80 percent accuracy, since the loss of those critical food production trees would impact both food security for island residents and their economies following a disaster.
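The announcement does not say how that 80 percent accuracy will be scored, but tree identification is naturally evaluated by matching predicted tree coordinates against surveyed ones. The helper below is a hypothetical sketch of such scoring, greedily pairing each predicted location with an unclaimed ground-truth tree within a fixed radius; the function name, the radius, and the sample coordinates are all invented for illustration.

```python
from math import hypot

def match_detections(predicted, actual, radius=2.0):
    """Greedily match predicted tree coordinates to ground-truth ones
    within `radius` units. Returns (true_pos, false_pos, false_neg).
    Hypothetical scoring helper; the challenge's official metric may differ."""
    unmatched = list(actual)
    tp = 0
    for px, py in predicted:
        for i, (ax, ay) in enumerate(unmatched):
            if hypot(px - ax, py - ay) <= radius:
                unmatched.pop(i)  # each real tree can be claimed only once
                tp += 1
                break
    return tp, len(predicted) - tp, len(unmatched)

# Example: four surveyed trees, detector reports four points, three close enough.
actual = [(0, 0), (10, 0), (20, 5), (30, 5)]
predicted = [(0.5, 0.5), (10.2, -0.3), (19, 6), (50, 50)]
tp, fp, fn = match_detections(predicted, actual)
recall = tp / (tp + fn)
print(tp, fp, fn, recall)  # → 3 1 1 0.75
```

Under this sketch, the 80 percent bar would mean finding at least four out of every five real trees (recall of 0.8) without flooding the map with false detections.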
The automated imagery analysis should also assess road conditions, such as whether roads are paved and how many lanes they have. Road assessments for disaster areas could help first responders plan which routes to use to transport aid effectively.
In an era of increased social media, tailored advertising, and big data, it’s easy to forget that AI can be used for more than just improving home technology and the user experience. This challenge from the World Bank and its collaborators is a welcome reminder that deep learning could prove useful in humanitarian aid efforts as well.