From Algorithms to Automated Post-Disaster Assessments in a Fraction of the Time
October 24, 2018

The U.S. military is often a first responder when disaster strikes, both in the U.S. and abroad, and these missions, which include search-and-rescue and damage assessment, are incredibly labor intensive. With small teams flying over affected areas to search for survivors and analysts manually combing through thousands of images to identify infrastructure issues, potential survivors and critical structural damage can be overlooked.

Computer vision, the use of algorithms to automatically identify objects in images, holds the potential to automate post-disaster assessments and accelerate search-and-rescue efforts, ultimately saving lives. The Defense Innovation Unit (DIU) has already begun testing winning algorithms from the xView Challenge, DIU’s public computer vision competition, in the wake of Hurricane Florence, assisting emergency personnel in quickly identifying flooded areas and impassable roads.

Launched in March 2018 in partnership with the National Geospatial-Intelligence Agency (NGA), DIU’s xView Challenge allowed participants to submit and test their algorithms against the xView dataset. The dataset contains overhead imagery covering 1,415 km² of complex scenes from around the world and includes more than 1 million bounding box annotations across 60 object classes.
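For readers who want to work with the data directly, the released labels pair each image with bounding boxes and class identifiers. Below is a minimal Python sketch for grouping those annotations by image; it assumes a GeoJSON label file whose feature properties carry an image id, a numeric class id, and pixel-space box coordinates (the field names and file name are illustrative and should be checked against the actual download).

```python
import json
from collections import defaultdict

def load_boxes(label_path):
    """Group bounding-box annotations by image from a GeoJSON label file.

    Assumes each feature's properties include an image identifier ('image_id'),
    a numeric class id ('type_id'), and pixel-space coordinates stored as the
    string 'xmin,ymin,xmax,ymax' ('bounds_imcoords'). These field names are
    assumptions; verify them against the downloaded labels.
    """
    with open(label_path) as f:
        labels = json.load(f)

    boxes_by_image = defaultdict(list)
    for feature in labels["features"]:
        props = feature["properties"]
        xmin, ymin, xmax, ymax = (int(v) for v in props["bounds_imcoords"].split(","))
        boxes_by_image[props["image_id"]].append(
            {"class_id": int(props["type_id"]), "box": (xmin, ymin, xmax, ymax)}
        )
    return boxes_by_image

if __name__ == "__main__":
    boxes = load_boxes("xView_train.geojson")  # hypothetical file name
    print(len(boxes), "images,", sum(len(b) for b in boxes.values()), "boxes")
```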

The public competition attracted more than four thousand submissions from 100 participants around the world, including companies, universities, and individuals. The top-performing algorithms were 300 percent more accurate than the government-produced baseline. Moreover, Challenge participants’ algorithms advanced computer vision proficiency across four core elements of overhead imagery analysis: reducing the minimum resolution for detection, improving learning efficiency, enabling discovery of more object classes, and improving detection of fine-grained object classes.
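The article does not spell out how accuracy was scored, but object-detection benchmarks of this kind are typically evaluated by matching predicted boxes to ground-truth boxes with an intersection-over-union (IoU) threshold and then aggregating precision and recall per class. The sketch below illustrates that matching step under those common conventions; the 0.5 threshold is a widely used default, not a value taken from the Challenge rules.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (xmin, ymin, xmax, ymax)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def match_detections(predictions, ground_truth, iou_threshold=0.5):
    """Greedily match predictions to same-class ground-truth boxes.

    Both inputs are lists of (class_id, box) tuples for a single image.
    Returns (true_positives, false_positives, false_negatives).
    """
    unmatched = list(ground_truth)
    tp = fp = 0
    for cls, box in predictions:
        candidates = [g for g in unmatched if g[0] == cls]
        best = max(candidates, key=lambda g: iou(box, g[1]), default=None)
        if best is not None and iou(box, best[1]) >= iou_threshold:
            unmatched.remove(best)
            tp += 1
        else:
            fp += 1
    return tp, fp, len(unmatched)
```

Per-class precision and recall follow directly from these counts, and leaderboard-style scores such as mean average precision aggregate them across classes and confidence thresholds.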

When the Challenge concluded in September 2018, DIU identified the top five performing algorithms:

xView Challenge Winners (rank, name, team):

  1. Nick Sergievskiy
  2. Victor Stamatescu (University of Adelaide)
  3. Sudeep Sarkar (University of South Florida)
  4. Leonardo Dal Zovo (Studio Mapp)
  5. Ritwik Gupta (Software Engineering Institute / ETC)

Overall, the xView Challenge succeeded in meeting two key goals:

  1. Build a reusable platform that can host future AI competitions and benchmark AI and machine learning algorithms against government datasets.

The platform that hosted the xView dataset, Nexus, is a first-of-its-kind government capability to objectively test and evaluate AI and machine learning algorithms. Nexus will be used to evaluate the performance of both government and commercially developed algorithms for future xView competitions and similar efforts. This type of platform holds the potential to transform how the DoD acquires and prototypes AI capabilities, moving from subjective to objective capability evaluation.

  2. Advance the state-of-the-art in satellite computer vision for humanitarian assistance and disaster relief.

“xView provides a new and potentially important vehicle in developing our AI/ML capabilities for humanitarian assistance and disaster relief (HADR). xView is a one-of-a-kind tool to crowdsource, test, and evaluate algorithms for some of the toughest sensing challenges we face,” said Col Jason Brown, Director of the Chief of Staff of the Air Force Strategic Studies Group. “The Department’s plan is to use xView developed algorithms to develop capabilities to quickly locate personnel in need or at risk and view the status of critical infrastructure in a disaster zone.”

Over the coming months, DIU will continue working to transition and implement the most successful xView Challenge algorithms as part of standard humanitarian assistance and disaster response operations. In addition, DIU plans to launch a second xView competition in 2019 focused on robustness and post-disaster change detection.