
Multi-view deep learning for reliable post-disaster damage classification

Summary: Researchers from Texas A&M University recently investigated ways to enable more reliable automated post-disaster building damage classification using artificial intelligence (AI) and multi-view imagery.

Research challenge: Current practices and research efforts in adopting AI for post-disaster damage assessment are generally (a) qualitative, lacking refined classification of building damage levels based on standard damage scales, and (b) trained on aerial or satellite imagery with limited views, which, although indicative, are not completely descriptive of the damage scale.

To enable more accurate and reliable automated quantification of damage levels, the study proposes using more comprehensive visual data in the form of multiple ground and aerial views of each building. To build such a spatially aware damage prediction model, the researchers developed a Multi-view Convolutional Neural Network (MV-CNN) architecture that combines the information from different views of a damaged building.
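
As a rough illustration (not the authors' exact architecture), a multi-view network of this kind can be sketched in PyTorch as a shared CNN encoder applied to each view, a pooling step that fuses the per-view features, and a classifier over the six damage levels (0 to 5). The ResNet-18 backbone, max-pooling fusion, and number of views below are assumptions for the sketch.

```python
import torch
import torch.nn as nn
from torchvision import models

class MultiViewCNN(nn.Module):
    """Sketch of a multi-view CNN: a shared backbone encodes each view,
    per-view features are fused, and a classifier predicts the damage level."""

    def __init__(self, num_classes: int = 6):
        super().__init__()
        backbone = models.resnet18(weights=None)   # any CNN encoder would do
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                # keep the feature vector
        self.encoder = backbone
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (batch, n_views, 3, H, W) -- ground and aerial images of one building
        b, v, c, h, w = views.shape
        feats = self.encoder(views.reshape(b * v, c, h, w))  # encode every view
        feats = feats.reshape(b, v, -1).max(dim=1).values    # fuse views (max-pooling)
        return self.classifier(feats)                        # logits for damage levels 0-5

# Example: 4 views each of 2 buildings, 224x224 RGB
logits = MultiViewCNN()(torch.randn(2, 4, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 6])
```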

Findings: This spatial 3D context results in more accurate identification of damage and more reliable quantification of damage levels. The proposed model was trained and validated on a reconnaissance visual dataset containing expert-labeled, geotagged images of buildings inspected following Hurricane Harvey.

The resulting model demonstrates reasonably good accuracy in predicting damage levels and can be used to support more informed and reliable AI-assisted disaster management practices.

How Labelbox was used: The model was trained using pixel-level annotations of the buildings created in the Labelbox platform. The goal was to recognize the pixels belonging to buildings in an image and filter out visual information unnecessary for determining the damage state of the building, such as trees, sky, and roads. In total, images corresponding to 400 buildings (2,000 images) were annotated in Labelbox and assigned labels representing their damage state. The damage labels (0 to 5) were based on the expert assessments reported in the original database. The annotated dataset was split into training (80%), validation (10%), and testing (10%) sets.
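
To make this pipeline concrete, here is a minimal sketch, assuming NumPy arrays for images and masks and scikit-learn for the split: the building masks zero out non-building pixels, and the annotated buildings are divided 80/10/10. The function names, placeholder labels, and building-level stratified split are illustrative assumptions, not the paper's exact code.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def mask_building(image: np.ndarray, building_mask: np.ndarray) -> np.ndarray:
    """Zero out pixels outside the annotated building region so that trees,
    sky, roads, etc. do not influence the damage classifier."""
    # image: (H, W, 3), building_mask: (H, W) with values 0/1
    return image * building_mask[..., None]

# Hypothetical 80/10/10 split at the building level, so all views of one
# building stay in the same subset.
building_ids = [f"bldg_{i:03d}" for i in range(400)]
damage_labels = np.random.randint(0, 6, size=400)  # placeholder labels 0-5

train_ids, rest_ids, y_train, y_rest = train_test_split(
    building_ids, damage_labels, test_size=0.2, stratify=damage_labels, random_state=0)
val_ids, test_ids, y_val, y_test = train_test_split(
    rest_ids, y_rest, test_size=0.5, stratify=y_rest, random_state=0)

print(len(train_ids), len(val_ids), len(test_ids))  # 320 40 40
```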

Read the full PDF here.