Researchers from Purdue University are using deep learning to dramatically reduce the time it takes for engineers to assess damage to buildings after disasters.
Engineers need to quickly document the damage to buildings, bridges and pipelines after a disaster.
“These teams of engineers take a lot of photos, perhaps 10,000 images per day, and these data are critical to learn how the disaster affected structures,” said Shirley Dyke, a Purdue University professor of mechanical and civil engineering. “Every image has to be analyzed by people, and it takes a tremendous amount of time for them to go through each image and put a description on it so that others can use it.”
“Unfortunately, there is no way to quickly organize these thousands of images, which are essential for understanding the damage from an event, and the potential for human error is a key drawback,” said doctoral student Chul Min Yeum. “When people look at images for more than one hour they get tired, whereas a computer can keep going.”
Using TITAN X GPUs along with cuDNN, the researchers trained their deep learning network on a dataset of nearly 8,000 images, each labeled to indicate building components that had or had not collapsed, as well as areas affected by spalling, in which concrete chips off structural elements due to “large tensile deformations.”
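The core idea is supervised binary classification: each labeled image teaches the network to map visual features to a damage category such as collapsed versus not collapsed. The Purdue team's actual network and training code are not described in detail here, so the following is only a toy sketch of that supervised setup, using plain logistic regression on hypothetical hand-picked features (crack density and tilt) in place of a deep network on raw pixels:

```python
import math

def train_logistic(samples, labels, lr=0.5, epochs=200):
    """Fit a logistic-regression classifier with plain stochastic gradient descent."""
    n_features = len(samples[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Forward pass: linear score squashed through a sigmoid.
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            # Gradient of the cross-entropy loss with respect to z.
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Classify a feature vector: 1 = 'collapsed', 0 = 'not collapsed'."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z >= 0 else 0

# Hypothetical toy features per image: (crack density, tilt). Label 1 = collapsed.
samples = [(0.1, 0.0), (0.2, 0.1), (0.8, 0.9), (0.9, 0.7)]
labels = [0, 0, 1, 1]
w, b = train_logistic(samples, labels)
preds = [predict(w, b, x) for x in samples]
print(preds)
```

A deep convolutional network replaces the hand-picked features with learned ones extracted directly from pixels, and the GPU/cuDNN stack mentioned above accelerates exactly this kind of gradient-descent training at the scale of thousands of images.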
“This is the first-ever implementation of deep learning for these types of images,” Dyke said. “We are dealing with real-world images of buildings that are damaged in some major way by tornados, hurricanes, floods and earthquakes. Design codes for buildings are often based on or started by lessons that can be derived from these data. So if we could organize more data more quickly, the images could be used to inform design codes.”