Systems Team Aids in Earth-Moving Simulation


Seismologists at the San Diego Supercomputer Center (SDSC) at the University of California, San Diego and San Diego State University (SDSU) are using the supercomputing expertise of DK Panda and his team to improve the software efficiency of their earthquake simulation research. The California researchers have created the largest simulation to date of a magnitude-8.0 earthquake rupturing primarily along the San Andreas Fault. Such an earthquake, if real, would affect 25 million people in a region stretching from Yuma, Arizona to Ensenada, Mexico to Fresno, California.

"The scientific results of this massive simulation are very interesting, and its level of detail has allowed us to observe things that we were not able to see in the past," said Kim Olsen, professor of geological sciences at SDSU, and lead seismologist of the study. "For example, the simulation has allowed us to gain more accurate insight into the nature of the shaking expected from a large earthquake on the San Andreas Fault."

"Petascale [supercomputers able to calculate at more than one quadrillion operations per second] simulations such as this one are needed to understand the rupture and wave dynamics of the largest earthquakes at shaking frequencies required to engineer safe structures," said Thomas Jordan, director of SCEC and Principal Investigator for the project.

Thus far, one hypothesis emerging from the work is that high-rise buildings are susceptible to low-frequency, roller-coaster-like motion, while smaller structures take the brunt of the destruction in shorter, sharper bursts of movement. This information is needed not just for the design of new buildings but also by emergency response teams preparing for the aftermath of "The Big One." This hypothesis about building damage will be analyzed in more depth later this year.
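
A standard engineering rule of thumb illustrates the frequency distinction, as a rough approximation only: a building's fundamental period is about 0.1 seconds per story. A 50-story high-rise therefore resonates near a 5-second period (roughly 0.2 Hz), squarely in the long-period, roller-coaster range, while a two-story house responds near a 0.2-second period (roughly 5 Hz), matching the shorter, sharper bursts of shaking.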

The resulting paper, "Scalable Earthquake Simulation on Petascale Supercomputers," is a finalist for the Gordon Bell Prize, awarded for outstanding achievement in high-performance computing applications at the Supercomputing Conference (SC '10), to be held in November in New Orleans, Louisiana. The work has been funded through several National Science Foundation grants.

Several members of DK Panda's MVAPICH project team (Sayantan Sur, Sreeram Potluri, and Karen Tomko from OSC) are collaborating with the SDSC team on this effort. "It has been an extremely rewarding experience for our MVAPICH project team to work with the computational scientists from the SDSC team to push the envelope of such MPI-level simulation on modern supercomputers and to help real-world applications. We still have many MPI-level optimizations for this application remaining and plan to push the envelope even further," said DK Panda.
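
MPI-level optimizations of the kind Panda describes often center on overlapping communication with computation during the solver's halo exchange. The sketch below is a generic, hypothetical illustration of that pattern in C with MPI, not the project's actual code; the buffer size, the one-dimensional neighbor layout, and the placeholder compute_interior/compute_boundary steps are all assumptions made for illustration.

    /* Hypothetical sketch: overlapping halo exchange with interior computation.
     * Not the actual AWP-ODC or MVAPICH code; sizes and layout are assumed. */
    #include <mpi.h>
    #include <stdlib.h>

    #define HALO 1024  /* assumed ghost-zone size per neighbor */

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int left  = (rank + size - 1) % size;  /* 1-D ring of neighbors */
        int right = (rank + 1) % size;

        double *send_l = calloc(HALO, sizeof *send_l);
        double *send_r = calloc(HALO, sizeof *send_r);
        double *recv_l = calloc(HALO, sizeof *recv_l);
        double *recv_r = calloc(HALO, sizeof *recv_r);

        MPI_Request reqs[4];

        /* Post nonblocking receives and sends for the ghost zones. */
        MPI_Irecv(recv_l, HALO, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Irecv(recv_r, HALO, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, &reqs[1]);
        MPI_Isend(send_r, HALO, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[2]);
        MPI_Isend(send_l, HALO, MPI_DOUBLE, left,  1, MPI_COMM_WORLD, &reqs[3]);

        /* Update the interior of the local mesh while messages are in flight.
         * compute_interior();  (placeholder for the stencil update) */

        /* Then wait for the halos and update the boundary cells. */
        MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);
        /* compute_boundary(recv_l, recv_r);  (placeholder) */

        free(send_l); free(send_r); free(recv_l); free(recv_r);
        MPI_Finalize();
        return 0;
    }

Posting the receives first and deferring MPI_Waitall until after the interior update lets the network make progress while the processor works, which is the essence of this kind of overlap.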

What makes the simulation record-breaking is that it sets new standards for both the duration of the temblor (six minutes) and the geographical area covered: a rectangular volume approximately 500 miles (810 km) long by 250 miles (405 km) wide by 50 miles (85 km) deep. The number of processor cores used, more than 223,000 running within a single 24-hour period on the Jaguar Cray XT5 supercomputer at Oak Ridge National Laboratory (ORNL) in Tennessee, also sets a new record. And at 436 billion mesh (or grid) points used to calculate the earthquake's effects, the simulation dwarfs the previous record of 1.8 billion points set in 2004.
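
For scale, the mesh count is consistent with the stated volume at a uniform grid spacing of roughly 40 meters (a spacing assumed here for illustration; the article does not state it):

    (810,000 m / 40 m) × (405,000 m / 40 m) × (85,000 m / 40 m)
      = 20,250 × 10,125 × 2,125
      ≈ 4.36 × 10^11 mesh points, i.e., about 436 billion.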
