Everyone agrees that these are geophysically unsettled times. Lately, the world has been rocked by more than its usual share of the biggest earthquakes ever accurately recorded: the magnitude-9.0 “megaquake” that just struck off Japan; the magnitude-9.1 Sumatra quake that hit off Indonesia 6 years ago; and, sandwiched between them, the great magnitude-8.8 Chilean quake of 2010. Before these three, however, nothing like them had been seen for 40 years.
[Photo: Chile 1960. The largest quake on record, a magnitude 9.5, was part of a 1950–1965 cluster. Credit: AP]
Could these three big quakes be physically connected? Could the first of them somehow have touched off a cluster of great earthquakes spanning the Pacific? And if so, has this cluster played itself out? Experts differ. “Our position is this could be continuing,” says seismologist Charles Bufe, scientist emeritus at the U.S. Geological Survey (USGS) in Golden, Colorado. On the basis of statistical testing, he says, “I think we're in an increased hazardous situation for these very large earthquakes” around the world.
But Andrew Michael, a seismologist at USGS in Menlo Park, California, says his own statistical tests tell a different story. “I simply can't find any reason to reject the random hypothesis,” he says. That is, he cannot prove that anything but chance is responsible for huge quakes coming on one another's heels.
Seismologists recognized some time ago that the largest earthquakes are not evenly sprinkled throughout the 110-year-long seismic record. In a 2005 Bulletin of the Seismological Society of America paper, Bufe and his USGS Golden colleague David Perkins, a statistician, assessed a big-quake cluster that ran from 1950 through 1965 (see graph). It included seven of the nine greatest quakes of the 20th century (the big jump in the middle of the graph), among them all three of the century's megaquakes, quakes of magnitude 9.0 or greater. But after 1965, Bufe and Perkins noted, 36 years passed without even a quake of magnitude 8.4 or greater.
[Graph: Stepping up again. Two clusters of the biggest quakes appear as steps (center and right) in this plot of cumulative earthquake size. Credit: updated from C. Ammon et al., Seismological Research Letters 81, 6 (November/December 2010)]
In the same 2005 paper, Bufe and Perkins thought they had an inkling of a second cluster getting started. A magnitude-8.4 quake off Peru in 2001 pointed to a coming cluster, they wrote. In a note added just before the journal was printed, they drew attention to the then-recent magnitude-9.1 Sumatra megaquake of December 2004, which was shortly followed by a magnitude-8.7 quake just to the south. The two quakes “confirm that we have entered a new period of … probable temporal clustering of mega-quakes,” they wrote in the note. Sure enough, the great Chile quake followed 6 years later, and then came last month's Japanese Tohoku megaquake (smaller steps on right of graph).
No one knows how even a megaquake could have triggered another large quake on the other side of the Pacific, but Bufe and Perkins don't think they just got lucky. They have now made 100,000 computer runs randomly generating simulated earthquake records to see how often such tight clusterings might crop up purely by chance. “It turns out to be 2% of the time,” Bufe said at a press conference at last week's annual meeting of the Seismological Society of America (SSA) in Nashville. “That is very significant.”
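Bufe and Perkins's exact simulation design was not detailed at the press conference, but the shape of such a test is easy to sketch. The toy Monte Carlo below, written in Python, scatters nine great-quake dates uniformly over a 110-year record and counts how often at least seven of them fall within some 16-year window, echoing the 1950–1965 cluster; every parameter here is an illustrative assumption, not their actual setup.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative assumptions, not Bufe and Perkins's actual choices:
    N_QUAKES = 9          # "greatest" quakes scattered over the record
    RECORD_YEARS = 110.0  # length of the instrumental record, in years
    WINDOW = 16.0         # candidate cluster window (cf. 1950-1965), in years
    CLUSTER_SIZE = 7      # events that must fall inside the window
    TRIALS = 100_000      # same number of runs Bufe cited

    hits = 0
    for _ in range(TRIALS):
        times = np.sort(rng.uniform(0.0, RECORD_YEARS, N_QUAKES))
        # Span of each run of CLUSTER_SIZE consecutive events; if any
        # such run fits inside WINDOW years, this catalog "clusters."
        spans = times[CLUSTER_SIZE - 1:] - times[:N_QUAKES - CLUSTER_SIZE + 1]
        if np.any(spans <= WINDOW):
            hits += 1

    print(f"Fraction of random catalogs with a tight cluster: {hits / TRIALS:.2%}")

How often chance alone produces a cluster in a sketch like this depends heavily on the window width, the number of quakes counted, and what counts as "tight," which is part of what critics mean when they say the data can be sliced just right.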
Many seismologists are not so confident. “There's nothing wrong in pointing [clustering] out,” says seismologist Hiroo Kanamori of the California Institute of Technology in Pasadena, but “you can't really do statistics on such a small data set.” And seismologist Richard Aster of the New Mexico Institute of Mining and Technology in Socorro said at the SSA press conference that “if the data are sliced just right, you can get numbers that sound interesting, but there are other methods that are just as appropriate that find no [statistically significant] clustering.”
Michael, who, like Bufe and Aster, presented assessments of clustering at the meeting, says Bufe and Perkins's claim results from “a serious statistical mistake.” He said at the press conference, “We can't run experiments, so we're stuck testing our hypotheses on the same data we developed them on.”
That limitation requires statistical tests that are more general and less closely tied to the existing seismic record than those Bufe and Perkins ran, Michael said. After performing several such tests, he added, “I find the data are very well explained by the random model” over a range of magnitudes. In the case of megaquakes, Michael said, the problem could be the dearth of megaquakes in the record: “Maybe there really is clustering, but there's not enough data yet to prove it. Without a specific physical mechanism to test, the only way out of this is waiting for more earthquakes.”
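Michael's "random model" is essentially a Poisson process, under which the gaps between big quakes follow an exponential distribution. A minimal version of that kind of test, again in Python and using made-up stand-in dates rather than the real catalog, might look like this:

    import numpy as np
    from scipy import stats

    # Stand-in event years for illustration only; not the real catalog.
    event_years = np.array([1950., 1952., 1957., 1960., 1963., 1964.,
                            2001., 2004., 2005., 2010., 2011.])
    gaps = np.diff(np.sort(event_years))

    # Under the random (Poisson) hypothesis, gaps are exponential with
    # the observed mean. Caveat: estimating that mean from the same data
    # makes this p-value only approximate (a Lilliefors-style issue).
    result = stats.kstest(gaps, "expon", args=(0, gaps.mean()))
    print(f"KS statistic = {result.statistic:.3f}, p-value = {result.pvalue:.3f}")
    # A large p-value means randomness cannot be rejected, which is
    # Michael's conclusion; with so few events, though, the test has
    # little power, which is Kanamori's small-sample objection in a nutshell.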
If Bufe and Perkins are right, Michael may not have long to wait. “The probability of a magnitude-9 or larger event—based on our model—in the next 6 years is 24% if these [past quakes] are random,” Bufe said at the press conference. “If these are clustered, the probability is 63%.”
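The 24% "random" figure is consistent with simple Poisson arithmetic: the roughly 110-year record contains five quakes of magnitude 9.0 or greater (the three 20th-century megaquakes plus Sumatra and Tohoku), and under a constant-rate model the chance of at least one more within 6 years follows directly:

    import math

    # Back-of-envelope check of the "random" probability, assuming five
    # magnitude-9.0+ quakes in the ~110-year instrumental record.
    rate = 5 / 110.0                    # megaquakes per year
    p_six_years = 1 - math.exp(-rate * 6)
    print(f"P(at least one M >= 9 in 6 years) = {p_six_years:.0%}")  # ~24%

The clustered-model figure of 63%, by contrast, depends on the specifics of Bufe and Perkins's model and cannot be reproduced from the numbers given here.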
The dispute is not deterring most researchers. As earthquake physicist Emily Brodsky of the University of California, Santa Cruz, puts it, “It would be naïve of us to assume this is all random and not worth investigating.”