Weather forecasting bests economic, seismic predictions

Under the headline “The Weatherman Is Not a Moron,” the New York Times Magazine this weekend published an adaptation from the forthcoming book The Signal and the Noise: Why So Many Predictions Fail—but Some Don’t, by Nate Silver. Silver is probably best known for his FiveThirtyEight blog, which deals with political forecasting, but, as he says of his new effort, “The book takes a comprehensive look at prediction across 13 fields, ranging from sports betting to earthquake forecasting. Since 2009, I have been traveling the country to meet with experts and practitioners in each of these fields in an effort to uncover common bonds. The book asks an ambitious question: What makes predictions succeed or fail?”

The adaptation printed in the Times Magazine deals specifically with weather forecasting, and Silver urges readers to take the long view in assessing the improving accuracy of weather forecasts. Speaking of the National Weather Service, he writes, “In 1972, the service's high-temperature forecast missed by an average of 6° when made three days in advance. Now it’s down to 3°.” That's a considerable improvement, and one missing in other fields, he writes: “In November 2007, economists in the Survey of Professional Forecasters—examining some 45,000 economic-data series—foresaw less than a 1-in-500 chance of an economic meltdown as severe as the one that would begin one month later. Attempts to predict earthquakes have continued to envisage disasters that never happened and failed to prepare us for those, like the 2011 disaster in Japan, that did.”

Silver attributes weather forecasters' improved accuracy to their acknowledgment of their own imperfections: “That helped them understand that even the most sophisticated computers, combing through seemingly limitless data, are painfully ill equipped to predict something as dynamic as weather all by themselves. So as fields like economics began relying more on Big Data, meteorologists recognized that data on its own isn’t enough.”

Obviously, Silver is writing for a general audience, and he describes the IBM Bluefire supercomputer in the basement of the National Center for Atmospheric Research in Boulder as resembling a series of “space-age port-a-potties.”

But there is plenty here for an engineering as well as a lay readership. Silver does a good job of tracing the history of meteorology from its early days (“for centuries, meteorologists relied on statistical tables based on historical averages”) to the “…holy grail of meteorology… dynamic weather prediction.” Along the way, he visits English physicist Lewis Fry Richardson, who in 1916 tried to predict the weather over northern Germany for May 20, 1910 (that's right: six years earlier), based on the temperature, barometric-pressure, and wind-speed observations the German government had collected before that date. Richardson failed, perhaps because he lacked the estimated 64,000 meteorologists who, working simultaneously, might have computed an accurate weather forecast in real time, or past time, as the case may be.

Silver goes on to describe the work of MIT mathematician Edward Lorenz, the advent of chaos theory, and the difficulty of precisely determining initial conditions. Silver even provides some easily comprehended examples of the significance of errors in linear vs. exponential arithmetic: if your initial conditions are 5+5 but you erroneously key 5+6 into your calculator, you'll get 11 instead of 10; you are wrong, but not by much. But if you try to raise 5 to the 5th power and enter 5 to the 6th instead, you'll come up with 15,625 instead of 3,125. The same one-digit slip yields a 10% error in the linear case and a 400% error in the exponential one.
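The arithmetic is easy to verify. Here is a quick Python sketch of my own (not from Silver's book) that reproduces both slips and computes the relative errors:

```python
# A quick check of Silver's calculator example: the same one-digit slip
# in an input is nearly harmless in a linear calculation but disastrous
# in an exponential one.

def relative_error(true_value, slipped_value):
    """Fractional error introduced by the slip."""
    return (slipped_value - true_value) / true_value

# Linear case: 5 + 5 mis-keyed as 5 + 6.
print(5 + 5, 5 + 6, f"{relative_error(5 + 5, 5 + 6):.0%}")   # 10 11 10%

# Exponential case: 5**5 mis-keyed as 5**6.
print(5**5, 5**6, f"{relative_error(5**5, 5**6):.0%}")       # 3125 15625 400%
```

That, in miniature, is Lorenz's point: in a nonlinear system, a small error in the inputs does not stay small.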

Assuming that substituting a 6 for a 5 is analogous to inadequately specifying initial conditions, there is not much a computer can do to compensate. However, writes Silver, humans can intervene by flagging the implausible forecasts that computers can generate. In fact, weather service records indicate that human intervention can improve the accuracy of computer precipitation forecasts by 25%. Silver likens the process to a skilled pool player adjusting for the dead spots on a familiar table at a local bar.

Of course, humans can introduce their own foibles. Notes Silver, “For many years, the Weather Channel avoided forecasting an exact 50% chance of rain, which might seem wishy-washy to consumers. Instead, it rounded up to 60 or down to 40.” Further, from a customer-satisfaction standpoint, it seems best to lean toward predicting bad weather. Writes Silver, “In what may be the worst-kept secret in the business, numerous commercial weather forecasts are also biased toward forecasting more precipitation than will actually occur. (In the business, this is known as the wet bias.)”

He continues, “People don’t mind when a forecaster predicts rain and it turns out to be a nice day. But if it rains when it isn’t supposed to, they curse the weatherman for ruining their picnic.”
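Out of curiosity, here is a toy Python sketch of what such a presentation rule might look like. To be clear, the 5-point wet bias and the rounding scheme are my own illustrative assumptions; Silver reports the practice, not the Weather Channel's actual parameters:

```python
# Toy sketch of a consumer-facing "presentation rule" like the ones Silver
# describes. The 5-point wet bias and the rounding scheme are illustrative
# assumptions, not the Weather Channel's actual parameters.

WET_BIAS = 0.05  # assumed upward nudge for rain probabilities

def displayed_rain_probability(raw_p: float) -> int:
    """Turn a model's raw rain probability into the percentage shown on air."""
    p = min(raw_p + WET_BIAS, 1.0)   # lean toward predicting rain
    pct = round(p * 10) * 10         # round to the nearest 10%
    if pct == 50:                    # never show a wishy-washy 50%
        pct = 60 if raw_p >= 0.5 else 40
    return pct

for raw in (0.18, 0.47, 0.50, 0.72):
    print(f"model says {raw:.0%} -> viewer sees {displayed_rain_probability(raw)}%")
```

With these assumed numbers, a raw 47% comes out as 40% and a raw 50% as 60%: never the exact 50% that, per Silver, the channel avoided.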

Fascinating: people can enhance the accuracy of predictions from the most powerful computers, but to give the market what it wants, they need to hedge their forecasts.

The adaptation published in the Times Magazine raises a question: why is weather forecasting more tolerant than economic or political forecasting of human intervention to compensate for possibly erroneous initial conditions? I'm looking forward to reading Silver's complete book when it's officially released September 27.
