People may think they know how hard the wind is blowing, but science shows that they usually get it wrong. Researchers have known this for years, but a recent study seeks to quantify just how bad humans are at figuring out the speed of wind gusts without the aid of meteorological instruments.
The authors of the study seek to make other researchers more aware of human bias when using Storm Data, one of the largest publicly available storm databases. The study, published in a recent issue of the Journal of Applied Meteorology and Climatology, uses a large-scale statistical analysis of information from the database to investigate how much storm reporters’ estimates of wind speeds differ from instrumental measurements.
Potential for Bias?
Storm Data is an enormous set of measurements characterizing more than 50 years’ worth of geophysical events, from tidal waves to tornadoes, compiled by the National Centers for Environmental Information (NCEI) in Asheville, N.C. Users ranging from blizzard climatologists to insurance adjusters make use of the data—and, thanks to a thorough collection and vetting process, the storm reports the database generates are generally considered reliable.
However, because Storm Data aims to collect as many data as possible, not all of the information comes from weather stations with calibrated instruments. Many of the entries in the database come from trained—or even untrained—storm reporters. These reporters must rely on environmental cues to make estimates of wind speeds and other measurements.
“When you’re estimating it, there has to be some sort of basis for that estimate…whether a tree limb snapped, or whether there was siding ripped off a house,” said Brenton MacAloney, the Storm Data program manager at the National Weather Service in Silver Spring, Md. “There has to be something other than, ‘uh, I thought it was something around 50 knots.’” The database asks its reporters to provide narrative accounts of the events, to check that their numbers are within the realm of probability.
Still, many researchers have long doubted the accuracy of Storm Data’s human reporters. “Everyone had assumed that [Storm Data was unreliable], but nobody had actually shown it,” said Peter Miller, the lead author on the study and a meteorologist at the University of Georgia in Athens, Ga.
Testing Storm Reporters’ Accuracy
To test the accuracy of human-generated wind gust reports, the researchers compared Storm Data wind speed entries from storm reporters who didn’t use anemometers with wind data from automated weather stations. In the study, which appeared online on 19 April, the authors also used instrumental data from the Global Historical Climatology Network as a comparison for the human-reported gusts.
The researchers focused on windstorms without rain, lightning, or other phenomena that could frighten observers, accidentally inflating their estimates of the storm’s intensity. They also eliminated news media reports from consideration as human observer data, because news reporters might have relied upon instrumental data from local weather stations.
Even with those potential biases removed, the comparisons revealed that storm reporters overestimated the speeds of wind gusts—on average, by about one third of the gusts’ actual speeds.
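The kind of comparison the study performed can be illustrated with a small sketch. This is a hypothetical example with invented numbers, not the study’s actual code or data; it simply shows how a mean relative overestimate is computed from paired reports.

```python
# Hypothetical sketch: compare human-estimated gusts with anemometer
# measurements. All values below are invented for illustration only.

# Paired observations: (reporter's estimated gust, measured gust), in mph
pairs = [
    (60, 45),
    (55, 42),
    (70, 50),
    (50, 40),
]

# Relative overestimate for each pair: (estimate - measured) / measured
relative_errors = [(est - obs) / obs for est, obs in pairs]

# Mean relative overestimate across all paired reports
mean_bias = sum(relative_errors) / len(relative_errors)

print(f"Mean overestimate: {mean_bias:.0%} of actual gust speed")
# prints "Mean overestimate: 32% of actual gust speed"
```

An average near one third, as in this made-up sample, would match the bias the study reported for Storm Data’s human observers.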
These inflated estimates could have introduced inaccuracies into any study that relied on Storm Data for climatological or storm modeling information, Miller said.
Overestimation: Consequences and Cures
Inflated estimates can have real consequences for society. If people hear that winds are stronger than they actually are, they may alter their behavior—for example, evacuating their homes unnecessarily during a hurricane.
That leads to another sort of trouble. “People who choose to evacuate eat up resources for people who truly need to evacuate,” said Gregory Webster, a psychologist at the University of Florida in Gainesville, who was not involved with the study. “It causes extra traffic congestion, and can sometimes result in even more severe food hoarding,” when frightened residents buy out all the water and nonperishable food they can find, leaving less for those who might truly need it.
Fixing the problem may prove difficult, said Miller. Many factors contribute to overestimation; outdated storm reporter training, he noted, is one of them.
Storm reporters learn, in training, to use the Beaufort Wind Force Scale, which relies on environmental cues to determine how fast the wind is blowing. According to Miller, the Beaufort Scale is flawed. As an example, the scale indicates that trees blow over at wind speeds of 58 miles per hour and above. However, Miller said his research shows that trees can fall over at much lower speeds, in the low 40-mph range.
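For readers curious about the numbers behind the scale, a widely cited empirical relation maps a Beaufort number B to a wind speed of roughly 0.836 × B^1.5 meters per second. The sketch below uses that relation (it is a standard approximation, not the study’s method) to show that Beaufort force 10, the level at which the scale says trees are uprooted, lands near the ~58 mph threshold the article mentions.

```python
# Approximate the Beaufort scale with the common empirical relation
# v [m/s] ~= 0.836 * B**1.5, where B is the Beaufort force number.
# This is a standard textbook approximation, not the study's method.

def beaufort_to_mph(b: float) -> float:
    """Approximate wind speed in mph for Beaufort force number b."""
    meters_per_second = 0.836 * b ** 1.5
    return meters_per_second * 2.23694  # convert m/s to mph

# Beaufort 10 ("trees uprooted") comes out near 59 mph, close to the
# 58 mph tree-fall threshold cited from the scale.
print(round(beaufort_to_mph(10)))  # prints 59
```

Miller’s point is that this mapping overshoots reality: if trees actually fall in the low 40s mph, observers anchoring on the scale will file gust estimates well above the true speeds.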
“Wind speeds are hard to estimate, especially when you’re talking about trees,” said Storm Data’s MacAloney. “We’re meteorologists, not arborists.”
The value of Storm Data lies in its scope, which includes remote areas without on-the-ground weather stations, MacAloney maintained. Despite their potential biases, human reports from such locations provide vital information such as narratives about highly localized events—for example, tornadoes and hail—that weather stations miss.
The authors of the new study agree, saying that Storm Data remains a valuable resource but needs a better system for flagging wind reports generated from people’s observations alone.
—Elizabeth Deatrick, Writer Intern; email: email@example.com
Correction, 21 June 2016: An earlier version of this article confused the locations of NCEI’s headquarters and Brenton MacAloney’s workplace. The article has been updated to correct these inaccuracies.