The Sun routinely ejects clouds of gas and sends them hurtling through space at several million kilometers per hour. At least a few dozen times a year, those clouds head straight for Earth.
These natural events, called coronal mass ejections (CMEs), crop up when the Sun's magnetic field becomes tangled and, in righting itself, releases a cloud of superheated plasma, a swarm of charged particles. Aimed at just the right angle toward Earth, these plasma clouds can wreak havoc on our electrical grids, satellites, and oil and gas pipelines.
Quebec, Canada, for instance, went dark during a solar storm on a winter night in 1989, when the storm drove electric currents through the ground that shorted out the province's power grid. The outage lasted 12 hours, stranding people in elevators and pedestrian tunnels and shutting down airports, schools, and businesses.
Solar storms can threaten our communication and navigation infrastructure. In the past, solar storms interrupted telegraph messages, and future storms could threaten our cellphones, GPS capabilities, and spacecraft.
With the right kind of warning, utility operators, space crews, and communications personnel can prepare and steer clear of certain activities during solar storms. But once a CME is spotted leaving the Sun, our best models struggle to forecast exactly when it will arrive.
To improve forecasts, a group of scientists is taking a community approach: What if researchers working on CME models around the world could post their forecasts publicly, in real time, before the CME reaches Earth?
The CME Scoreboard, run by the Community Coordinated Modeling Center at NASA Goddard Space Flight Center, does just that. The online portal, with 159 registered users, acts as a live feed of predictions for CMEs heading toward Earth. The portal gives scientists a simple way to compare forecasts, and the log of past predictions presents a valuable data set for assessing forecasters' accuracy and precision.
The AGU Grand Challenges Centennial Collection features the major questions faced by science today. Editors of Space Weather identified CME predictions as one of them, calling the ability to provide them “essential for our society.”
CME forecasting still lags behind our capabilities to forecast weather systems here on Earth, and the paper highlights several reasons why. Leila Mays, coauthor on the paper and science lead for the CME Scoreboard at NASA Goddard Space Flight Center, said that CME forecasts are lacking in two key areas: Measurements of solar activity are sparse, and the exact physical details driving the Sun are still unclear.
Despite the need for improvement, people on Earth still rely on CME forecasts, and scientists have myriad ways to supply them. The National Oceanic and Atmospheric Administration and the United Kingdom’s Met Office both release publicly available CME predictions, and individual research groups build their models from scratch. Forecasting models range from data-driven empirical models to physics-based, equation-driven models.
The models operate independently, perhaps using unique parameters or data inputs, but they all strive for a shared goal: to determine when a CME, or its shock wave, will reach Earth.
The CME Scoreboard serves as a repository for a wide range of these models. Mays said that scientists tracking solar activity will notice when a CME event explodes from the surface of the Sun, setting down a ticking clock for when the plasma will hit Earth (or miss it altogether). This sets off a flurry of activity, with scientists running their models with parameters from the most recent eruption, including the plasma’s speed, direction, and size. With the numbers crunched, they post their best guess and wait to see what unfolds.
Since the CME Scoreboard's inception in 2013, scientists have posted 814 arrival time predictions. Some predictions narrowly miss the mark, off from the real arrival time of the CME by a mere hour or two. But others miss by more than a day, straying from the actual arrival by 30 hours or more.
Mays said that the forecasts come from over a hundred users and represent 26 unique prediction methods. Interest in the portal has been strong, she said, which doesn't surprise her: the scoreboard simply gives a platform to ad hoc discussions that researchers were already having, spread across listservs and email chains, whenever a new CME appeared.
Pete Riley, a senior research scientist at Predictive Science Inc., knew of the scoreboard but had never contributed. Looking at years of forecasts on the website, he decided to analyze the accuracy and precision of past predictions.
“I felt like having knowledge in the field but not having a horse in the race, so to speak, I’d be able to do a fairly independent evaluation,” Riley told Eos.
His study, published in Space Weather in 2018, is the first analysis of the scoreboard data. Riley and his collaborators compared the projected arrival times with the actual reported times for 32 models. The analysis showed that the forecasts, on average, predicted the CME arrival with an absolute error of about 10 hours and a standard deviation of roughly 20 hours. A few models performed better than the rest, he said, but only moderately so, and the few that submitted regularly over the 6 years of data analyzed didn't seem to be improving their forecasts.
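The two headline numbers in that analysis, the mean absolute error and the standard deviation of the arrival-time errors, can be computed directly from pairs of predicted and observed arrival times. Below is a minimal sketch of that calculation in Python; the timestamps are invented for illustration and are not real scoreboard entries.

```python
from datetime import datetime
from statistics import mean, stdev

# Hypothetical (predicted arrival, actual arrival) pairs.
# Real entries would come from the CME Scoreboard's log of past forecasts.
entries = [
    (datetime(2017, 9, 6, 23, 0), datetime(2017, 9, 7, 22, 0)),
    (datetime(2017, 7, 16, 6, 0), datetime(2017, 7, 16, 5, 30)),
    (datetime(2017, 5, 27, 12, 0), datetime(2017, 5, 27, 21, 0)),
]

# Signed error in hours (negative means the forecast was early).
errors = [(pred - actual).total_seconds() / 3600 for pred, actual in entries]

mae = mean(abs(e) for e in errors)  # mean absolute error, in hours
spread = stdev(errors)              # standard deviation of the signed errors

print(f"Mean absolute error: {mae:.1f} h")
print(f"Standard deviation: {spread:.1f} h")
```

The mean absolute error captures how far off a typical forecast is, while the standard deviation of the signed errors captures how widely the forecasts scatter around the true arrival, which is why the study reports both.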
The paper "serves as kind of a ground truth for where we are at currently," Riley said, as well as laying the foundation for future analysis. Riley made the code accessible so that future forecasts can be tested against the group. Mays said that the scoreboard may eventually use the information to create a list of automatically updating metrics.
Although more work lies ahead, Riley said that the future looks bright for more accurate predictions. He points to new space missions that will help fill in blind spots, including NASA’s Parker Solar Probe and nanosatellites called CubeSats that individual research groups deploy.
“Space weather is becoming ever more important because as a society, we are so reliant on technology now,” Riley said. With the additional data, he said, “I think it’s promising that in the future we will be able to make predictions more accurate.”
—Jenessa Duncombe (@jrdscience), News Writing and Production Fellow
4 September 2019: This article was updated to clarify the description of the AGU Grand Challenges Centennial Collection and correctly identify the research paper’s journal of publication.