
Needs just eight minutes on a single processor to produce a 15-day forecast.
By some measures, AI systems are now competitive with traditional computing methods for generating weather forecasts. Because their training penalizes errors, however, the forecasts tend to get "blurry"—as you move further ahead in time, the models make fewer specific predictions since those are more likely to be wrong. As a result, you start to see things like storm tracks broadening and the storms themselves losing clearly defined edges.
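To see why a plain error-penalizing objective (e.g. mean squared error) pushes a deterministic model toward blur, here's a toy numpy sketch of my own (nothing from the paper): when two equally likely sharp futures exist, the single prediction that minimizes expected squared error is their average, which is sharp nowhere.

```python
import numpy as np

# Two equally likely sharp "futures": a storm peak either left or right.
x = np.linspace(-5, 5, 201)
storm_left = np.exp(-(x + 2) ** 2)    # sharp feature centered at x = -2
storm_right = np.exp(-(x - 2) ** 2)   # sharp feature centered at x = +2
futures = np.stack([storm_left, storm_right])

# The single forecast that minimizes expected squared error is the mean
# over the possible futures...
mse_optimal = futures.mean(axis=0)

# ...and it is blurry: two half-height bumps instead of one sharp storm.
print("peak of either true future:   ", futures.max(axis=1))          # ~[1.0, 1.0]
print("peak of the MSE-optimal blend:", round(mse_optimal.max(), 3))  # ~0.5
```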
But using AI remains extremely tempting because the alternative is a computational atmospheric circulation model, which is hugely compute-intensive. That traditional approach is still highly successful, though, with the ensemble model from the European Centre for Medium-Range Weather Forecasts considered the best in class.
In a paper being released today, Google's DeepMind claims its new AI system manages to outperform the European model on forecasts out to at least a week and often beyond. DeepMind's system, called GenCast, merges some computational approaches used by atmospheric scientists with a diffusion model, commonly used in generative AI. The result is a system that maintains high resolution while cutting the computational cost significantly.
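For intuition on what "a diffusion model for forecasting" means, here's a toy sketch of my own, not DeepMind's GenCast code: the noise schedule, step count, and the stubbed-out denoiser are all assumptions, but the reverse-sampling loop is the standard DDPM recipe, and an ensemble forecast is just many independent samples from it.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50                                 # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.1, T)      # noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def denoiser(x_t, t, current_state):
    """Placeholder for a trained network eps_theta(x_t, t | current state)."""
    return np.zeros_like(x_t)          # stub: the real model is learned from data

def sample_forecast(current_state, dim=4):
    """Draw one plausible "next state" by reversing the diffusion process."""
    x = rng.standard_normal(dim)       # start from pure noise
    for t in reversed(range(T)):
        eps = denoiser(x, t, current_state)
        mean = (x - betas[t] / np.sqrt(1 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(dim) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise   # ancestral sampling step
    return x

# With the stub denoiser these samples are just noise; with a trained,
# state-conditioned denoiser each sample is a sharp, self-consistent
# scenario, and the spread across samples conveys the uncertainty.
current_state = rng.standard_normal(4)
ensemble = np.stack([sample_forecast(current_state) for _ in range(8)])
print("ensemble spread per variable:", ensemble.std(axis=0))
```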
This would be quite the achievement. If it's true, I can see why they got to publish it in Nature.
I wonder how this kind of method (a combination of an algorithmic approach and AI) fares for studying chaotic systems. Chaos is notoriously hard to model, as small changes in the initial conditions can drastically affect the results.
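For what it's worth, sensitivity to initial conditions is easy to demonstrate on the Lorenz-63 system, the textbook chaotic model. This little RK4 sketch of my own (standard parameters, arbitrary step size) shows two trajectories that start 1e-8 apart ending up in completely different states, which is exactly why weather centres run ensembles of perturbed forecasts rather than a single deterministic one.

```python
import numpy as np

# Lorenz-63, the textbook chaotic system (standard parameter choices).
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def rhs(s):
    x, y, z = s
    return np.array([SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z])

def rk4_step(s, dt=0.01):
    k1 = rhs(s)
    k2 = rhs(s + 0.5 * dt * k1)
    k3 = rhs(s + 0.5 * dt * k2)
    k4 = rhs(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])   # tiny perturbation of the initial state

for step in range(1, 4001):
    a, b = rk4_step(a), rk4_step(b)
    if step % 1000 == 0:
        print(f"t = {step * 0.01:5.1f}  separation = {np.linalg.norm(a - b):.3e}")
# The separation grows exponentially until it saturates at the size of the
# attractor, so a single long-range point forecast eventually has no skill.
```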
Let’s put the weather man out of a job lol