And it's still the same underlying pre-trained model since May last year...
...OpenAI hasn't had a successful training run since then:
"OpenAI’s leading researchers have not completed a successful full-scale pre-training run that was broadly deployed for a new frontier model since GPT-4o in May 2024, highlighting the significant technical hurdle that Google’s TPU fleet has managed to overcome."
https://newsletter.semianalysis.com/p/tpuv7-google-takes-a-swing-at-the
So they're seemingly doing well despite these challenges, if the benchmarks and evals are to be believed!