
> Every frontier model knows more mathematics than any human who has ever lived; every model can solve exam questions faster and more accurately than any human.

I'm disappointed to read that kind of statement from a mathematician.

"knows"? really?

They have learned the linguistic patterns of mathematical language, and they produce bullshit faster than either humans or automated verifiers can check it. But as far as I've seen, there is not much research on representing mathematical knowledge in a form that could be "exported" from the model's LLM blob and, e.g., plotted as a commutative diagram.

Yes, you could get an SVG of that diagram by poking an LLM until it barfs one out. However, that SVG would be the end product of many "reasoning" steps performing a breadth-first search through a noisy vocabulary, not a direct conversion of a subgraph from some abstract, ideomorphic representation into the rendering format.
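
To make the contrast concrete, here is a minimal sketch in Python of what such a "direct conversion" could look like: an explicit little graph of objects and morphisms rendered deterministically into SVG. Everything in it (the object names, the positions, the `to_svg` helper) is hypothetical and toy-sized; the point is only that the rendering is a pure function of the structure, with no generative search involved.

```python
import math

# Objects of a commutative square, with fixed layout positions.
# All names and coordinates here are made up for illustration.
objects = {
    "A": (60, 60), "B": (260, 60),
    "C": (60, 200), "D": (260, 200),
}

# Morphisms as labelled directed edges between objects.
morphisms = [
    ("A", "B", "f"), ("A", "C", "g"),
    ("B", "D", "h"), ("C", "D", "k"),
]

def shrink(p, q, margin=18):
    """Move p toward q by `margin` px so arrows stop short of node labels."""
    d = math.hypot(q[0] - p[0], q[1] - p[1])
    t = margin / d
    return (p[0] + (q[0] - p[0]) * t, p[1] + (q[1] - p[1]) * t)

def to_svg(objects, morphisms):
    """Deterministically convert the graph into an SVG document."""
    parts = [
        '<svg xmlns="http://www.w3.org/2000/svg" width="320" height="260">',
        '<defs><marker id="arrow" markerWidth="8" markerHeight="8"'
        ' refX="8" refY="4" orient="auto"><path d="M0,0 L8,4 L0,8"/></marker></defs>',
    ]
    for src, dst, label in morphisms:
        x1, y1 = shrink(objects[src], objects[dst])
        x2, y2 = shrink(objects[dst], objects[src])
        parts.append(
            f'<line x1="{x1:.0f}" y1="{y1:.0f}" x2="{x2:.0f}" y2="{y2:.0f}"'
            ' stroke="black" marker-end="url(#arrow)"/>'
        )
        # Label each morphism at the edge midpoint, nudged off the line.
        parts.append(
            f'<text x="{(x1 + x2) / 2 + 6:.0f}" y="{(y1 + y2) / 2 - 6:.0f}"'
            f' font-style="italic">{label}</text>'
        )
    for name, (x, y) in objects.items():
        parts.append(f'<text x="{x}" y="{y}" text-anchor="middle">{name}</text>')
    parts.append("</svg>")
    return "\n".join(parts)

print(to_svg(objects, morphisms))
```

The output here is boring and correct by construction, because it is just a traversal of an explicit data structure. That is precisely the property the token-by-token route can't guarantee.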