
It's from OpenAI's own website, so take it with a grain of salt, man (pun intended).

A central aspect of the work concerns methodology. The final formula, Eq. (39) in the preprint, was first conjectured by GPT‑5.2 Pro. The human authors worked out the amplitudes by hand for integer n up to n = 6, obtaining very complicated expressions shown in Eqs. (29)–(32), which correspond to a "Feynman diagram expansion" whose complexity grows superexponentially in n. GPT‑5.2 Pro was able to greatly reduce the complexity of these expressions, providing the much simpler forms in Eqs. (35)–(38). From these base cases, it was then able to spot a pattern and posit a formula valid for all n.
An internal scaffolded version of GPT‑5.2 then spent roughly 12 hours reasoning through the problem, coming up with the same formula and producing a formal proof of its validity. The equation was subsequently verified analytically to solve the Berends-Giele recursion relation, a standard step-by-step method for building multi-particle tree amplitudes from smaller building blocks. It was also checked against the soft theorem, which constrains how amplitudes behave when a particle becomes soft.
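
For context, the Berends–Giele recursion mentioned above builds an off-shell "current" for n gluons out of currents for fewer gluons. A schematic form (my own shorthand; normalizations, signs, and vertex conventions vary between references) is:

```latex
% Off-shell Berends--Giele current for gluons 1..n (schematic):
% base case is the polarization vector of a single gluon,
%   J^\mu(i) = \varepsilon_i^\mu ,
% and with P_{i,j} = p_i + \cdots + p_j the recursion reads
J^\mu(1,\dots,n) \;=\; \frac{-i}{P_{1,n}^2}\Bigg[
  \sum_{j=1}^{n-1} V_3^{\mu\nu\rho}(P_{1,j},P_{j+1,n})\,
      J_\nu(1,\dots,j)\, J_\rho(j+1,\dots,n)
  \;+\; \sum_{1\le j<k\le n-1} V_4^{\mu\nu\rho\sigma}\,
      J_\nu(1,\dots,j)\, J_\rho(j+1,\dots,k)\, J_\sigma(k+1,\dots,n)
\Bigg]
% where V_3 and V_4 are the color-ordered three- and four-gluon vertices.
% The on-shell (n+1)-point tree amplitude follows by amputating the
% propagator and contracting with the last gluon's polarization.
```

The point is that each n-gluon current is assembled from all ways of splitting the gluons into two or three consecutive groups, which is what makes it a natural analytic check for a closed-form all-n formula.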

Cool stuff, I guess.

102 sats \ 4 replies \ @optimism 5h

I just keep on wondering, since the only good news coming out of OpenAI lately is hard science related:

Does having the LLM best tuned for science keep OpenAI in place to be more valuable than Anthropic and xAI whose products are tuned for respectively coding and undressing women?

reply
100 sats \ 1 reply \ @gmd 4h

most of this science / math stuff seems pretty obscure and not all that lucrative...

reply
63 sats \ 0 replies \ @nkmg1c 1h

Obscure discoveries can lead to practical applications though. It's still impressive that it did work that might be expected of a grad student, at least.

reply

If that's their business model, they haven't chosen the market with the largest number of users, that's for sure.

I read recently that a lot of the math news from OpenAI wasn't really new science. E.g., with the Erdős problems, it turned out they were problems with forgotten solutions that ChatGPT had unearthed again.

A recent paper this week with never-before-published problems was posted to challenge ChatGPT to truly tackle new science. I'll have to look for it. Or I guess I'll just wait to be notified of the outcome of this challenge.

reply

I think it's the one area where they don't get meaningful competition, simply because no one else with that kind of funding wants to solve science problems.

What's ironic is that this outcome, though apparently undesired judging by their other efforts that seem haunted by failures, is more in line with the non-profit structure it started with than the financial atrocity it is at the moment.

As much as I dislike Sam Altman, I still hope they're going to make some comebacks.

reply