
It's from OpenAI's own website, so take it with a grain of salt, man (pun intended).

A central aspect of the work concerns methodology. The final formula, Eq. (39) in the preprint, was first conjectured by GPT-5.2 Pro. The human authors worked out the amplitudes for integer n up to n = 6 by hand, obtaining very complicated expressions shown in Eqs. (29)--(32), which correspond to a "Feynman diagram expansion" whose complexity grows superexponentially in n. GPT-5.2 Pro was able to greatly reduce the complexity of these expressions, providing the much simpler forms in Eqs. (35)--(38). From these base cases, it was then able to spot a pattern and posit a formula valid for all n.
An internal scaffolded version of GPT‑5.2 then spent roughly 12 hours reasoning through the problem, coming up with the same formula and producing a formal proof of its validity. The equation was subsequently verified analytically to solve the Berends-Giele recursion relation, a standard step-by-step method for building multi-particle tree amplitudes from smaller building blocks. It was also checked against the soft theorem, which constrains how amplitudes behave when a particle becomes soft.
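For anyone unfamiliar with why a recursion like Berends-Giele helps here, the bookkeeping can be sketched in a toy model. This is my own hypothetical illustration, not the paper's gauge-theory recursion: I only count planar cubic tree diagrams for a color-ordered scalar theory, where the off-shell current for a range of legs is built from all ways of splitting that range in two. Even in this stripped-down setting the diagram count blows up, while the recursion only ever visits the O(n^2) contiguous sub-ranges.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def num_diagrams(n: int) -> int:
    """Number of planar cubic tree diagrams with n ordered on-shell legs
    feeding one off-shell current (the Catalan number C_{n-1}).

    Toy model only: the recursion splits the ordered legs 1..n into two
    contiguous blocks at every position, mirroring how a Berends-Giele
    current is assembled from smaller currents.
    """
    if n == 1:
        return 1  # a single external leg is its own (trivial) current
    return sum(num_diagrams(k) * num_diagrams(n - k) for k in range(1, n))

# The per-diagram count grows rapidly with n, but thanks to memoization
# the recursion itself only evaluates O(n^2) distinct sub-problems.
for n in range(1, 8):
    print(n, num_diagrams(n))
```

The real gauge-theory amplitudes in the paper carry polarization and momentum structure on top of this combinatorics, which is why the hand-computed Eqs. (29)--(32) were so unwieldy before simplification.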

Cool stuff, I guess.

102 sats \ 6 replies \ @optimism 15h

I just keep on wondering, since the only good news coming out of OpenAI lately is hard science related:

Does having the LLM best tuned for science keep OpenAI in place to be more valuable than Anthropic and xAI whose products are tuned for respectively coding and undressing women?

reply
100 sats \ 3 replies \ @gmd 14h

most of this science / math stuff seems pretty obscure and not all that lucrative...

reply
163 sats \ 2 replies \ @nkmg1c 11h

Obscure discoveries can lead to practical applications though. It's still impressive that it did work that might be expected of a grad student, at least.

reply
120 sats \ 1 reply \ @gmd 10h

Oh it's absolutely impressive, much more so than coding capabilities. I can understand most of coding / algorithms but I can't even understand the questions that these AIs are answering in math and physics...

Just not clear to me how it will lead to profits- I don't think the AI that discovers a new physics phenomenon will own the practical applications of those discoveries...

reply
20 sats \ 0 replies \ @optimism 2h

Their TOC explicitly states that the user is responsible for both input and output. They must do this or they'd be liable for output.

No risk, no reward

reply

If that's their business model, they haven't chosen the market with the largest number of users, that's for sure.

I read recently that a lot of the math news from OpenAI was not really new science. E.g. with the Erdős problems, it turned out to be problems with forgotten solutions that ChatGPT had unearthed again.

A recent paper this week with never-before-published problems was posted to challenge ChatGPT at truly tackling new science. Will have to look for it. Or I guess I'll just wait to be notified of the outcome of this challenge.

reply

I think that it's the thing they don't get meaningful competition on simply because there is no one with that kind of funding that wants to solve science problems.

What's ironic is that this outcome, though pretty much undesired judging by other efforts that seem to be haunted by failures, is more in line with the non-profit structure it started as than the financial atrocity it is at the moment.

As much as I dislike Sam Altman, I still hope they're going to make some comebacks.

reply