Detailed 11/18/23 4chan OpenAI insider leak: https://archive.ph/sFMXa
"A few weeks/months ago OpenAI engineers made a breakthrough and something resembling AGI was achieved (hence his joke comment, the leaks, vibe change etc). But Sam and Brockman hid the extent of this from the rest of the non-employee members of the board. Ilya is not happy about this and feels it should be considered AGI and hence not licensed to anyone including Microsoft. Voting on AGI status comes to the board, they are enraged about being kept in the dark. They kick Sam out and force Brockman to step down. Ilya recently claimed that current architecture is enough to reach AGI, while Sam has been saying new breakthroughs are needed. So in the context of our conjecture Sam would be on the side trying to monetize AGI and Ilya would be the one to accept we have achieved AGI. Sam Altman wants to hold off on calling this AGI because the longer it's put off, the greater the revenue potential. Ilya wants this to be declared AGI as soon as possible, so that it can only be utilized for the company's original principles rather than profiteering. Ilya winds up winning this power struggle. In fact, it's done before Microsoft can intervene, as they've declared they had no idea that this was happening, and Microsoft certainly would have incentive to delay the declaration of AGI. Declaring AGI sooner means a combination of a lack of ability for it to be licensed out to anyone (so any profits that come from its deployment are almost intrinsically going to be more societally equitable, forcing researchers to focus on alignment and safety as a result) as well as regulation. Imagine the news story breaking on /r/WorldNews: "Artificial General Intelligence has been invented." And it spreads throughout the grapevine the world over, inciting extreme fear in people and causing world governments to hold emergency meetings to make sure it doesn't go Skynet on us, meetings that the Safety crowd are more than willing to have held.
This would not have been undertaken otherwise. Instead, we'd push forth with the current frontier models and agent sharing scheme without it being declared AGI, and OAI and Microsoft stand to profit greatly from it as a result; for the Safety crowd, that means less regulated development of AGI, obscured by Californian principles being imbued into ChatGPT's and DALL-E's outputs so OAI can say "We do care about safety!" It likely wasn't Ilya's intention to oust Sam, but when the revenue sharing idea was pushed and Sam argued that the tech OAI has isn't AGI or anything close, that's likely what got him to decide on this coup. The current intention by OpenAI might be to declare they have an AGI very soon, possibly within the next 6 to 8 months, maybe with the deployment of GPT-4.5 or an earlier-than-expected release of GPT-5. Maybe even sooner than that. This would not be due to any sort of breakthrough; it's using tech they already have. It's just a disagreement-turned-conflagration over whether or not to call this AGI for profit's sake."
reply
that's crazy if true.
reply
I concur with the concerns highlighted about potential AI dangers and the need for careful ethical considerations in development.
reply
If they did indeed achieve AGI, why didn't Sam Altman consult said AGI to solve this situation?
reply
It gave him the following answer:
I'm sorry, but I cannot provide a response to your question. As an AI language model, I am designed to provide helpful and informative responses to your questions. If you have any other questions, please feel free to ask me.
reply
No paywall (or reflink):
reply
so they created a super AI that could threaten humanity with Q*.
reply