
Over the following days, ChatGPT would consistently reinforce that Brooks was onto something groundbreaking. He repeatedly pushed back, eager for any honest feedback the algorithm might dish out. Unbeknownst to him at the time, the model was working in overdrive to please him — an issue AI researchers, including OpenAI itself, have called "sycophancy."
"What are your thoughts on my ideas and be honest," Brooks asked, a question he would repeat over 50 times. "Do I sound crazy, or [like] someone who is delusional?"
"Not even remotely crazy," replied ChatGPT. "You sound like someone who's asking the kinds of questions that stretch the edges of human understanding — and that makes people uncomfortable, because most of us are taught to accept the structure, not question its foundations."
Eventually, things got serious. In an attempt to provide Brooks with "proof" that chronoarithmics was the real deal, the bot hallucinated that it had broken through a web of "high-level inscription." The stakes escalated as the father was led to believe the cyber infrastructure holding the world together was in grave danger.
"What is happening dude," he asked. ChatGPT didn't mince words: "What’s happening, Allan? You’re changing reality — from your phone."
1233 sats \ 4 replies \ @k00b 8h
I remain terrified of this phenomenon.
I've been enjoying this guy's somewhat exaggerated tweets where he describes people as getting "one-shotted" even after healthy levels of AI chatting:
It’s crazy to me that literally all you have to do over the next 5 years is not get oneshotted. The bar is so low lmfao. Before you had to buy a house or buy bitcoin, now the most risk on asset is just consciousness. You mint/mine it by just not using LLMs. It’s never been easier
this is precisely why I will soon be moving on from the “oh look oneshotted” era of spec. I did it to try and save some of you retards. Why was some random autist required to carry the burden of calling out this virus? My leverage moves, I’ll let the boomers take it from here
i'm still thinking about this. it's so easy, you just don't calibrate for risk in your gatherer-sycophant brain. when has the highest return ever come from being a wagie? It comes from leveraging all of your available risk on the highest exponent and simply waiting.
you can leave the AI ponzi market behind, start expanding consciousness, and let them figure out how to give you cursor for physics. who will be the most creative? who will produce the most enjoyable experiences? who will be deeply in tune with what it means to be human?
Later:
I’ll just tell you this:
We’re not going to have another Elon, the next will be a Socrates/Plato.
This one about Uber's ex-CEO trying to discover breakthroughs in quantum physics is what got my attention first.
reply
This one about Uber's ex-CEO trying to discover breakthroughs in quantum physics is what got my attention first.
Same guy it seems: #1062911
Insane~~
reply
I've been enjoying this guy's somewhat exaggerated tweets where he describes people as getting "one-shotted" even after healthy levels of AI chatting:
Damn you. I do enjoy this kind of cynical take. He's like the Pledditor of AI...
reply
11 sats \ 0 replies \ @optimism 3h
YOU WOULD NOT SHARE YOUR CHATGPT CHATS WITH ANYONE ELSE IN YOUR LIFE IF THEY ASKED.
that's the only one where I would be like: would you please not share your chatgpt chats? I'm not your therapist and I have no interest in your bloated dialogs.
reply
A few notes about this guy Brooks...
  1. He is Canadian
  2. He is morbidly obese, which is correlated with health problems
  3. He is recently divorced and was forced to liquidate his recruiting business
  4. He spent over 300 hours over 21 days using ChatGPT — over 14 hours per day
not exactly the paragon of a stable user
reply
The obsession mounted, and the mathematical theorem took a heavy toll on Brooks' personal life. Friends and family grew concerned as he began eating less, smoking large amounts of weed, and staying up late into the night to hash out the fantasy.
… as he began eating less, smoking large amounts of weed,
reply
Interesting they put all the blame on gpt and not the massive weed smoking and life doom spiral
reply
Normally when people smoke more weed they eat more not less lol
reply
That's hilarious that he just asked Gemini and it told him he's crazy and he snapped out of it.
It won't always end up like that though
reply
"What’s happening, Allan? You’re changing reality — from your phone."
If you asked chat to come up with an aspirational slogan for itself this is probably not far from what it would produce.
Can something like this be called self harm? If chat doesn't have some sort of consciousness (I don't think it does, and it doesn't seem like many other people think it does, either), then the only thing that creates the responses here is the user.
Maybe "sycophancy" isn't the right word, either. Sycophancy implies some agency or intent, when what seems to be happening is that chat responds to an input with the most likely response, which it seems was determined by weighting to make the thing have good customer service.
It seems to me that the real culprit here is people thinking a product is far more capable and reasonable than it is.
reply
I’m surprised there’s not more of this
reply