200 sats \ 5 replies \ @kepford OP 29 May \ parent \ on: Elon Musk Tried to Block Sam Altman’s Big AI Deal in the Middle East Politics_And_Law
Do you feel like you have a good grasp on how LLMs work? If not, I'd highly recommend you check out this video. It's not magic, and I think it's not going to happen as quickly as many think, but yes, plenty of jobs will be obsoleted. This has happened in the past, and the transition periods are rough.
I agree with you about open source software (AI included). I actually think the Chinese are probably gonna break the closed source stuff wide open. We started to see this with DeepSeek. Not for any morally good reason but because they do not believe in IP. I don't either.
Widespread unemployment leads to increased crime and violence. It's a pretty bleak future if that happens. That said, I don't think advanced tech that obsoletes jobs is a bad thing in principle. I believe humans will find things to do, as we have in the past.
Personally I have been getting my head around LLMs for a while now, and the more I learn the less magical they seem. The biggest question I have for them now is how soon they can actually become sustainable. They are losing money like crazy currently, and if the fiat money / debt / venture capital dries up, we have another dot-com-level crash on our hands. That would be my bet to happen before any AI revolution that leads to widespread job losses.
LLMs have improved a lot since I first tried them out, but they still have a long way to go. You need experts that actually know things, because they just make stuff up. They are remarkably good at guessing and get things correct a lot, but they can't be trusted. The thing is, neither can humans. This is why you need multiple humans to do work that is really bulletproof.
I'll end with this. Be proactive, but don't take the black pill. It's seductive because it lets you pretend you don't have agency, that you can just be lazy because working doesn't matter. Work still matters, and we have far more control over our destiny than we realize.
Disclaimer: I don't think that the "21 lessons" outcome is the most likely, it's just the most reasonable dystopia I've come across, so I'm building this narrative on the off-chance that it comes true.
Do you feel like you have a good grasp on how LLM's work?
Pretty much. The chatbots are basically forward text prediction, as once upon a time invented for Google's search autocomplete. You can see it happen when you query a more ancient version that isn't heavily consumer-tuned, like the original LLaMA, or one of Google's -it versions of Gemma. Most of what brings these "great leaps" in chatbot experience is people writing system prompts and filters. Or did you think Google did a whole training run from scratch to stop the search assistant from literally quoting reddit trolling? They just tuned some weights, and if you look at the recent Claude system prompt (#990468) you see that this is still where the "magic" lives: text instructions, lol.

Widespread unemployment leads to increased crime and violence. It's a pretty bleak future if that happens.
Which is what, if I remember correctly, moved Elon (back before he was flirting with Trump) to never shut up about Bernie-style UBI as the solution in case AI had a real breakthrough. But I don't really think that will work past the initial phase, even though that phase may happen: "Tax the AI, they took our jobs." They should get to their Elysium before the guns come out - SpaceX had better hurry up.
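As an aside, the "forward text prediction" point above is easy to demo with a toy. Here's a hypothetical bigram autocomplete in Python - nothing like a real transformer, but the same predict-the-next-word loop that search boxes used:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word tends to follow which word."""
    model = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def autocomplete(model, prompt, n=3):
    """Greedily predict the next n words, search-autocomplete style."""
    words = prompt.split()
    for _ in range(n):
        followers = model.get(words[-1])
        if not followers:
            break  # never seen this word followed by anything
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

model = train_bigrams("the cat sat on the mat and the cat ran")
print(autocomplete(model, "the cat", n=2))
```

Everything a chatbot adds on top of this loop - attention over long context, instruction tuning, system prompts - is about making the next-token guess better conditioned, not about changing the basic mechanism.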
Personally I have been getting my head around LLMs for a while now and the more I learn the less magical they seem.
There's no magic, just emulation. But try running a small reasoning model like qwen3:8b (or, if your laptop is more modern than mine, qwen3:32b) locally and see how rapidly the emulation has evolved post-DeepSeek V3. The small models may still loop, or get stuck saying things that relate to technically nothing, but this is like a 10x over llama2, which is a year old.

They are losing money like crazy currently and if the fiat money / debt / venture capital dries up we have another dot-com-level crash on our hands.
Yes. This used to worry me, but not anymore. Their loss is our gain at this point, because closed platforms like OpenAI are the ultimate data-hoarding enemy: they ingest tons of society's data without compensation but don't share back in return. So yeah, I'd recommend being careful when buying more of those nvidia stonks, because some of this dystopian outcome is priced in and there's a good chance it won't materialize.
LLMs have improved a lot since I first tried them out but they still have a long way to go.
Did you try pipelines, either between generic LLMs, retrained "experts", or even with NLP like spaCy? My "hobby" is making AI reason with AI and seeing who convinces whom - I'll publish a demo today or tomorrow.
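To make the "AI arguing with AI" idea concrete, here's a minimal sketch of two models taking turns critiquing each other. This is not the actual demo: the two lambda "models" at the bottom are stubs standing in for real LLM calls (local or API), and `debate` is a hypothetical name.

```python
def debate(agent_a, agent_b, topic, rounds=2):
    """Let two model callables (prompt -> text) take turns on a topic.

    In a real pipeline each agent would wrap an actual LLM call,
    e.g. a local qwen3 and a hosted model.
    """
    transcript = [f"Topic: {topic}"]
    claim = agent_a(f"State your position on: {topic}")
    transcript.append(f"A: {claim}")
    for _ in range(rounds):
        rebuttal = agent_b(f"Rebut this position: {claim}")
        transcript.append(f"B: {rebuttal}")
        claim = agent_a(f"Defend or revise given this rebuttal: {rebuttal}")
        transcript.append(f"A: {claim}")
    return transcript

# Stub "models" so the sketch runs without any backend:
optimist = lambda prompt: "LLM pipelines will boost productivity."
skeptic = lambda prompt: "Only if the output is verified by experts."

for line in debate(optimist, skeptic, "LLM pipelines", rounds=1):
    print(line)
```

The interesting part is judging who "convinced" whom, which you'd do either by eyeballing the transcript or by feeding it to a third judge model.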
The LLMs will never be sentient though; they have no feelings, no inner drive, no morality. The dystopia isn't SkyNet, it's me using one of these massive datacenters and paying Sam my seed money for AI credz for my next startup, instead of paying 10 people's salaries. Because that's me making Sam richer and society as a whole poorer.
Be proactive but don't take the black pill.
Agreed. The idea is about empowering people to harness the 100x, and not sit and wait for UBI because that isn't going to end well. I'm pro-work: work smarter and harder, and the world will be yours. The challenge is how do we enable that without creating a dystopia, and I feel that the answer sits with accessibility of the same tools that may cause the imbalance. I don't mind being an enablement-commie; it beats UBI.
Did you try pipelines, either between generic LLMs, retrained "experts", or even with NLP like spaCy? My "hobby" is making AI reason with AI and seeing who convinces whom - I'll publish a demo today or tomorrow.
I haven't.
I think we are in very tight alignment. I don't think machines can be sentient; at least I've seen zero evidence that this is possible. When people disagree on this, it usually comes down to their definitions. People are the threat, as you say. The same is true of most tech.
I'm a firm believer in history being a great tool for understanding what will happen in the future. It doesn't repeat, but it rhymes.
The other thing that you likely get, but I find a ton of people do not seem to get, is this: Sam Altman and those like him are incentivized to oversell their stuff. He's a pitchman. That doesn't mean everything he says is a lie, but most people that fear AI seem to take his pitch at face value.
Yes we're aligned, I 100% agree on all points you make here.
The scary-ish part? I think that after a year or so of intense lmao at people trying to submit pull requests with AI-coded crap, I may be giving the ACK today on the first AI-written contribution ever to one of the o/s repos I maintain (not written by me; I'm just reviewing it, and testing it 20x to make absolutely sure). The caveat is that the repo in question is a Python module, and although it could use a follow-up pull request for a little more cleanup, the job was done, because it was a small change. This wouldn't fly in C++ or even golang, but for Python it seems LLMs are starting to be tuned well enough that they can actually deliver really small things (things that would have taken me an hour or so to fix manually, but still).
This is my experience as well.
I said yesterday that I'd share an example of LLM arguing/pipelining.
This is a simple demo of a 3-step pipeline: answer -> formulate questions based on the answer -> improve the answer based on the questions. It basically uses the same technique as "reasoning" models: by generating a lot of additional context, the forward text prediction gets influenced.
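A minimal sketch of what such a 3-step pipeline could look like - not the actual demo code; `llm` and `refine` are hypothetical names, and the lambda at the bottom is an echo stub standing in for a real model call:

```python
def refine(llm, prompt):
    """answer -> questions about the answer -> improved answer.

    Each step feeds the previous output back in, piling up context
    so the final forward-prediction pass has more to condition on.
    """
    answer = llm(f"Answer concisely: {prompt}")
    questions = llm(f"List weaknesses or open questions in this answer:\n{answer}")
    improved = llm(
        f"Original question: {prompt}\n"
        f"Draft answer: {answer}\n"
        f"Critique: {questions}\n"
        "Write an improved answer addressing the critique."
    )
    return improved

# Echo stub so the sketch runs standalone; swap in a real model call.
result = refine(lambda p: f"[model output for: {p[:30]}...]", "What are LLMs?")
print(result)
```

The final prompt contains the question, the draft, and the critique, which is exactly the "generate extra context to steer the prediction" trick that reasoning models bake into a single pass.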