I said yesterday that I'd share an example of LLM arguing/pipelining.
This is a simple demo of a 3-step pipeline: answer -> formulate questions based on the answer -> improve the answer based on the questions. It basically uses the same technique as "reasoning" models: by generating a lot of additional context, the forward text prediction gets influenced.
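A minimal runnable sketch of that 3-step flow. `complete` is a stub standing in for whatever real LLM call you use (local model, API, etc.); only the pipeline shape is the point:

```python
# Minimal 3-step pipeline sketch. `complete` is a stub standing in for a real
# LLM call (local model, API, whatever you use) so the flow itself is runnable.
def complete(prompt: str) -> str:
    return f"[model output for: {prompt[:40]}]"

def pipeline(question: str) -> str:
    # Step 1: draft an answer.
    answer = complete(f"Answer this: {question}")
    # Step 2: formulate questions that challenge the draft.
    critique = complete(f"List questions that challenge this answer: {answer}")
    # Step 3: improve the answer, feeding the critique back as extra context --
    # the same trick "reasoning" models use: more context steers the prediction.
    return complete(
        f"Question: {question}\nDraft: {answer}\nCritique: {critique}\n"
        "Write an improved answer."
    )

print(pipeline("Why is the sky blue?"))
```

Swap the stub for a real completion call and each step's output becomes the next step's context.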
I know this doesn't answer your bounty question but I thought it was good to touch on this.
The reason AIs have problems here is that they apply windowing to images to fit them into their small input resolutions (think 512x512). Perhaps there's a way to resize the image before analysis, have the AI define a mask, and then recalculate that mask to the original dimensions and apply it.
I don't know how to verify this.
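To make the resize-mask-rescale idea concrete, here's a pure-Python sketch. The 8x8 image, the 4x4 "model resolution", and the mask the model supposedly returns are all made up for illustration:

```python
# Pure-Python sketch; the 8x8 image, the 4x4 "model resolution", and the mask
# the model returns are all made up for illustration.
def nearest_resize(grid, new_h, new_w):
    """Nearest-neighbor resize of a 2D list to new_h x new_w."""
    h, w = len(grid), len(grid[0])
    return [[grid[i * h // new_h][j * w // new_w] for j in range(new_w)]
            for i in range(new_h)]

orig_h, orig_w = 8, 8
image = [[(i + j) % 2 for j in range(orig_w)] for i in range(orig_h)]

# 1. Resize the image down to what the model can handle.
small = nearest_resize(image, 4, 4)

# 2. Pretend the model analyzed `small` and returned a low-res mask
#    (here: top half selected).
small_mask = [[1 if i < 2 else 0 for _ in range(4)] for i in range(4)]

# 3. Recalculate the mask back to the original dimensions and it can now be
#    applied to the full-resolution image.
full_mask = nearest_resize(small_mask, orig_h, orig_w)
```

A real version would use proper interpolation for the downscale, but the round trip is the same.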
- Run through each claim and extract named entities.
- Spy on each named entity. Infiltrate their lives. Become their confidante. But do not get brainwashed. Keep records, take pictures, record conversations.
- Be a good spy and let the rest of us know what you find, including all your evidence. Don't hold back.
Right, so you want to measure productivity/output that is controlled by a particular government. It's a bit circuitous, but it makes sense if there's no direct data.
This is probably why the spooks' factbook lists this: it shows you which regimes you want to control. Real-life Game of Thrones.
Interesting.
Honest question though: why would you rank GDP at PPP at the national level, without normalizing per capita? What does that mean in practical terms? What are we comparing?
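To illustrate what flips when you normalize, here are two hypothetical countries with made-up numbers: the big-population economy tops the total GDP (PPP) ranking while the smaller one is far richer per person:

```python
# Made-up numbers for two hypothetical countries, showing the rankings flip.
countries = {
    "A": {"gdp_ppp": 30_000e9, "population": 1_400e6},  # big, populous
    "B": {"gdp_ppp": 5_000e9, "population": 80e6},      # smaller, richer per head
}
for name, c in countries.items():
    per_capita = c["gdp_ppp"] / c["population"]
    print(f"{name}: total={c['gdp_ppp']:.3e}  per_capita={per_capita:,.0f}")
# A wins on total GDP (PPP); B wins per capita.
```

So the national-level ranking measures the size of the economy a government controls, not how well-off its people are.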
The bad news, as always, is that I don't know what the best fix for this is.
Fine-grained access tokens allow you to limit repositories to an explicit set. But that's not really enough, because it lacks per-repo settings. Every third-party tool you give an access token to is potentially malicious, even when it's open source and you only run it locally, like `gh`. Remember supply chain attacks: can you really trust anyone else's software?

The most secure way of using third-party apps on GitHub that I know of (other than "don't") is:
- Create a specific account for each linked application and lock it down using RBAC permissions in each repo's settings, and then
- Tightly manage access tokens for that account, with short expiry.
Letting tools work under your own account is rather risky anyway, especially since GH does awful things like PGP-signing merge commits with their key on your behalf, which then show as "verified".
Yes we're aligned, I 100% agree on all points you make here.
The scary-ish part? After a year or so of intense lmao at people trying to do pull requests with AI-coded crap, I may today be giving the ACK on the first AI-written contribution ever to one of the o/s repos I maintain (not written by me; I'm just reviewing it, and testing it 20x to make absolutely sure). The caveat is that the repo in question is a Python module, and although it could use a follow-up pull req for a little more cleanup, the job was done, because it was a small change. That wouldn't fly for C++ or even Golang, but for Python, LLMs seem tuned enough that they can actually deliver really small things (things that would have taken me an hour or so to fix manually, but still).

Disclaimer: I don't think that the "21 lessons" outcome is the most likely; it's just the most reasonable dystopia I've come across, so I'm building this narrative on the off-chance that it comes true.
Do you feel like you have a good grasp on how LLM's work?
Pretty much. The chatbots are basically forward text prediction, as once upon a time invented for Google's search autocomplete. You can see it happen when you query a more ancient version that isn't heavily consumer-tuned, like the original LLaMA, or one of Google's `-it` versions of Gemma. Most of the stuff that brings these "great leaps" in chatbot experience is people writing system prompts and filters. Or did you think Google did a whole training run from scratch to stop the search assistant from literally quoting Reddit trolling? They just tuned some weights, and if you look at the recent Claude system prompt (#990468) you see that this is still where the "magic" lives: text instruct, lol.

Widespread unemployment leads to increased crime and violence. It's a pretty bleak future if that happens.
Which is what - if I remember correctly - moved Elon, back before he was flirting with Trump, to never shut up about Bernie-style UBI as a solution in case AI had a real breakthrough. But I don't really think that will work past the initial phase, even though that phase may happen: "tax the AI, they took our jobs." They should get to their Elysium before the guns come out - SpaceX better hurry up.
Personally I have been getting my head around LLMs for a while now and the more I learn the less magical they seem.
There's no magic, just emulation. But try running a small reasoning model like `qwen3:8b` or, if your laptop is more modern than mine, `:32b`, locally and see how rapidly the emulation has evolved post-DeepSeek V3. The small models may still loop, or get stuck saying things that relate to basically nothing, but this is like a 10x vs llama2, which is a year old.

They are losing money like crazy currently and if the fiat money / debt / venture capital dries up we have another dot com level crash on our hands.
Yes. This used to worry me, but not anymore. Their loss is our gain at this point because closed platforms like OpenAI are the ultimate data-hoarding enemy: they ingest tons of society's data without compensation but in return don't share back. So yeah, I'd recommend anyone to be careful when buying more of those nvidia stonks because some of this dystopian outcome is priced in and there's a good chance that it won't materialize.
LLMs have improved a lot since I first tried them out but they still have a long way to go.
Did you try pipelines, either between generic LLMs, retrained "experts", or even with NLP like spaCy? My "hobby" is making AI reason with AI and see who convinces who - I'll publish a demo today or tomorrow.
The LLMs will never be sentient though; they don't have feelings, no inner drive, no morality. The dystopia isn't SkyNet, it's me using one of these massive datacenters and paying Sam my seed money for the AI credz for my next startup, instead of paying 10 people salary. Because that's me making Sam richer and society as a whole poorer.
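The "AI reasoning with AI" hobby mentioned above boils down to a simple loop; here's a stubbed sketch where `agent_a`, `agent_b`, and `judge` are placeholders for real model calls:

```python
# Two stubbed "agents" argue for a fixed number of rounds, then a stubbed
# judge picks a side. agent_a / agent_b / judge are placeholders for real
# model calls; only the loop structure is the point.
def agent_a(topic, history):
    return f"A argues for {topic} (turn {len(history)})"

def agent_b(topic, history):
    return f"B argues against {topic} (turn {len(history)})"

def judge(history):
    # Stub: a real judge model would read the whole transcript.
    return "A" if len(history) % 2 == 0 else "B"

def debate(topic, rounds=3):
    history = []
    for _ in range(rounds):
        history.append(agent_a(topic, history))
        history.append(agent_b(topic, history))
    return judge(history), history

winner, transcript = debate("open-source AI")
print(winner, len(transcript))  # prints: A 6
```

With real models behind the stubs, each agent sees the full transcript so far, which is what makes the arguing productive.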
Be proactive but don't take the black pill.
Agreed. The idea is about empowering people to harness the 100x, not sitting and waiting for UBI, because that isn't going to end well. I'm pro-work: work smarter and harder, and the world will be yours. The challenge is how we enable that without creating a dystopia, and I feel the answer sits with accessibility of the same tools that may cause the imbalance. I don't mind being an enablement-commie; it beats UBI.
I think the worst case (but potentially realistic) I've seen expressed for AI right now was in Harari's "21 lessons for the 21st century" that basically says AI will make human labor obsolete to a very large extent because we're being beaten on both manual and cognitive labor now. It also - to my memory - predicts that there will be irrelevant plebs and gazillionaires. I think that it's amazing that this was written in 2018 and that we can recognize some of the patterns described play out in real time.
Unless you are Sam A or Elon behind your SN nym, you may need to harness your own AI tech so that you don't become irrelevant. In the past I've personally always prided myself on relentless focus and productivity, but if I want to keep up, I'm going to need help now. With open-source tools alone, I won't be able to challenge companies that move 10000x faster than me. But if I can combine a 100x speedup on my end with the existing relentlessness, I may still be able to beat them sometimes.
I had an interesting conversation over dinner last night related to this: how can we provide contra power to us, the plebs. We don't need a data center full of GPUs each: a Mac mini or equivalent should be plenty. And for specific tasks we can instruct / retrain smaller but modern models that may run on inference tuned hardware - hopefully devices like iMX-8 get better and cheaper rapidly. I'm considering setting something up to test this. Probably in software development first, because low hanging fruit, like re-train an AI on a single repo and see what can be done.
Bottom line, if that worst case scenario plays out, I think that the problem isn't politics in general, or any figurehead. Soon no one will care about a bunch of unemployed people. The real issue is closed source AIs. Luckily we have Zuck (lol) and technically Google too (double lol) disrupting all the other overlords with opened up models we can sovereignly run. I hope that there will be many more open source models released; then each of us has a shot at remaining relevant.
What's more important to me is that the network itself stays decentralized and uncensorable
Indeed, but this is still an assumption, not proven. And it's censorship-resistant, not uncensorable. The difference being that at some point censorship can happen, but there are good incentives in place for it to be countered.
Also see Eric's chapter on Axiom of Resistance, which basically states that we cannot stop believing this, because if we did, we wouldn't be talking about bitcoin but something else, like fiat or CBDCs.
The interesting question to me is: are the masses actually thinking about this principle at all and have they ever? Or are they just greedy fiat clowns that will gladly throw themselves into the arms of totalitarianism as long as the overlord pretends to be on their side?
If I'm honest, I kind of fear it's the latter.
But taxes often fall short, so yeah... it borrows the difference from evil bankers that charge interest. Then borrows more to pay the interest. By the time you grow up we won't have paid off anything, and this will be our legacy.
She's like the government.
Wait wat?
The RSA encryption that secures your wallet
Which wallet is that? Show me the code plz.
Why would anyone use asymmetric RSA instead of a symmetric block cipher like AES to encrypt a wallet?
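For reference, symmetric encryption of a wallet file is a one-liner with standard tooling. A sketch with OpenSSL (the filenames, secret, and passphrase are made up, and a real setup would prompt for the passphrase rather than put it on the command line):

```shell
# Create a stand-in "wallet" secret (hypothetical content).
printf 'xprv-example-secret' > wallet.dat

# Encrypt with AES-256-CBC, key derived from the passphrase via PBKDF2.
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -in wallet.dat -out wallet.enc -pass pass:correct-horse

# Decrypt to verify the round trip.
openssl enc -d -aes-256-cbc -pbkdf2 \
  -in wallet.enc -pass pass:correct-horse
```

No key pair anywhere; a passphrase-derived symmetric key is all a wallet file needs, which is why the "RSA secures your wallet" claim smells off.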
There's an imbalance where the post-Brexit remainder, severely hurting from reduced libertarian influence, tries to keep the former momentum without a good check on markets. Like a leg got amputated during the race and you're still running as if you're going to win.
Brexit thus far seems a lose-lose-lose.
Yeah. A ToS is worthless if they get hacked, or if they break it anyway and go to jail SBF-style. They definitely will not have the budget to compensate me the 5k BTC my health information is worth to me.
By my judgment, it's not worth it for them to take the risk really, and therefore, if I ever get time, I shall help protect them from this inevitable outcome by completely eradicating all data sharing in their software, because they cannot eff up if they have none of my data. Win-win.
Normies may not care. Politicians may not care. Many journalists may not care.