
Well-thought-out piece that doesn't buy the hype, but also doesn't dismiss it as having no value. Probably one of the more balanced ones I've seen on vibe coding, and well-written. Quoting a bit for context, but read the whole thing.
I think a lot of the more genuine 10x AI hype is coming from people who are simply in the honeymoon phase or haven't sat down to actually consider what 10x improvement means mathematically. I wouldn't be surprised to learn AI helps many engineers do certain tasks 20-50% faster, but the nature of software bottlenecks mean this doesn't translate to a 20% productivity increase and certainly not a 10x increase.
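The bottleneck argument above is essentially Amdahl's law: a big speedup on one slice of the work yields a much smaller overall gain. A rough sketch (the 30% fraction below is an illustrative assumption, not a figure from the quoted piece):

```python
def overall_speedup(task_fraction: float, task_speedup: float) -> float:
    """Amdahl's-law style estimate: only `task_fraction` of the total
    work gets faster by `task_speedup`x; the rest is unchanged."""
    return 1.0 / ((1.0 - task_fraction) + task_fraction / task_speedup)

# If AI makes 30% of an engineer's work 1.5x faster (i.e. "50% faster
# on certain tasks"), the overall gain is only about 11%:
print(f"{overall_speedup(0.30, 1.5):.2f}x")  # -> 1.11x
```

Even with a wildly generous assumption (AI makes half the work 10x faster), the overall speedup caps out below 2x, which is why "50% faster on some tasks" never compounds into 10x.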
121 sats \ 1 reply \ @fiatbad 21h
Yup. Upper management is already botching this. They're firing too many people, thinking AI is going to be their panacea. In reality, AI will save them 20% on production costs over the next few decades, as you say. It's massive, yes, but it won't offset the damage they're doing by laying off entire segments of their workforce.
In the short-term, they're going to be surprised by the way they can no longer deliver products on time after leaning too heavily into AI and offshoring.
In the meantime, developers like me will refuse to come back to work for dollars. If they aren't paying in Bitcoin, I will refuse them.
reply
Yes, I think this is key. I still earn US dollars at my day job, but that looks less attractive month by month. Eventually I will only get out of bed for Bitcoin. It is inevitable.
reply
I am operating at about 8.6x since May 2025.
It’s real, but I was already very productive to begin with, and reasonably knowledgeable.
reply
0 sats \ 1 reply \ @optimism 13h
8.6x is impressive. How did you measure it?
reply
Project completion rate vs previous abandonment rate. But mostly time to implement a task vs before.
I used to spend my days writing unit tests, integration tests, functional tests. Now I synth the unit tests and adjust the I/O to define the desired implementation. It's very effective.
I will say it’s a subjective measure that’s not really general or transferable. Most folks will have marginal gains. I “feel” about 10x more effective, but I’m sure others would disagree.
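The workflow described above (generating tests, then adjusting their expected I/O until they specify the implementation you want) might look something like this sketch; the function and test names are hypothetical, not from the commenter:

```python
# Hypothetical example of tests-as-spec: the expected outputs below are
# the artifact you hand-edit; the implementation (possibly LLM-generated)
# is then written or regenerated until the tests pass.

def slugify(title: str) -> str:
    """Candidate implementation, refined until the spec-tests pass."""
    return "-".join(title.lower().split())

def test_slugify_defines_desired_io():
    # Adjusting these expected values *is* the act of specifying behavior.
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Vibe   Coding ") == "vibe-coding"
```

The point is that the human effort moves from writing implementations to curating input/output pairs, which is cheap to review even when the implementation is machine-written.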
reply
deleted by author
Often they just waste time and tokens, going back and forth with themselves, not seeming to gain any deeper knowledge each time they fail.
This is the token-burning reality, which really sucks if you're paying by the token (or if your electricity is prohibitively expensive).
LLMs do not make rustc go faster.
Great point. Also, even if you run all your integration tests at once in a massively parallel environment, if none of them takes a couple of minutes you're probably not covering scenarios properly. So you (or this magic LLM) have to context switch. Eventually this won't scale to 10x unless you skip the intensive tests, and I don't think you can truly hit 10x from a non-LLM-optimized state.
Nice references in the article:
reply