
Judge presents the Fermi Paradox of AI-assisted coding:
If so many developers are so extraordinarily productive using these tools, where is the flood of shovelware? We should be seeing apps of all shapes and sizes, video games, new websites, mobile apps, software-as-a-service apps — we should be drowning in choice. We should be in the middle of an indie software revolution. We should be seeing 10,000 Tetris clones on Steam.
The most interesting thing about these charts is what they’re not showing. They’re not showing a sudden spike or hockey-stick line of growth. They’re flat at best. There’s no shovelware surge. There’s no sudden indie boom occurring post-2022/2023. You could not tell looking at these charts when AI-assisted coding became widely adopted. The core premise is flawed. Nobody is shipping more than before.
He did a lot of this research himself, and his conclusion:
This whole thing is bullshit.
Admittedly, I'm not a developer, so I'm not sure I'm ready to go down such a negative path, but it's nice to hear a counter to the pressures of AI adoption.
I wonder what @k00b thinks
I don't think I agree with this approach. You can't measure the impact of AI from macro trends like this; too many other factors could be at play. There's also a lag between the introduction of a technology and its adoption, and then another lag between adoption and its showing up in shipped products.
Lastly, I'm very wary of telling people that their perceptions are wrong. If someone who's normally lucid perceives something to be true, yet the data says it's wrong, I'm more inclined to re-interpret my data than to say the person was wrong. (This only applies to perceptions about their own experiences, like how fast I code, not to perceptions about the outside world, like the effects of capitalism.) So, with that said, if developers perceive themselves to be more productive with AI, but the data says otherwise, my first instinct is to question how the data was collected, or how we should actually interpret it.
From my personal experience, AI doesn't accelerate me much on tasks I'm already very competent in. Maybe a 10% boost from autocomplete, minus 5% for the times it's wrong. But AI accelerates me immensely when learning a new technology from scratch, or getting to that first prototype of what I'm trying to build.
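(Back-of-the-envelope, using my own rough framing rather than anything measured: a 10% speedup discounted by a 5% correction drag nets out to roughly 1.10 × 0.95 ≈ 1.045, i.e. about a 4–5% gain. "Not much" indeed.)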
reply
102 sats \ 0 replies \ @k00b 55m
My intuition mirrors yours. With technology, I trust people's sense of things unless I can point to something distorting their senses.
I'm seeing people program toy applications without understanding what they're doing. And I'm hearing from experienced, junior-to-mid-ish programmers that they are full-time vibing; though, until I see the output, it might be wishful thinking looking to quench imposter syndrome (I don't need to be the master programmer I pretend to be; I can just be an LLM wizard and arrive at the king's table as I am destined).
I do think there are things distorting senses: VC money spent trying to justify itself, and non-programmers (plus weak/lazy/ill-suited programmers) relieved they can achieve results without the aptitude/dopamine/fixation/skills. I also think mid and lower-tier programmers probably struggle to review LLM output well and overestimate LLMs' abilities.
Regardless of all that, the trend is strong afaict - LLMs are getting better at programming pretty fast.
For me personally, I haven't experimented much due to the context problem. Most of my programming tasks lately have been ocean-sized problems rather than pond or even lake-sized. But when I have a pond-sized problem, I use LLMs. For lake-sized ones, I might use LLMs for pre-code ideation.
My hope is to spend a month full-time vibing before the year is over and see how my opinion changes.
reply
I’ve been pretty confident that this initial AI boom will be another tech bubble.
That doesn’t mean it won’t eventually meet and exceed the hype. It just won’t do it immediately.
reply
I'm not sure. I don't want to be too trusting, but I still vividly remember when the first AI-generated images came out with all the fingers messed up, and I said to myself, "sure, but I'll always be able to tell, just look at the fingers!" It took them like six months to fix that problem.
I can theorize all I like about LLMs being "word calculators" (Allen Farrington's excellent term), but then I use the darn things, and I find them helpful -- and the experience is so darn hard to pin down. I know it's not thinking, but how much better does it have to get before I won't be able to tell the difference? And at that point, will it matter if it is not thinking?
reply
121 sats \ 0 replies \ @optimism 19m
it took them like six months to fix that problem
This is key.
I think @Undisciplined is right that it's a bubble, and I 100% believe that LLM coding underdelivers when measured against the promises originating from VC banter. It does for me personally in my experiments, and I have yet to see someone propose a close-to-acceptable LLM-coded pull request on any of the repos I maintain. I've had one AI-generated 3-liner on a Python repo that was OK to let through. In 8 months or so.
Here's how I currently expect this to go: businesses prematurely adopt AI and change hiring decisions, classic FOMO. There will absolutely be significant problems coming from this. Many businesses will fail because they mess up, and others will solve the problems and thrive. Humans ultimately always solve problems if they have to.
Bottom line, this is what disruption does: there will be losers, but there will be progress.
reply
I have no doubt that it will progress quickly and be highly productivity-enhancing. So was the internet 25 years ago.
My expectation is more that it will be misapplied and overhyped by people taking advantage of investors’ lack of understanding about what reasonable expectations are.
reply
You cannot trust AI. So, if you build an app and you don't know how it works, you cannot release it. Therefore, you spend time reading machine-written code that often doesn't make a lot of sense, which is similar to reading code written by inexperienced programmers. Hence, the only thing cut out of the equation is inexperienced programmers, who have been replaced by something that cannot reliably explain what it did. That, it turns out, is going to take up more time, not less.
reply