102 sats \ 0 replies \ @k00b 2h \ parent \ on: Where's the Shovelware? Why AI Coding Claims Don't Add Up - Mike Judge AI
My intuition mirrors yours. When it comes to technology, I trust people's sense of it unless I can point to something distorting their senses.
I'm seeing people program toy applications without understanding what they're doing. And I'm hearing from experienced, junior-to-mid-ish programmers that they're full-time vibing; though, until I see the output, it might be wishful thinking looking to quench imposter syndrome (I don't need to be the master programmer I pretend to be, I can just be an LLM wizard and arrive at the king's table as I am destined).
I do think there are things distorting people's senses: VC money spent trying to justify itself, and non-programmers (along with weak, lazy, or ill-suited programmers) relieved they can achieve results without the aptitude/dopamine/fixation/skills. I also think mid-level and weaker programmers probably struggle to review LLM output well and so overestimate LLMs' abilities.
Regardless of all that, the trend is strong afaict: LLMs are getting better at programming pretty fast.
For me personally, I haven't experimented much due to the context problem. Most of my programming tasks lately have been ocean-sized rather than pond-sized or even lake-sized. But when I have a pond-sized problem, I use LLMs; for lake-sized ones, I might use them for pre-code ideation.
My hope is to spend a month full-time vibing before the year is over and see how my opinion changes.