
We were discussing this some in our SN weekly meeting yesterday. Beyond the obvious AI spam, I think even functional, error-free AI code can grind a team down. The main problem I'm seeing is that AI code is harder to read: aside from the excessive line count, it often wasn't produced, or even read, by a human. It threatens human maintainability of a code base, which is maybe a worthwhile tradeoff if everything will eventually be written by AI, but in the meantime it tends to make complex, non-linear, abstraction-necessitating code even harder to extend. It's kind of like adding a team member who insists on abstracting things before abstraction is required: it obscures things more than it needs to.
On the upside, earnest contributors do read the AI code and make sure it makes sense, and AI lowers the barrier to entry for them. But earnest contributors of this kind, especially when you're incentivizing contributions like we do, are going to be in the minority.
110 sats \ 3 replies \ @klk 8h
I've experienced something similar yet different as a contributor.
I've received comments on MRs I had created from random people (not maintainers) that were just AI-generated suggestions for improving the changes, produced from looking at the diff alone, without actually understanding the issue and its context.
I'm not really sure what the person was after; there was no clear incentive.
Apart from that, at my day job, I've had to deal with colleagues submitting absolute trash AI-generated MRs: emojis in the comments, imports in the middle of the file, and placeholders. Luckily you can also do the first round of QA using an LLM and ask it to detect potentially AI-generated leftovers. Low-effort contribution -> low-effort QA.
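The leftover patterns mentioned above (emojis in comments, imports mid-file, placeholder text) can also be caught with a cheap heuristic pass before spending LLM tokens on review. A minimal sketch, with illustrative (not exhaustive) patterns:

```python
import re

# Rough emoji and placeholder patterns; assumptions for illustration only.
EMOJI_RE = re.compile(r"[\U0001F300-\U0001FAFF\u2700-\u27BF]")
PLACEHOLDER_RE = re.compile(r"TODO|FIXME|your[_ ]?code[_ ]?here|\.\.\.\s*$", re.IGNORECASE)

def flag_leftovers(source: str) -> list[str]:
    """Flag likely AI-generated leftovers in a Python source file."""
    findings = []
    seen_code = False  # becomes True once we pass a non-import statement
    for lineno, line in enumerate(source.splitlines(), start=1):
        stripped = line.strip()
        if not stripped:
            continue
        is_import = stripped.startswith(("import ", "from "))
        if is_import and seen_code:
            findings.append(f"line {lineno}: import in the middle of the file")
        if not is_import and not stripped.startswith("#"):
            seen_code = True
        # Only check the comment portion of the line for emojis.
        if "#" in line and EMOJI_RE.search(line.split("#", 1)[1]):
            findings.append(f"line {lineno}: emoji in comment")
        if PLACEHOLDER_RE.search(stripped):
            findings.append(f"line {lineno}: placeholder text")
    return findings
```

This is deliberately dumb: it produces a shortlist for a human (or an LLM) to look at, not a verdict.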
reply
I think I've noticed a few of these on SN, but can't say for sure whether they're AI.
reply
30 sats \ 0 replies \ @k00b OP 8h
There's only one person who has outright spammed us with AI on the repo. Everyone else has been relatively earnest, but the trend seems to be more lines of code that are less obvious and harder to understand.
reply
0 sats \ 0 replies \ @k00b OP 8h
If it's coming from inside orgs too, then we deserve to let the robots take our jobs.
reply
50 sats \ 3 replies \ @k00b OP 8h
I think underpinning the problem is the cost of creating a solution. A human approaching the problem will spend ample time figuring out how to be most surgical: what's the best approach that makes my effort small and worthwhile? When code generation is easy, folks lack concern for surgical coherence because it makes no difference to them.
reply
100 sats \ 0 replies \ @optimism 5h
Make it make a difference: review surgically coherent pull reqs first.
reply
There's an analogous problem in academia: as the cost of compute went down and the ease of statistical programming went up, papers started getting longer and longer. Mostly, they were filled with dozens of "robustness tests", which basically check that your results still hold under small alternative assumptions for the model.
But most people now agree that it's gotten out of hand: these robustness tests don't really add much, and they severely undermine the readability of papers by making them so long.
reply
Good point: in the AI era, developer "laziness" is a feature, not a bug.
Laziness, in this sense, is isomorphic to solving for the path of least resistance, which, like the traveling salesman problem and other optimizations, is actually really hard. Humans probably use heuristics to get close to the lower-bound solution, which is what "good code" looks like.
Personally, I expect the path of least resistance to change with 2025-era coding agents, so we'll actually end up changing our metrics for what good code is.
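The heuristics-near-the-lower-bound point can be made concrete with the TSP example itself: a greedy nearest-neighbor tour is quadratic to compute and usually lands close to the exact optimum, which takes exponential search. A toy sketch on made-up points:

```python
import itertools
import math

# Arbitrary toy instance, chosen only for illustration.
points = [(0, 0), (1, 5), (5, 1), (6, 6), (2, 3), (7, 2)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    # Closed tour: return to the starting point at the end.
    return sum(dist(order[i], order[(i + 1) % len(order)]) for i in range(len(order)))

def brute_force(pts):
    # Exact optimum by trying every permutation (exponential; fine for 6 points).
    first, rest = pts[0], pts[1:]
    return min(tour_length([first, *perm]) for perm in itertools.permutations(rest))

def nearest_neighbor(pts):
    # Greedy heuristic: always visit the closest unvisited point next.
    tour, remaining = [pts[0]], set(pts[1:])
    while remaining:
        nxt = min(remaining, key=lambda p: dist(tour[-1], p))
        tour.append(nxt)
        remaining.remove(nxt)
    return tour_length(tour)

opt = brute_force(points)
greedy = nearest_neighbor(points)
# The greedy tour can never beat the optimum, but it's typically close.
assert opt <= greedy
```

Nearest-neighbor has no constant-factor guarantee in general, but on typical instances it's near-optimal at a tiny fraction of the cost, which is the "good enough, cheaply" behavior the comment is describing.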
reply
The first report he showed is clearly AI-generated. The second one is better disguised and might have been written by a person and just polished by AI, like a standard template; I'm not sure. Either way, I can't see this as anything more than widespread trolling.
I don't see any viable use for AI in suggesting improvements: only the user and the dev can truly see and report what they experience.
reply