
Someone in a bitcoin founder group chat, who wouldn't mind me naming them but whom I won't name, to demonstrate my capacity for prudence 💅, asked:
To those of you allowing open code contributions: How are you balancing newcomer training with drive-by LLM solution filtering? Some of these contributions serve as little more than a distraction.
My response, which sort of evades the practical part of the question:
We incentivize contributions, which should make it worse, and we try to make it really easy to get started (clone and run one command), yet real contributions still outnumber slop ones by a good margin. (We probably also benefit from having a frivolous product, and the incentives make FOSS PRs part of regular dev life, which might amortize any single distraction.)
I have a nearly positive take on AI contributions after getting quite a few: they're a source of development diversity that a closed source team might struggle to find, i.e., maybe we are underutilizing AI dev tools internally, and we might have gaps filled by cyborg contribs that'd otherwise be left unfilled.
That said, we have one contributor who stretches AI way beyond their ability to self-review, and it's annoying af. Yet, again, they occasionally surface an issue (a one-in-ten hit rate or so) that we hadn't considered, which makes wading through all their slop worth it. They are annoying and unskilled but earnest. Human, cyborg, or machine, earnestness is where I draw the line between distraction and not.
My answer to the practical part might be that we don't really balance it; instead, we just lower priority based on historical slop and earnestness ratios.
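To make that concrete, here's a minimal sketch of what that kind of priority heuristic could look like. Everything in it (names, thresholds, scoring) is hypothetical, not our actual implementation:

```python
# Hypothetical sketch of triaging reviews by a contributor's track record.
# All names and thresholds are illustrative, not how we actually do it.
from dataclasses import dataclass

@dataclass
class Contributor:
    merged_prs: int   # contributions that landed
    slop_prs: int     # PRs closed as unreviewable slop
    earnest: bool     # responds to review, tests their changes

def review_priority(c: Contributor) -> float:
    """Higher score = review sooner. Unknown contributors start neutral."""
    total = c.merged_prs + c.slop_prs
    if total == 0:
        return 0.5  # no history yet: give newcomers a fair shot
    hit_rate = c.merged_prs / total
    # earnestness puts a floor under even a one-in-ten hit rate
    return max(hit_rate, 0.1) if c.earnest else hit_rate * 0.5
```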
Would you answer differently?
177 sats \ 2 replies \ @optimism 5h
Overall a good answer, I think, though it really depends on what you're developing and in what language. I've accepted exactly one LLM contribution, but it was on a Python repo, after the author cleaned up all the effing emojis.
More annoying to me still are people who feed issues to Grok or GPT and then try to reap credit in issue comments. It's been happening more often since the recent releases, and people apparently believe that maintainers don't notice the slop, or that they went from zero to hero in 10 minutes using terminology they don't understand. People also get really upset if I say something about it, or when I even dare to ask "did you test this?". I'm getting even less liked than I already was, but whatevs.
What I don't really see anymore in my own projects is this part:
newcomer training
I've had zero newcomers since April or so, whereas normally I'd have 2-3 a month.
reply
It was surprising to me to see how many people submit PRs but say they haven't tested them. I try to test everything (as much as I can), even single-line code changes, and wouldn't imagine submitting something for review without having done so.
reply
144 sats \ 0 replies \ @optimism 3h
Took me years to get rid of the "LGTM" culture in reviews, which ultimately heightened review quality. Now the pressure is on the submitter, because no one wants the fight that follows when a PR they approved untested breaks shit.
It sucks that FOSS development brings so much stress and isn't always newcomer friendly, but if you have a massive installed base you have to deliver quality work, and avoidable debt on your main branch is going to hurt.
reply
I don't know how I'd answer since I've never maintained a FOSS project.
But what stands out to me is that bots clearly aren't at the point yet where they can independently contribute to projects without human supervision.
reply
44 sats \ 0 replies \ @k00b OP 6h
(Upon second reading, I'm realizing how important context is. High levels of self-deprecation and frustration with third parties are even more unseemly outside of close quarters.)
reply