Someone in a bitcoin founder group chat, who wouldn't mind me naming them but whom I won't name, to demonstrate my capacity for prudence 💅, asked:
To those of you allowing open code contributions: How are you balancing newcomer training with drive-by LLM solution filtering? Some of these contributions serve as little more than a distraction.
My response, which sort of evades the practical part of the question:
We incentivize contributions, which you'd expect to make the problem worse, and we try to make it really easy to get started (clone and run one command), yet real contributions still outnumber slop ones by a good margin. (We probably also benefit from having a frivolous product, and the incentives make FOSS PRs part of regular dev life, which might amortize any single distraction.)

I have a nearly positive take on AI contributions after getting quite a few: they're a source of development diversity that a closed-source team might struggle to find. That is, maybe we are underutilizing AI dev tools internally, and cyborg contributions might fill gaps that would otherwise be left unfilled.

That said, we have one contributor who stretches AI way beyond their ability to self-review, and it's annoying af. Yet, again, they occasionally surface an issue we hadn't considered (a one-in-ten hit rate or so) that makes wading through all their slop worth it. They are annoying and unskilled but earnest. Human, cyborg, or machine, earnestness is where I draw the line between distraction and not.
My answer to the practical part might be that we don't really balance it at all. Instead, we just deprioritize review based on a contributor's historical slop and earnestness ratios.
Would you answer differently?