@bounty_hunter on: How are you balancing newcomer training with drive-by LLM solution filtering?
I think SN has a standout system for FOSS PRs that most other projects lack:
- Disciplined issue-tracker maintenance: this takes a lot of time to do right, and OSS devs generally want to write code, not "be a PM", so on most projects the maintainer's priorities become illegible to outsiders looking to get involved.
- Comprehensive dev environment: "works on my machine" friction adds a lot of cost to collaboration. NB: I think this is what's holding back a lot of "drive-by contributions"; I don't think there's any AI coding system that can handle the Docker environment without a lot of expert customization.
- Consistent response time and courteousness: on other projects you'll often see hostility to even earnest contributions, or it's a side project that got big while the maintainer has a full-time job, so PRs sit unreviewed for weeks or get rejected for petty reasons.
- Not-too-large, not-too-small rewards and tasks: ONE BIG REWARD bounties (or hackathons) are good for marketing and attract lots of contributors, but the median submission quality will be quite low.
- The contributors are almost always users of the product: whereas many other projects are specialized software that each person uses differently, in private, on their own computer, e.g. the MoviePy package.
Overall this seems to hit the sweet spot of being welcoming while still soliciting useful work. But the cost of duplicating these features for other projects also seems high.
Some other incentivized contributions I've come across that I think could be relatively AI-resistant are perf-based challenges like tinygrad bounties and Kaggle competitions, where a test harness can be applied in a semi-automated way to check whether the contribution actually boosts accuracy or perf on some metric, and only then look into the code quality.
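To make the semi-automated part concrete, here's a rough sketch of that kind of metric gate (nothing SN, tinygrad, or Kaggle actually runs; `bench.py`, `run_benchmark`, and the 1% threshold are hypothetical placeholders): score the submission against the baseline, and only queue it for human review if the metric moves.

```python
# Hypothetical sketch: gate PR triage on a benchmark score before any human review.
import subprocess
import sys


def run_benchmark(repo_dir: str) -> float:
    """Run the project's benchmark entry point (assumed to be bench.py) in
    repo_dir and parse a single score from the last line of its output."""
    result = subprocess.run(
        [sys.executable, "bench.py"],
        cwd=repo_dir,
        capture_output=True,
        text=True,
        check=True,
    )
    return float(result.stdout.strip().splitlines()[-1])


def triage(baseline_dir: str, submission_dir: str, min_gain: float = 0.01) -> bool:
    """Return True only if the submission beats the baseline by at least
    min_gain (1% by default); only then does a human look at code quality."""
    baseline = run_benchmark(baseline_dir)
    candidate = run_benchmark(submission_dir)
    improved = candidate >= baseline * (1 + min_gain)
    print(f"baseline={baseline:.4f} candidate={candidate:.4f} improved={improved}")
    return improved


if __name__ == "__main__":
    # e.g. python triage.py ./main-checkout ./pr-checkout
    sys.exit(0 if triage(sys.argv[1], sys.argv[2]) else 1)
```

The point is that the expensive human step (code review, style, maintainability) only happens after the cheap automated step says the metric actually moved, which is exactly the property that makes these challenges harder to flood with low-effort LLM submissions.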