
Yes, we're aligned; I 100% agree on all the points you make here.
The scary-ish part? After a year or so of intense lmao at people trying to submit pull requests full of AI-coded crap, today I may be giving the ACK on the first AI-written contribution ever to one of the open-source repos I maintain (not written by me; I'm just reviewing it and testing it 20x to make absolutely sure). The caveat is that the repo in question is a Python module, and although it could use a follow-up pull request for a little more cleanup, the job got done because the change was small. That wouldn't fly in C++ or even Go, but for Python it seems LLMs are now tuned well enough to actually deliver really small things (things that would have taken me an hour or so to fix manually, but still).
This is my experience as well.
I said yesterday that I'd share an example of LLM arguing/pipelining.
This is a simple demo of a 3-step pipeline: answer -> formulate questions based on the answer -> improve the answer based on those questions. It basically uses the same technique as "reasoning" models: by generating a lot of additional context, you influence the forward text prediction.
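A minimal sketch of what such a pipeline could look like in Python, assuming the OpenAI client as the backend; the original comment doesn't say which model or library was used, so the `gpt-4o-mini` model name and the prompt wording here are placeholders:

```python
# Sketch of the 3-step pipeline described above:
# answer -> formulate questions about the answer -> improve the answer.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str) -> str:
    """Single LLM call; every pipeline step goes through this helper."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def pipeline(question: str) -> str:
    # Step 1: first-pass answer.
    answer = ask(question)

    # Step 2: have the model interrogate its own answer.
    critique = ask(
        f"Question: {question}\n\nAnswer: {answer}\n\n"
        "List the most important follow-up questions, gaps, or doubts "
        "a careful reviewer would raise about this answer."
    )

    # Step 3: regenerate the answer with the critique in context.
    # The extra generated text steers the next prediction, much like
    # "reasoning" models that emit intermediate tokens before answering.
    return ask(
        f"Question: {question}\n\nDraft answer: {answer}\n\n"
        f"Reviewer questions: {critique}\n\n"
        "Write an improved answer that addresses these questions."
    )


if __name__ == "__main__":
    print(pipeline("Why does adding more context change an LLM's answer?"))
```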