This is my experience as well.
I said yesterday that I'd share an example of LLM arguing/pipelining.
This is a simple demo of a 3-step pipeline: answer -> formulate questions based on the answer -> improve the answer based on those questions. It uses essentially the same technique as "reasoning" models: generating a lot of additional context steers the subsequent text prediction.
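The 3-step loop above can be sketched in a few lines. This is a minimal illustration, not the linked demo itself: `complete` is a hypothetical stand-in for whatever LLM call you use (an API client, a local model), stubbed here so the flow is runnable.

```python
def complete(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion call.

    In a real pipeline this would hit an actual model; the stub
    just echoes so the control flow can be exercised.
    """
    return f"[model output for: {prompt[:40]}]"


def pipeline(question: str) -> str:
    # Step 1: first-pass answer.
    draft = complete(f"Answer the question:\n{question}")

    # Step 2: formulate probing questions based on the draft.
    critique = complete(
        f"List weaknesses or open questions in this answer:\n{draft}"
    )

    # Step 3: improve the answer with the accumulated context.
    # The extra text in the prompt is what steers the final
    # prediction, much like the long traces of "reasoning" models.
    return complete(
        f"Question: {question}\n"
        f"Draft answer: {draft}\n"
        f"Critique: {critique}\n"
        f"Write an improved answer."
    )
```

Each step just feeds the previous step's output back into the prompt; no model-side changes are needed.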
ACK'd the first AI-written contribution ever to one of the o/s repos I maintain today (not written by me; I'm just reviewing it, and testing it 20x to make absolutely sure). The caveat is that the repo in question is a Python module, and although it could use a follow-up pull request for a little more cleanup, the job was done, because it was a small change. This wouldn't fly for C++ or even Go, but for Python, LLMs seem tuned enough that they can actually deliver really small things (which would still have taken me an hour or so to fix manually).