
Did you try pipelines, either between generic LLMs, retrained "experts", or even with NLP tools like spaCy? My "hobby" is making AIs reason with each other and seeing who convinces whom - I'll publish a demo today or tomorrow.
I haven't.
I think we are in very tight alignment. I don't think machines can be sentient; at least I've seen zero evidence that it's possible. When people disagree on this, it usually comes down to their definitions. People are the threat, as you say. The same is true of most tech.
I'm a firm believer in history being a great tool for understanding what will happen in the future. It doesn't repeat exactly, but it rhymes.
The other thing that you likely get, but that a ton of people don't seem to, is this: Sam Altman and those like him are incentivized to oversell their stuff. He's a pitchman. That doesn't mean everything he says is a lie, but most people who fear AI seem to take his claims at face value.
Yes we're aligned, I 100% agree on all points you make here.
The scary-ish part? After a year or so of intense lmao at people trying to do pull requests with AI-coded crap, today I may be giving the ACK on the first AI-written contribution ever to one of the open-source repos I maintain (not written by me, I'm just reviewing it, and testing it 20x to make absolutely sure). The caveat is that the repo in question is a Python module, and although it could use a follow-up pull request for a little more cleanup, the job was done, because it was a small change. This wouldn't fly for C++ or even Go, but for Python it seems LLMs are starting to be tuned well enough to actually deliver really small things (things that would have taken me an hour or so to fix manually, but still).
This is my experience as well.
I said yesterday that I'd share an example of LLM arguing/pipelining.
This is a simple demo of a 3-step pipeline: answer -> formulate questions based on the answer -> improve the answer based on the questions. It basically uses the same technique as "reasoning" models: by generating a lot of additional context, the forward text prediction gets influenced.
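For anyone curious what those three steps look like in code, here's a minimal sketch of the idea, assuming the OpenAI Python client; the model name, prompts, and function names are my own placeholders, not the actual demo code:

```python
# Sketch of the 3-step pipeline: answer -> formulate questions -> improve answer.
# Assumes the OpenAI Python client (>= 1.0); model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name


def ask(prompt: str) -> str:
    """Send a single prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def pipeline(question: str) -> str:
    # Step 1: first-pass answer.
    answer = ask(question)

    # Step 2: formulate follow-up questions based on that answer.
    critique = ask(
        f"Here is a question and a draft answer.\n\n"
        f"Question: {question}\n\nDraft answer: {answer}\n\n"
        "List the follow-up questions a skeptical reader would ask about the draft."
    )

    # Step 3: feed all of that context back in so it influences the final prediction.
    improved = ask(
        f"Question: {question}\n\nDraft answer: {answer}\n\n"
        f"Open questions about the draft: {critique}\n\n"
        "Rewrite the answer so it addresses these questions."
    )
    return improved


if __name__ == "__main__":
    print(pipeline("Why does adding intermediate reasoning text improve LLM answers?"))
```

The point isn't the specific prompts; it's that each step dumps more context into the window, which steers the next generation, same as a "reasoning" model's chain of thought.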