I'm not convinced. Fact-checking outputs before making decisions is hard work. Even on my "own" black box, where I've tuned the system prompts and selected and reviewed the tooling injections, this is already costly. Making the black box blacker isn't worth much if the results aren't reproducible, imho.
That said, to me an LLM is a tool. Like my laptop is a tool. Or a hammer, or a lighter. So I may not share the point of view of people who ascribe actual sentience to something I'm fairly sure is, at the moment, a query mechanism over a vector database.
I guess it depends on whether they're better than the bots we already have.