AFAIK most of the industry is moving away from RAG in favor of agentic search. I also feel like complaints like this are a bit like pointing at a pothole and saying that roads don't work.
I'm not the only one seeing AI results starting to go downhill. In a recent Bloomberg Research study of Retrieval-Augmented Generation (RAG), the financial media giant found that 11 leading LLMs, including GPT-4o, Claude-3.5-Sonnet, and Llama-3-8B, produced unsafe results when tested against more than 5,000 harmful prompts.

RAG, for those of you who don't know, enables large language models (LLMs) to pull in information from external knowledge stores, such as databases, documents, and live in-house data stores, rather than relying solely on the LLMs' pre-trained knowledge.

You'd think RAG would produce better results, wouldn't you? And it does; for example, it tends to reduce AI hallucinations. But at the same time, it increases the chance that RAG-enabled LLMs will leak private client data, create misleading market analyses, and produce biased investment advice.
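To make the RAG pattern concrete, here's a minimal sketch in Python. Everything in it is illustrative: `call_llm` is a hypothetical stand-in for whatever chat-completion API you actually use, `KNOWLEDGE_STORE` is a toy in-memory document list, and the retriever is naive keyword overlap rather than the embedding/vector search a real system would run.

```python
# Minimal sketch of the RAG pattern: retrieve relevant documents from an
# external store, then prepend them to the prompt so the model answers
# from that context instead of relying only on pre-trained knowledge.

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder; swap in a real LLM client call here.
    return f"[model answer grounded in prompt]\n{prompt[:200]}..."

# Toy "external knowledge store": an in-memory list of documents.
KNOWLEDGE_STORE = [
    "Q3 revenue grew 12% year over year, driven by subscription sales.",
    "Client records are confidential and must not appear in output.",
    "The firm's 2025 market outlook is cautiously optimistic.",
]

def retrieve(query: str, store: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        store,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def rag_answer(query: str) -> str:
    """Augment the prompt with retrieved context before calling the LLM."""
    context = "\n".join(retrieve(query, KNOWLEDGE_STORE))
    prompt = (
        f"Context:\n{context}\n\n"
        f"Question: {query}\n"
        "Answer using only the context above."
    )
    return call_llm(prompt)

if __name__ == "__main__":
    print(rag_answer("What was revenue growth last quarter?"))
```

Note the risk vector the study is pointing at: whatever `retrieve()` returns flows straight into the prompt, so if a sensitive record sits in the store, the model can surface it in its answer.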