I just factored llama_index out of my memory/context/dream experiment (where I aggregate quality-agnostic content) in favor of faiss with a metadata repository that lets me track sources better, because I can't track all the BS the LLM makes up in the middle. I've felt the same about NotebookLM.
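A minimal sketch of that shape, faiss for similarity search with provenance kept in a separate store keyed by vector id, so every retrieved chunk can be traced back to its source rather than trusting whatever the model claims. The embedding dimension, sqlite schema, and function names here are assumptions for illustration, not my actual setup.

    # Sketch only: faiss index + external metadata repository for source tracking.
    # DIM, the sqlite schema, and these helper names are assumptions.
    import sqlite3
    import numpy as np
    import faiss

    DIM = 384  # assumed embedding size

    # IDMap lets us assign our own integer ids, which double as metadata keys.
    index = faiss.IndexIDMap(faiss.IndexFlatL2(DIM))

    # Metadata repository: vector id -> where the chunk actually came from.
    db = sqlite3.connect("metadata.db")
    db.execute("CREATE TABLE IF NOT EXISTS sources (id INTEGER PRIMARY KEY, source TEXT, snippet TEXT)")

    def add_chunk(chunk_id: int, embedding: np.ndarray, source: str, snippet: str) -> None:
        """Store the vector in faiss and its provenance in the metadata repo."""
        vec = embedding.reshape(1, DIM).astype("float32")
        index.add_with_ids(vec, np.array([chunk_id], dtype="int64"))
        db.execute("INSERT OR REPLACE INTO sources VALUES (?, ?, ?)", (chunk_id, source, snippet))
        db.commit()

    def query(embedding: np.ndarray, k: int = 5):
        """Return nearest chunks along with their recorded sources."""
        vec = embedding.reshape(1, DIM).astype("float32")
        _, ids = index.search(vec, k)
        return [db.execute("SELECT source, snippet FROM sources WHERE id = ?", (int(i),)).fetchone()
                for i in ids[0] if i != -1]

The point of splitting it this way is that the metadata never passes through the model, so the audit trail can't be rewritten by it.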
It would be good to extend an audit trail through the model execution.