I've recently come to favor Graph RAGs over vector store-backed ones.
No offense to vector databases; they generally work great. The caveat is that you need explicit mentions in the text to retrieve the right context.
There are workarounds for that, and I've covered a few in my earlier posts.
For instance, ColBERT and multi-representation indexing are helpful retrieval models to consider when building RAG apps.
GraphRAGs suffer less from retrieval issues (I didn't say they don't suffer at all). Whenever the retrieval requires some reasoning, GraphRAG does remarkably well.
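To make the "retrieval that requires reasoning" point concrete, here's a minimal, library-free sketch. The entities, relations, and question are invented for illustration; the idea is that a multi-hop question maps naturally onto a graph traversal, while a single similarity lookup only works if the answer happens to co-occur with the query terms in one chunk of text.

```python
# Toy knowledge graph: (subject, relation) -> object
# All facts below are made up for illustration.
graph = {
    ("Acme Robotics", "founded_by"): "Jane Doe",
    ("Jane Doe", "also_founded"): "Beta Labs",
    ("Beta Labs", "acquired_by"): "MegaCorp",
}

def traverse(start: str, relations: list[str]) -> str:
    """Follow a chain of relations from a starting node."""
    node = start
    for rel in relations:
        node = graph[(node, rel)]
    return node

# "Which company acquired the other startup founded by Acme's founder?"
# The answer never co-occurs with "Acme" in any single fact, so a lone
# vector lookup over these facts would struggle, but the traversal
# resolves it hop by hop.
print(traverse("Acme Robotics", ["founded_by", "also_founded", "acquired_by"]))
# -> MegaCorp
```

A GraphRAG pipeline does something analogous at retrieval time: it walks relationships in a knowledge graph instead of relying on one round of nearest-neighbor search.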
Providing relevant context addresses a key problem in LLM-based applications: hallucination. However, it doesn't eliminate hallucinations altogether.
When you can't fix something, you measure it. And that's the focus of this post. In other words, how do we evaluate RAG apps?