One of the biggest criticisms of LLMs is that they don’t actually know anything. Many techniques have been explored to use general-purpose artificial intelligence to solve domain-specific problems using information it was not trained on. Retrieval-augmented generation (RAG) does a decent job of enabling you to “bring your own data” but can still fail on more specialized use cases.
GraphRAG incorporates a knowledge graph (interlinking entities and concepts) to guide document retrieval and shape the response to the prompt.
This has been found to improve results by providing a scaffolding of pre-processed knowledge that constrains the LLM. A paper analyzing different retrieval techniques found that graph search combined with an LLM writer and retrieval produced the highest-quality answers in the least time compared to other techniques.
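The graph-guided retrieval step can be sketched in a few lines. This is a minimal, hypothetical example (the graph, documents, and function names are all made up, and the final LLM call is stubbed out): entities mentioned in the question seed a traversal of the knowledge graph, and documents linked to the expanded entity set become the retrieval context.

```python
# Hypothetical knowledge graph: entity -> related entities.
graph = {
    "GraphRAG": {"knowledge graph", "RAG"},
    "RAG": {"retrieval", "LLM"},
    "knowledge graph": {"entities"},
}

# Documents indexed by the entities they mention (also hypothetical).
docs_by_entity = {
    "RAG": ["doc-rag-overview"],
    "knowledge graph": ["doc-kg-basics"],
    "retrieval": ["doc-retrieval-tricks"],
}

def expand_entities(seeds, hops=1):
    """Walk the graph `hops` steps outward from the seed entities."""
    frontier, seen = set(seeds), set(seeds)
    for _ in range(hops):
        frontier = {n for e in frontier for n in graph.get(e, set())} - seen
        seen |= frontier
    return seen

def retrieve(question_entities, hops=1):
    """Collect documents linked to the expanded entity set."""
    entities = expand_entities(question_entities, hops)
    return sorted({d for e in entities for d in docs_by_entity.get(e, [])})

# The retrieved documents would then be placed into the LLM prompt;
# the generation step itself is omitted here.
print(retrieve({"GraphRAG"}))  # → ['doc-kg-basics', 'doc-rag-overview']
```

Widening `hops` pulls in more loosely related documents, which is the pre-processed scaffolding the paragraph above describes: the graph, not embedding similarity alone, decides what the model sees.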
Read GraphRAG: Unlocking LLM discovery on narrative private data from Microsoft Research.
See also:
- Might this reduce hallucination in AI models?
- Zettelkasten is a mind map of notes, which makes it a natural fit for this technique
- This reminds me of GraphPlan, in that the first step is to construct a data structure from the information and then run a search algorithm over it