Links to this note
-
Graphrag Combines Knowledge Graphs With Retrieval
One of the biggest criticisms of LLMs is that they don't actually know anything. Many techniques have been explored to use general-purpose artificial intelligence to solve domain-specific problems using information it was not trained on. Retrieval-augmented generation (RAG) does a decent job of letting you "bring your own data," but it can still fail on more specialized use cases.
-
With the growing popularity of tools like Perplexity and OpenAI's SearchGPT, the spread of retrieval-augmented generation (RAG), and a healthy dose of skepticism toward artificial intelligence (e.g., hallucinations), the industry is moving from "authoritative search" to "research and check".
-
LLM Latency Is Output-Size Bound
As it stands today, LLM applications have noticeable latency, but much of it is output-size bound rather than input-size bound: the prompt is processed in parallel during prefill, while output tokens are generated sequentially, one at a time. In practice, the length of the response dominates latency far more than the length of the prompt.
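A minimal sketch of this latency model, assuming illustrative (not measured) throughput numbers: prefill processes prompt tokens in parallel at a high rate, while decoding emits output tokens one at a time at a much lower rate. The function name and both rate parameters are hypothetical, chosen only to show the shape of the tradeoff.

```python
def estimated_latency_s(prompt_tokens: int,
                        output_tokens: int,
                        prefill_tokens_per_s: float = 10_000.0,
                        decode_tokens_per_s: float = 50.0) -> float:
    """Rough latency estimate: parallel prefill plus sequential decode.

    The rates are illustrative assumptions, not benchmarks of any model.
    """
    prefill_time = prompt_tokens / prefill_tokens_per_s   # whole prompt at once
    decode_time = output_tokens / decode_tokens_per_s     # one token at a time
    return prefill_time + decode_time

# Growing the prompt 10x barely moves total latency...
short_prompt = estimated_latency_s(1_000, 500)    # 0.1 s prefill + 10 s decode
long_prompt = estimated_latency_s(10_000, 500)    # 1.0 s prefill + 10 s decode

# ...while doubling the output nearly doubles it.
long_output = estimated_latency_s(1_000, 1_000)   # 0.1 s prefill + 20 s decode
```

Under these assumed rates, the decode term dominates, which is why trimming the requested output (or streaming it) improves perceived latency far more than shortening the prompt.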