Links to this note
-
At the time of writing, AI needs humans to do anything useful, and there is a big difference between the best and worst employees at using AI.
-
Graphrag Combines Knowledge Graphs With Retrieval
One of the biggest criticisms of LLMs is that they don’t actually know anything. Many techniques have been explored to use general-purpose artificial intelligence to solve domain-specific problems using information it was not trained on. Retrieval-augmented generation (RAG) does a decent job of enabling you to “bring your own data” but can still fail on more specialized use cases.
-
I build personal infrastructure around the things I do constantly, refined for my workflow (quirks included), with built-in privacy, and for fun.
-
With the growing popularity of tools like Perplexity and OpenAI’s SearchGPT, the rise of retrieval-augmented generation (RAG), and a healthy dose of skepticism about artificial intelligence (e.g. hallucinations), the industry is moving from “authoritative search” to “research and check”.
-
LLM Latency Is Output-Size Bound
As it stands today, LLM applications have noticeable latency, but most of that latency is output-size bound rather than input-size bound: the amount of text that goes into a prompt matters far less than the amount of text the model generates.
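A rough back-of-the-envelope sketch of why. Prompt tokens are processed in parallel during prefill, while output tokens are decoded one at a time, so decode throughput dominates. The throughput numbers below are illustrative assumptions, not measurements of any particular model:

```python
def estimate_latency_s(input_tokens: int, output_tokens: int,
                       prefill_tok_per_s: float = 5000.0,
                       decode_tok_per_s: float = 50.0) -> float:
    """Toy latency model: prefill ingests the whole prompt in parallel
    (fast), decoding emits output tokens sequentially (slow).
    Throughput defaults are illustrative assumptions."""
    prefill = input_tokens / prefill_tok_per_s
    decode = output_tokens / decode_tok_per_s
    return prefill + decode

# A 4,000-token prompt with a 100-token answer...
long_prompt = estimate_latency_s(4000, 100)   # ~2.8 s
# ...versus a 100-token prompt with a 4,000-token answer.
long_answer = estimate_latency_s(100, 4000)   # ~80 s

print(f"{long_prompt:.1f}s vs {long_answer:.1f}s")
```

Swapping prompt size and answer size changes estimated latency by more than an order of magnitude, which is the sense in which latency is output-size bound.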