Why Vector Databases Are Important to AI

The recent success of large language models like ChatGPT has led to a new stage of applied AI and, with it, new challenges. One of those challenges is building useful context for a prompt within a limited amount of space and still getting good results.

For example, let's say you want the AI to respond with content from your own data set. To do that, you could stuff all of your data into the prompt and ask the model to respond using it. However, it's unlikely the data would fit neatly into ~3,000 words (roughly the input token limit of GPT-3.5). Rather than training your own model (expensive), you need a way to retrieve only the relevant content and pass it to the model in the prompt.

This is where vector databases come in. A vector DB like Chroma, Weaviate, or Pinecone lets you build an index of embeddings and run similarity searches over your documents to determine what context to pass to the model for the best results.
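
For instance, here's a minimal sketch of that retrieval step using Chroma's Python client (the chromadb 0.4+ PersistentClient API). The collection name, example documents, and query are placeholders; Chroma embeds the documents with its default embedding model unless you supply your own.

```python
import chromadb

# A persistent client stores the index on disk (a SQLite file plus vector data).
client = chromadb.PersistentClient(path="./chroma")

# Documents are embedded automatically when added to the collection.
notes = client.get_or_create_collection(name="notes")

notes.add(
    ids=["note-1", "note-2"],
    documents=[
        "Zettelkasten is a method of networked note-taking.",
        "Vector databases index embeddings for similarity search.",
    ],
)

# Retrieve the most similar notes to use as context in a prompt.
results = notes.query(
    query_texts=["How do I connect ideas between notes?"],
    n_results=2,
)
print(results["documents"])
```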

  • AI for Notes

    Now that my Zettelkasten has over a thousand notes, I'd like to quite literally create the experience of a conversation with my second brain. The interface should be conversational rather than a series of search queries. It should draw on the knowledge in my notes and respond in natural language. Finally, it should help me make connections between ideas I hadn't thought of before.

  • Emacs and ChromaDB

    The nice part about ChromaDB is that its tables live in SQLite, which you can read with the new Emacs 29 sqlite-mode. That means there is no extra configuration for accessing the database; you can open the SQLite file directly (see the sketch after this list).

  • Org-Ai Is Chat for Notes

    I started building AI for notes to help me chat with my library of notes. The result of that exploration is org-ai, my one-of-one software that helps me remember and summarize what I've previously written. Under the hood, it uses vector-based similarity search, LLMs, and agent-based AI to extract useful information from my Zettelkasten through a chat-based interface; the core retrieve-then-chat loop is sketched at the end of this post.
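
To make the ChromaDB point above concrete: the database really is just a SQLite file you can open directly, whether in Emacs or anywhere else. Here is a minimal sketch using Python's built-in sqlite3 module; the ./chroma/chroma.sqlite3 path is an assumption based on the default persist directory in recent ChromaDB versions, so adjust it to wherever your index lives.

```python
import sqlite3

# Recent ChromaDB versions persist metadata in a single SQLite file.
# The path below is an assumption; point it at your own persist directory.
conn = sqlite3.connect("./chroma/chroma.sqlite3")

# List the tables, the same view sqlite-mode gives you inside Emacs.
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
).fetchall()

for (name,) in tables:
    print(name)

conn.close()
```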
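And here is a rough sketch of the retrieve-then-chat loop behind the org-ai idea. org-ai itself is Emacs tooling; this is only an illustration of the pattern in Python, and the collection name, prompt wording, and gpt-3.5-turbo model are placeholders rather than the actual implementation.

```python
import chromadb
from openai import OpenAI

chroma = chromadb.PersistentClient(path="./chroma")
notes = chroma.get_or_create_collection(name="notes")
llm = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_my_notes(question: str) -> str:
    # 1. Similarity search: pull the notes most relevant to the question.
    hits = notes.query(query_texts=[question], n_results=3)
    context = "\n\n".join(hits["documents"][0])

    # 2. Chat: answer in natural language, grounded in the retrieved notes.
    response = llm.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {"role": "system", "content": "Answer using only the notes provided."},
            {"role": "user", "content": f"Notes:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content


print(ask_my_notes("What have I written about building a second brain?"))
```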