Artificial Intelligence


  • AI Is the Next Great Interop Layer

    I had previously observed that humans are the great interop layer—we are the glue that fits disparate processes and tools together into usable systems. After using large language models, I’m becoming convinced that they can offload a large amount of the interop cost that currently falls to us. In a nutshell, AI can ‘do what I mean, not what I say’ pretty darn well.

  • Advantages of Open Source AI

    It’s almost inevitable that, after an initial research phase, progress in AI models and tools will come from open-source communities rather than corporations. Individuals can rely on fair use to do things businesses cannot (e.g. fine-tuning the leaked LLaMA weights). There are more people to work on fringe use cases that do not have to be commercialized. Finally, open source increases access (running 13B LLMs on a laptop, or even a Raspberry Pi), allowing more people to try it and provide more feedback.

  • Planning AI

    A subfield of artificial intelligence (AI) concerned with helping agents generate valid and coherent plans of action to reach a goal.
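
    The core idea can be sketched as a search over states for a sequence of actions that reaches the goal. The breadth-first planner and the toy room-navigation domain below are illustrative assumptions, not any particular planning system:

```python
from collections import deque

def plan(start, goal, actions):
    """Breadth-first search over states; returns the shortest action
    sequence from start to goal, or None if no plan exists.
    `actions` maps a state to a list of (action_name, next_state) pairs."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if state == goal:
            return steps
        for name, nxt in actions(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, steps + [name]))
    return None

# Toy domain: a robot moving between rooms.
edges = {"hall": [("go-kitchen", "kitchen"), ("go-office", "office")],
         "kitchen": [("go-hall", "hall")],
         "office": [("go-hall", "hall"), ("go-lab", "lab")],
         "lab": []}

print(plan("hall", "lab", lambda s: edges[s]))  # ['go-office', 'go-lab']
```

    Real planners replace the blind search with heuristics and richer action representations, but the shape—states, actions, goal test—stays the same.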

  • AI Is Usually Described as Singular, but Really It Will Be a Multitude of AIs

    When AI comes about, it will be colored by its creators. The data used for training and the techniques for replicating traits we associate with intelligence will encode their culture and philosophy into the AI. In that way, we will likely have an “American AI” that is significantly different from AI created elsewhere in the world.

  • AI Models at the Edge

    Today, most large language models are run by making network requests to a provider like OpenAI, which has several disadvantages. You have to trust the entire chain of custody (e.g. the network stack, the provider, their subprocessors, etc.). It can be slow or flaky, and therefore impractical for certain operations (e.g. voice inference, large volumes of text). It can also be expensive—providers charge per API call, and experiments can result in a surprising bill (my useless fine-tuned OpenAI model cost $36).

  • AI for Notes

    Now that my Zettelkasten has over a thousand notes, I’d like to quite literally create the experience of a conversation with my second brain. The AI interface should be conversational rather than search queries. It should draw from the knowledge in my notes and respond in natural language. Finally, it should help me make connections between ideas I hadn’t thought of before.

  • Org-Ai Emacs Integration

    I built org-ai in Python, exposing an AI chat interface through a simple CLI. This makes it a bit clunky to use from Emacs—I have to open a terminal, activate the virtual environment, and run the program to start a chat.

  • Trying to Know the Unknowable Leads to Pessimism

    We do not yet know what we have not discovered, and trying to know the unknowable (prophecy) leads to pessimism. The Malthusian catastrophe turned out to be wrong because it could not predict the knowledge that led to more efficient food production. Similarly, the pessimism of energy economics is error-laden because it cannot predict what new discoveries we will make in social and political systems, or what new defenses we will devise.

  • AI Puts a Higher Premium on Unique Knowledge

    AI-augmented tools for creative processes like writing (ChatGPT) and drawing (Stable Diffusion, DALL-E 2) establish a new baseline for content. This is a step change for many industries where the value will get competed away (e.g. everyone can now compete in editorial SEO). That means there will be an even higher premium on unique knowledge that is, by definition, not replicable by advances in general AI tools.

  • AI Multiplies the Value of Expertise

    AI reduces the cost of certain tasks to effectively zero. In doing so, it lowers the barrier to entry in domains that previously took years of skill-building, such as writing code and data analysis. When baseline competence is cheap, the differentiator becomes the judgment to direct these tools and verify their output. That is precisely why AI also increases the value of expertise and experience.

  • The Unknown God

    An English physician once described radium as “the unknown god”. This was at a time when radiation and its effects were still being discovered. Radium was used to treat all manner of ailments, on the thinking that if it helped in large doses to treat cancer, it must also keep you healthy in small doses.

  • Theories of Consciousness

    There are many theories put forth to explain human consciousness, and experiments are underway to test them. With all the discussion around AGI, it’s timely to keep an eye on them.

  • Org-Ai Is Chat for Notes

    I started building AI for notes to help me chat with my library of notes. The result of that exploration is org-ai—my one-of-one software that helps me remember and summarize what I’ve previously written. Under the hood, it uses vector-based similarity search, LLMs, and agent-based AI to extract useful information from my Zettelkasten in a chat-based interface.
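
    The retrieval side can be sketched in Python: embed each note, embed the query, and rank notes by cosine similarity. The bag-of-words `embed` below is a stand-in for a real embedding model (an assumption for illustration); org-ai would pass the top-ranked notes to an LLM as context:

```python
import math

def embed(text, vocab):
    """Bag-of-words vector over a fixed vocabulary. A real system would
    use a learned embedding model; this stand-in is only illustrative."""
    words = text.lower().split()
    return [float(words.count(w)) for w in vocab]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_notes(query, notes, k=2):
    """Return the k notes most similar to the query."""
    vocab = sorted({w for n in notes for w in n.lower().split()})
    q = embed(query, vocab)
    ranked = sorted(notes, key=lambda n: cosine(q, embed(n, vocab)), reverse=True)
    return ranked[:k]

notes = ["AI is the next great interop layer",
         "Radium was called the unknown god",
         "AI models at the edge avoid network costs"]
print(top_notes("edge AI and interop", notes))  # the two AI notes rank first
```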

  • How Langchain Works

    As it turns out, combining large language models can create powerful AI agents that respond to and take action on complicated prompts. This is achieved by composing models and tools, with an overall language model mediating the interaction.
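
    The mediation loop can be sketched as follows. `fake_llm`, the `Action:`/`Observation:` text format, and both tools are illustrative assumptions, not LangChain’s actual API—the point is the shape: the model picks a tool, the tool runs, and the result is fed back until the model answers:

```python
def calculator(expression):
    """Tool: evaluate a simple arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}))

def word_count(text):
    """Tool: count words in a string."""
    return str(len(text.split()))

TOOLS = {"calculator": calculator, "word_count": word_count}

def fake_llm(prompt):
    """Stand-in for a real LLM: emits a tool call, then a final answer."""
    if "Observation:" not in prompt:
        return "Action: calculator[2 * 21]"
    return "Final Answer: the result is 42"

def run_agent(llm, question, max_steps=5):
    """Loop: ask the model, dispatch tool calls, append observations."""
    prompt = f"Question: {question}"
    for _ in range(max_steps):
        reply = llm(prompt)
        if reply.startswith("Final Answer:"):
            return reply.removeprefix("Final Answer:").strip()
        # Parse "Action: tool[input]" and dispatch to the named tool.
        name, arg = reply.removeprefix("Action: ").rstrip("]").split("[", 1)
        observation = TOOLS[name](arg)
        prompt += f"\n{reply}\nObservation: {observation}"
    return None

print(run_agent(fake_llm, "What is 2 * 21?"))  # the result is 42
```

    Swapping `fake_llm` for a real model API is what turns this loop into an agent: the tools stay the same, and the model decides which to invoke.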

  • Ways to Use AI With Emacs

    I want to better utilize AI tools in my day-to-day work. I suspect there is much more I could be doing, and Emacs gives me the building material to make these tools work for me.