Artificial Intelligence

  • AI Is the Next Great Interop Layer

    I had previously observed that humans are the great interop layer—we are the glue that fits together disparate processes and tools into usable systems. After using large language models, I’m becoming convinced that they can offload a large amount of the interop cost that currently falls to us. In a nutshell, AI can ‘do what I mean, not what I say’ pretty darn well.

  • Devin AI Fixes Bugs It Created

    Devin, “the first AI software engineer” from Cognition Labs, was found to be fixing bugs of its own making, solving problems in roundabout ways, and taking a long time, in a debunking video by a human software engineer.

  • Advantages of Open Source AI

    It’s almost inevitable that, after an initial research phase, progress in AI models and tools will come from open source communities rather than corporations. Individuals can rely on fair use to do things businesses cannot (e.g. taking the leaked LLaMa weights and fine-tuning them). There are more people to work on fringe use cases that do not have to be commercialized. Finally, open source increases access (running 13B LLMs on a laptop, or even a Raspberry Pi), allowing more people to try it and provide feedback.

  • Planning AI

    A subfield of artificial intelligence (AI) concerned with helping agents generate valid and coherent plans of action to reach a goal.
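
    To make the definition concrete, here is a toy sketch of classical planning as breadth-first search over STRIPS-style actions. The key-and-door domain below is made up purely for illustration; real planners use richer formalisms such as PDDL.

    ```python
    # Toy sketch: classical planning as breadth-first search over states.
    # Actions are STRIPS-style tuples of (name, preconditions, add effects, delete effects).
    # The key-and-door domain is invented for illustration only.
    from collections import deque

    ACTIONS = [
        ("pick_up_key", frozenset(), frozenset({"has_key"}), frozenset()),
        ("unlock_door", frozenset({"has_key"}), frozenset({"door_unlocked"}), frozenset()),
        ("open_door", frozenset({"door_unlocked"}), frozenset({"door_open"}), frozenset()),
    ]

    def plan(start, goal):
        """Return a list of action names that turns start into a state satisfying goal."""
        frontier = deque([(start, [])])
        seen = {start}
        while frontier:
            state, steps = frontier.popleft()
            if goal <= state:  # every goal fact holds in this state
                return steps
            for name, preconditions, add, delete in ACTIONS:
                if preconditions <= state:
                    next_state = (state - delete) | add
                    if next_state not in seen:
                        seen.add(next_state)
                        frontier.append((next_state, steps + [name]))
        return None  # no valid plan exists

    print(plan(frozenset(), frozenset({"door_open"})))
    # ['pick_up_key', 'unlock_door', 'open_door']
    ```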

  • AI Is Usually Described as Singular, but Really It Will Be a Multitude of AIs

    When AI comes about, it will be colored by its creators. The data used for training and the techniques for replicating traits we associate with intelligence will encode their culture and philosophy into the AI. In that way, we will likely have an “American AI” that is significantly different from AI created elsewhere in the world.

  • AI Employees

    Several startups are touting AI employees that you can hire to perform a specific function. Intercom announced Fin, an AI customer service agent, and so did Maven AGI. Piper is an AI sales development representative, and so is Artisan. Devin is a software engineer.

  • AI Models at the Edge

    Today, most large language models are run by making requests over the network to a provider like OpenAI, which has several disadvantages. You have to trust the entire chain of custody (e.g. the network stack, the provider, their subprocessors, etc.). It can be slow or flaky and therefore impractical for certain operations (e.g. voice inference, large volumes of text). It can also be expensive—providers charge per API call and experiments can result in a surprising bill (my useless fine-tuned OpenAI model cost $36).
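
    As a hedged illustration of the alternative, here is a minimal sketch that queries a locally running Ollama server instead of a hosted provider. It assumes Ollama is serving on its default port and that a model such as llama3 has already been pulled; the model name is my assumption, not something this note prescribes.

    ```python
    # Minimal sketch: generate text from a locally running Ollama server rather
    # than a hosted API. Assumes `ollama serve` is listening on localhost:11434
    # and that a model (here "llama3") has been pulled; adjust to what you have.
    import json
    import urllib.request

    def generate_locally(prompt: str, model: str = "llama3") -> str:
        payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
        request = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            return json.loads(response.read())["response"]

    if __name__ == "__main__":
        # No network hop to a provider, no per-call charges, no third-party chain of custody.
        print(generate_locally("Summarize the trade-offs of running LLMs locally."))
    ```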

  • AI for Notes

    Now that my Zettelkasten has over a thousand notes, I’d like to try to quite literally create the experience of a conversation with my second brain. The AI interface should be conversational rather than built around search queries. It should draw from the knowledge in my notes and respond in natural language. Finally, it should be useful in helping me make connections between ideas I hadn’t thought of before.

  • AI Agent

    An AI agent is an intent-based abstraction that combines LLMs to plan and take action in order to achieve a desired goal.
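
    A minimal sketch of that loop, with a canned llm() stand-in and a made-up note-search tool in place of a real model and real tools: the model proposes the next action, the program executes it, and the observation is fed back in until the goal is met.

    ```python
    # Minimal agent loop sketch: the LLM plans the next action, the program
    # executes it, and the observation is appended to the transcript for the
    # next turn. llm() and the tools below are placeholders, not a real framework.
    def llm(transcript: str) -> str:
        # Stand-in for a model call; a real agent would send the transcript to
        # an LLM and parse its reply. Here we finish immediately with a canned answer.
        return "finish: (canned answer for illustration)"

    TOOLS = {
        "search_notes": lambda query: f"(results for {query!r})",  # hypothetical tool
        "finish": lambda answer: answer,
    }

    def run_agent(goal: str, max_steps: int = 5) -> str:
        transcript = f"Goal: {goal}\n"
        for _ in range(max_steps):
            action = llm(transcript)  # e.g. "search_notes: zettelkasten workflow"
            name, _, argument = action.partition(":")
            observation = TOOLS[name.strip()](argument.strip())
            if name.strip() == "finish":
                return observation
            transcript += f"Action: {action}\nObservation: {observation}\n"
        return "stopped without reaching the goal"

    print(run_agent("Summarize my notes on interop"))
    ```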

  • Getting Ready for AI

    The other day I noticed a tweet from Justin Duke which outlined a plan to get his company’s codebase ready for Devin—a programming-focused generative AI product. While many are skeptical about AI taking over coding tasks, progress is happening quickly and it seems likely that these tools will help software engineers, though maybe not replace the job outright.

  • Legal AI Models Hallucinate in 16% or More of Queries

    A recent study from Stanford found that LLMs (GPT-4) and RAG-based AI tools (Lexis+ AI, Westlaw AI-Assisted Research, Ask Practical Law AI) hallucinate answers 16% to 40% of the time on benchmark queries. GPT-4 had the worst performance, while the RAG-based AI tools did slightly better.

  • How to Build an Intuition of What AI Can Do

    One of the difficult parts of applying AI to existing processes and products is that people aren’t calibrated on what generative AI can and can’t do. This leads to both wild ideas that are not possible and missed opportunities to automate seemingly difficult work that is possible.

  • Knowledge Collapse

    Knowledge collapse is the paradox where increasing access to certain types of knowledge actually harms understanding.

  • Use AI in Google Sheets

    I want to be able to use generative AI in spreadsheets to solve unique problems. I want to call OpenAI from a cell that passes in a prompt and a value from a column, then returns an answer I can easily parse.

  • Org-Ai Emacs Integration

    I built org-ai in Python; it exposes an AI chat interface through a simple CLI. This makes it a bit clunky to use from Emacs—I would need to open a terminal, activate the virtual environment, and run the program to start the chat.

  • Using AI Tools at Work

    I recently wrote up some observations about trying to use generative AI tools at work and shared the experience on LinkedIn.

  • Trying to Know the Unknowable Leads to Pessimism

    We do not yet know what we have not discovered, and trying to know the unknowable (prophecy) leads to pessimism. A Malthusian catastrophe ends up being wrong because it does not predict the knowledge that made food production more efficient. Similarly, the pessimism of energy economics is error-laden because it cannot predict what new discoveries we will make in social and political systems, or what new defenses we will develop.

  • AI Puts a Higher Premium on Unique Knowledge

    AI-augmented tools for creative processes like writing (ChatGPT) and drawing (Stable Diffusion, DALL-E 2) establish a new baseline for content. This is a step change for many industries where the value will get competed away (e.g. everyone can compete in editorial SEO). That means there will be an even higher premium on unique knowledge that is, by definition, not replicable by advancements in general AI tools.

  • AI Multiplies the Value of Expertise

    AI reduces the cost of certain tasks to effectively zero. In doing so, it lowers the barriers to domains where skills previously took years to build, such as writing code, data analysis, and more. This is precisely why AI also increases the value of expertise and experience.

  • The Labor Market Is Merging With the SaaS Market

    What if the entire services industry merges with SaaS when it becomes possible to deliver a service with artificial intelligence?

  • The Unknown God

    An English physician once described radium as “the unknown god”. This was at a time when radiation and its effects were still being discovered. Radium was being used to treat all manner of ailments, on the thinking that if it was helpful in large amounts for treating cancer, it must also keep you healthy in small amounts.

  • Theories of Consciousness

    There are many theories put forth to explain human consciousness, and experiments are underway to test them. With all the discussion around AGI, it’s timely to keep an eye on them.

  • UI Requires Deductive Reasoning

    Using a UI is a form of deductive reasoning that takes effort. You need to build out a plan on how to get what you intend. You need to follow interaction patterns you’ve seen before. You need to have built an intuition for where to look and where to go. You need motor skills to engage in just the right way to induce the transition of the state machine.

  • Org-Ai Is Chat for Notes

    I started building AI for notes to help me chat with my library of notes. The result of that exploration is org-ai—my one-of-one software that helps me remember what I’ve previously written and summarize information. Under the hood, it uses vector-based similarity search, LLMs, and agent-based AI to extract useful information from my zettelkasten through a chat-based interface.
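
    As a rough sketch of the retrieval step behind a tool like this (not org-ai’s actual implementation), the idea is to embed each note, embed the question, rank notes by similarity, and stuff the best matches into the prompt sent to an LLM. The bag-of-words “embedding” below is a deliberately crude stand-in for a real embedding model.

    ```python
    # Rough sketch of retrieval for chatting with notes: embed the notes, embed
    # the question, rank by cosine similarity, and build a prompt from the best
    # matches. The bag-of-words embed() is a toy stand-in for a real embedding model.
    import math
    from collections import Counter

    def embed(text: str) -> Counter:
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[token] * b[token] for token in a)
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def top_notes(question: str, notes: dict, k: int = 3) -> list:
        query = embed(question)
        ranked = sorted(notes, key=lambda title: cosine(query, embed(notes[title])), reverse=True)
        return ranked[:k]

    def build_prompt(question: str, notes: dict) -> str:
        context = "\n\n".join(f"# {title}\n{notes[title]}" for title in top_notes(question, notes))
        return f"Answer using only these notes:\n\n{context}\n\nQuestion: {question}"

    notes = {
        "AI Is the Next Great Interop Layer": "LLMs can offload interop costs that fall to humans.",
        "Knowledge Collapse": "More access to some knowledge can paradoxically harm understanding.",
    }
    # The returned prompt would then be sent to an LLM in the chat interface.
    print(build_prompt("How does AI reduce interop costs?", notes))
    ```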

  • Professional Services Spend Is Double Software Spend

    According to the Bureau of Economic Analysis, the contribution of professional services industries to US GDP in Q1 2024 was more than double that of information industries (which include software publishing).

  • How Langchain Works

    As it turns out, combining large language models can create powerful AI agents that respond to and take action on complicated prompts. This is achieved by composing models and tools, with an overall language model mediating the interaction.
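
    A sketch of that composition using the older LangChain agent API; module paths and class names have shifted across LangChain versions, so treat the imports as illustrative. It assumes an OPENAI_API_KEY in the environment, and the word-count tool is a toy I made up for the example.

    ```python
    # Sketch of LangChain-style composition: an "overall" LLM mediates between
    # the user's prompt and a set of tools, deciding when to call each one and
    # how to compose the final answer. Uses the older LangChain API; names may
    # differ in newer versions. Assumes OPENAI_API_KEY is set.
    from langchain.agents import Tool, initialize_agent
    from langchain.llms import OpenAI

    def word_count(text: str) -> str:
        return str(len(text.split()))

    tools = [
        Tool(
            name="word_count",
            func=word_count,
            description="Counts the words in a piece of text.",
        ),
    ]

    llm = OpenAI(temperature=0)  # the mediating language model
    agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
    agent.run("How many words are in the sentence 'composing models and tools'?")
    ```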

  • Ways to Use AI With Emacs

    I want to better utilize AI tools in my day-to-day work. I suspect there is much more I could be doing, using Emacs as building material to make these tools work for me.