Links to this note
-
AI Is the Next Great Interop Layer
I had previously observed that humans are the great interop layer—we are the glue that fits together disparate processes and tools into usable systems. After using large language models, I’m becoming convinced that they can offload a large amount of the interop cost that currently falls to us. In a nutshell, AI can ‘do what I mean not what I say’ pretty darn well.
-
Judgment Disqualifies Automation
When a process requires human judgment for an unknown number of possible decisions, automation is not possible.
-
Devin AI Fixes Bugs it Created
“The first AI software engineer” Devin, from Cognition Labs, was found to be fixing bugs of its own making, solving problems in a roundabout way, and taking a long time, in a debunking video by a human software engineer.
-
GraphRAG Combines Knowledge Graphs With Retrieval
One of the biggest criticisms of LLMs is that they don’t actually know anything. Many techniques have been explored to use general purpose artificial intelligence to solve domain specific problems using information that it was not trained on. Retrieval-augmented generation (RAG) does a decent job of enabling you to “bring your own data” but can still fail on more specialized use cases.
-
It’s almost inevitable that, after an initial research phase, progress in AI models and tools will come from open source communities rather than a corporation. Individuals can rely on fair use to do things businesses cannot (e.g. fine-tuning leaked LLaMA weights). There are more people to work on fringe use cases that do not have to be commercialized. Finally, open source increases access (running 13B LLMs on a laptop, on a Raspberry Pi), allowing more people to try it and provide more feedback.
-
A subfield of artificial intelligence (AI) concerned with helping agents generate valid and coherent plans of action to reach a goal.
-
AI Is Usually Described as Singular, but Really It Will Be a Multitude of AIs
When AI comes about it will be colored by its creators. The data used to train it and the techniques for replicating traits we associate with intelligence will encode their culture and philosophy into the AI. In that way, we will likely have an “American AI” that is significantly different from AI created elsewhere in the world.
-
Dify mashes LLMs, tools, and an end-user-facing UI together to make an LLM workshop. The builder is a visual programming interface (similar to iOS Shortcuts) where each step is a pre-defined unit of functionality like an LLM call, RAG retrieval, or running arbitrary code.
-
Outcome-based pricing (or result-based pricing) is becoming popularized again due to services powered by artificial intelligence that enable intent-based outcome specification. That means charging per unit of value, which is the desired outcome.
-
Several startups are touting AI employees that you can hire to perform a specific function. Intercom announced Fin, an AI customer service agent, and so did Maven AGI. Piper is an AI sales development representative, as are Artisan and 11x. Devin is a software engineer.
-
With the growing popularity of tools like Perplexity, OpenAI’s SearchGPT, and retrieval-augmented generation (RAG), and a healthy dose of skepticism toward artificial intelligence (e.g. hallucinations), the industry is moving from “authoritative search” to “research and check”.
-
Today, most large language models are run by making requests over the network to a provider like OpenAI, which has several disadvantages. You have to trust the entire chain of custody (e.g. the network stack, the provider, their subprocessors, etc.). It can be slow or flaky and therefore impractical for certain operations (e.g. voice inference, large volumes of text). It can also be expensive—providers charge per API call and experiments can result in a surprising bill (my useless fine-tuned OpenAI model cost $36).
-
When Does a Service-as-Software Model Make Sense?
The service-as-software model is nascent but expected to be experimented with in different fields as artificial intelligence techniques improve and enable new applications.
-
Now that my Zettelkasten has over a thousand notes, I’d like to try to quite literally create the experience of a conversation with my second brain. The AI interface should be conversational rather than search queries. It should draw from the knowledge in my notes and respond in natural language. Finally, it should be useful in helping me make connections between ideas I hadn’t thought of before.
-
An AI agent is an intent-based abstraction that uses LLMs to plan and take action in order to produce a desired goal.
-
Mentioning AI Decreases Purchase Intent
A recent study found that including the term “artificial intelligence” in the description of products and services decreases overall purchase intent. The effect is more pronounced in high-risk products (like financial products) than in low-risk products (like a vacuum cleaner).
-
The other day I noticed a tweet from Justin Duke which outlined a plan to get his company’s codebase ready for Devin—a programming-focused generative AI product. While many are skeptical about AI taking over coding tasks, progress is happening quickly and it seems likely that these tools will help software engineers, though maybe not replace the job outright.
-
Legal AI Models Hallucinate in 16% or More of Queries
A recent study from Stanford found that LLMs (GPT-4) and RAG-based AI tools (Lexis+ AI, Westlaw AI-Assisted Research, Ask Practical Law AI) hallucinate answers 16% to 40% of the time in benchmarking queries. GPT-4 had the worst performance while RAG-based AI tools did slightly better.
-
How to Build an Intuition of What AI Can Do
One of the difficult parts of applying AI to existing processes and products is that people aren’t calibrated on what generative AI can and can’t do. This leads to both wild ideas that are not possible and missed opportunities to automate seemingly difficult work that is possible.
-
Knowledge collapse is the paradox where increasing access to certain types of knowledge actually harms understanding.
-
I want to be able to use generative AI in spreadsheets to solve unique problems. I want to call OpenAI from a cell that passes in a prompt and a value from a column then returns an answer I can easily parse.
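A minimal sketch of what such a cell function could look like, assuming OpenAI’s chat completions endpoint; the `{cell}` template convention, the function names, and the model choice are all illustrative, not an existing spreadsheet API:

```python
# Hypothetical spreadsheet AI() cell: fill a prompt template with a cell value,
# call OpenAI's chat completions endpoint, and parse a single cell-sized answer.
import json
import urllib.request

OPENAI_URL = "https://api.openai.com/v1/chat/completions"

def build_prompt(template: str, cell_value: str) -> str:
    """Substitute the referenced cell's value into the prompt template."""
    return template.replace("{cell}", str(cell_value))

def parse_answer(raw: str) -> str:
    """Trim the model reply down to one line that fits in a cell."""
    return raw.strip().splitlines()[0].strip()

def ai_cell(template: str, cell_value: str, api_key: str) -> str:
    """What =AI("Classify: {cell}", A2) might do behind the scenes."""
    body = json.dumps({
        "model": "gpt-4o-mini",  # assumed model name for the sketch
        "messages": [{"role": "user",
                      "content": build_prompt(template, cell_value)}],
    }).encode()
    req = urllib.request.Request(
        OPENAI_URL, data=body,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    return parse_answer(reply["choices"][0]["message"]["content"])
```

Keeping the prompt templating and answer parsing separate from the network call is what makes the result “easy to parse”: the model is asked per row, but the cell only ever sees one trimmed line.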
-
I built org-ai using Python which exposes an AI chat interface through a simple CLI. This makes it a bit clunky when using it from Emacs—I would need to open up an instance of a terminal, activate the virtual environment, and execute the program to start the chat.
-
I recently wrote up some observations about trying to use generative AI tools at work and shared the experience on LinkedIn.
-
AI Doubles Productivity of Top Researchers
A study of artificial intelligence on productivity at a materials research lab found that the bottom third of researchers saw no improvement but top researchers doubled in productivity (as measured by materials discovered, patents, and “downstream product innovation”). The top researchers used AI to offload 57% of idea generation to focus on evaluation of the most promising ones rather than chasing dead ends. This suggests AI multiplies the value of expertise rather than leveling the playing field.
-
Trying to Know the Unknowable Leads to Pessimism
We do not yet know what we have not discovered, and trying to know the unknowable (prophecy) leads to pessimism. A Malthusian catastrophe ends up being wrong because it does not predict knowledge that resulted in more efficient food production. Similarly, the pessimism of energy economics is error-laden because it cannot predict what new discoveries we will make in social and political systems or new defenses.
-
AI Puts a Higher Premium on Unique Knowledge
AI-augmented tools for creative processes like writing (ChatGPT) and drawing (Stable Diffusion, DALL-E 2) establish a new baseline for content. This is a step change for many industries where the value will get competed away (e.g. everyone can compete in editorial SEO). That means there will be an even higher premium for unique knowledge that is, by definition, not replicable by advancements in general AI tools.
-
AI Multiplies the Value of Expertise
AI reduces the cost of certain tasks to effectively zero. In doing so, it lowers the barriers to domains that would previously take years to build skills such as writing code, data analysis, and more. This is precisely why AI also increases the value of expertise and experience.
-
The Labor Market Is Merging With the SaaS Market
What if the entire services industry merges with SaaS when it becomes possible to deliver a service with artificial intelligence?
-
An English physician once described radium as “the unknown god”. This was at a time when radiation and its effects were still being discovered. Radium was being used to treat all manner of ailments, on the thinking that if it was helpful in large amounts for treating cancer, it must also keep you healthy in small amounts.
-
There are many theories put forth to explain human consciousness and experiments are running to test them. With all the discussion around AGI, it’s timely to keep an eye on them.
-
An incomplete list of AI-powered web browser automation and AI agent projects.
-
Service as software is the inverse of software as a service. Rather than building software for people to do their job, service-as-software uses AI to fulfill the intent directly and more faithfully sells solutions, not software.
-
UI Requires Deductive Reasoning
Using a UI is a form of deductive reasoning that takes effort. You need to build out a plan on how to get what you intend. You need to follow interaction patterns you’ve seen before. You need to have built an intuition for where to look and where to go. You need motor skills to engage in just the right way to induce the transition of the state machine.
-
I started building AI for notes to help me chat with my library of notes. The result of that exploration is org-ai—my one-of-one software that helps me remember what I’ve previously written and summarize information. Under the hood it uses vector-based similarity search, LLMs, and agent-based AI to extract useful information from my zettelkasten in a chat-based interface.
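The vector-based similarity step can be sketched as follows; this is an illustration of the general technique, not org-ai’s actual code, and the note names and embedding vectors are made up (a real system would embed each note with an embedding model):

```python
# Sketch of vector-based similarity search over note embeddings:
# rank notes by cosine similarity to the query's embedding.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_notes(query_vec, notes, k=2):
    """Return the k note names whose embeddings are closest to the query."""
    ranked = sorted(notes.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# Toy corpus: three notes with 3-dimensional stand-in embeddings.
notes = {
    "ai-interop.org": [0.9, 0.1, 0.0],
    "gardening.org":  [0.0, 0.2, 0.9],
    "agents.org":     [0.8, 0.3, 0.1],
}
print(top_notes([1.0, 0.0, 0.0], notes))  # most similar notes first
```

The retrieved note names (and their text) are then what gets handed to the LLM as context for the chat reply.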
-
Professional Services Spend Is Double Software Spend
According to the Bureau of Economic Analysis, the contribution to US GDP in Q1 2024 of professional services industries was more than double that of information industries (which includes software publishing).
-
There Is No AI Strategy Without a Data Strategy
Startups typically have an advantage over incumbents when it comes to adopting new technology. With artificial intelligence however, incumbents are fast to integrate LLMs and have the data needed to make better AI-powered products. For example, an established CRM platform has the data needed to train, evaluate, and deploy AI products that a startup would not have access to.
-
As it turns out, combining large language models together can create powerful AI agents that can respond to and take action on complicated prompts. This is achieved by composing models and tools with an overall language model to mediate the interaction.
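The mediation loop can be sketched like this; everything here is hypothetical, and in particular the `plan` function stands in (with hard-coded rules) for the overall language model that would decide which tool to invoke:

```python
# Sketch of an agent loop: a mediating "planner" chooses a tool for the
# prompt, runs it, and composes the result into a reply. The planner is a
# rule-based stub standing in for an LLM call.
def calculator(expr: str) -> str:
    """Toy arithmetic tool (eval with builtins stripped; demo only)."""
    return str(eval(expr, {"__builtins__": {}}))

def lookup(term: str) -> str:
    """Toy knowledge-base tool."""
    kb = {"RAG": "retrieval-augmented generation"}
    return kb.get(term, "unknown")

TOOLS = {"calc": calculator, "lookup": lookup}

def plan(prompt: str):
    """Stand-in for the mediating LLM: map a prompt to (tool, argument)."""
    if any(ch.isdigit() for ch in prompt):
        return "calc", prompt.split(":")[-1].strip()
    return "lookup", prompt.split()[-1]

def agent(prompt: str) -> str:
    tool, arg = plan(prompt)
    result = TOOLS[tool](arg)
    return f"{tool} -> {result}"
```

Swapping the rule-based `plan` for a real model call (and looping until the planner decides it is done) is what turns this dispatch skeleton into the kind of agent described above.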
-
I want to better utilize AI tools in my day-to-day work. I suspect there is much more I can be doing and using Emacs as building material to make it work for me.
-
Satya Nadella of Microsoft talks about how orchestrating between business applications is the next step for artificial intelligence, which will replace business logic with AI.