Mushy Systems


As large language models proliferate into every service and ultimately replace business logic, we will be left with the horrible burden of maintaining mush.

Mush happens when a system can’t quite be understood by looking at it. LLMs and abstractions like AI agents cost us read access: one can no longer read the code to understand what’s going on. And even when you can read it, code generated by LLMs makes a codebase harder to reason about.

My biggest fear with large, complex, AI-powered systems is that debugging starts to look more like psychiatry.

  • Typed Languages Are Best for AI Agents

Typed languages should be the best fit for useful AI agents. Practical LLM applications need context, and type systems provide a ton of it. Compiling the code provides a short feedback loop that the agent can use to correct itself.

  • Latent Space Reasoning

Rather than converting to text at every step of a chain-of-thought process, new research suggests that large language models can reason in a latent space, using the model’s internal representation directly. Besides improving responses that demand deeper reasoning, staying in latent space is faster because it skips the repeated tokenization and text generation between steps.
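The compile-as-feedback loop from the first note can be sketched in a few lines. This is a minimal illustration, not anyone’s actual agent: the `agent_step` function is a hard-coded stand-in for an LLM call, and Python’s built-in `compile()` stands in for a real compiler or type checker that would surface richer errors.

```python
def check(source: str):
    """Return a compiler-style error message, or None if the code is clean.
    Python's built-in compile() is a stand-in for a real type checker."""
    try:
        compile(source, "<candidate>", "exec")
        return None
    except SyntaxError as err:
        return f"line {err.lineno}: {err.msg}"

def agent_step(source: str, error: str) -> str:
    """Hypothetical repair step: a real agent would prompt an LLM with
    the error message added to its context. This stub applies one fix."""
    return source.replace("def f(x:", "def f(x):")

# A broken candidate the "agent" must repair.
candidate = "def f(x:\n    return x * 2\n"

# The short loop: check, feed the error back, regenerate, repeat.
error = check(candidate)
for _ in range(3):  # bounded retries, as an agent harness would enforce
    if error is None:
        break
    candidate = agent_step(candidate, error)
    error = check(candidate)
```

The point of the loop is that the error message is machine-checkable context: the agent doesn’t have to guess whether its output is well-formed, it gets told, cheaply and immediately.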