As large language models proliferate into every service and ultimately replace business logic, we will be left with the horrible burden of maintaining mush.
Mush happens when a system can’t quite be understood by looking at it. LLMs and abstractions like AI agents cause us to lose read access: one can no longer read the code to understand what’s going on. And even when the code can be read, code generated by LLMs makes a codebase harder to reason about.
My biggest fear with large, complex, AI-powered systems is that debugging starts to look more like psychiatry.
Links to this note
- Rather than converting to text at every step of a chain-of-thought process, new research suggests that large language models can reason in latent space, using the model’s internal representations directly. Besides improving responses that require deeper reasoning, working in latent space is faster because it skips the repeated round trip through tokenization and text generation; a rough sketch of the idea follows below.
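
For intuition, here is a minimal sketch of what latent-space reasoning can look like, assuming a small Hugging Face causal LM ("gpt2" is only a placeholder) whose hidden size matches its embedding size. This is not the cited work’s actual method, just an illustration of feeding hidden states back in as embeddings instead of decoding to text at every step.

```python
# Minimal sketch, not the cited paper's method: perform a few "latent" reasoning
# steps by feeding the model's last hidden state back in as the next input
# embedding, instead of decoding a token and re-tokenizing at each step.
# The model name ("gpt2") and the number of latent steps are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "If Alice is older than Bob and Bob is older than Carol, then"
input_ids = tok(prompt, return_tensors="pt")["input_ids"]
embeds = model.get_input_embeddings()(input_ids)  # (1, seq_len, hidden)

with torch.no_grad():
    for _ in range(4):  # a few latent "thought" steps; no tokens are emitted
        out = model(inputs_embeds=embeds, output_hidden_states=True)
        last_hidden = out.hidden_states[-1][:, -1:, :]  # final position's state
        # Append the hidden state as the next input embedding, skipping the
        # decode-to-text / re-tokenize round trip entirely.
        embeds = torch.cat([embeds, last_hidden], dim=1)

    # Only after the latent steps do we decode actual text.
    logits = model(inputs_embeds=embeds).logits
    next_id = logits[:, -1, :].argmax(dim=-1)

print(tok.decode(next_id))
```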