One of the superpowers of large language models is that they can “do what I mean” instead of “do what I say”. This ability to interpret prompts can drastically lower the barriers to accessing and interoperating between systems. For example, writing “Send a slack message to the Ops channel with a list of customers from HubSpot that signed up in the last week” would generate actions that query the HubSpot Contacts API, parse and extract the results, and make another API request to Slack to post to the #ops channel.
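One way to picture this is the model emitting a structured plan that ordinary code then executes. The sketch below assumes a made-up action schema and stub handlers; the action names, the `{{results}}` template, and the handler functions are all illustrative, not a real HubSpot, Slack, or Zapier API.

```python
import json

# Hypothetical structured plan an LLM might emit for the prompt above.
# Action names, parameters, and the {{results}} template are illustrative.
plan = json.loads("""
[
  {"action": "hubspot.search_contacts",
   "params": {"created_after": "LAST_7_DAYS"}},
  {"action": "slack.post_message",
   "params": {"channel": "#ops", "body": "{{results}}"}}
]
""")

def execute(plan, handlers):
    """Run each action in order, threading results into later steps."""
    results = None
    for step in plan:
        # Substitute the previous step's output into templated parameters.
        params = {
            k: (results if v == "{{results}}" else v)
            for k, v in step["params"].items()
        }
        results = handlers[step["action"]](**params)
    return results

# Stub handlers standing in for real HubSpot and Slack clients.
handlers = {
    "hubspot.search_contacts": lambda created_after: ["Ada", "Grace"],
    "slack.post_message": lambda channel, body: f"posted to {channel}: {body}",
}

print(execute(plan, handlers))  # posted to #ops: ['Ada', 'Grace']
```

The interesting work here is all in the first step, which the LLM does: turning an ambiguous sentence into a plan a dumb executor can run.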
The best example I can find of this is from Zapier’s Natural Language Actions (NLA) API. Zapier is perfectly positioned to enable a jump to universality for natural language UI because it already connects thousands of API providers together.
A natural language UI (NLUI) can also be useful as a frontend to a single service. For example, a chat support bot that answers questions from the content of support docs.
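A minimal sketch of the retrieval step such a bot needs: find the relevant doc, then hand it to the model as context. The corpus and the keyword-overlap scoring below are stand-ins of my own; a production bot would use embedding search over the real docs.

```python
import re

# Toy corpus standing in for real support docs (contents are made up).
docs = {
    "billing": "To update your card, go to Settings > Billing.",
    "sso": "SAML SSO is available on the Enterprise plan.",
}

def tokens(text: str) -> set[str]:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question: str) -> str:
    """Return the doc whose words best overlap the question.
    A real bot would embed docs and pass the match to an LLM as context."""
    best = max(docs, key=lambda name: len(tokens(question) & tokens(docs[name])))
    return docs[best]

print(retrieve("How do I update my billing card?"))
# To update your card, go to Settings > Billing.
```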
Links to this note
- AI Is the next Great Interop Layer
I had previously observed that humans are the great interop layer—we are the glue that fits together disparate processes and tools into usable systems. After using large language models, I’m becoming convinced that they can offload a large amount of the interop cost that currently falls to us. In a nutshell, AI can ‘do what I mean not what I say’ pretty darn well.