With the capabilities of large language models getting more useful for real work, the pressure is on to incorporate them everywhere. I’m seeing an increase in loud voices proclaiming people and businesses must become “AI Native” if they are to survive.
While I wouldn’t put it in such absolute terms, competitive people who aim to do great work ought to take notice—moments of rapid progress and change are rare.
But what does it mean to be AI native?
Being AI native means incorporating AI into the foundation of how you create and positioning yourself to take advantage of rapid progress in AI. For individuals, that means augmenting how they work with AI to increase their productivity: improved efficiency, increased output, but also solving problems that were previously intractable due to limited resources. For businesses, it means building the culture and infrastructure to use AI safely, automating by default, and applying new techniques to solve customer problems faster and more completely (no, this does not mean you should build a damn chat bot).
The individual and the business go hand-in-hand. It’s going to be difficult for a business to become “AI native” if employees don’t enthusiastically engage with AI (it’s difficult to build an intuition of what it can do otherwise). Since this requires a change in culture, larger organizations will struggle while startups will succeed (this is an advantage we shouldn’t squander!).
In practice, I think it looks like this:
- Problem solving starts with an AI co-pilot or AI agent to rapidly get up to speed and explore the solution search space. For engineers, that means building incremental improvements with AI-powered autocomplete or full features using an agent. For designers, that means prompting to explore several solutions at once and then refining the best one, before passing it to engineering with the HTML and CSS already written.
- Recurring tasks are prototyped in workflow tools like n8n or Dify before being applied everywhere: for example, meeting follow-ups, lead nurturing, support, monitoring, and standard operating procedures. If you run into a problem trying to automate one, go back to the first item.
- Internal tools and systems become significantly larger (probably larger than the customer-facing application), and the internal platform provides access to data and actions that adhere to business rules (for safety, security, and compliance reasons). These are designed primarily for use by other AI-powered tools, workflows, and one-off applications built with code written by (surprise!) other AI-assisted workflows.
- The product and experience delivered aims to be a complete solution, customized for each customer, so that they pay more directly for outcomes rather than software (while avoiding the infinite butler problem). Marketing the AI matters far less than delivering a solution that takes advantage of AI to get there faster or more completely.
- More one-of-one software is created by individuals for their own needs and preferences, because LLMs significantly lower the effort relative to the payoff.
- Everything else that can’t be automated or stitched together in the day-to-day of running the business is sped up with faster communication: for example, voice dictation (Wispr Flow), progressive summarization in email (Gemini in Gmail), messaging (Slack AI), and documentation (Notion AI), plus generative AI to respond quickly.
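To make the recurring-tasks bullet concrete, here is a hypothetical sketch of a meeting follow-up drafter, the kind of prototype you might write before moving it into n8n or Dify. The function names are invented for illustration, and the LLM call is stubbed out; a real prototype would swap in a model API there.

```python
# Hypothetical prototype of a recurring-task automation (meeting follow-ups).
# The LLM call is a stub; in practice you would call your provider's API.

def summarize_with_llm(notes: str) -> str:
    """Placeholder for an LLM call that condenses raw meeting notes."""
    # Stub: join the non-empty lines. A real version would prompt a model.
    return "; ".join(line.strip() for line in notes.splitlines() if line.strip())

def draft_follow_up(attendees: list[str], notes: str) -> str:
    """Assemble a follow-up email draft from meeting notes."""
    summary = summarize_with_llm(notes)
    return (
        f"To: {', '.join(attendees)}\n"
        f"Subject: Follow-up and next steps\n\n"
        f"Thanks for meeting today. Quick recap: {summary}\n"
        f"Reply if I missed anything."
    )

draft = draft_follow_up(
    ["ana@example.com"],
    "Agreed on Q3 scope\nAna owns the rollout plan",
)
```

The point of a sketch like this is to hit the awkward edges (missing notes, wrong attendees) in code first, where iterating is cheap, before wiring it into a workflow tool that runs on every meeting.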
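The internal-platform bullet can also be sketched in a few lines. The idea is that every action exposed to AI-powered tools passes through a gate that enforces business rules, so an agent can never bypass them. The refund example, its limit, and all names here are hypothetical, chosen only to illustrate the pattern.

```python
# Minimal sketch of a rule-gated internal action for AI tools to call.
# The refund action and its limit are hypothetical examples.

from dataclasses import dataclass

REFUND_LIMIT = 100.00  # hypothetical business rule: auto-refunds are capped

@dataclass
class ActionResult:
    ok: bool
    message: str

def issue_refund(order_id: str, amount: float, *, approved_by: str | None = None) -> ActionResult:
    """Refund action an AI workflow can invoke; the rule check is not optional."""
    if amount > REFUND_LIMIT and approved_by is None:
        return ActionResult(False, f"Refunds over ${REFUND_LIMIT:.2f} need human approval")
    # ...here the real platform would call the payments system...
    return ActionResult(True, f"Refunded ${amount:.2f} on order {order_id}")

small = issue_refund("ord-123", 25.00)
large = issue_refund("ord-124", 500.00)
```

Because the check lives inside the platform action rather than in the agent's prompt, safety and compliance don't depend on any one workflow remembering the rule.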
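And the one-of-one bullet: below is the kind of throwaway personal tool an LLM can write in minutes, matching one person's preference and never needing to become a product. The journal layout and function names are a made-up example.

```python
# Hypothetical one-of-one tool: open today's journal note in my
# preferred layout (notes/2024/2024-06-01.md, one file per day).

import tempfile
from datetime import date
from pathlib import Path

def daily_note_path(base: Path, day: date) -> Path:
    """One file per day, grouped by year."""
    return base / str(day.year) / f"{day.isoformat()}.md"

def open_daily_note(base: Path, day: date) -> Path:
    """Create the day's note (with a heading) if it doesn't exist yet."""
    path = daily_note_path(base, day)
    path.parent.mkdir(parents=True, exist_ok=True)
    if not path.exists():
        path.write_text(f"# {day.isoformat()}\n\n")
    return path

note = open_daily_note(Path(tempfile.mkdtemp()), date(2024, 6, 1))
```

Pre-LLM, a script like this was rarely worth the interruption to write; now the cost is low enough that these accumulate.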
What else am I missing?
See also:
- The lump of labor fallacy suggests we won’t be losing jobs to AI, but the jobs will change (a difference I think this post demonstrates)
- It’s hard to automate and build systems if you don’t have experience doing it; past experience is a repertoire, not a playbook