One of the difficult parts of applying AI to existing processes and products is that people aren’t calibrated on what generative AI can and can’t do. This leads both to wild ideas that aren’t feasible and to missed opportunities to automate seemingly difficult work that is.
How do you build an intuition for what AI can do?
There’s no substitute for trying out these tools on your own. Not just fiddling around with ChatGPT, but actually trying to solve a real problem using LLMs. Surprise plus memory equals learning, so the more things you try, the more surprises you will encounter, and the more you learn about what they can and can’t do.
Speaking for myself, I’ve been quietly working on one-of-one software that helps me explore the latest AI advancements in a safe place. I built org-ai to try out RAG techniques over a large number of documents (my public and private notes). I then applied what I learned to a prototype at work using the same techniques. Along the way I ran into problems with prompts and with retrieving relevant documents, which taught me some of the limitations of vector databases and RAG.
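To make the pattern concrete, here is a minimal sketch of RAG over a folder of notes. This is not org-ai’s actual code; the model names, file loading, and top-k choice are assumptions for illustration.

```python
# Minimal RAG sketch: embed notes, retrieve by cosine similarity,
# and stuff the top matches into the prompt. Illustrative only.
from pathlib import Path

import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

# Load and embed every note up front; a real system would cache these.
notes = [p.read_text() for p in Path("notes").glob("*.org")]
note_vectors = embed(notes)

def ask(question: str, k: int = 3) -> str:
    # These embeddings are unit-length, so a dot product is cosine similarity.
    scores = note_vectors @ embed([question])[0]
    context = "\n---\n".join(notes[i] for i in np.argsort(scores)[-k:])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided notes."},
            {"role": "user", "content": f"Notes:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(ask("What did I decide about vector databases?"))
```

Even a toy like this surfaces the failure modes quickly: the wrong notes get retrieved, or the right ones get drowned out by near-duplicates.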
I’m documenting a list of prompts that I can reuse for various problems. I wrote a script to use AI from a Google spreadsheet, along with a prompt to detect specific kinds of spam signups. I also wrote a prompt that summarizes my last week for the team, using previous write-ups as examples to match my writing style.
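As an example of what I mean by a reusable prompt, here is a sketch of the spam-signup check. The criteria and fields are invented for illustration, not the exact prompt I use.

```python
# Sketch of a reusable classification prompt; the spam criteria and
# signup fields below are placeholders, not the real rules.
from openai import OpenAI

client = OpenAI()

SPAM_PROMPT = """\
You review new signups for a SaaS product. Reply with exactly one word,
SPAM or OK. Treat as SPAM: gibberish names, disposable email domains,
or company fields stuffed with links.

Name: {name}
Email: {email}
Company: {company}"""

def is_spam(name: str, email: str, company: str) -> bool:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": SPAM_PROMPT.format(
            name=name, email=email, company=company)}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip().upper() == "SPAM"

print(is_spam("asdf qwer", "x9@mailinator.com", "http://spam.example"))
```

Constraining the output to a single word makes the result easy to consume from a script or a spreadsheet cell.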
Once you build up the habit of applying AI to real problems, it starts to become a default starting point for new solutions. Not everything can or should be solved with AI, but as a way to rapidly explore the space and gain real-world experience, learning by doing is unmatched.
Links to this note
- While I don’t write code every day, I find it incredibly convenient.
- Several startups are touting AI employees that you can hire to perform a specific function. Intercom announced Fin, an AI customer service agent, and so did Maven AGI. Piper is an AI sales development representative and so is Artisan. Devin is an AI software engineer.
- LLM Latency Is Output-Size Bound
  As it stands today, LLM applications have noticeable latency, but much of that latency is output-size bound rather than input-size bound. In practice, the amount of text that goes into a prompt matters far less for latency than the amount of text the model generates.
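You can see this for yourself with a rough timing sketch like the one below, assuming the OpenAI Python SDK; the model and prompt are placeholders. Holding the prompt fixed while varying the requested output length shows total latency tracking the number of generated tokens.

```python
# Rough timing sketch: same prompt, different requested output lengths.
# Total latency should track generated tokens far more closely than
# prompt size. Model name and prompt are placeholders.
import time

from openai import OpenAI

client = OpenAI()

def timed_completion(prompt: str, max_tokens: int) -> None:
    start = time.perf_counter()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=max_tokens,
    )
    elapsed = time.perf_counter() - start
    print(f"{max_tokens=} used={resp.usage.completion_tokens} {elapsed=:.2f}s")

prompt = "Write about the history of the telegraph."
timed_completion(prompt, max_tokens=30)   # short answer: fast
timed_completion(prompt, max_tokens=600)  # long answer: several times slower
```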