Welcome to my notes! I regularly publish notes to better understand what I’ve learned and to explore new ideas. I would love to hear from you: leave a comment or reach out on Bluesky @alexkehayias.com or find me on LinkedIn.
Notes From the Field
-
Export Meeting Videos From Fathom
Annoyingly, Fathom does not provide a way to download all meeting videos from an account. The workaround support shared with me was to set up a Zapier automation triggered by the Fathom integration when a new video is available, use that to store the file in Google Drive, and then have the Fathom eng team trigger an event for every existing video in the account. But only if you’re on a Teams account…
Instead, go to the home page and scroll down until all videos appear (infinite scroll, how quaint).
In the browser’s JavaScript console, run the following to get a list of all the links and extract the ID of each video.
const ids = Array.from( document.querySelectorAll('section.flex.flex-wrap a[href]') ).filter(a => a.href.startsWith('https://fathom.video')).map(a => a.href.split('/').at(-1));
Prime the video export, otherwise downloading will 404.
// Request each activate_download URL; building the strings alone does nothing.
ids.forEach(i => fetch(`https://fathom.video/calls/${i}/activate_download`));
Wait a while.
Construct the download URL for each video:
ids.map(i => `https://fathom.video/calls/${i}/download`);
Since download links require authentication, open the download link in a new tab.
You can download many at once by programmatically opening a tab for each URL (warning: this could open a lot of tabs if there are a lot of videos to download!).
ids.forEach((i, idx) => window.open(`https://fathom.video/calls/${i}/download`, `_blank_${idx}`));
Published
-
When to Show Error Details
Only show error message details when the user can recover from it. Never show errors that expose backend exceptions or stack traces.
When a user can’t do anything to recover, be clear about it. It’s almost always better to say “Something went wrong, we’re looking into it” than to dump a stack trace for the user to puzzle over.
But what if you really really need that information to figure out the bug? Don’t make your user do it! Use logging and traces to be alerted when an exception occurs and include enough context for an engineer to resolve the issue. If you absolutely must have the user participate, include instructions for how to file a good support ticket. For example, “Something went wrong, please contact support@acme.com and include the following identifier for our team to investigate. {identifier with click to copy}”.
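To make this concrete, here’s a minimal sketch in Rust (hypothetical names; in practice wire the identifier to your real request or trace ID) that keeps the details in the logs for engineers and hands the user something they can quote to support:

// A sketch with hypothetical names; adapt the identifier to your request/trace ID.
use std::fmt::Display;

fn user_facing_error(err: impl Display, request_id: &str) -> String {
    // Full details go to logging/tracing for engineers, never to the user.
    eprintln!("[request {request_id}] unhandled error: {err}");
    // The user only sees something actionable plus an identifier to quote.
    format!(
        "Something went wrong, please contact support@acme.com and include \
         the following identifier for our team to investigate: {request_id}"
    )
}

fn main() {
    let message = user_facing_error("connection refused (db)", "req-8f2c");
    println!("{message}");
}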
Published
-
Local AI
I’ve been spending more time with local large language models. By some estimates, open-weight models (Qwen, DeepSeek, GLM, Gemma, etc.) are only 9 months behind frontier commercial models (OpenAI, Anthropic, and Google) but with the added advantage of being able to run locally.
Privacy is, of course, the top reason to go local. I won’t repeat all the reasons why, but I’ll point out that local LLMs open up interesting high-trust use cases. For some people, sharing their inner thoughts and feelings over a chat that is logged forever might be fine, but businesses that need to process sensitive data probably aren’t keen to.
For me personally, I’m okay with storing data with providers that have a long track record of good stewardship (AWS, Google, Microsoft) but for processing that data with AI, I’m not ready to grant the big AI labs license to everything.
In my view, AI is more like personal computing. There are advantages to cloud computing and SaaS applications for sure, but sovereignty has more value to me, especially for one of one software.
Put another way, I’d rather buy than rent and running LLMs locally enables me to do that.
Published
-
Why I Don't Write (As Much) Anymore
It’s been a while since I’ve published anything. I don’t feel as motivated about my note-taking practice. I’ve fallen out of the habit.
The idea of building up a personal library of atomic units of knowledge about everything I encounter doesn’t interest me anymore because the advancements in artificial intelligence are so darn incredible.
If I’m being honest, part of the reason I write publicly is recognition. There is something satisfying about helping another human being find an un-google-able answer.
Now, we’re better off just asking ChatGPT.
It’s kind of funny and sad at the same time. I have “real intelligence” but the simulation of intelligence turns out to be far more useful.
That’s why I want to write more about ideas. Instead of squirreling away rote facts and tidbits hoping my supplies will be useful some day, I want to hold onto my interpretations and (informed) opinions.
Maybe there’s a chance for a human internet again. Maybe it’s not geocities and blink tag nostalgia but more about the unique expression of each person that dares to smush their fleshy fingers against a keyboard to publish a web page in the age of AI.
Maybe?
Published
-
Comet AI Browser From Perplexity
Some initial impressions on the AI-first web browser from Perplexity, Comet:
- A little buggy: when I visited a new website from an existing tab and opened Assistant, it ran the same prompt from last time, which didn’t make sense in the context of the current website.
- Assistant doesn’t seem to use the current page as context as fully as I hoped; it felt like chat next to the website rather than something integrated.
Published
-
AI Web Browser
Just like AI coding agents are forking VSCode so that agents have more control over the development environment, AI companies are forking Chromium so that agents have more control over the browser environment.
- Comet AI browser from Perplexity
- Brave Leo
- Arc Max
- OpenAI web browser (rumor)
- Dia
- OpenDia (open-source clone of Dia)
Published
-
How to Lead a Meeting
Meetings are not useful unless they lead to an outcome or clearly defined tasks needed to reach an outcome. If you want influence over the outcome, it’s always best to lead.
What I’ve found that works best to lead a meeting:
Before the meeting
- Send the agenda a day before the meeting. Break down the sections strategically so that you can cover the most important details as a group or give you the best chance to steer the conversation to the outcome you believe is best.
- Send a pre-read that already answers the question you are there to discuss.
During the meeting
- Before anyone joins, put a slide on the screen with the agenda. Sometimes you want to include key points you want someone to take away from the discussion.
- Wait up to 3 minutes after the start time for people to join; 2 minutes in, ask if the group is waiting on anyone who must attend.
- Assign a note taker or record the meeting and let people know you will send it afterwards.
- Short intros. Ask team leads to introduce their team. Use the phrase, “Tweet length intro”.
- Go through the agenda items one by one and be clear about what each topic is. If the group veers, politely say “we’re getting off topic, I’ll make a note to follow up about this later, is that okay?”. No one will say no unless it’s suddenly very important.
- With 5-10 minutes remaining (more for longer meetings), summarize the action items that arise and assign exactly one owner by name, ideally with a date. Write it down so there can be agreement. Schedule a follow-up call on the calendar.
- Send a follow-up summarizing the outcome and action items.
Published
-
Malleable Software
Today’s applications provide closed systems that can not be modified or extended to suit the needs of a user. Settings, plugins, modding, and even open-source don’t address these limitations because they create sharp cliffs between a worker’s power and the skill required for tailoring.
The solution is malleable software: a philosophy of software as reusable, user-configurable tools that offer a “gentle slope” for tailoring them to the user’s specific needs. Malleable software leads to powerful systems that can be adapted to the environment of users with varying degrees of skill.
Read Malleable software - Restoring user agency in a world of locked-down apps from Ink & Switch.
See also:
- One of my favorite examples is that Emacs is the ultimate editor building material
- While this is akin to one of one software, I like that the authors insist that community and collaboration are important to malleable software
- As a counterpoint, applications are almost perfect for commercialization and that makes popularization of malleable software more of a cultural innovation than a technological one
Published
-
DB Connections in Healthchecks Is a Bad Idea
Here’s how I’ve seen DB connections/checks in a load balancer healthcheck become a bad idea:
- A healthcheck timeout causes servers to be replaced, leading to hard downtime: all instances get pulled out of rotation and new ones fail to start because latency to the DB was temporarily > the healthcheck timeout (3 seconds for us currently!)
- Relying on a healthcheck to detect DB health takes time: unhealthy threshold * interval before any action is taken (120s for us, compared to crashing at start time, which would surface the problem right away)
- Healthcheck-induced DB query floods amplify a DB issue as more instances are spun up, all trying to create new connections and execute DB queries at once. The solution is usually hopping onto the DB server and killing all connections, not trying to spin up more server instances, which could make it worse.
- A DB restart causes healthchecks to fail and server instances to cycle, taking healthcheck threshold + Fargate provisioning time to fully recover
What to do instead?
Monitor database health separately. Identify the failure modes a load balancer can solve by pulling unhealthy server instances out of rotation (unrecoverable crash, hardware failure) and the ones where it can’t (database cluster issues, latency), then tune healthchecks accordingly to speed up recovery time.
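As a sketch of what that looks like in practice (this assumes an axum service, which is what I happen to use; adapt to your framework), the load balancer healthcheck only proves the process can serve requests and leaves database health to separate monitoring and alerting:

// A sketch assuming the axum crate; not the only way to do this.
use axum::{http::StatusCode, routing::get, Router};

// The load balancer healthcheck only proves the process can serve requests.
// No DB query here: a slow or restarting database should not cause the load
// balancer to pull every instance out of rotation at once.
async fn healthz() -> StatusCode {
    StatusCode::OK
}

fn app() -> Router {
    Router::new().route("/healthz", get(healthz))
}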
Published
-
Unsubscribe Links Are a Sneaky Phishing Vector
Maybe you want to be nice. Maybe you don’t want to hurt another well-meaning business' email delivery because you know from experience how painful debugging deliverability issues can be. So you click the unsubscribe link or you click the unsubscribe button that Gmail handily derives from the email.
Then you land on a phishing site. Maybe you enter your email to confirm the unsubscribe, like many an unsubscribe page has trained us to do. Maybe it asks you to log in. Maybe you just gave your password to the phisher or confirmed that you do indeed have a FooCorp account.
Sad to say, but it’s safer to mark-as-spam. There are no penalties for the receiver. You can always undo it later.
Published
-
Ask AI to Ask You Questions
A surprisingly useful technique for getting the most out of AI tools is to prompt the model to ask you questions that would help it complete the task you want done.
For example: “We’re working on building a report about foo. Please ask me the questions you need answers to in order to write a report about foo.”
This works well because LLMs are generally better at writing prompts for LLMs.
You can pair this technique with asking it to write a prompt for you. “Based on this, write a prompt for an LLM to write a report about foo.”
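Here’s a rough sketch of the two-step chain in Rust, with a placeholder complete function standing in for whatever LLM API or CLI you actually use:

// A sketch; `complete` is a stand-in for any real LLM call.
fn complete(prompt: &str) -> String {
    format!("<model response to: {prompt}>")
}

fn main() {
    let topic = "foo";

    // Step 1: ask the model what it needs to know.
    let questions = complete(&format!(
        "We're working on building a report about {topic}. \
         Please ask me the questions you need answers to in order to write it."
    ));

    // Step 2 (after answering the questions yourself): have it write the prompt.
    let answers = "…"; // your answers go here
    let final_prompt = complete(&format!(
        "Based on these questions and answers:\n{questions}\n{answers}\n\
         Write a prompt for an LLM to write a report about {topic}."
    ));

    println!("{final_prompt}");
}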
Published
-
AI Agent Design Patterns
It’s still early days for building user applications with AI, but patterns are starting to emerge as the industry moves from LLM workflows to AI agents.
Here I’m cataloging different UI and UX patterns that I find to be useful.
Checkpoints
Examples: Replit, Cursor
Best for: applications that make multiple changes all at once in response to the user, like writing code, editing a document, or updating a SQL query
When working on a shared artifact with an AI agent, the user is building or iterating on existing output from an AI agent. As the user prompts for changes, these are effectively destructive edits and, if this were typical software, the user would expect a way to undo these edits.
With AI agents, “undo” is more ambiguous. Does that mean undo the last change the agent made? What if there were many edits made all at once?
Checkpoints provide a more explicit way to roll back changes. Explicit because the unit of work is the response to a prompt, not an individual change.
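A minimal sketch of the idea (hypothetical types, not any particular product’s implementation): snapshot the artifact before applying each agent response so rollback works per response, not per edit:

// A sketch with hypothetical types: the "artifact" is just a String here.
struct Checkpoints {
    // Artifact state captured before each agent response was applied.
    snapshots: Vec<String>,
}

impl Checkpoints {
    fn new() -> Self {
        Self { snapshots: Vec::new() }
    }

    // Snapshot the current state, then apply every edit from one response.
    fn apply_response(&mut self, artifact: &mut String, edits: &[String]) {
        self.snapshots.push(artifact.clone());
        for edit in edits {
            artifact.push_str(edit); // placeholder for real edit application
        }
    }

    // Roll the artifact back to just before the last agent response.
    fn rollback(&mut self, artifact: &mut String) {
        if let Some(previous) = self.snapshots.pop() {
            *artifact = previous;
        }
    }
}

fn main() {
    let mut doc = String::from("fn main() {}");
    let mut checkpoints = Checkpoints::new();
    checkpoints.apply_response(&mut doc, &[String::from("\n// agent edit")]);
    checkpoints.rollback(&mut doc); // back to the state before that response
    assert_eq!(doc, "fn main() {}");
}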
Approvals
Examples: Goose, Claude Code
Best for: applications with broad-based control and hard-to-undo side effects, like controlling a web browser or accessing the command line of the user’s computer.
Approvals are important when the user grants access to a powerful environment from which the agent takes action on their behalf. For example, an agent that has full access to the user’s computer should probably ask before deleting anything as the consequences can be severe.
The AI agent detects the sensitive action (usually a tool call or MCP call) and prompts the user for an approval before proceeding.
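A minimal sketch of the idea (hypothetical types and tool names, not any particular agent framework): classify the tool call and block on an explicit user approval before executing anything sensitive:

// A sketch with hypothetical types and tool names.
#[derive(Debug)]
struct ToolCall {
    name: String,
    args: String,
}

// Assumption: a simple deny-list; real agents often classify by tool metadata
// or let the user configure which actions require approval.
fn is_sensitive(call: &ToolCall) -> bool {
    matches!(call.name.as_str(), "delete_file" | "run_shell")
}

// Placeholder: in a real application this would prompt the user in the UI.
fn ask_user_for_approval(call: &ToolCall) -> bool {
    println!("Approve `{}` with args {:?}? [y/N]", call.name, call.args);
    false
}

fn execute(call: ToolCall) {
    if is_sensitive(&call) && !ask_user_for_approval(&call) {
        println!("Skipped `{}` (not approved)", call.name);
        return;
    }
    println!("Executing `{}`", call.name);
}

fn main() {
    execute(ToolCall { name: "run_shell".into(), args: "rm -rf ./target".into() });
}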
Escalations
Examples: ???
Best for: autonomous agents that perform actions on behalf of a user where reliability, accuracy, or completion are the highest priority
When an autonomous agent runs into an exception, the best thing to do might be to escalate to a human to take over rather than proceed. For example, an agent responding to a support inquiry might decide it’s unable to answer the customer’s question and escalate to a human agent.
The AI agent needs to detect the exception (from the example, checking the logprobs of the response, or having an LLM-as-judge score the response below an acceptable threshold) and then have a mechanism to escalate to another system.
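A minimal sketch of the idea (hypothetical types; the threshold is made up): route the response to a human whenever the judge score falls below an acceptable level:

// A sketch with hypothetical types; the threshold value is made up.
const ESCALATION_THRESHOLD: f64 = 0.7;

enum Outcome {
    Respond(String),
    EscalateToHuman { draft: String, reason: String },
}

// Decide whether the agent answers directly or hands off to a person,
// based on a score from an LLM-as-judge (or logprob-derived confidence).
fn triage(draft_response: String, judge_score: f64) -> Outcome {
    if judge_score < ESCALATION_THRESHOLD {
        Outcome::EscalateToHuman {
            draft: draft_response,
            reason: format!("judge score {judge_score} below threshold"),
        }
    } else {
        Outcome::Respond(draft_response)
    }
}

fn main() {
    match triage("Here is a draft answer to the ticket…".to_string(), 0.42) {
        Outcome::Respond(text) => println!("send: {text}"),
        Outcome::EscalateToHuman { draft, reason } => {
            println!("escalate to a human ({reason}); draft: {draft}")
        }
    }
}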
Hooks
Examples: Claude Code
Best for: agents where interactions need to be governed deterministically. For example, running linters after a code agent makes a file change or logging tool usage for future audit.
Published
-
CSM to Customer Benchmark
Based on data from Gainsight, the average number of accounts managed per customer success manager is 144 for low-touch businesses with less than $10k in annual contract value (ACV).
- > $100k ACV: 22 accounts managed on average
- $10k-$100k ACV: 49
- < $10k ACV: 144
Published
-
Axum and Dynamic Dispatch
I kept getting an opaque Rust compile error when writing an axum API handler. Like this:
the trait bound `fn(axum::extract::State<Arc<std::sync::RwLock<AppState>>>, axum::Json<ChatRequest>) -> impl Future<Output = axum::Json<ChatResponse>> {chat_handler}: Handler<_, _>` is not satisfied
And:
the trait `Handler<_, _>` is not implemented for fn item `fn(State<Arc<…>>, …) -> … {chat_handler}`
= help: the following other types implement trait `Handler<T, S>`:
  `MethodRouter<S>` implements `Handler<(), S>`
  `axum::handler::Layered<L, H, T, S>` implements `Handler<T, S>`
Turns out my handler code used dynamic dispatch via trait objects, and that was incompatible with Axum’s async implementation.
See if you can spot the issue:
let note_search_tool = NoteSearchTool::default();
let tools: Option<Vec<Box<dyn ToolCall>>> = Some(vec![Box::new(note_search_tool)]);

let mut history = vec![];
chat(&mut history, &tools).await;
The fix is actually straightforward, but the opaque error messages made this difficult to track down. By default, a trait object is not thread-safe. I needed to add some additional trait bounds to the usage of dynamic dispatch.
let note_search_tool = NoteSearchTool::default();
let tools: Option<Vec<Box<dyn ToolCall + Send + Sync + 'static>>> =
    Some(vec![Box::new(note_search_tool)]);
Adding a type alias cleans up these type signatures.
type BoxedToolCall = Box<dyn ToolCall + Send + Sync + 'static>;

let note_search_tool = NoteSearchTool::default();
let tools: Option<Vec<BoxedToolCall>> = Some(vec![Box::new(note_search_tool)]);
Published
-
What Does It Mean to Be AI Native?
With the capabilities of large language models getting more useful for real work, the pressure is on to incorporate them everywhere. I’m seeing an increase in loud voices proclaiming people and businesses must become “AI Native” if they are to survive.
While I wouldn’t put it in such absolute terms, competitive people who aim to do great work ought to take notice: moments of rapid progress and change are rare.
But what does it mean to be AI native?
Being AI native means incorporating AI into the foundation of how you create and positioning yourself to take advantage of rapid progress in AI. For individuals, that means augmenting how they work to utilize AI to increase their productivity: improved efficiency, increased output, but also solving problems that were previously intractable due to limited resources. For businesses, that means building the culture and infrastructure to use AI safely, automating by default, and applying new techniques to solve customer problems faster and more completely (no, this does not mean you should build a damn chat bot).
The individual and the business go hand-in-hand. It’s going to be difficult for a business to become “AI native” if employees don’t enthusiastically engage with AI (it’s difficult to build an intuition of what it can do). Since this requires a change in culture, larger organizations will struggle while startups will succeed (this is an advantage we shouldn’t squander!).
In practice, I think it looks like this:
- Problem solving starts with an AI co-pilot or AI agent to rapidly get up to speed and explore the solution search space. For engineers that means building incremental improvements with AI-powered autocomplete or full features using an agent. For designers that means prompting to explore different solutions all at once and then refining them, before passing them to engineering with all the HTML and CSS already written.
- Recurring tasks are prototyped in workflow tools like n8n or Dify before being applied to everything. If you run into a problem trying to automate something, go back to item 1. For example, meeting follow-ups, lead nurturing, support, monitoring, and standard operating procedures.
- Internal tools and systems become significantly larger (probably larger than the customer facing application) and the internal platform provides access to data and actions that adhere to business rules (for safety, security, and compliance reasons). These are designed primarily for use with other AI-powered tools, workflows, and one-off applications using code written by (surprise!) other AI-assisted workflows.
- The product and experience delivered aims to be a complete solution that is customized for each customer so that they are more directly paying for outcomes rather than software (while avoiding the infinite butler problem). The marketing of AI doesn’t matter so much as the solution delivered taking advantage of AI to get there faster and more completely.
- More one of one software is created by individuals specifically for their needs and preferences because LLMs significantly lower the effort to payoff ratio (as evidenced by the rise of “vibe coding”).
- Everything else that can’t be automated or stitched together in the day-to-day of running the business is sped up with faster communication. For example: voice dictation like Wispr Flow; progressive summarization in email (Gemini in Gmail), messaging (Slack AI), and documentation (Notion AI); and generative AI to respond quickly.
What else am I missing?
See also:
- Lump of labor fallacy shows that we won’t be losing jobs to AI, but the jobs will change (as this post I think demonstrates)
- It’s hard to automate and build systems if you don’t have experience doing it: past experience is a repertoire not a playbook
Published
-
Typed Languages Are Best for AI Agents
Typed languages should be the best fit for useful AI agents. Context is needed for practical LLM applications and type systems provide a ton of context. Compiling the code provides a short feedback loop that can help the agent.
Strongly typed languages like Rust are even better. Not only do you get great compiler errors for the agent to incorporate, you can easily tell when LLMs are subtly wrong. For example, when an AI agent was writing a function that used a library, it wrote it based on an older version so it didn’t compile.
See also:
- Static types make it easier to work on projects sporadically when you don’t have time to page the whole codebase into your working memory
- Static types might also help prevent mushy systems as AI contributes more to large codebases
- I tried out goose coding AI agent on some end-to-end features which went surprisingly okay
- More agents are coming as the market gets flooded with AI employees
Published
-
Install a PWA in MacOS
Apple keeps making it more and more difficult to “install” a PWA. At the time of writing, there is no longer a “Share” button in the navigation bar so you have to go menu diving to find out how to install a PWA now…and by install I mean add it to the Dock, whatever that means.
- Navigate to the URL of the PWA in Safari
- Go to File -> Share
- Select “Add to Dock”
Published
-
Goose Coding AI Agent
I’ve been testing out goose, an AI agent for writing code that runs on your machine instead of as an IDE co-pilot. Writing features for my personal indexing service (a Rust codebase), I learned:
- For small, additive, self-contained features, it works well. For example, spawning a long-running task that used to be blocking, adding a client-side PWA service-worker, and adding a test function.
- Sometimes I found that goose would replace a line rather than add it: when creating a new module it overwrote a line in lib.rs, which caused another module to no longer be linked and fail to compile. This also happened when adding a dependency.
- One time it overwrote a file, truncated it, and never finished outputting the rest.
- Responds well to guidance about correcting issues. For example, when implementing the server side of push notifications it used a library incorrectly (probably an old version), but when I provided the readme example as context, it fixed it up.
- Typed languages are probably best for AI agents
Published
-
Prospects Will Thank You for Disqualifying Them
When I started doing founder-led sales, I thought it was my job to find a way to sell to anyone who got in touch and scheduled a meeting. When you have no customers and no sales, it’s natural to want to sell to anyone. This is a mistake.
Qualifying might sound awkward at first. There are some questions to ask and answers you are listening for. I worried that it wouldn’t sound natural.
Then I learned that disqualifying someone from buying your product is a good thing. So much so that prospects will go so far as to say, “Thank you for your honesty”, when I tell a tiny company with one employee that our product is overkill for them. Not only does this buy some goodwill because you didn’t waste their time, it leaves the door open for them to come back to you in the future because you earned some trust. (I also recommend giving them very simple signs of when they should get back in touch!)
See also:
- Discovery questions in sales calls should feel consultative
- Customer success overcomes technical hurdles but not buyer mismatches
- Selling something to the wrong customer leads to churn
Published
-
The Bitter Lesson
Ironically, one of the most efficient strategies for building with AI is to wait for better models.
Researchers try to address shortcomings of AI models by constraining them from general purpose tools to specific purpose tools. The exact opposite approach eventually obviates their work as other researchers improve models by scaling compute rather than through better algorithms and clever constraints.
This is relevant to today’s approach to AI using large language models. The models improve incredibly fast and one side-effect of that is foundation model providers regularly wipe out companies trying to build specific applications once the model becomes good enough.
Read The Bitter Lesson by Rich Sutton and AI Founder’s Bitter Lesson from Lukas Petersson.
See also:
- AI puts a higher premium on unique knowledge but the area of knowledge narrows over time as models get generally better at more things
- Gödel Incompleteness For Startups
Published
-
Why I Like Ring Binders - Plotter Notebook
The Plotter notebook from Designphil is a minimal ring binder for planning and writing. I use the A5 size for work and journaling in addition to org-mode for managing tasks and writing permanent notes.
Being a ring binder allows for customization as you go. Instead of worrying about leaving space to add on to something you just wrote, you can open up the rings and place another page anytime in the future. You can add different page templates as needed using off-the-shelf refills (the Plotter refill paper is quite good) or even print your own (I recommend a standard size like A5 if you plan to do this).
Being a ring binder means pages lay flat. This is such a useful feature not only for writing comfort (which is very important) but also for referencing the contents at a glance while it sits on your desk as you work.
Writing with pen and paper is a welcome break from constant screen time. While far less efficient, research suggests analog writing is more effective for encoding information into long-term memory. I feel more focused because I have to fully concentrate on writing compared to typing.
Published
-
AI Browser Automation
An incomplete list of AI-powered web browser automation and AI agent projects.
- Google Project Mariner
- Skyvern
- Claude Computer Use
- OpenAI Operator agent
- browser-use
- magnitude
- Smooth
- UI-TARS
- WebVoyager
- WILBUR
- Agent-E
- Runner H from Hcompany
Published
-
N8n for Automation
I started trying out n8n for setting up automated workflows.
I tried setting up a basic HubSpot filter to collect all deals in a certain stage. Unfortunately, the API doesn’t return the names of things, just IDs, so I can’t tell by looking at it what the pipeline or the stage is because those are customized to our instance of HubSpot. This would probably not be possible for a non-technical person to do.
How is it different than Dify?
n8n has out-of-the-box integrations with many B2B SaaS tools like HubSpot. It also provides triggers like cron jobs and webhooks, which Dify does not.
Notes on usage:
- Quirky UI: double click to edit, triple pane layout when editing with dismiss button on the left, when to use an expression or not for a field value, tools have to be linked visually to an LLM node
- When working off of large lists, speed up the feedback loop by only executing a node once while you test it out by going to Settings -> Execute Once (don’t forget to flip it back)
- Clicking the test button doesn’t actually seem to execute the full flow
- To extract fields from a large API response, use the Edit node (horribly named)
- Tools can only be used in conjunction with a language model (e.g. SERP can’t be used to make arbitrary search queries in a workflow)
- HubSpot has a horrible API and the built-in integration in n8n does not make it any easier (e.g. you need to map IDs yourself by making multiple API calls and using a Merge node to join them)
- Scheduled tasks don’t run until you activate the workflow
Published
-
Ink Reviews
This is my personal list of ink reviews based on my usage and preferences. Black ink unless otherwise noted.
Caran d’Ache 0.7mm (parker style)
Very smooth and dark on the page, which makes writing with it very clear. Ink takes a while to dry so smudging happens frequently if I’m not careful.
Ohto Ceramic Gel Pen Refill 0.5mm (PG-M05NP, parker style, needle tip)
Writing is not consistently smooth and sometimes needs more pressure on the paper to write. Drying is relatively fast but heavier spots will still smudge a few seconds after writing. I would prefer a 0.7mm point but at the time of writing, they do not make one.
Ohto Flash Dry Gel Pen Refill 0.5mm (PG-105NP, parker style, needle tip)
Much smoother than the Ohto Ceramic roller ball refill and the same dry time. Definitely recommend these over the PG-M05NP.
Ohto Ballpoint Refill 0.7mm (PS-107NP, parker style, needle tip)
Smooth writing but the ink is very light for a 0.7mm and it’s significantly lighter than the Ohto Flash Dry Gel ink refill.
Published