• Thinking in Scenarios Simplifies Scoping

    When scoping out and planning work, an easy way to get a comprehensive understanding of everything that needs to happen is to enumerate every scenario. I find this technique saves a lot of time and clarifies what to do in a way that avoids surprises later.

    How do you do it?

    Start by laying out each variable (try using a scenario table). For a product feature, the variables might be aspects of the user and the action. For example: is the user new or existing? Did they take the action yet? Is the action complete or in progress? Did something go wrong?

    This gives you a set of scenarios to enumerate: a new user who hasn’t taken the action, a new user who has taken the action but it isn’t done yet, a new user who completed the action, a new user who encountered an exception (and the same for an existing user). Now you can trim the list of permutations down to the ones that make sense and decide how to handle each one.
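    The enumeration step can be mechanized. As a minimal sketch (the variable names are just the ones from the example above), itertools.product generates every permutation for you to trim down:

    ```python
    from itertools import product

    # Hypothetical scenario variables from the example above.
    user_types = ["new", "existing"]
    action_states = ["not taken", "in progress", "complete", "error"]

    # Enumerate every permutation, then trim to the ones that make sense.
    scenarios = [
        f"{user} user, action {state}"
        for user, state in product(user_types, action_states)
    ]

    for scenario in scenarios:
        print(scenario)
    ```

    Adding a new variable (say, mobile vs. desktop) is one more list in the product, and the full set of cases falls out automatically.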

    That’s a trivial example, but I find the same technique is just as useful when negotiating a contract and wanting to make sure all your bases are covered.

    See also:

    • Many more things can be explained with finite state machines, but they are a little harder to understand compared to the scenarios technique
    • This also helps spot potential issues early, like a pre-mortem exercise

  • Right of First Refusal

    A right of first refusal (RoFR) in an agreement is a deterrent to other acquirers. The acquirer wants to be in control and this automatically makes it a multi-bidder situation. The acquirer might not want to make an offer knowing that it might not go through and result in a different acquirer winning the deal.


  • The Pareto Principle and Chatbots

    Support cases at many businesses follow the Pareto principle. For example, at DoorDash and Uber, 87% of support requests relate to just 16 issue types. Deploying chatbots to solve these high-concentration issues makes it economical to build and maintain conversation trees by hand. What about the remaining long tail, and what about businesses with a wider distribution of issue types?

    Advancements in large language models might make that long tail tractable, if not economical. Their general reasoning skills and ability to respond to unique situations with mostly right answers could be a boon for chatbot-based support. What’s more, businesses can fine-tune and supplement resolution paths using the support materials they have already produced for human agents.


  • Reduce Damage of Shipping a Bike by Pretending It's a TV

    When VanMoof started shipping bikes to the US, they experienced a high number of returns because bikes arrived damaged, despite warnings on the box to be careful. Rather than try to fix the US shipping industry’s practices, they did something very clever: they changed their packaging to make it look like it contained a flat-screen TV.

    The US shipping industry has many years of experience delivering flat-screen TVs, as demand for them skyrocketed starting in the early 2000s. I can imagine the big retailers had significantly more pull with the likes of UPS, FedEx, and even the US Postal Service to reduce damage during deliveries.

    By piggybacking on years of safe flat-screen TV delivery practices, VanMoof’s shipping damages dropped by 70-80%.

    Read VanMoof files for bankruptcy protection from the Pragmatic Engineer.


  • Zapier NLA Is Bad at Generating API Parameters

    I tried out the Zapier Natural Language Actions API and found that it’s not particularly good at the one thing I needed it to be good at: gluing together other APIs with natural language. For API endpoints that are simple and straightforward, large language models can easily generate the right inputs, but more complicated (and poorly designed) APIs like HubSpot’s are unusable.

    This is a shame because mashing up simple APIs is trivial, and the things that would relieve the most effort (like poorly designed APIs) are exactly what NLA can’t do.

    Maybe this gets better eventually and Zapier adds enough special handling for HubSpot. I still think this could be extraordinarily powerful if it can become the substrate for interop rather than people.


  • Compounding Is Unintuitive Because the Initial Curve Feels Flat

    I’ve always wondered why compounding, and any exponential relationship, feels unintuitive. That is, until I read this quote from Paul Graham’s essay How to Do Great Work.

    The trouble with exponential growth is that the curve feels flat in the beginning. It isn’t; it’s still a wonderful exponential curve. But we can’t grasp that intuitively, so we underrate exponential growth in its early stages.

    We tend to understand immediate relationships easily, but with exponential growth over time, the outsized gain occurs at the end. That’s simple to visualize (draw an upward-facing curve) but hard to know when you’re on it.

    It will feel closer to flat or linear when you are at the head of the curve over short time windows. Even more difficult, results are not always easily observable. For example, how would you know you are thinking better thoughts?

    The point is, you might need to slog through some things for a while before seeing the results of exponential growth that was already happening.
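    A quick back-of-the-envelope calculation makes the flat-feeling head of the curve concrete. Assuming a hypothetical 1% compounding improvement per day, the first and last 30 days of a year are windows of the same length with wildly different absolute gains:

    ```python
    # Hypothetical: compound 1% growth per day for a year.
    rate = 1.01

    first_window = rate ** 30 - rate ** 0    # gain over days 0-30
    last_window = rate ** 365 - rate ** 335  # gain over days 335-365

    # The early window feels flat; the late window is where the outsized gain lands.
    print(f"first 30 days: +{first_window:.2f}")
    print(f"last 30 days:  +{last_window:.2f}")
    ```

    The early window gains about a third of the starting value, while the final window gains well over twenty times that, even though the growth rate never changed.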



  • Zoom Note Taker Bots Are Disruptive

    Several products these days have an AI bot that will join your Zoom call and take notes. That’s bad.

    It puts everyone in the meeting on edge. It’s like a courtroom stenographer sitting in on your call. I don’t want to be in a courtroom, I want to be in a semi-private conversation with other human beings.

    When a bot joins a Zoom meeting, the UI presents it as an equal participant. It gets a square filled with a black screen, like someone with their camera off. It has a name. If multiple people use this kind of service, multiple bots join the meeting.

    This UI makes the bots feel more “there”, making it even harder to ignore.

    It’s okay to take notes (I would have no problem with a human participant typing up some notes as we go) but somehow I feel weird about a carbon copy transcript or video of the meeting. It feels like a permanent record that can never be expunged. What if I said something wrong?

    I don’t want to be “on the record” all the time.



  • What's So Great About Michelangelo's David

    I am by no means an art buff, but here is my explanation of what is so moving about Michelangelo’s David 500 years after it was made.

    When I first saw him at the Accademia in Florence, Italy, it immediately struck me how large he is. At a height of nearly 17 feet, he’s imposing, but that’s not what pulled me in.

    The most striking thing about this David is that we don’t know if the scene is before or after killing Goliath.

    Imagine for a moment that this is before the fateful moment—here he stands facing down an overwhelming force in Goliath. He shows courage, determination, composure, and vulnerability (literally standing there naked) knowing that all he has is his wits.

    That feels like life sometimes. We all have conflicts and problems. I get a clear sense of David’s interiority and see a piece of myself in there.

    Then the realization hit me and I couldn’t help welling up inside. Let me try to explain.

    Michelangelo encodes a humanist message here. David is massive in stature. He is facing a giant but what you feel looking at him is how immense he is. And that’s precisely the message—we are all David, confronting our hidden Goliath and we too can face it because we too are giant.

    What remains is an incredible sense of hope that is impossible to capture in words. That’s why Michelangelo’s David is great.


  • A Little Bit Broken Is Still Broken

    Engineers and product people tend to think about issues as frequency distributions. How many users does this impact? How severe is it? But this misses one inescapable truth from a user’s perspective: a little bit broken is still broken.

    I was staying at a hotel recently that highlights the point.

    The shower worked but would oscillate between lukewarm, cold, and burning hot. I could still bathe by hiding from the shower head every 3 seconds. Needless to say, it was frustrating and uncomfortable.

    I could imagine this problem being triaged by a product team.

    “A user is reporting a temperature issue with the shower.”

    “Well, how many users have this problem?”

    “Only one.”

    “How bad is it?”

    “There is a workaround and they can still bathe.”

    “We’ll come back to it.”

    The simple solution to avoiding this kind of low-fidelity, histogram-based prioritization is to use the product. Within a minute of actually trying it for themselves, this figurative product team would put the issue directly at the top of the list.



  • Ephemeral Jump Box Using AWS Session Manager

    Sometimes you need a “jump box” that you can SSH into and get shell-level access to a production environment. That might be for debugging and maintenance or part of a deploy process.

    This comes with a tradeoff. If you are using a private AWS subnet (which you should, for security), you need to open up access for the jump box, making it a big target for attackers.

    Ideally, you could get access to a production environment only when access is needed, and as securely as possible. AWS Session Manager has pretty much all the ingredients to do that, but we only want to allow connections temporarily on a new server (not an existing server with the Session Manager plugin installed).

    For that, we can combine it with ECS so we can spin up a brand new server, connect to it securely, and then shut it down when we’re done.

    How to create an ephemeral jump box with AWS ECS and Session Manager

    1. Make sure the Session Manager plugin is installed locally.
    2. Create a new ECS task (a new server) using the same definition as the production instance you want to access (or create a new definition just for bastion boxes)
         aws ecs run-task --cluster {YOUR CLUSTER NAME} --task-definition {YOUR TASK DEFINITION NAME} --count 1 --launch-type FARGATE --enable-execute-command --network-configuration '{"awsvpcConfiguration": {"subnets": [{LIST OF SUBNETS YOU NEED ACCESS TO}], "assignPublicIp": "ENABLED"}}' --overrides '{"containerOverrides": [{"name": "shell", "command": ["sleep", "{TIME IN SECONDS BEFORE SHUTTING DOWN}"]}]}'
      
    3. Grab the task ARN from the output, wait for the task to start running, and connect
         aws ecs wait tasks-running --cluster {YOUR CLUSTER NAME} --task "{TASK ARN}"
         aws ecs execute-command --cluster {YOUR CLUSTER NAME} --task "{TASK ARN}" --command "bash" --interactive 2>/dev/null
      
    4. Stop the task (or wait for it to expire)
         aws ecs stop-task --cluster {YOUR CLUSTER NAME} --task "{TASK ARN}"
      
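    The run-task invocation in step 2 is easy to get wrong by hand, so it can help to assemble it programmatically. Here is a minimal sketch in Python (the cluster and task definition values, and the helper name itself, are hypothetical; the container name “shell” follows the example above):

    ```python
    import json

    def build_run_task_command(cluster, task_definition, subnets, ttl_seconds):
        """Assemble the `aws ecs run-task` arguments from step 2 above."""
        network = json.dumps({
            "awsvpcConfiguration": {"subnets": subnets, "assignPublicIp": "ENABLED"}
        })
        overrides = json.dumps({
            "containerOverrides": [
                # The sleep keeps the task alive until it expires and shuts down.
                {"name": "shell", "command": ["sleep", str(ttl_seconds)]}
            ]
        })
        return [
            "aws", "ecs", "run-task",
            "--cluster", cluster,
            "--task-definition", task_definition,
            "--count", "1",
            "--launch-type", "FARGATE",
            "--enable-execute-command",
            "--network-configuration", network,
            "--overrides", overrides,
        ]

    cmd = build_run_task_command("my-cluster", "bastion", ["subnet-abc123"], 3600)
    ```

    The resulting list can be passed to subprocess.run, which also sidesteps shell-quoting problems with the embedded JSON.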

  • Intent-Based Outcome Specification

    A new paradigm for user interfaces is starting to take shape with the rise of AI-powered tools. Rather than a loop of sending a command, receiving the output, and continuing (as with graphical user interfaces), intent-based outcome specification means telling the computer what the outcome should be: “open the door” instead of “check auth, unlock, open latch, extend door”.

    This has many challenges (chatbots lack affordances), especially because many people are not articulate enough to describe what they want. AI can help interpret vague language, but designers will need to develop patterns for refinement (maybe by mashing up other UI paradigms).

    Read AI: First New UI Paradigm in 60 Years.


  • How to Send Email as Someone Else in HubSpot

    Sometimes you want someone else to be able to send email as you in HubSpot. For example, having a BDR send follow-up notes to prospects you haven’t heard from in a while, or set appointments. This isn’t possible out of the box with HubSpot, but if you are using Gmail, there is a workaround.

    However, you should probably just set up a shared inbox in HubSpot!

    1. Add an email delegate in Gmail
    2. Confirm the delegation request from the email
    3. Connect the delegate email to HubSpot: Settings -> Email -> Connect personal email
    4. An alias should now be available to send emails in HubSpot when you click the “From” field
    5. Sending via a personal email account will prepend the HubSpot user’s name, so to make the name match, go to Profile & Preferences -> Global and change your first name and last name. Log out and then back in so the email name change takes effect

  • When to Write a Brief

    Sometimes you should try to build the thing instead of writing about the thing. While I’m a staunch believer that writing is a superpower (especially for remote teams), it can make small problems seem bigger than they actually are. Knowing when to reach for a brief can be difficult to pin down.

    When should you write a brief or just build?

    If it’s obvious and small (days not weeks), skip the brief and build it. You don’t want to make more work for yourself.

    If you have an idea of what to do but are not feeling clear about how important it is, write a short brief. It’s okay to write in order to justify your idea—in fact, writing to convince others can be clarifying (and sometimes convinces you it’s not important after all).

    If you catch yourself thinking of a solution disguised as a problem, write a brief to get to the bottom of what’s actually going on.

    If it’s a technical problem, take the first few steps to build it and then write a brief. It might be necessary to make a prototype just to know the shape of the problem so you can write about it. Code can be very cheap if you can write software fast, and it might turn out to be not that hard at all.

    If you have a vague understanding of a problem, write the intro of a brief to quickly identify what you don’t know.

    (Tips for writing a brief are covered in how to write for remote teams.)


  • Taxonomy of Platforms

    Technology platforms exhibit one or more of the following models.

    Platform models and key challenges:

    • API as platform APIs abstract away difficult problems adjacent to the user’s core business, such as payments (e.g. Stripe) or running on multiple operating systems (e.g. Java). The key challenge of API platforms is change over time: broadening capabilities while abstracting away the messy details.
    • User experience as platform Rich user experiences with novel capabilities and near-universal adoption define the work of entire professions (e.g. Figma). The key challenges are moving users over to new UX over time (who moved my cheese?) and evolving into an API platform (frontend UX and backend implementation tend to be harder to decouple than one thinks).
    • App as platform A product that is so good at solving a critical problem that it ends up defining how the problem is solved for others.

    …More soon!

    • Data storage as platform
    • Horizontal or Vertical
    • Customization
    • Connectors and integrations
    • App store
    • Marketplaces and networks
    • Open source

    From The Ultimate Guide to Platforms by Steven Sinofsky.


  • How to Make Python Functions Extensible

    Let’s say you have a Python function that is used widely throughout your code base. You need to make changes to it. You change the arguments and everything breaks. You painstakingly update all call sites and hope that, when the tests pass, it’s all working as expected.

    Is there a better way?

    Consider the function foo.

    def foo(a, b, c):
        pass
    
    foo(1, 2, 3)
    

    If you want to add another argument to foo you might update it like so.

    def foo(a, b, c, d):
        pass
    
    foo(1, 2, 3, 4)
    

    You need to update all of the call sites because d is a required positional argument. That’s not too bad because there are a small number of arguments and it’s easy to update by just adding an argument to the end.

    Now let’s say you need to update foo with argument e which is sometimes None. Oh, and you realized that c can also be None.

    def foo(a, b, c, d, e):
        pass
    
    foo(1, 2, None, 4, None)
    

    Now we have several problems:

    1. Every call site needs to be updated again
    2. The argument list is getting longer, what is argument 3 again?
    3. It’s easy to make a mistake updating calls to foo because ordering matters; Python will happily accept foo(1, 2, 4, 3, 5)

    Finally, you want to refactor because the argument list is getting long and they aren’t in a logical order causing frequent instances of problem 3.

    def foo(a, b, e, g, c, f, d, h):
        pass
    
    foo(1, 2, None, 7, None, 6, 4, None)
    

    A more extensible way to do this is to use keyword arguments (kwargs).

    The ordering of arguments doesn’t matter. If you change the order of arguments in the function, you won’t need to update every call site.

    def foo(a, b, e, g, c, f, d, h):
        pass
    
    foo(e=None, a=1, h=None, g=7, c=None, f=6, b=2, d=4)
    

    Keyword arguments with default values can be omitted at the call site, perfect for indicating what is actually optional when calling foo. Python insists that these arguments go last (making it still possible to call the function with positional args).

    def foo(a, b, d, f, g, c=None, e=None, h=None):
        pass
    
    foo(a=1, b=2, d=4, f=6, g=7)
    

    Extending foo with another optional argument is now super easy—no need to update every call to foo (unless it needs to use the new optional argument). Notice how the example function call hasn’t changed even though we added argument i which also altered the argument ordering.

    def foo(a, b, d, f, g, c=None, e=None, i=None, h=None):
        pass
    
    foo(a=1, b=2, d=4, f=6, g=7)
    

    To take it one step further, you can require keyword arguments so the function can’t be called with positional arguments.

    def foo(*, a, b, d, f, g, c=None, e=None, i=None, h=None):
        pass
    
    # This will throw an error
    foo(1, 2, 4, 6, 7)
    
    # This will not
    foo(a=1, b=2, d=4, f=6, g=7)
    

    Neat!


  • AI Models at the Edge

    Today, most large language models are run by making requests over the network to a provider like OpenAI, which has several disadvantages. You have to trust the entire chain of custody (e.g. the network stack, the provider, their subprocessors, etc.). It can be slow or flaky and therefore impractical for certain operations (e.g. voice inference, large volumes of text). It can also be expensive: providers charge per API call, and experiments can result in a surprising bill (my useless fine-tuned OpenAI model cost $36).

    Open-source tools for running AI models locally (or “at the edge”) are being built to solve that. Utilities like ggml and llama.cpp let you run models on commodity hardware like your laptop or even a phone. They also have all the advantages of open-source AI: pushing the boundary of where these models run and improving as more people gain access.

    I’m very excited about the interest in AI at the edge because driving the cost to near zero (at least for folks with the skill set to tinker) and removing the overhead the network stack creates will encourage more people to try and build new things.


  • Who You Try to Impress Is Who You Become

    I don’t recall where I heard this, but who you try to impress is who you become so choose who you want to impress wisely.

    Early in life it’s a fairly common experience to do something dumb so someone thinks you’re cool. Later in life you realize that’s a bad deal: why would you let someone else dictate what you do and set you on a downward path? Yet this happens all the time at work, at home, in your friend group, with your spouse, and so on.

    Be mindful of the influence of who you are trying to impress as it might just change your life!