• Capture Output of Chatgpt-Shell in Org-Mode

    I’m messing around with ChatGPT in Emacs and I want to capture the results of a session in org-mode. Since chatgpt-shell is just a buffer (like everything else in Emacs), we can write an elisp source block to grab the contents of the buffer. Even better, we can tell org-mode to export the results as a markdown block since the code examples all use code fences (the triple backtick).

    #+BEGIN_SRC emacs-lisp :results value :wrap src markdown :cache yes :exports results
    (let ((buf (get-buffer-create "*chatgpt*")))
      (with-current-buffer buf
        (buffer-string)))
    #+END_SRC
    
    ChatGPT> You are an elisp programming expert assistant.
    <gpt-end-of-prompt>
    Great! How can I assist you today?
    
    ChatGPT> How do I use org-mode to run a block that contains the output of another buffer? For example, there is a library that uses comint-mode as a chat interface. I want to embed that into an org-mode document, ideally using org-mode blocks. How do I do that?
    <gpt-end-of-prompt>
    You can use the `src` code block in org-mode to execute commands and capture their output. Here's an example:
    
    First, you need to define the comint-based chat interface as a function that returns its output as a string. Here's a basic example using `comint-send-string` and `comint-show-output`:
    
    ```
    (defun chat-interface ()
      (let ((buf (get-buffer-create "*chat*")))
        (with-current-buffer buf
          (erase-buffer)
          (comint-mode)
          (comint-send-string (get-buffer-process buf) "Hello, World!\n")
          (comint-show-output))
        (with-current-buffer buf
          (buffer-string))))
    ```
    
    This function creates a new buffer called `*chat*`, switches to it, sends a message to the chat interface, shows its output, then returns the buffer's content.
    
    Next, you can use this function in a `src` code block in org-mode like this:
    
    ```
    #+NAME: chat-example
    #+BEGIN_SRC emacs-lisp :results output
    (chat-interface)
    #+END_SRC
    ```
    
    This creates a new named code block called `chat-example` that calls the `chat-interface` function and captures its output as the block's result.
    
    You can now reference this block elsewhere in your org-mode document, include it in exports, etc. For example:
    
    ```
    In this section we're going to use the chat interface to say hello:
    
    #+RESULTS: chat-example
    : Hello, World!
    ```
    
    This would include the output of the `chat-example` block in the exported document.
    
    ChatGPT> please print out the chat so far, formatted in org-mode sytnax
    <gpt-end-of-prompt>
    Certainly! Here's the conversation so far, formatted in org-mode syntax:
    
    ```
    * Conversation with elisp programming expert assistant
    
    ** You, 2021-11-28 10:15
    How do I use org-mode to run a block that contains the output of another buffer? For example, there is a library that uses comint-mode as a chat interface. I want to embed that into an org-mode document, ideally using org-mode blocks. How do I do that?
    
    ** Me, 2021-11-28 10:19
    You can use the `src` code block in org-mode to execute commands and capture their output. Here's an example:
    
    ...
    
    ```
    
    Is this what you were looking for?
    
    ChatGPT> yes
    <gpt-end-of-prompt>
    Great! Let me know if you have any more questions.
    
    ChatGPT> please
    <gpt-end-of-prompt>
    Sure, how can I help you?
    
    ChatGPT> in emacs lisp, how do you get the contents of a buffer as a string?
    <gpt-end-of-prompt>
    You can get the contents of a buffer as a string in Emacs Lisp using the `buffer-string` function. You can call this function with the buffer you want to retrieve the contents from as an argument. Here's an example:
    
    ```
    ;; Get the contents of the current buffer
    (buffer-string)
    
    ;; Get the contents of another buffer by name
    (with-current-buffer "some-buffer-name"
      (buffer-string))
    ```
    
    The first example retrieves the contents of the current buffer, while the second example retrieves the contents of a buffer named "some-buffer-name".
    
    `buffer-string` returns the entire text of the buffer as a string, including any properties and text properties. If you only want the text, you can use the `buffer-substring` function instead. Here is an example:
    
    ```
    ;; Get the text of the current buffer
    (buffer-substring (point-min) (point-max))
    
    ;; Get the text of another buffer by name
    (with-current-buffer "some-buffer-name"
      (buffer-substring (point-min) (point-max)))
    ```
    
    In this example, `buffer-substring` returns the text of the buffer as a string, excluding any properties and text properties. The arguments `(point-min)` and `(point-max)` specify the region to be returned, which in this case is the entire buffer.
    
    I hope this helps!
    
    ChatGPT> buffer-string is returning text from a buffer that looks like an S-expression. How do I get just the text?
    <gpt-end-of-prompt>
    If `buffer-string` is returning text that looks like an S-expression, it's possible that the buffer in question has some Lisp code in it. In that case, you can use the `buffer-substring-no-properties` function to retrieve the text content without any properties or S-expression formatting. Here's an example:
    
    ```
    (with-current-buffer "some-buffer-name"
      (buffer-substring-no-properties (point-min) (point-max)))
    ```
    
    This will return the text content of the buffer as a string, without any properties or formatting.
    
    You could also use the `subseq` function to extract just the text content from the `buffer-string` output. Here's an example:
    
    ```
    (subseq (buffer-string) 2 -2)
    ```
    
    This will return the text content of the buffer without the first and last parentheses of the S-expression.
    
    ChatGPT>
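
    Based on the answer above, here’s a variant of the capture block that drops text properties and won’t create an empty buffer if chatgpt-shell isn’t running. This is just a sketch of the block at the top of this post; it assumes the chatgpt-shell buffer is still named *chatgpt*.

    #+BEGIN_SRC emacs-lisp :results value :wrap src markdown :exports results
    ;; Grab the raw text of the chatgpt-shell buffer without text properties.
    (let ((buf (get-buffer "*chatgpt*")))
      (when buf
        (with-current-buffer buf
          (buffer-substring-no-properties (point-min) (point-max)))))
    #+END_SRC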
    

    Published

  • Using ChatGPT With Emacs

    There are a few packages and libraries that are being built to use ChatGPT along with Emacs.

    This page is becoming fairly popular in search results so please let me know if there are other libraries you love or think I should try out.

    Recommended

    I use the following libraries myself for both writing code and prose (more prose than code these days).

    • GPTel focuses on a chat client session within an Emacs buffer and on sending text from the buffer (like a selected region or line). It also uses a transient menu (a la magit) for on-the-fly configuration, like custom prompts and which model to use.
    • ChatGPT Shell is an eshell for ChatGPT and dall-e that facilitates interacting with a chat session. By default its prompt is configured for helping with writing code.

    Best of the rest

    An incomplete list of other libraries I’ve come across.

    • Emacs OpenAI has a collection of packages for using ChatGPT and dall-e
    • ChatGPT.el was one of the first, but hasn’t been updated to use the official API yet
    • ChatGPT Arcana is a more integrated approach including prompt from region
    • org-ai combines OpenAI whisper and GPT so you can talk to Emacs (also has a nifty begin_ai block for org-mode)
    • leafy-mode adds completions to org-mode using ChatGPT

    Querying your own data

    Most of the libraries mentioned rely on the user to send text to a chat session. However, they don’t have ways to index your own data to provide context to a chat session or for single-shot completions.

    For example, I wrote a LangChain agent for helping me chat with my notes (also named org-ai, oh well!). While it’s one-of-one software and I have no intention of making it generic, you could adapt it to your own use.


    Published

  • Context Is Needed for Practical LLM Applications

    One of the limitations of large language models is that they are often missing context when responding. LLMs like ChatGPT (GPT 3.5 and GPT 4 at time of writing) don’t allow fine-tuning and can only take in ~4,000 tokens (roughly 3,000 words) as part of the prompt. That’s not nearly enough context to be specific to your application. For example, you might want an AI coding tool to help you add a feature to your codebase, but the codebase is likely much larger than the prompt would allow.

    To fit more information into prompts for context, LLMs could benefit from a cheatsheet, generated for the prompt. Combining tools like semantic search with an LLM could allow for better applications that are more specific to the user’s domain. For example, when asking an AI coding tool to add a function, the search part could load in all of the related modules or just the type signatures rather than the entire codebase.

    Read Cheating is all you need.


    Published

  • Remote Work Permanence

    It’s becoming clear that remote work isn’t going anywhere. A large portion of the workforce continues to work from home. Return-to-office efforts have stagnated. Office real estate value is plummeting.

    Employers are shifting from remote work enablement to remote work permanence.

    With a permanent part of the workforce always working from home, there are important challenges ahead:

    • Effective management of remote teams
    • Compliance everywhere all at once
    • Accommodating worker preferences

    Effective management of remote teams

    To be an effective people leader in a world of remote work permanence, managers need to be significantly more effective at building an environment where people can do their best work.

    Trust is essential—especially in a remote-first organization. Being overly worried about people not putting in the hours or working two jobs simultaneously is not productive.

    Unfortunately, there is reason for employees to feel distrustful. Massive layoffs in tech are stoking fear but don’t improve company performance. Companies are pausing promotions and raises and eliminating employee resource groups. Some are forcing workers to return to the office or be fired. The Overton window of employment is shifting.

    What needs to happen?

    Managers need to reorient around outcomes, not appearing busy. This challenges preconceived notions about management and reveals how reliant we were on seeing people working rather than understanding what jobs entail and what outcomes are needed to measure performance.

    There is no software solution here. My guess is that a massive retraining of managers will be needed, along with a change to more asynchronous working styles to fit an increasingly remote workforce. The good news is these changes benefit every employee by making a more equitable and effective working environment.

    Compliance everywhere all at once

    The rules for employment and tax differ by state, city, and even county. Remote work led to a dramatic increase in the number of locations a business operates in. However, the size of the People, Finance, and Legal teams remained the same.

    There is a step change in the amount of compliance work and complexity that businesses need to keep up with. In the US, 50 states might as well be 50 countries with all of the rules and changes businesses are expected to comply with.

    This is the problem that Mosey is addressing.

    Accommodating worker preferences

    With the rise of return-to-office policies, it’s clear what management’s preferences are. As I mentioned earlier though, it will never be all or nothing—competing for top talent now necessitates accommodating remote work. Similarly, not every organization or worker should be remote.

    That leaves us with the final challenge: worker preferences. Some people prefer working in an office. Some people prefer working remotely. Some will change their mind.

    Self-selection might be one way this plays out. People with a preference for remote choose to work at a company that is fully remote. People who prefer in-office will work at a fully co-located organization.

    Of course, as a business grows larger it will need to accommodate more preferences. I imagine the ultimate solution for larger organizations is hybrid, but rather than individuals splitting their time between home and office, the decision of how the team works will be pushed down to teams and their managers.


    Published

  • Turning Cost Centers Into Profit Centers

    A successful business playbook I keep coming across is turning cost centers into profit centers. That is, taking a problem that costs money to solve for one business, solving it, and then offering the solution to other businesses that need it.

    The best example is AWS. Amazon needed to solve the problem of high quality infrastructure to reliably host their ecommerce store. By making it available to everyone else, they built a business that might end up being bigger than Amazon.

    What are the cost centers in your business? If you concluded that you need to build it to properly solve it (build, buy, partner), might there be others interested in buying it?


    Published

  • Product Debt

    Product debt is when promises made about functionality exceed what the product can actually do. When this happens there are only two things that can be done: 1) make up for it with manual work or 2) commit to building the functionality. If you stop doing 1) the customer will be disappointed. If you keep adding product debt and never do 2), the product will never keep up and you’ve inadvertently ended up building a services business.

    The ideal amount of product debt, like tech debt, is greater than zero. It can be helpful to over-promise when you are doing customer development and figuring out what is going to sell. However, keeping product debt in mind (specifically that it must be paid down) means knowing a technical solution is possible and that what’s promised stays within the scope of what the technology and team are capable of.

    One of the ways product debt increases without you realizing is in support and operations. As customers come across challenges, it is in the best interest of the team to help. Through the course of helping they may do things that the product is not capable of, working around an edge case here, handling something on behalf of the customer, etc. For this reason, product and engineering should spend time with support and operations to see how they are making up for shortcomings in the product—you might not be happy with the product debt you are incurring without knowing it.


    Published

  • Answer Required, Not Necessarily a Good One

    There are many times in business when an answer is required. A customer asking you a question about your product. An investor asking you about the market. A sales lead asking you about competitors.

    A good answer is seldom necessary. It’s enough that you listened and answered the question. They may not be happy with your answer, but they will be satisfied enough to move on.

    This is an important lesson because it can feel like a catastrophe when you’re asked a tough question or something you haven’t prepared for. Just keep in mind that in a very large percentage of these situations, people will be okay with the fact that you merely answered their question.


    Published

  • A Non-Nullable Value Reduces Complexity by Half

    A small thing I realized about programming is that nullable values carry a sneaky amount of complexity in code. Whether or not your programming language forces you to deal with it (like Rust does), there exist two possible states for every nullable value. If that value interacts with more nullable values, complexity increases exponentially.

    Conversely, changing a nullable value in a function’s argument to a non-nullable value has the effect of cutting complexity in half. It’s worth questioning why a function or code path needs a nullable value and if it can be handled elsewhere or removed altogether to save yourself incidental complexity.
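
    For a concrete (if contrived) sketch in Emacs Lisp, with hypothetical function names: each nullable (optional) argument doubles the number of input states the body has to consider, and requiring the value removes that branch entirely.

    #+BEGIN_SRC emacs-lisp
    ;; COMPANY and COUNTRY are both nullable, so the body has to be correct
    ;; for 2 x 2 = 4 combinations of present/absent values.
    (defun shipping-label (name &optional company country)
      (concat name
              (when company (concat "\n" company))
              (when country (concat "\n" country))))

    ;; Making COUNTRY required pushes the nil handling up to the caller and
    ;; cuts the cases this function must consider in half.
    (defun shipping-label-required-country (name country &optional company)
      (concat name
              (when company (concat "\n" company))
              "\n" country))
    #+END_SRC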


    Published

  • SaaS Revenue per Employee Benchmarks for B2B Companies

    A useful metric for SaaS businesses is the amount of Annual Recurring Revenue (ARR) per employee. A high revenue per employee implies an efficient business and is a proxy for its ability to break even or become profitable (the majority of expenses for tech companies is people). As the business starts to scale, revenue per employee is expected to increase.
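
    As a hypothetical example of the calculation, take a company at $12MM in ARR with 90 full-time employees:

    #+BEGIN_SRC emacs-lisp
    ;; ARR per employee = ARR / number of full-time employees (hypothetical numbers)
    (/ 12000000.0 90) ; => 133333.33, roughly $133k of ARR per employee
    #+END_SRC

    That lands close to the median for the $10-20MM ARR bracket in the first table below.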

    The table below shows the result of a survey by SaaS Capital looking at the median ARR per full-time employee in 2022 for venture-backed companies.

    ARR        Median Revenue per Employee
    < $1MM     $25,750
    $1-3MM     $59,108
    $3-5MM     $92,327
    $5-10MM    $107,143
    $10-20MM   $128,961
    > $20MM    $170,950

    The table below shows the result from Capchase grouped by sector in H2 2022.

    Sector                       Median Revenue per Employee
    Consumer goods & services    $45,177
    Marketing & media            $61,082
    Corporate services           $51,042
    Financial solutions          $40,335
    Data & cloud                 $52,259
    Industrial solutions         $67,122
    Real estate & construction   $69,110
    Health & life sciences       $75,326

    See also:

    • SaaS churn benchmarks are another useful measurement for figuring out the efficiency of a SaaS business (if churn is high, the amount of growth required to replace old customers goes way up)
    • Burn multiples along with revenue per employee can be a short-hand way of keeping burn in check—e.g. set a headcount cap until you hit the revenue goal to stay on track

    Published

  • Eigengrau Is the Color We See in Total Darkness

    The color we see in complete darkness is not pure black; instead, we see a dark gray called Eigengrau. Seeing Eigengrau instead of black has been studied and replicated many times, but there does not seem to be an explanation for why this is the case. Perhaps it is baseline noise from the nerves.

    An uplifting way to think about this is that the darkness we see is never devoid of light.


    Published

  • How to Be a Good Product Engineer

    Companies don’t really want frontend engineers or backend engineers or infrastructure engineers. If you work at an engineering-as-product organization, they want good product engineers solving user problems. As an industry, we understand this poorly, and little is written to help people understand the principles of good product engineering.

    This is my attempt at explaining some of the core tenets of the best product engineers I’ve worked with and what I strive for.

    An engineer asks “Does it work?”; a product engineer asks “Does it make sense?”

    You are not the user

    This is the main challenge of product engineering. Unless you are writing software that targets a problem you have directly and would buy for yourself, you are not the user. It is significantly harder to build products for other people because the further from the self, the less real it feels.

    To overcome this simple fact, you must make contact with reality. You can ride along on sales calls to hear how your product is pitched and hear first-hand how prospective customers describe their problems. You can answer support tickets to see how users struggle with the product.

    If you never make contact with reality, it’s easy to fool yourself.

    Develop domain knowledge and expertise about the user

    I’ve never met a good product engineer who doesn’t talk to users and doesn’t know the ins and outs of the domain their users are in.

    If you are working in a highly technical field (legal, tax, banking, etc.), it doesn’t mean you need to be a CPA or lawyer, but you’ll need to get to a foundational understanding. Without this, you won’t be able to understand the context in which your customers experience their problems and you could miss key insights.

    Ever read a mainstream news outlet story about engineering and think—gosh that’s not even close? Every domain has infinite detail and it’s obvious who ‘gets it’ and who doesn’t. People are smart and your product will look like marketing bullshit if you don’t have domain expertise.

    Conjecture is more important than data

    Data-driven decisions sound intuitively correct but trends are not explanations. Great product development requires conjecture. While luck is important to success, making good choices about a product does not come down to luck. Reasoning about problems clearly starts with deeply understanding the situation and drawing conclusions. This is why building up domain expertise and talking to users is so important.

    Beware the downside of first-principles thinking, talk to users as a shortcut, and, crucially, make contact with reality often (try out your ideas iteratively).

    Cultivating product sense is having good taste

    Product sense is making good decisions about the user, often. Doing that requires a refined sense of taste. Good taste is an intuition about power.

    Cultivating taste is hard. If you are building products and are driven to build great products, chances are your taste is already pretty good (you can spot the taste gap) but refining it is difficult.

    I like the definition of taste from Creative Selection: taste is the refined sense of judgment and finding balance that produces a pleasing and integrated whole.

    I don’t know of a way to cultivate really good taste other than to make it less ethereal—you are after distinct knowledge about the user, their problem, and the solution. Objective knowledge begins as conjecture and is then corrected with criticism. Work in an environment that embraces good explanations and is error correcting.


    Published

  • Taravangian Intelligence Test

    On one very special day, Taravangian was the smartest person in the world. In a single effort, he scrawled a codex that laid forth a plan to save the world that would make him king of all things. Unfortunately, he’s not always intelligent enough to understand his own plan.

    Every morning Taravangian has his servants test his intelligence to see if he is fit to make progress on his plans. Most days he is of average intelligence and can do things. Some days he is dumb and must be stopped from making any decisions for fear of destroying his plans. On rarer days he is of above-average intelligence, but he has never repeated that single day of genius.

    From The Stormlight Archive series by Brandon Sanderson.


    Published

  • Puffery Doctrine

    It’s fairly common for companies to make sweeping claims in the course of day-to-day business. While this can be misleading, it is not necessarily illegal. The “puffery doctrine” covers the line between unbridled optimism and misrepresentation.

    From Room for Optimism: The “Puffery” Defense under the Federal Securities Laws:

    Although there is some variation from circuit to circuit, as a general matter, under the “puffery” or “corporate optimism” doctrine, the courts have ruled that an alleged misrepresentation is not actionable where:

    • It is so vague that it is not subject to being either proved or disproved.
    • It is a nonspecific expression of optimism regarding future performance.
    • It is a future projection stated as an opinion and not as a guarantee, or which is indefinite as to time.

    See also:

    • It sort of makes sense that courts must take this stance; otherwise they would need to be arbiters of truth, which may not be possible
    • False precision is another form of “puffery”

    Published