A recent paper studied improving text summarization with a Chain of Density prompt. Summaries produced with the prompt improve over vanilla GPT responses and come close to human-written summaries in informativeness and readability.
The prompt instructs the LLM to make several passes over the source content. After an initial summary, the model is asked to identify salient entities that are missing and to fold them into a rewritten summary of the same length, so each pass gets denser without getting longer.
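A minimal sketch of what such a loop might look like, assuming a hypothetical `call_llm` helper that wraps whatever chat-completion API is available; the prompt wording below is paraphrased, not the paper's exact prompt.

```python
def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its reply."""
    raise NotImplementedError("wire this up to your chat-completion client")


def chain_of_density(article: str, passes: int = 5) -> list[str]:
    """Return successively denser summaries of `article`."""
    # Initial, sparse summary.
    summary = call_llm(
        "Write a short (~80 word) summary of the article below.\n\n" + article
    )
    summaries = [summary]
    # Each pass adds missing entities while keeping the length fixed.
    for _ in range(passes - 1):
        summary = call_llm(
            "Article:\n" + article + "\n\n"
            "Current summary:\n" + summary + "\n\n"
            "Identify 1-3 informative entities from the article that are "
            "missing from the current summary, then rewrite the summary to "
            "include them without increasing its length. "
            "Return only the new summary."
        )
        summaries.append(summary)
    return summaries
```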
See also:
- Context is needed for practical LLM applications
- Intent-based outcome specification
- This could improve AI summarization of notes