When giving LLMs the capability to access private data, view untrusted content, and externally communicate, bad actors can trick AI agents into leaking private data via prompt injection.
Read: The lethal trifecta for AI agents: private data, untrusted content, and external communication by Simon Willison.