I’ve been spending more time with local large language models. By some estimates, open-weight models (Qwen, DeepSeek, GLM, Gemma, etc.) are only about nine months behind frontier commercial models from OpenAI, Anthropic, and Google, with the added advantage of being able to run locally.
Privacy is, of course, the top reason to go local. I won’t repeat all the arguments, but I’ll point out that local LLMs open up interesting high-trust use cases. For some, sharing your inner thoughts and feelings over a chat that’s logged forever might be fine, but businesses that need to process sensitive data probably aren’t keen to.
Personally, I’m okay storing data with providers that have a long track record of good stewardship (AWS, Google, Microsoft), but when it comes to processing that data with AI, I’m not ready to grant the big AI labs a license to everything.
In my view, AI is more like personal computing. There are advantages to cloud computing and SaaS applications, for sure, but sovereignty has more value to me, especially for one-of-one software.
Put another way, I’d rather buy than rent, and running LLMs locally lets me do that.