It’s almost inevitable that, after an initial research phase, progress in AI models and tools will come from open source communities rather than corporations. Individuals can rely on fair use to do things businesses cannot (e.g. taking the leaked LLaMa weights and fine-tuning them). There are more people to work on fringe use cases that do not have to be commercialized. Finally, open source increases access (running 13B LLMs on a laptop, or even a Raspberry Pi), letting more people try these models and provide feedback.
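As a sketch of that last point, here is roughly what local inference looks like using the llama-cpp-python bindings; the model file name and parameters are assumptions, and the quantized weights have to be obtained separately.

```python
# Minimal local-inference sketch with llama-cpp-python (pip install llama-cpp-python).
# The model path below is a placeholder for a quantized model file you already have.
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-13b.Q4_K_M.gguf", n_ctx=2048)

# The completion runs entirely on the local machine: no network hop,
# no provider in the chain of custody, no per-request billing.
result = llm("Q: Why does open source matter for AI? A:", max_tokens=128)
print(result["choices"][0]["text"])
```

Once the weights are on disk, everything runs offline, which is exactly the access argument above.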
Read We Have No Moat, And Neither Does OpenAI.
See also:
- The leak of Facebook’s LLaMa model might have enabled a jump to universality for large language models via open source
- AI is usually described as singular, but really it will be a multitude of AIs
Links to this note
- Today, most large language models are run by making requests over the network to a provider like OpenAI, which has several disadvantages. You have to trust the entire chain of custody (e.g. the network stack, the provider, their subprocessors, etc.). It can be slow or flaky and therefore impractical for certain operations (e.g. voice inference, large volumes of text). It can also be expensive: providers charge per API call, and experiments can result in a surprising bill (my useless fine-tuned OpenAI model cost $36).
- How to Decide If AI Tools Can Be Used at Work: Advancements in AI-powered tools can greatly improve productivity, but many companies have taken steps to limit or outright ban the use of OpenAI’s ChatGPT, GitHub Copilot, and others. What are they concerned about, and how should you decide whether these tools can be used at your company?