It’s almost inevitable that, after an initial research phase, progress in AI models and tools will come from open source communities rather than corporations. Individuals can rely on fair use to do things businesses cannot (e.g., fine-tuning the leaked LLaMa weights). There are more people available to work on fringe use cases that never need to be commercialized. Finally, open source increases access (running 13B LLMs on a laptop, or even a Raspberry Pi), allowing more people to try the technology and provide feedback.
Read “We Have No Moat, And Neither Does OpenAI”.
- The leak of Facebook’s LLaMa model might have enabled a jump to universality for large language models via open source
- AI is usually described as singular, but really it will be a multitude of AIs
Links to this note
How to Decide If AI Tools Can Be Used at Work
Advancements in AI-powered tools can greatly improve productivity, but many companies have taken steps to limit or outright ban the use of OpenAI’s ChatGPT, GitHub Copilot, and others. What are they concerned about, and how should you decide whether these tools can be used at your company?