Advancements in AI-powered tools can greatly improve productivity, but many companies have taken steps to limit or outright ban the use of OpenAI’s ChatGPT, GitHub Copilot, and others. What are they concerned about, and how should you decide whether these tools can be used at your company?
Risks of AI tools in the workplace
Because large language models are very big and resource-intensive (though this is changing), they need to be run on servers rather than on device. Since these models work on text, that means transmitting a lot of potentially sensitive information over the network. To my knowledge, none of the major AI platforms offer end-to-end encryption.
There are also privacy and IP concerns. If information sent for processing is mishandled, it could leak important IP or trade secrets. Apple recently banned ChatGPT, and I suspect that is the reason. I’m guessing there are also legal concerns about ownership if AI-generated output ends up in a company’s IP.
How to decide
The value of AI tools in the workplace is productivity. If GitHub Copilot improves developer productivity even a small amount, it would easily pay for itself given the cost of engineering time. On the other hand, there are real risks.
These risks can be managed with thoughtful policies, training, and controls. For example:
- Do not allow sensitive customer data to be sent to AI tools over the network. Mitigations might include deciding which teams can use tools like ChatGPT, creating training on how to use AI tools, and building an in-house wrapper around LLMs that detects sensitive data like credit card numbers and IDs (a minimal sketch of such a filter follows this list).
- Avoid using coding tools that require access to the entire codebase. Mitigations might include only allowing local language models, ensuring all secrets and API keys are encrypted or kept out of version control (see the pre-commit scan sketch after this list), or banning Copilot while still allowing engineers to use ChatGPT for code help.
- Buy the enterprise version of these tools and ban personal-use accounts. Many providers realize that stronger guarantees about data use and storage are necessary for businesses. Coupled with a ban on personal AI tool usage, this could provide more privacy and security.
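To make the wrapper idea concrete, here is a minimal sketch of a redaction layer that strips anything matching known sensitive patterns before a prompt ever leaves the network. The patterns and the `send_to_llm` callable are hypothetical placeholders; a real deployment would use a vetted PII-detection library rather than hand-rolled regexes.

```python
import re

# Illustrative patterns only; real PII detection needs a dedicated library.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known sensitive pattern with a placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

def safe_prompt(text: str, send_to_llm) -> str:
    """Redact sensitive data, then forward the prompt to whatever LLM client
    the company uses (send_to_llm is a stand-in for that call)."""
    return send_to_llm(redact(text))
```

A wrapper like this also gives you a single choke point for logging and auditing what actually gets sent, which is hard to retrofit if every team calls the APIs directly.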
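Similarly, keeping secrets out of version control can be enforced mechanically. The sketch below, assuming a plain git setup, is a simple pre-commit check that scans staged files for strings shaped like API keys. The patterns are illustrative; dedicated scanners such as gitleaks or truffleHog are the more robust option.

```python
import re
import subprocess
import sys

# Illustrative patterns for common key formats.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID format
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
]

def staged_files() -> list[str]:
    """Return the paths of files staged for commit."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def main() -> int:
    findings = []
    for path in staged_files():
        try:
            text = open(path, encoding="utf-8", errors="ignore").read()
        except OSError:
            continue
        for pattern in SECRET_PATTERNS:
            if pattern.search(text):
                findings.append((path, pattern.pattern))
    for path, pat in findings:
        print(f"possible secret in {path}: {pat}", file=sys.stderr)
    return 1 if findings else 0  # nonzero exit blocks the commit

if __name__ == "__main__":
    sys.exit(main())
```

Dropped into `.git/hooks/pre-commit`, a check like this stops leaked credentials at the source, which matters doubly once an AI coding tool can read whatever lands in the repository.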
This was just a quick list of ideas, but it seems more nuanced approaches can be taken to balance the risk and reward of using these tools. One big thing is missing, though: what new failure modes and risks now exist as a result of using these tools? I guess we’ll find out soon enough.