Apple is introducing tools that let businesses manage how and when employees can use artificial intelligence. The controls are granular enough to enable or disable individual features. The system also reportedly lets companies restrict whether an employee’s AI requests are sent to ChatGPT’s cloud service, even if the business has no direct relationship with OpenAI. That can prevent employees from accidentally handing internal-only IP or data to ChatGPT, where it could end up being used elsewhere.

While the focus is on ChatGPT, the tools aren’t limited to OpenAI’s service: the same controls can restrict any “external” AI provider, which could include Anthropic or Google, for example. Apple has a public deal with OpenAI that enables deep ChatGPT integration on the iPhone, but the new tools may signal that Apple is preparing for a future in which corporate users want more freedom over which AI service they use, and in which Apple offers more such integrations.

While Apple does have its own Private Cloud Compute architecture to protect user data under Apple Intelligence, it has no way of ensuring security or privacy for third-party services. The new tools are an attempt to give enterprise customers more control over those services.
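In practice, controls like these would most likely be delivered through Apple’s existing mobile device management (MDM) system, where IT administrators push configuration profiles containing a Restrictions payload. As a rough sketch of how that might look (the restriction key name below is an illustrative assumption, not a confirmed Apple API), a profile blocking external AI integrations could resemble:

```xml
<!-- Hypothetical fragment of an MDM configuration profile (Restrictions payload).
     "com.apple.applicationaccess" is Apple's real Restrictions payload type;
     the restriction key name is an assumption for illustration only. -->
<dict>
    <key>PayloadType</key>
    <string>com.apple.applicationaccess</string>
    <key>PayloadIdentifier</key>
    <string>com.example.restrictions.ai</string>
    <!-- Hypothetical key: block requests to external AI providers
         (e.g. ChatGPT) from managed devices. -->
    <key>allowExternalIntelligenceIntegrations</key>
    <false/>
</dict>
```

The appeal of this approach for enterprises is that it reuses a deployment channel they already operate: the same profiles that enforce passcode policies or disable iCloud backup could, in principle, carry AI restrictions as well.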