Anthropic is increasing the amount of information that enterprise customers can send to Claude in a single prompt, part of an effort to attract more developers to the company's popular AI coding models.

For Anthropic's API customers, the company's Claude Sonnet 4 AI model now has a 1 million token context window — meaning the AI can handle requests as long as 750,000 words, more than the entire "Lord of the Rings" trilogy, or 75,000 lines of code. That's roughly five times Claude's previous limit (200,000 tokens), and more than double the 400,000 token context window offered by OpenAI's GPT-5.

Long context will also be available for Claude Sonnet 4 through Anthropic's cloud partners, including on Amazon Bedrock and Google Cloud's Vertex AI.

Anthropic's product lead for the Claude platform, Brad Abrams, expects AI coding platforms to get a "lot of benefit" from this update. When asked if GPT-5 put a dent in Claude's API usage, Abrams downplayed the concern, saying he's "really happy with the API business and the way it's been growing."

Whereas OpenAI generates most of its revenue from consumer subscriptions to ChatGPT, Anthropic's business centers around selling AI models to enterprises through an API. That's made AI coding platforms a key customer for Anthropic, and could be why the company is throwing in some new perks to attract users in the face of GPT-5.

Abrams also said that Claude's large context window helps it perform better at long agentic coding tasks, in which the AI model works autonomously on a problem for minutes or hours. With a large context window, Claude can remember all its previous steps in long-horizon tasks.

Abrams said that Anthropic's research team focused on increasing not just the context window for Claude, but also the "effective context window," suggesting that its AI can understand most of the information it's given.
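For readers checking the figures above, the arithmetic can be sketched in a few lines of Python. The 0.75 words-per-token ratio is a rough rule of thumb for English text implied by the article's numbers, not an official conversion rate; actual token counts vary with the tokenizer and the content.

```python
CONTEXT_TOKENS = 1_000_000   # Claude Sonnet 4's new context window
WORDS_PER_TOKEN = 0.75       # rough rule of thumb for English text (assumption)

words = int(CONTEXT_TOKENS * WORDS_PER_TOKEN)
print(words)                       # 750000 -- the ~750,000 words cited

print(CONTEXT_TOKENS // 200_000)   # 5 -- five times Claude's previous 200K limit
print(CONTEXT_TOKENS / 400_000)    # 2.5 -- more than double GPT-5's 400K window
```

Code, which tokenizes less efficiently than prose (identifiers, punctuation, whitespace), works out to roughly 13 tokens per line at the article's 75,000-line figure.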