Anthropic’s Claude Sonnet 4 artificial intelligence model can now process up to 1 million tokens of context in a single request, a fivefold increase that allows developers to analyze entire software projects or dozens of research papers without breaking them into smaller chunks. The expansion, available now in public beta through Anthropic’s API and Amazon Bedrock, represents a significant leap in how AI assistants can handle complex, data-intensive tasks.

With the new capacity, developers can load codebases containing more than 75,000 lines of code, enabling Claude to understand complete project architecture and suggest improvements across entire systems rather than individual files. The extended context capability addresses a fundamental limitation that has constrained AI-powered software development. Eric Simons, CEO of Bolt.new, whose browser-based development platform integrates Claude, said: “With the 1M context window, developers can now work on significantly larger projects while maintaining the high accuracy we need for real-world coding.”

The expanded context enables three primary use cases that were previously difficult or impossible: comprehensive code analysis across entire repositories; document synthesis involving hundreds of files while maintaining awareness of the relationships between them; and context-aware AI agents that can maintain coherence across hundreds of tool calls and complex workflows.

The 1 million token context window represents a significant technical advancement in AI memory and attention mechanisms. Anthropic’s internal testing showed perfect recall performance across diverse scenarios, a crucial capability as context windows expand: the company embedded specific information within massive volumes of text and tested Claude’s ability to find and use those details when answering questions.
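For developers who want to try the expanded window, a long-context request looks much like any other Messages API call. The sketch below is a minimal, illustrative example using the official `anthropic` Python SDK; the beta flag name, model identifier, and the `my_project` repository path are assumptions based on Anthropic’s launch documentation, so the current API reference should be checked before relying on them.

```python
# A minimal sketch of a long-context request against the Anthropic Messages API.
# Assumes the official `anthropic` Python SDK and an ANTHROPIC_API_KEY set in the
# environment. The beta flag and model ID below reflect the launch documentation
# and may change; consult Anthropic's API reference for current values.
import pathlib
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Concatenate an entire (hypothetical) repository into a single prompt. With the
# 1M-token window, roughly 75,000 lines of code can fit in one request.
repo_files = sorted(pathlib.Path("my_project").rglob("*.py"))
codebase = "\n\n".join(
    f"# File: {path}\n{path.read_text(encoding='utf-8')}" for path in repo_files
)

response = client.beta.messages.create(
    model="claude-sonnet-4-20250514",     # assumed Sonnet 4 model ID
    betas=["context-1m-2025-08-07"],      # assumed opt-in flag for the 1M context beta
    max_tokens=4096,
    messages=[
        {
            "role": "user",
            "content": (
                "Here is the full codebase:\n\n"
                f"{codebase}\n\n"
                "Review the overall architecture and suggest improvements "
                "that span multiple files."
            ),
        }
    ],
)

print(response.content[0].text)
```

Loading the whole repository in one call is what lets the model reason about cross-file relationships, the kind of system-wide review that previously required chunking the code and stitching partial answers together.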