Chinese e-commerce giant Alibaba’s “Qwen Team” has released Qwen3-Coder-480B-A35B-Instruct, a new open-source LLM focused on assisting with software development. It is designed to handle complex, multi-step coding workflows and can create full-fledged, functional applications in seconds or minutes. Qwen3-Coder is available now under an open-source Apache 2.0 license, meaning any enterprise can download, modify, deploy, and use it free of charge in commercial applications for employees or end customers. It has also proven highly performant on third-party benchmarks and in anecdotal usage among AI power users for “vibe coding.”

Qwen3-Coder is a Mixture-of-Experts (MoE) model with 480 billion total parameters, 35 billion active per query, and 8 active experts out of 160. It supports a 256K-token context length natively, with extrapolation up to 1 million tokens using YaRN (Yet another RoPE extrapolatioN), a technique that extends a language model’s context length beyond its original training limit by modifying the Rotary Positional Embeddings (RoPE) used during attention computation. This capacity enables the model to understand and manipulate entire repositories or lengthy documents in a single pass. Designed as a causal language model, it features 62 layers, 96 attention heads for queries, and 8 for key-value pairs. It is optimized for token-efficient, instruction-following tasks and omits support for <think> blocks by default, streamlining its outputs.

Qwen3-Coder has achieved leading performance among open models on several agentic evaluation suites. On SWE-bench Verified, it scores 67.0% (standard) and 69.6% (500-turn), compared with 54.6% for GPT-4.1, 49.0% for Gemini 2.5 Pro Preview, and 70.4% for Claude Sonnet 4. The model also scores competitively across tasks such as agentic browser use, multi-language programming, and tool use. For enterprises, Qwen3-Coder offers an open, highly capable alternative to closed-source proprietary models.
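The YaRN-based context extension described above is typically applied as a RoPE-scaling override when loading the model. The sketch below illustrates the general pattern used with recent Qwen releases on Hugging Face transformers; the exact key names and recommended values are assumptions here and should be verified against the Qwen3-Coder model card before use.

```python
# Illustrative sketch: extending a Qwen-style model's native context window
# with YaRN via a rope_scaling override. Key names and values follow the
# pattern documented for recent Qwen releases, but should be checked against
# the Qwen3-Coder model card before use.

NATIVE_CONTEXT = 262_144    # 256K tokens, supported natively
TARGET_CONTEXT = 1_048_576  # 1M tokens, reached via YaRN extrapolation

rope_scaling = {
    "rope_type": "yarn",                          # YaRN-style RoPE scaling
    "factor": TARGET_CONTEXT / NATIVE_CONTEXT,    # 4x context extension
    "original_max_position_embeddings": NATIVE_CONTEXT,
}

# With Hugging Face transformers, the override would typically be passed at
# load time (call shown for illustration only, not executed here):
#
# model = AutoModelForCausalLM.from_pretrained(
#     "Qwen/Qwen3-Coder-480B-A35B-Instruct",
#     rope_scaling=rope_scaling,
# )

print(rope_scaling)
```

Because YaRN rescales the rotary frequencies rather than retraining the model, the extension factor is simply the ratio of the target window to the training window, 4x in this case.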
With strong results in coding execution and long-context reasoning, it is especially relevant for:

Codebase-level understanding: Ideal for AI systems that must comprehend large repositories, technical documentation, or architectural patterns.

Automated pull request workflows: Its ability to plan and adapt across turns makes it suitable for auto-generating or reviewing pull requests.

Tool integration and orchestration: Through its native tool-calling APIs and function interface, the model can be embedded in internal tooling and CI/CD systems. This makes it especially viable for agentic workflows and products, i.e., those in which the user triggers one or more tasks for the AI model to carry out autonomously, checking in only when finished or when questions arise.

Data residency and cost control: As an open model, enterprises can deploy Qwen3-Coder on their own infrastructure, whether cloud-native or on-prem, avoiding vendor lock-in and managing compute usage more directly.
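The tool-integration path above can be sketched concretely. Many open-model serving stacks expose an OpenAI-compatible chat API with a "tools" schema; the example below builds such a request for a hypothetical internal CI/CD hook. The tool name, description, and parameters are illustrative assumptions, not part of Qwen3-Coder itself.

```python
# Minimal sketch of embedding Qwen3-Coder's function-calling interface into
# internal tooling, using the OpenAI-compatible "tools" schema that many
# open-model servers expose. The tool (run_ci_check) and its parameters are
# hypothetical examples of an internal CI/CD hook.
import json

run_ci_check = {
    "type": "function",
    "function": {
        "name": "run_ci_check",  # hypothetical internal CI/CD hook
        "description": "Run the CI pipeline for a branch and report status.",
        "parameters": {
            "type": "object",
            "properties": {
                "branch": {"type": "string", "description": "Git branch to test"},
            },
            "required": ["branch"],
        },
    },
}

# A request body an OpenAI-compatible server would receive. In an agentic
# loop, the model may respond with a tool_call naming run_ci_check; the
# orchestrator executes it and feeds the result back as a "tool" message,
# repeating until the task completes.
request_body = {
    "model": "Qwen/Qwen3-Coder-480B-A35B-Instruct",
    "messages": [
        {"role": "user", "content": "Open a PR for feature-x and make sure CI passes."}
    ],
    "tools": [run_ci_check],
}

print(json.dumps(request_body, indent=2))
```

The orchestration loop (execute tool, append result, re-query the model) is what turns a single chat completion into the autonomous, check-in-when-done behavior described above.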