Meta’s AI research team has released a new large language model (LLM) for coding that improves code understanding by learning not only what code looks like, but also what it does when executed. The model, named Code World Model (CWM), is trained on vast amounts of data showing how code interacts with its environment, allowing it to build an internal “world model” of how computational systems work.

Beyond learning the dynamics of its environment, CWM shows strong performance on standard coding and math benchmarks, pointing to a new direction for training AI agents that can handle more complex, dynamic software development tasks in enterprise environments.

CWM is part of a broader effort to push LLMs beyond next-token prediction toward developing world models. The key ingredient is extensive “code world modeling data”: instead of waiting until the final fine-tuning stage, CWM is taught how code behaves during its “mid-training” phase. The hypothesis is that grounding the model’s predictions in the dynamics of computational environments early on provides a much stronger foundation for later training and reinforcement learning stages.
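To make the idea of “data showing what code does when executed” concrete, the sketch below is a minimal, hypothetical illustration (not Meta’s actual pipeline) of how such observations could be collected in Python: it uses the standard `sys.settrace` hook to record the local variable state at each executed line of a function, pairing source positions with the program states they produce.

```python
import sys

def trace_locals(func, *args):
    """Run `func(*args)` and record (line number, local variables)
    after each executed line.

    Illustrative only: one simple way to gather execution-trace
    observations of the kind a "code world model" might train on.
    """
    trace = []

    def tracer(frame, event, arg):
        # Only record line events inside the target function's frame.
        if event == "line" and frame.f_code is func.__code__:
            trace.append((frame.f_lineno, dict(frame.f_locals)))
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)  # always uninstall the trace hook
    return result, trace

def running_sum(n):
    total = 0
    for i in range(n):
        total += i
    return total

result, trace = trace_locals(running_sum, 3)
print(result)  # 3  (0 + 1 + 2)
# trace holds the evolving locals, e.g. the final snapshot
# includes {'n': 3, 'total': 3, 'i': 2}
print(trace[-1][1])
```

A model trained on many such (code, state-trajectory) pairs sees not just the text of `running_sum` but how `total` and `i` evolve as it runs, which is the kind of grounding the article describes.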