Meta’s MobileLLM-R1, a family of sub-billion-parameter models, delivers specialized reasoning. Its release is part of a wider industry push toward compact, powerful models that challenge the “bigger is better” narrative.

MobileLLM-R1 comes in 140M, 360M, and 950M parameter sizes and is purpose-built for math, coding, and scientific reasoning; the models are not suited to general chat applications. Their efficiency builds on design choices Meta laid out in the original MobileLLM models, optimized specifically for sub-one-billion-parameter architectures. The 950M model slightly outperforms Alibaba’s Qwen3-0.6B on the MATH benchmark (74.0 vs. 73.0) and establishes a clear lead on the LiveCodeBench coding test (19.9 vs. 14.9). This makes it well suited to applications that require reliable, offline logic, such as on-device code assistance in developer tools.

While MobileLLM-R1 pushes the performance boundary, the broader SLM landscape offers commercially viable alternatives tailored to different enterprise needs. Google’s Gemma 3 270M, for instance, is an ultra-efficient workhorse. At just 270 million parameters, it is designed for extreme power savings: internal tests showed that 25 conversations consumed less than 1% of a phone’s battery. Its permissive license makes it a strong choice for companies looking to fine-tune a fleet of tiny, specialized models for tasks like content moderation or compliance checks.

Instead of paying per API call, an organization can license a model once and run it indefinitely on-device. This also solves for privacy and reliability: processing sensitive data locally strengthens compliance and keeps applications working without a constant internet connection. The potential impact is significant, with projections of a “trillion-dollar opportunity” in the small-model regime by 2035.

The availability of capable SLMs enables a new architectural playbook.
Instead of relying on one massive, general-purpose model, organizations can deploy a fleet of specialist models.
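A minimal sketch of this fleet-of-specialists pattern, assuming a simple keyword-based router and hypothetical model identifiers (the names below are illustrative, not official; a production system would likely use a small learned classifier instead of keyword matching):

```python
# Hypothetical registry mapping task categories to specialist SLMs.
# Model identifiers here are illustrative placeholders, not official names.
SPECIALISTS = {
    "math": "mobilellm-r1-950m",                 # math/scientific reasoning
    "code": "mobilellm-r1-950m",                 # coding tasks
    "moderation": "gemma-3-270m-moderation-ft",  # fine-tuned tiny model
}

def classify(prompt: str) -> str:
    """Naive keyword router; real deployments would use a learned classifier."""
    lowered = prompt.lower()
    if any(k in lowered for k in ("solve", "integral", "equation", "prove")):
        return "math"
    if any(k in lowered for k in ("function", "bug", "compile", "refactor")):
        return "code"
    return "moderation"

def route(prompt: str) -> str:
    """Return the specialist model that should handle this prompt."""
    return SPECIALISTS[classify(prompt)]

# Each request is dispatched to the cheapest model that can handle it,
# rather than a single large general-purpose model.
print(route("Solve the equation x^2 = 4"))   # math specialist
print(route("Is this post appropriate?"))    # moderation specialist
```

The design point is that each specialist stays small enough to run on-device, so routing replaces the per-call cost of one large hosted model.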