Liquid AI, a startup pursuing alternatives to the popular "transformer"-based AI models that have come to define the generative AI era, is announcing not one, not two, but a whole family of six different types of AI models called Liquid Nanos, which it says are better suited to the "reality of most AI deployments" in enterprises and organizations than the larger foundation models from rivals like OpenAI, Google, and Anthropic.

Liquid Nanos are task-specific foundation models ranging from 350 million to 2.6 billion parameters, targeted at enterprise deployments: basically, you can set and forget these things on enterprise-grade field devices, from laptops and smartphones to sensor arrays and small robots. Liquid Nanos deliver performance that rivals far larger models on specialized, agentic workflows such as multilingual data extraction, translation, retrieval-augmented generation (RAG) question answering, low-latency tool and function calling, math reasoning, and more.

By shifting computation onto devices rather than relying on cloud infrastructure, Liquid Nanos aim to improve speed, reduce costs, enhance privacy, and enable applications in enterprise and research-grade environments where connectivity or energy use is constrained.

The first set of models in the Liquid Nanos lineup is designed for specialized use cases:

- LFM2-Extract: multilingual models (350M and 1.2B parameters) optimized for extracting structured data from unstructured text, such as converting emails or reports into JSON or XML.
- LFM2-350M-ENJP-MT: a 350M-parameter model for bidirectional English-Japanese translation, trained on a broad range of text types.
- LFM2-1.2B-RAG: a 1.2B-parameter model tuned for retrieval-augmented generation (RAG) pipelines, enabling grounded question answering over large document sets.
- LFM2-1.2B-Tool: a model specialized for precise tool and function calling, designed to run with low latency on edge devices without relying on longer reasoning chains.
- LFM2-350M-Math: a reasoning-oriented model aimed at solving challenging math problems efficiently, with reinforcement learning techniques used to control verbosity.
- Luth-LFM2 series: community-developed fine-tunes by Sinoué Gad and Maxence Lasbordes, specializing in French while preserving English capabilities.

These models target specific tasks where small, fine-tuned architectures can match or even outperform generalist systems more than 100 billion parameters in size.
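For developers who want a feel for how such a task-specific model would be used on-device, below is a minimal sketch of local inference with the Hugging Face transformers library for the structured-extraction use case. The repository id, prompt wording, and sample email are illustrative assumptions, not Liquid AI's documented usage; check the company's Hugging Face organization for the actual model names and a sufficiently recent transformers version.

```python
# Minimal sketch: running a small extraction model locally with Hugging Face
# transformers. The repo id below is an assumption for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-1.2B-Extract"  # assumed repo id; verify on Hugging Face
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Hypothetical unstructured input the model should turn into JSON.
email = (
    "Hi team, the Osaka shipment (order #4412) arrives Tuesday. "
    "Invoice total is $1,280. Contact: k.tanaka@example.com"
)
messages = [
    {"role": "user",
     "content": f"Extract order_id, arrival_day, total, and contact as JSON:\n{email}"}
]

# Build the chat-formatted prompt and generate a short structured response.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because the model is small enough to load on a laptop-class machine, the same pattern would run without any cloud round trip, which is the deployment story Liquid AI is emphasizing.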