If you can simply run operations locally on a hardware device, that creates all kinds of efficiencies, including some related to energy consumption and fighting climate change. Enter the rise of new Liquid Foundation Models, which depart from the traditional transformer-based LLM design in favor of an architecture rooted in liquid neural network research. The new LFMs already report performance superior to transformer-based models of comparable size, such as Meta's Llama 3.1-8B and Microsoft's Phi-3.5 (3.8B parameters).

The models are engineered to be competitive not only on raw performance benchmarks but also in operational efficiency, making them well suited to a range of use cases: enterprise applications in financial services, biotechnology, and consumer electronics, as well as deployment on edge devices. These post-transformer models can run on consumer devices, cars, drones, and planes, and in applications such as predictive finance and predictive healthcare.

LFMs, he said, can do the job of a GPT while running locally on devices. If they're running offline on a device, you don't need the extended infrastructure of connected systems: no data center, no cloud services, none of that. In essence, these systems can be low-cost and high-performance, and that's just one aspect of how people talk about applying a "Moore's law" concept to AI. It means systems are getting cheaper, more versatile, and easier to manage, and doing so quickly.
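To make the offline point concrete, here is a minimal sketch of what fully local inference looks like in practice. It uses the open-source llama-cpp-python library with a small quantized open-weight model as a stand-in; the model path `model.gguf` is a placeholder, and nothing here implies LFMs themselves ship in this format.

```python
# A minimal sketch of fully local, offline text generation.
# Assumptions: llama-cpp-python is installed (pip install llama-cpp-python)
# and "model.gguf" is a placeholder path to a small quantized open-weight
# model that was downloaded ahead of time.
from llama_cpp import Llama

# Load the model entirely from local disk; after this point no network
# access is required, which is the property described above.
llm = Llama(model_path="model.gguf", n_ctx=2048, verbose=False)

# Run inference on-device: the completion is produced without any call
# to a data center or cloud API.
output = llm(
    "Summarize today's sensor readings in one sentence:",
    max_tokens=128,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```

Everything in this loop, from loading weights to generating tokens, happens on the device itself, which is what makes the "no data center, no cloud services" claim possible for small enough models.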