
DigiBanker

Bringing you cutting-edge new technologies and disruptive financial innovations.


Liquid AI’s platform offers developers a curated catalog of small language models (SLMs), including compact models as small as 300MB, with various quantization and checkpoint options to simplify deployment on edge devices

July 17, 2025 //  by Finnovate

Liquid AI has launched LEAP, short for “Liquid Edge AI Platform,” a cross-platform SDK designed to make it easier for developers to integrate small language models (SLMs) directly into mobile applications. The SDK can be added to an iOS or Android project with just a few lines of code, and calling a local model is meant to feel as familiar as calling a traditional cloud API.

Once integrated, developers can select a model from the built-in LEAP model library, which includes compact models as small as 300MB, lightweight enough for modern phones with as little as 4GB of RAM. The SDK handles local inference, memory optimization, and device compatibility, simplifying the typical edge deployment process. LEAP is OS- and model-agnostic by design: at launch it supports both iOS and Android, and it is compatible with Liquid AI’s own Liquid Foundation Models (LFMs) as well as many popular open-source small models. Developers can browse a curated model catalog with various quantization and checkpoint options, allowing them to tailor performance and memory footprint to the constraints of the target device.

To complement LEAP, Liquid AI has also released Apollo, a free iOS app that lets developers and users interact with LEAP-compatible models in a local, offline setting. Apollo is designed for low-friction experimentation: developers can “vibe check” a model’s tone, latency, or output behavior right on their phones before integrating it into a production app. The app runs entirely offline, preserving user privacy and reducing reliance on cloud compute.
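The idea of matching a quantized checkpoint to a device's memory budget can be sketched in a few lines. The snippet below is a hypothetical illustration only, not the LEAP SDK's actual API: the catalog entries, model names, sizes, and the `pick_checkpoint` helper are all invented to show how a developer might choose the largest checkpoint that fits a target device's RAM budget.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Checkpoint:
    model: str          # model family name (illustrative, not a real catalog entry)
    quantization: str   # e.g. "int4" or "int8"
    size_mb: int        # on-device footprint in megabytes

# Hypothetical catalog; names and sizes are made up for illustration.
CATALOG = [
    Checkpoint("slm-nano", "int4", 300),
    Checkpoint("slm-nano", "int8", 550),
    Checkpoint("slm-small", "int4", 900),
    Checkpoint("slm-small", "int8", 1700),
]

def pick_checkpoint(device_ram_mb: int,
                    budget_fraction: float = 0.25) -> Optional[Checkpoint]:
    """Return the largest checkpoint fitting within a fraction of device RAM.

    budget_fraction reserves most of the RAM for the OS and the host app;
    0.25 is an arbitrary illustrative default, not a LEAP recommendation.
    """
    budget = device_ram_mb * budget_fraction
    candidates = [c for c in CATALOG if c.size_mb <= budget]
    return max(candidates, key=lambda c: c.size_mb, default=None)

# On a 4GB phone (4096MB), a quarter of RAM allows up to ~1024MB,
# so the 900MB int4 checkpoint is selected; a 300MB model fits easily.
print(pick_checkpoint(4096))
```

On devices with less memory, the same helper would fall back to a smaller quantization or return `None`, which mirrors the article's point that quantization and checkpoint options let developers trade quality for footprint per device.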


Category: AI & Machine Economy, Innovation Topics


Copyright © 2025 Finnovate Research · All Rights Reserved · Privacy Policy
Finnovate Research · Knyvett House · Watermans Business Park · The Causeway Staines · TW18 3BA · United Kingdom · About · Contact Us · Tel: +44-20-3070-0188
