
Liquid

Browse models from Liquid

5 models

Tokens processed on OpenRouter

  • LiquidAI/LFM2-8B-A1B
    2.07M tokens

    Model created via inbox interface

    by liquid · 33K context · $0.01/M input tokens · $0.02/M output tokens
  • LiquidAI/LFM2-2.6B
    1.89M tokens

    LFM2 is a new generation of hybrid models developed by Liquid AI, specifically designed for edge AI and on-device deployment. It sets a new standard in terms of quality, speed, and memory efficiency.

    by liquid · 33K context · $0.01/M input tokens · $0.02/M output tokens
  • Liquid: LFM 7B

    LFM-7B is a best-in-class language model designed for exceptional chat capabilities, with strong multilingual performance in English, Arabic, and Japanese. Powered by the Liquid Foundation Model (LFM) architecture, it offers a low memory footprint and fast inference speed. See the launch announcement for benchmarks and more info.

    by liquid · 33K context
  • Liquid: LFM 3B

    Liquid's LFM 3B delivers incredible performance for its size. It ranks first among 3B-parameter transformers, hybrids, and RNN models, and is on par with Phi-3.5-mini on multiple benchmarks while being 18.4% smaller. LFM-3B is the ideal choice for mobile and other edge text-based applications. See the launch announcement for benchmarks and more info.

    by liquid · 33K context
  • Liquid: LFM 40B MoE

    Liquid's 40.3B Mixture of Experts (MoE) model. Liquid Foundation Models (LFMs) are large neural networks built with computational units rooted in dynamic systems. LFMs are general-purpose AI models that can be used to model any kind of sequential data, including video, audio, text, time series, and signals. See the launch announcement for benchmarks and more info.

    by liquid · 33K context
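
All of these models can be queried through OpenRouter's OpenAI-compatible chat completions API. The Python sketch below illustrates a request against one of them; the model slug "liquid/lfm-7b" and the OPENROUTER_API_KEY environment variable are assumptions for illustration, so check the individual model page for the exact slug and current pricing.

    # Minimal sketch: calling a Liquid model via OpenRouter's
    # OpenAI-compatible chat completions endpoint.
    # The slug "liquid/lfm-7b" and the OPENROUTER_API_KEY env var are
    # assumed for this example, not confirmed by the listing above.
    import os
    import requests

    response = requests.post(
        "https://openrouter.ai/api/v1/chat/completions",
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
        json={
            "model": "liquid/lfm-7b",  # assumed slug; see the model page
            "messages": [
                {"role": "user", "content": "Summarize the LFM architecture in one sentence."}
            ],
        },
        timeout=30,
    )
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])

Requests are billed per token at the listed rates; for example, at $0.01/M input and $0.02/M output, a call with 1,000 input tokens and 500 output tokens would cost roughly $0.00002.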