LFM2-24B-A2B

Model Information
Organization: Liquid AI
Slug: lfm-2-24b-a2b
Release Date: February 25, 2026
Model Description
LFM2-24B-A2B is the largest model in the LFM2 family of hybrid architectures designed for efficient on-device deployment. Built as a 24B parameter Mixture-of-Experts model with only 2B active parameters per token, it delivers high-quality generation while maintaining low inference costs. The model fits within 32 GB of RAM, making it practical to run on consumer laptops and desktops without sacrificing capability.
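The 32 GB RAM claim can be sanity-checked with back-of-envelope arithmetic. A minimal sketch, assuming 24e9 total parameters and ignoring runtime overhead (KV cache, activations, buffers):

```python
# Hedged estimate of weight memory at common precisions; not a statement
# about Liquid AI's actual deployment format.

def weight_footprint_gb(params: float, bits_per_param: float) -> float:
    """Approximate weight memory in GB (decimal) at a given precision."""
    return params * bits_per_param / 8 / 1e9

PARAMS = 24e9  # total parameters; only ~2e9 are active per token
for bits, label in [(16, "bf16"), (8, "int8"), (4, "int4")]:
    print(f"{label}: ~{weight_footprint_gb(PARAMS, bits):.0f} GB")
# bf16: ~48 GB, int8: ~24 GB, int4: ~12 GB
```

Since bf16 weights alone (~48 GB) exceed 32 GB, fitting within that budget implies a quantized (roughly 8-bit or lower) deployment; the 2B active parameters per token reduce compute per step, not the resident weight memory.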
Available at 10 Providers
Provider | Type | Model Name | Original Model | Input ($/1M) | Output ($/1M)
Kilo Code | Code | LiquidAI: LFM2-24B-A2B | liquid/lfm-2-24b-a2b | $0.03 | $0.12
OpenRouter | Chat, Code | LFM2-24B-A2B | liquid/lfm-2-24b-a2b | $0.03 | $0.12
WaveSpeed AI | Chat, Code | lfm-2-24b-a2b | liquid/lfm-2-24b-a2b | $0.03 | $0.12
Together AI | - | LFM2-24B-A2B | LiquidAI/LFM2-24B-A2B | $0.03 | $0.12
Yupp | Chat | LFM2 24B A2B Preview (Together AI) | LiquidAI/LFM2-24B-A2B | - | -
Nano-GPT | - | LFM2 24B A2B | liquid/lfm-2-24b-a2b | - | -
Writingmate | Chat, Code | LiquidAI: LFM2-24B-A2B | liquid/lfm-2-24b-a2b | - | -
Yupp | Chat | LFM2-24B-A2B (OpenRouter) | liquid/lfm-2-24b-a2b | - | -
ValorGPT | - | LI | liquid-lfm-2-24b-a2b | - | -
LangDB | - | lfm-2-24b-a2b | lfm-2-24b-a2b | - | -
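The slugs in the table are what you pass as the model identifier when calling a provider's API. A minimal sketch using OpenRouter's OpenAI-compatible chat completions endpoint with the slug listed above (the API key and prompt are placeholders; the request is built but not sent):

```python
import json
import urllib.request

API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build a chat completion request for LFM2-24B-A2B via OpenRouter."""
    payload = {
        "model": "liquid/lfm-2-24b-a2b",  # slug as listed in the table
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_request("Summarize mixture-of-experts routing briefly.", "sk-...")
# resp = urllib.request.urlopen(req)  # requires a valid API key; not run here
```

Per the table, OpenRouter bills this model at $0.03 per 1M input tokens and $0.12 per 1M output tokens.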