## Model Information
| Slug | lfm-2-24b-a2b |
|---|---|
| Release Date | February 25, 2026 |
## Organization
| Name | Liquid AI |
|---|---|
| Website | https://www.riverflow.ai |
## Model Description
LFM2-24B-A2B is the largest model in the LFM2 family of hybrid architectures designed for efficient on-device deployment. A 24B-parameter Mixture-of-Experts model with only 2B active parameters per token, it delivers high-quality generation at low inference cost. The model fits within 32 GB of RAM, making it practical to run on consumer laptops and desktops without sacrificing capability.
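As a rough sanity check on the 32 GB figure, assuming weight storage dominates memory (KV cache and runtime overhead ignored), the arithmetic can be sketched as follows; the per-weight byte counts are assumptions about common quantization levels, not figures published by Liquid AI:

```python
# Rough RAM estimate for a 24B-parameter model at common weight precisions.
# Assumption: weights dominate memory; KV cache and runtime overhead ignored.

def weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return n_params * bytes_per_param / 1e9

N = 24e9  # 24B total parameters

for label, bpp in [("fp16", 2.0), ("int8", 1.0), ("4-bit", 0.5)]:
    print(f"{label}: ~{weight_memory_gb(N, bpp):.0f} GB")
```

At fp16 the weights alone (~48 GB) would exceed 32 GB, but int8 (~24 GB) and 4-bit (~12 GB) quantizations fit comfortably, which is consistent with the claim that the model runs within 32 GB of RAM.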
## Available at 10 Providers
| Provider | Type | Model Name | Original Model | Input ($/1M) | Output ($/1M) |
|---|---|---|---|---|---|
| Kilo Code | Code | LiquidAI: LFM2-24B-A2B | liquid/lfm-2-24b-a2b | $0.03 | $0.12 |
| OpenRouter | Chat, Code | LFM2-24B-A2B | liquid/lfm-2-24b-a2b | $0.03 | $0.12 |
| WaveSpeed AI | Chat, Code | lfm-2-24b-a2b | liquid/lfm-2-24b-a2b | $0.03 | $0.12 |
| Together AI | | LFM2-24B-A2B | LiquidAI/LFM2-24B-A2B | $0.03 | $0.12 |
| Yupp | Chat | LFM2 24B A2B Preview (Together AI) | LiquidAI/LFM2-24B-A2B | - | - |
| Nano-GPT | | LFM2 24B A2B | liquid/lfm-2-24b-a2b | - | - |
| Writingmate | Chat, Code | LiquidAI: LFM2-24B-A2B | liquid/lfm-2-24b-a2b | - | - |
| Yupp | Chat | LFM2-24B-A2B (OpenRouter) | liquid/lfm-2-24b-a2b | - | - |
| ValorGPT | | LI | liquid-lfm-2-24b-a2b | - | - |
| LangDB | | lfm-2-24b-a2b | lfm-2-24b-a2b | - | - |
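Several of the providers listed above (OpenRouter, for example) expose OpenAI-compatible chat endpoints, so a request typically just names the model id shown in the Original Model column. A minimal sketch using only the Python standard library; the endpoint URL and the `OPENROUTER_API_KEY` environment variable are assumptions for illustration, so check your provider's documentation for the exact details:

```python
import json
import os
import urllib.request

# Assumed OpenAI-compatible endpoint (OpenRouter-style); verify with your provider.
API_URL = "https://openrouter.ai/api/v1/chat/completions"
MODEL_ID = "liquid/lfm-2-24b-a2b"  # from the "Original Model" column above

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build a chat-completion request for an OpenAI-compatible endpoint."""
    payload = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

if __name__ == "__main__":
    key = os.environ.get("OPENROUTER_API_KEY")
    if key:  # only send a real request when a key is configured
        with urllib.request.urlopen(build_request("Hello!", key)) as resp:
            reply = json.load(resp)
            print(reply["choices"][0]["message"]["content"])
```

Because the endpoint follows the OpenAI chat-completions shape, switching between the providers above is usually just a matter of changing the base URL, API key, and model id string.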