Liquid • lfm2-8b-a1b

| Slug | lfm2-8b-a1b |
|---|---|
| Aliases | lfm2-8b-a1b lfm28ba1b |
| Name | Liquid |
LFM2-8B-A1B is an efficient on-device Mixture-of-Experts (MoE) model from Liquid AI’s LFM2 family, built for fast, high-quality inference on edge hardware. It uses 8.3B total parameters with only ~1.5B active per token, delivering strong performance while keeping compute and memory usage low—making it ideal for phones, tablets, and laptops.
| Provider | Model Name | Original Model | Input ($/1M) | Output ($/1M) | Free |
|---|---|---|---|---|---|
| OpenRouter | LFM2-8B-A1B | liquid/lfm2-8b-a1b | $0.01 | $0.02 | |
| ValorGPT | LI | liquid-lfm2-8b-a1b | - | - | |
| Yupp | LiquidAI LFM2-8B-A1B (OpenRouter) | liquid/lfm2-8b-a1b | - | - | |
| LangDB | lfm2-8b-a1b | lfm2-8b-a1b | - | - | |
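
As a minimal sketch of how the model can be queried through one of the providers above, the snippet below sends a chat request to OpenRouter's OpenAI-compatible chat completions endpoint using the `liquid/lfm2-8b-a1b` slug from the table. The `OPENROUTER_API_KEY` environment variable and the example prompt are assumptions for illustration; other providers expose the model under their own slugs and endpoints.

```python
# Sketch: call LFM2-8B-A1B via OpenRouter's OpenAI-compatible chat API.
# Assumes OPENROUTER_API_KEY is set in the environment.
import os
import requests

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "liquid/lfm2-8b-a1b",  # slug listed in the provider table above
        "messages": [
            {"role": "user", "content": "In one sentence, what is a Mixture-of-Experts model?"}
        ],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```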