# LiquidAI/LFM2-8B-A1B

LFM2-8B-A1B is an efficient on-device Mixture-of-Experts (MoE) model from Liquid AI's LFM2 family, built for fast, high-quality inference on edge hardware. It uses 8.3B total parameters with only ~1.5B active per token, delivering strong performance while keeping compute and memory usage low, making it ideal for phones, tablets, and laptops.

## Model Information

- **Organization**: [Liquid AI](/llm.txt)
- **Slug**: lfm2-8b-a1b
- **Available at Providers**: 8
- **Release Date**: October 20, 2025

## Providers

| Provider | Name | $ Input (per 1M) | $ Output (per 1M) | Free | Link |
|----------|------|-----------------|------------------|------|------|
| [OpenRouter](/llm/openrouter.txt) | LFM2-8B-A1B | 0.01 | 0.02 | | [View](https://openrouter.ai/liquid/lfm2-8b-a1b) |
| [ValorGPT](/llm/valorgpt.txt) | LI | | | | [View](https://www.valorgpt.com/models/liquid-lfm2-8b-a1b) |
| [Yupp](/llm/yupp.txt) | LiquidAI LFM2-8B-A1B (OpenRouter) | | | | |
| [LangDB](/llm/langdb.txt) | lfm2-8b-a1b | | | | [View](https://langdb.ai/app/models) |
| [Kilo Code](/llm/kilocode.txt) | LiquidAI: LFM2-8B-A1B | 0.01 | 0.02 | | [View](https://kilo.ai/models/liquid/lfm2-8b-a1b) |
| [Blackbox AI](/llm/blackboxai.txt) | blackboxai/liquid/lfm2-8b-a1b | | | | |
| [WaveSpeed AI](/llm/wavespeed.txt) | lfm2-8b-a1b | 0.01 | 0.02 | | |
| [Writingmate](/llm/writingmate.txt) | LiquidAI: LFM2-8B-A1B | | | | [View](https://writingmate.ai/models/liquid/lfm2-8b-a1b) |

---

[← Back to all providers](/llm.txt)
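For reference, per-request cost at the rates quoted in the provider table ($0.01 input / $0.02 output per 1M tokens at OpenRouter, Kilo Code, and WaveSpeed AI) can be estimated with a small sketch; the function name and defaults here are illustrative, not part of any provider's API.

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_rate: float = 0.01, output_rate: float = 0.02) -> float:
    """Estimate request cost in USD.

    Rates are USD per 1M tokens; the defaults match the table above
    (hypothetical helper, not a provider SDK call).
    """
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# Example: a 2,000-token prompt producing a 500-token completion
cost = estimate_cost_usd(2_000, 500)
print(f"${cost:.6f}")  # roughly $0.000030 at the listed rates
```

At these rates even long prompts cost fractions of a cent, which is consistent with the model's positioning as a lightweight on-device/edge option.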