# LFM2-24B-A2B

LFM2-24B-A2B is the largest model in the LFM2 family of hybrid architectures designed for efficient on-device deployment. Built as a 24B-parameter Mixture-of-Experts model with only 2B active parameters per token, it delivers high-quality generation while keeping inference costs low. The model fits within 32 GB of RAM, making it practical to run on consumer laptops and desktops without sacrificing capability.

## Model Information

- **Organization**: [Liquid AI](/llm.txt)
- **Slug**: lfm-2-24b-a2b
- **Available at Providers**: 7
- **Release Date**: February 25, 2026

## Providers

| Provider | Name | $ Input (per 1M) | $ Output (per 1M) | Free | Link |
|----------|------|-----------------|------------------|------|------|
| [Kilo Code](/llm/kilocode.txt) | LiquidAI: LFM2-24B-A2B | 0.03 | 0.12 | | [View](https://kilo.ai/models/liquid/lfm-2-24b-a2b) |
| [OpenRouter](/llm/openrouter.txt) | LFM2-24B-A2B | 0.03 | 0.12 | | [View](https://openrouter.ai/liquid/lfm-2-24b-a2b-20260224) |
| [Together AI](/llm/togetherai.txt) | LFM2-24B-A2B | 0.03 | 0.12 | | |
| [Yupp](/llm/yupp.txt) | LFM2 24B A2B Preview (Together AI) | | | | |
| [Nano-GPT](/llm/nanogpt.txt) | LFM2 24B A2B | | | | |
| [Writingmate](/llm/writingmate.txt) | LiquidAI: LFM2-24B-A2B | | | | [View](https://writingmate.ai/models/liquid/lfm-2-24b-a2b) |
| [Yupp](/llm/yupp.txt) | LFM2-24B-A2B (OpenRouter) | | | | |

---

[← Back to all providers](/llm.txt)
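The per-million-token prices listed for Kilo Code, OpenRouter, and Together AI ($0.03 input / $0.12 output) translate directly into a per-request cost. A minimal sketch of that arithmetic; the function name and example token counts are illustrative, not part of any provider SDK:

```python
# Prices from the providers table above (USD per 1M tokens).
PRICE_INPUT_PER_M = 0.03
PRICE_OUTPUT_PER_M = 0.12

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    return (input_tokens * PRICE_INPUT_PER_M
            + output_tokens * PRICE_OUTPUT_PER_M) / 1_000_000

# Example: a 10,000-token prompt producing a 2,000-token completion.
print(f"${request_cost(10_000, 2_000):.6f}")  # → $0.000540
```

At these rates, even a long 10k/2k-token exchange costs well under a tenth of a cent, which is consistent with the model's low active-parameter count.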