# llama-4-maverick-17b-128e-instruct

Meta

## Model Information

- **Organization**: [Meta](/llm.txt)
- **Slug**: llama-4-maverick-17b-128e-instruct
- **Available at Providers**: 8

## Providers

| Provider | Name | $ Input (per 1M) | $ Output (per 1M) | Free | Link |
|----------|------|------------------|-------------------|------|------|
| [FastRouter](/llm/fastrouter.txt) | Meta: LLaMA 4 Maverick 17B 128E Instruct | 0.20 | 0.60 | | [View](https://fastrouter.ai/models/meta-llama/llama-4-maverick-17b-128e-instruct) |
| [Nvidia](/llm/nvidia.txt) | llama-4-maverick-17b-128e-instruct | 0.00 | 0.00 | Yes | [View](https://build.nvidia.com/meta/llama-4-maverick-17b-128e-instruct) |
| [Groq](/llm/groq.txt) | Llama 4 Maverick 17B | 0.20 | 0.60 | | |
| [Yupp](/llm/yupp.txt) | Llama 4 Maverick (Groq) | | | | |
| [Yupp](/llm/yupp.txt) | Llama 4 Maverick (Sambanova) | | | | |
| [SambaNova AI](/llm/sambanova.txt) | Llama-4-Maverick-17B-128E-Instruct | 0.63 | 1.80 | | |
| [Baidu AI Studio](/llm/baidu.txt) | | | | | [View](https://aistudio.baidu.com) |
| [NetMind](/llm/netmind.txt) | meta-llama/Llama-4-Maverick-17B-128E-Instruct | 0.17 | 0.85 | | |

---

[← Back to all providers](/llm.txt)