# Llama 4 Maverick 17B 128E Instruct FP8

## Model Information

- **Organization**: [Azure](/llm.txt)
- **Slug**: llama-4-maverick-17b-128e-instruct-fp8
- **Available at Providers**: 14

## Providers

| Provider | Name | $ Input (per 1M) | $ Output (per 1M) | Free | Link |
|----------|------|------------------|-------------------|------|------|
| [Abacus](/llm/abacus.txt) | Llama 4 Maverick 17B 128E Instruct FP8 | 0.14 | 0.59 | | |
| [Novita AI](/llm/novita.txt) | llama-4-maverick-17b-128e-instruct-fp8 | 0.27 | 0.85 | | |
| [GitHub Models](/llm/githubmodels.txt) | Llama 4 Maverick 17B 128E Instruct FP8 | 0.00 | 0.00 | Yes | |
| [Azure OpenAI](/llm/azure.txt) | Llama 4 Maverick 17B 128E Instruct FP8 | 0.25 | 1.00 | | |
| [IO.NET](/llm/ionet.txt) | Llama 4 Maverick 17B 128E Instruct | 0.15 | 0.60 | | |
| [Azure AI Services](/llm/azurecognitiveservices.txt) | Llama 4 Maverick 17B 128E Instruct FP8 | 0.25 | 1.00 | | |
| [Llama](/llm/llama.txt) | Llama-4-Maverick-17B-128E-Instruct-FP8 | 0.00 | 0.00 | Yes | |
| [Requesty](/llm/requesty.txt) | | 0.20 | 0.85 | | |
| [Together AI](/llm/togetherai.txt) | Llama 4 Maverick Instruct (17Bx128E) | 0.27 | 0.85 | | |
| [Yupp](/llm/yupp.txt) | Llama 4 Maverick FP8 (Azure) | | | | |
| [Yupp](/llm/yupp.txt) | Llama 4 Maverick FP8 (Novita) | | | | |
| [Yupp](/llm/yupp.txt) | Llama 4 Maverick FP8 (Together AI) | | | | |
| [DeepInfra](/llm/deepinfra.txt) | Llama-4-Maverick-17B-128E-Instruct-FP8 | 0.15 | 0.60 | | |
| [GMI Cloud](/llm/gmi.txt) | Llama-4 Maverick 17B 128E Instruct FP8 | 0.25 | 0.80 | | |

---

[← Back to all providers](/llm.txt)
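As a rough illustration, the per-1M-token rates in the table can be turned into a per-request cost estimate. This is a minimal sketch: the three provider rates below are copied from the table, while the `request_cost` helper and the token counts in the example are hypothetical.

```python
# Estimate the cost of a single request from per-1M-token prices.
# Rates taken from the providers table above; other providers omitted for brevity.
PRICES = {
    "DeepInfra": (0.15, 0.60),    # ($ per 1M input tokens, $ per 1M output tokens)
    "Novita AI": (0.27, 0.85),
    "Azure OpenAI": (0.25, 1.00),
}

def request_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request at the given provider's listed rates."""
    in_rate, out_rate = PRICES[provider]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Hypothetical example: 10k input tokens, 2k output tokens on DeepInfra.
print(f"${request_cost('DeepInfra', 10_000, 2_000):.4f}")  # → $0.0027
```

Note that the free tiers (GitHub Models, Llama) list 0.00 for both rates, so the same calculation simply returns zero for them.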