# DeepSeek R1 Distill Llama 70B

DeepSeek R1 Distill Llama 70B is a distilled large language model based on [Llama-3.3-70B-Instruct](/meta-llama/llama-3.3-70b-instruct), fine-tuned on outputs from [DeepSeek R1](/deepseek/deepseek-r1). This distillation gives it performance competitive with larger frontier models across multiple benchmarks, including:

- AIME 2024 pass@1: 70.0
- MATH-500 pass@1: 94.5
- CodeForces rating: 1633

## Model Information

- **Organization**: [DeepSeek](/llm.txt)
- **Slug**: deepseek-r1-distill-llama-70b
- **Available at Providers**: 29
- **Release Date**: January 20, 2025

### Benchmark Scores

- GPQA: 0.652

## Providers

| Provider | Name | $ Input (per 1M tokens) | $ Output (per 1M tokens) | Free | Link |
|----------|------|-------------------------|--------------------------|------|------|
| [AIHubMix](/llm/aihubmix.txt) | deepseek-r1-distill-llama-70b | 0.80 | 1.60 | | [View](https://aihubmix.com/model/deepseek-r1-distill-llama-70b) |
| [AIHubMix](/llm/aihubmix.txt) | DeepSeek-R1-Distill-Llama-70B | 0.60 | 0.60 | | [View](https://aihubmix.com/model/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) |
| [Chutes.ai](/llm/chutes.txt) | DeepSeek-R1-Distill-Llama-70B | 0.03 | 0.11 | | |
| [FastRouter](/llm/fastrouter.txt) | DeepSeek: R1 Distill Llama 70B | 0.60 | 1.20 | | [View](https://fastrouter.ai/models/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) |
| [Helicone](/llm/helicone.txt) | DeepSeek R1 Distill Llama 70B | 0.03 | 0.13 | | [View](https://www.helicone.ai/model/deepseek-r1-distill-llama-70b) |
| [Vultr](/llm/vultr.txt) | DeepSeek R1 Distill Llama 70B | 0.20 | 0.20 | | |
| [Groq](/llm/groq.txt) | DeepSeek R1 Distill Llama 70B | 0.75 | 0.99 | | |
| [Alibaba (China)](/llm/alibabacn.txt) | DeepSeek R1 Distill Llama 70B | 0.29 | 0.86 | | |
| [Novita AI](/llm/novita.txt) | deepseek-r1-distill-llama-70b | 0.80 | 0.80 | | |
| [OVHcloud AI Endpoints](/llm/ovhcloud.txt) | DeepSeek-R1-Distill-Llama-70B | 0.74 | 0.74 | | |
| [Scaleway](/llm/scaleway.txt) | DeepSeek R1 Distill Llama 70B | 0.90 | 0.90 | | |
| [OpenRouter](/llm/openrouter.txt) | R1 Distill Llama 70B | 0.70 | 0.80 | | [View](https://openrouter.ai/deepseek/deepseek-r1-distill-llama-70b) |
| [Requesty](/llm/requesty.txt) | | 0.80 | 0.80 | | |
| [Requesty](/llm/requesty.txt) | | 0.23 | 0.69 | | |
| [Together AI](/llm/togetherai.txt) | DeepSeek R1 Distill Llama 70B | 2.00 | 2.00 | | |
| [ValorGPT](/llm/valorgpt.txt) | R1 Distill Llama 70B | | | | [View](https://www.valorgpt.com/models/deepseek-deepseek-r1-distill-llama-70b) |
| [Yupp](/llm/yupp.txt) | DeepSeek-R1 Distill Llama 70B (OpenRouter) | | | | |
| [Yupp](/llm/yupp.txt) | DeepSeek-R1 Distill Llama 70B (Sambanova) | | | | |
| [Yupp](/llm/yupp.txt) | DeepSeek R1 Distill Llama 70B (Chutes AI) | | | | |
| [DeepInfra](/llm/deepinfra.txt) | DeepSeek-R1-Distill-Llama-70B | 0.70 | 0.80 | | |
| [LangDB](/llm/langdb.txt) | DeepSeek-R1-Distill-Llama-70B | | | | [View](https://langdb.ai/app/models) |
| [LangDB](/llm/langdb.txt) | deepseek-r1-distill-llama-70b | | | | [View](https://langdb.ai/app/models) |
| [Kilo Code](/llm/kilocode.txt) | DeepSeek: R1 Distill Llama 70B | 0.03 | 0.11 | | |
| [GMI Cloud](/llm/gmi.txt) | DeepSeek R1 Distill Llama 70B | 0.25 | 0.75 | | |
| [SambaNova AI](/llm/sambanova.txt) | DeepSeek-R1-Distill-Llama-70B | 0.70 | 1.40 | | |
| [Baidu AI Studio](/llm/baidu.txt) | | | | | [View](https://aistudio.baidu.com) |
| [Blackbox AI](/llm/blackboxai.txt) | DeepSeek: R1 Distill Llama 70B | 0.10 | 0.40 | | |
| [WaveSpeed AI](/llm/wavespeed.txt) | deepseek-r1-distill-llama-70b | 0.03 | 0.12 | | |
| [Writingmate](/llm/writingmate.txt) | DeepSeek: R1 Distill Llama 70B | | | | [View](https://writingmate.ai/models/deepseek/deepseek-r1-distill-llama-70b) |

---

[← Back to all providers](/llm.txt)
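As a minimal sketch of how the per-1M-token prices in the table translate into a request cost: the function below multiplies token counts by the listed rates. The token counts in the example are made up for illustration; only the rates ($0.75 input / $0.99 output per 1M tokens, Groq's listed pricing) come from the table.

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_m: float, output_price_per_m: float) -> float:
    """Return the USD cost of one request at the given per-1M-token rates."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# Hypothetical request: 4,000 input tokens, 1,000 output tokens at Groq's rates.
cost = estimate_cost(4_000, 1_000, 0.75, 0.99)
print(f"${cost:.5f}")  # 0.004 * 0.75 + 0.001 * 0.99 = $0.00399
```

The same arithmetic applies to any row; for providers with distinct input and output rates (e.g. AIHubMix at 0.80/1.60), output-heavy workloads such as long reasoning traces dominate the cost.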