# DeepSeek V3.1 Base

This is a base model, trained only for raw next-token prediction. Unlike instruct/chat models, it has not been fine-tuned to follow user instructions, so prompts work best when written like training text or worked examples rather than bare requests (e.g., “Translate the following sentence…” instead of just “Translate this”); a minimal sketch appears under Prompting Example below.

DeepSeek-V3.1 Base is a 671B-parameter open Mixture-of-Experts (MoE) language model that activates 37B parameters per forward pass and supports a 128K-token context length. Trained on 14.8T tokens using FP8 mixed precision, it achieves high training efficiency and stability, with strong performance across language, reasoning, math, and coding tasks.

## Model Information

- **Organization**: [DeepSeek](/llm.txt)
- **Slug**: deepseek-v3-1-base
- **Available at Providers**: 2

## Providers

| Provider | Name | Input ($/1M tokens) | Output ($/1M tokens) | Free | Link |
|----------|------|---------------------|----------------------|------|------|
| [Together AI](/llm/togetherai.txt) | Deepseek V3.1 Base | 0.00 | 0.00 | | |
| [Writingmate](/llm/writingmate.txt) | DeepSeek: DeepSeek V3.1 Base | | | | [View](https://writingmate.ai/models/deepseek/deepseek-v3.1-base) |
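
## Prompting Example

Because a base model only continues text, a few-shot pattern usually elicits better completions than a bare instruction. The sketch below uses Together AI's OpenAI-compatible completions endpoint; the base URL and model slug are assumptions, so verify both against the provider's documentation before use.

```python
# Few-shot, completion-style prompting: the base model is given a pattern
# to continue, not an instruction to follow.
from openai import OpenAI

# Assumed endpoint and API key placeholder; check Together AI's docs.
client = OpenAI(
    base_url="https://api.together.xyz/v1",
    api_key="YOUR_TOGETHER_API_KEY",
)

# The prompt reads like training text: completed example pairs, then an
# incomplete final pair for the model to finish.
prompt = (
    "English: The weather is nice today.\n"
    "French: Il fait beau aujourd'hui.\n\n"
    "English: Where is the train station?\n"
    "French: Où est la gare ?\n\n"
    "English: I would like a cup of coffee.\n"
    "French:"
)

resp = client.completions.create(
    model="deepseek-ai/DeepSeek-V3.1-Base",  # assumed slug for this provider
    prompt=prompt,
    max_tokens=64,
    temperature=0.0,
    stop=["\n\n"],  # stop before the model invents another example pair
)
print(resp.choices[0].text.strip())
```

Without the `stop` sequence, a base model will typically keep generating further English/French pairs, since nothing in the prompt signals that the pattern is finished.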