# Mistral Tiny

Note: This model is being deprecated. The recommended replacement is the newer [Ministral 8B](/mistral/ministral-8b).

This model is currently powered by Mistral-7B-v0.2 and incorporates a "better" fine-tuning than [Mistral 7B](/models/mistralai/mistral-7b-instruct-v0.1), inspired by community work. It is best suited to large batch-processing tasks where cost is a significant factor but reasoning capabilities are not crucial.

## Model Information

- **Organization**: [Mistral](/llm.txt)
- **Slug**: mistral-tiny
- **Available at Providers**: 5

## Providers

| Provider | Name | $ Input (per 1M tokens) | $ Output (per 1M tokens) | Free | Link |
|----------|------|-------------------------|--------------------------|------|------|
| [AIMLAPI](/llm/aimlapi.txt) | Mistral tiny | | | | [View](https://aimlapi.com/models/mistral-tiny-api) |
| [Nano-GPT](/llm/nanogpt.txt) | Mistral Tiny | | | | |
| [ValorGPT](/llm/valorgpt.txt) | Mistral Tiny | | | | [View](https://www.valorgpt.com/models/mistralai-mistral-tiny) |
| [Blackbox AI](/llm/blackboxai.txt) | Mistral Tiny | 0.25 | 0.25 | | |
| [WaveSpeed AI](/llm/wavespeed.txt) | mistral-tiny | 0.28 | 0.28 | | |

---

[← Back to all providers](/llm.txt)