# Novita AI

Novita AI is a comprehensive AI platform providing access to over 200 AI models, including LLMs, image generation, video creation, and audio models from leading providers such as DeepSeek, Meta (Llama), Google (Gemini), Qwen, Anthropic, Mistral, OpenAI, and many others. The platform features an open model library with custom deployment options and GPU Instances for scalable inference.

Novita specializes in state-of-the-art video generation models, including Kling V1.6, V2.1, and V2.5 Turbo, MiniMax Video, Hailuo 2.3, Vidu Q1 and 2.0, PixVerse V4.5, Seedance V1, Wan 2.1/2.2/2.5, Hunyuan, and SynthesSeed, with transparent per-generation pricing for video and images. The platform also offers dedicated GPU cloud services for custom model deployment and training.

## Provider Information

- **Website**:
- **Available Models**: 78

## Models

| Name | Original Name | $ Input Price (per 1M) | $ Output Price (per 1M) | Free | Link |
|------|---------------|------------------------|-------------------------|------|------|
| kimi-k2-0905 | moonshotai/kimi-k2-0905 | 0.60 | 2.50 | | |
| kimi-k2-thinking | moonshotai/kimi-k2-thinking | 0.60 | 2.50 | | |
| kimi-k2-instruct | moonshotai/kimi-k2-instruct | 0.57 | 2.30 | | |
| mimo-v2-flash | xiaomimimo/mimo-v2-flash | 0.10 | 0.30 | | |
| deepseek-r1-0528 | deepseek/deepseek-r1-0528 | 0.70 | 2.50 | | |
| DeepSeek R1 0528 Qwen3 8B | deepseek/deepseek-r1-0528-qwen3-8b | 0.06 | 0.09 | | |
| deepseek-v3.1-terminus | deepseek/deepseek-v3.1-terminus | 0.27 | 1.00 | | |
| deepseek-v3.1 | deepseek/deepseek-v3.1 | 0.27 | 1.00 | | |
| deepseek-v3-0324 | deepseek/deepseek-v3-0324 | 0.27 | 1.12 | | |
| deepseek-v3.2-exp | deepseek/deepseek-v3.2-exp | 0.27 | 0.41 | | |
| deepseek-r1-distill-llama-70b | deepseek/deepseek-r1-distill-llama-70b | 0.80 | 0.80 | | |
| deepseek-v3.2 | deepseek/deepseek-v3.2 | 0.27 | 0.40 | | |
| minimax-m1-80k | minimaxai/minimax-m1-80k | 0.55 | 2.20 | | |
| minimax-m2 | minimax/minimax-m2 | 0.30 | 1.20 | | |
| minimax-m2.1 | minimax/minimax-m2.1 | 0.30 | 1.20 | | |
| gemma-3-27b-it | google/gemma-3-27b-it | 0.12 | 0.20 | | |
| wizardlm-2-8x22b | microsoft/wizardlm-2-8x22b | 0.62 | 0.62 | | |
| gpt-oss-20b | openai/gpt-oss-20b | 0.04 | 0.15 | | |
| gpt-oss-120b | openai/gpt-oss-120b | 0.05 | 0.25 | | |
| ernie-4.5-21B-a3b | baidu/ernie-4.5-21B-a3b | 0.07 | 0.28 | | |
| ernie-4.5-21B-a3b-thinking | baidu/ernie-4.5-21B-a3b-thinking | 0.07 | 0.28 | | |
| ernie-4.5-vl-424b-a47b | baidu/ernie-4.5-vl-424b-a47b | 0.42 | 1.25 | | |
| ernie-4.5-vl-28b-a3b | baidu/ernie-4.5-vl-28b-a3b | 0.14 | 0.56 | | |
| qwen3-vl-30b-a3b-thinking | qwen/qwen3-vl-30b-a3b-thinking | 0.20 | 1.00 | | |
| qwen3-235b-a22b-instruct-2507 | qwen/qwen3-235b-a22b-instruct-2507 | 0.09 | 0.58 | | |
| qwen3-omni-30b-a3b-thinking | qwen/qwen3-omni-30b-a3b-thinking | | | | |
| qwen3-next-80b-a3b-instruct | qwen/qwen3-next-80b-a3b-instruct | 0.15 | 1.50 | | |
| qwen2.5-vl-72b-instruct | qwen/qwen2.5-vl-72b-instruct | 0.80 | 0.80 | | |
| qwen3-coder-30b-a3b-instruct | qwen/qwen3-coder-30b-a3b-instruct | 0.07 | 0.27 | | |
| qwen3-vl-8b-instruct | qwen/qwen3-vl-8b-instruct | 0.08 | 0.50 | | |
| qwen3-235b-a22b-thinking-2507 | qwen/qwen3-235b-a22b-thinking-2507 | 0.30 | 3.00 | | |
| qwen2.5-7b-instruct | qwen/qwen2.5-7b-instruct | 0.07 | 0.07 | | |
| qwen3-omni-30b-a3b-instruct | qwen/qwen3-omni-30b-a3b-instruct | | | | |
| qwen-2.5-72b-instruct | qwen/qwen-2.5-72b-instruct | 0.38 | 0.40 | | |
| qwen3-coder-480b-a35b-instruct | qwen/qwen3-coder-480b-a35b-instruct | 0.30 | 1.30 | | |
| qwen3-vl-235b-a22b-thinking | qwen/qwen3-vl-235b-a22b-thinking | 0.98 | 3.95 | | |
| qwen-mt-plus | qwen/qwen-mt-plus | 0.25 | 0.75 | | |
| qwen3-max | qwen/qwen3-max | | | | |
| qwen3-vl-235b-a22b-instruct | qwen/qwen3-vl-235b-a22b-instruct | 0.30 | 1.50 | | |
| qwen3-vl-30b-a3b-instruct | qwen/qwen3-vl-30b-a3b-instruct | 0.20 | 0.70 | | |
| qwen3-next-80b-a3b-thinking | qwen/qwen3-next-80b-a3b-thinking | 0.15 | 1.50 | | |
| mistral-nemo | mistralai/mistral-nemo | 0.04 | 0.17 | | |
| llama-3-70b-instruct | meta-llama/llama-3-70b-instruct | 0.51 | 0.74 | | |
| llama-3-8b-instruct | meta-llama/llama-3-8b-instruct | 0.04 | 0.04 | | |
| llama-3.1-8b-instruct | meta-llama/llama-3.1-8b-instruct | 0.02 | 0.05 | | |
| llama-4-maverick-17b-128e-instruct-fp8 | meta-llama/llama-4-maverick-17b-128e-instruct-fp8 | 0.27 | 0.85 | | |
| llama-3.3-70b-instruct | meta-llama/llama-3.3-70b-instruct | 0.14 | 0.40 | | |
| llama-4-scout-17b-16e-instruct | meta-llama/llama-4-scout-17b-16e-instruct | 0.18 | 0.59 | | |
| glm-4.7 | zai-org/glm-4.7 | 0.60 | 2.20 | | |
| glm-4.5 | zai-org/glm-4.5 | 0.60 | 2.20 | | |
| glm-4.5-air | zai-org/glm-4.5-air | 0.13 | 0.85 | | |
| glm-4.5v | zai-org/glm-4.5v | 0.60 | 1.80 | | |
| glm-4.6 | zai-org/glm-4.6 | 0.55 | 2.20 | | |
| glm-4.6v | zai-org/glm-4.6v | 0.30 | 0.90 | | |
| glm-4.7-flash | zai-org/glm-4.7-flash | 0.07 | 0.40 | | |
| Seedream 4.5 | Seedream 4.5 | | | | |
| Hunyuan Image 3 | Hunyuan Image 3 | | | | |
| Qwen-Image | Qwen-Image | | | | |
| Qwen-Image Edit | Qwen-Image Edit | | | | |
| Flux.1 Kontext Dev | Flux.1 Kontext Dev | | | | |
| Flux.1 Kontext Pro | Flux.1 Kontext Pro | | | | |
| Flux.1 Kontext Max | Flux.1 Kontext Max | | | | |
| Z-Image Turbo | Z-Image Turbo | | | | |
| Kling V2.1 Master | Kling V2.1 Master-Master-5s-1080P | | | | |
| Seedance V1 Lite | Seedance V1 Lite-5s-480P( 21:9 & 9:21 ) | | | | |
| Seedance V1 Pro | Seedance V1 Pro-5s-480P( 21:9 & 9:21 ) | | | | |
| Seedance 1.5 Pro | Seedance 1.5 Pro-SILENT / ONLINE-480P | | | | |
| kimi-k2.5 | moonshotai/kimi-k2.5 | 0.60 | 3.00 | | |
| hermes-2-pro-llama-3-8b | nousresearch/hermes-2-pro-llama-3-8b | 0.14 | 0.14 | | |
| qwen3-coder-next | qwen/qwen3-coder-next | 0.20 | 1.50 | | |
| ernie-4.5-vl-28b-a3b-thinking | baidu/ernie-4.5-vl-28b-a3b-thinking | 0.39 | 0.39 | | |
| glm-5 | zai-org/glm-5 | 1.00 | 3.20 | | |
| minimax-m2.5 | minimax/minimax-m2.5 | 0.30 | 1.20 | | |
| qwen3.5-397b-a17b | qwen/qwen3.5-397b-a17b | 0.60 | 3.60 | | |
| Seedream 5.0 lite | Seedream 5.0 lite | | | | |
| qwen3.5-27b | qwen/qwen3.5-27b | 0.30 | 2.40 | | |
| qwen3.5-122b-a10b | qwen/qwen3.5-122b-a10b | 0.40 | 3.20 | | |
| qwen3.5-35b-a3b | qwen/qwen3.5-35b-a3b | 0.25 | 2.00 | | |

---

[← Back to all providers](/llm.txt)
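The per-1M-token prices listed for the LLMs translate into a per-request cost via simple arithmetic: cost = (input_tokens / 1,000,000) × input_price + (output_tokens / 1,000,000) × output_price. A minimal sketch of that calculation, using a few price pairs taken from the Models table above (the `estimate_cost` helper is illustrative, not part of any Novita SDK):

```python
# Estimate request cost from per-1M-token prices.
# PRICES is a small illustrative subset of the Models table above:
# model name -> ($ per 1M input tokens, $ per 1M output tokens).
PRICES = {
    "deepseek/deepseek-v3.1": (0.27, 1.00),
    "qwen/qwen3-coder-480b-a35b-instruct": (0.30, 1.30),
    "meta-llama/llama-3.1-8b-instruct": (0.02, 0.05),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for a single request."""
    in_price, out_price = PRICES[model]
    return (input_tokens / 1_000_000) * in_price + (output_tokens / 1_000_000) * out_price

# e.g. a 120k-token prompt with a 30k-token completion on deepseek-v3.1:
cost = estimate_cost("deepseek/deepseek-v3.1", 120_000, 30_000)
print(f"${cost:.4f}")  # → $0.0624
```

The same arithmetic applies to any priced model in the table; models with blank price cells (the image and video models) are billed per generation instead of per token.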