<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/css" href="data:text/css;base64,cnNzIHsKICBmb250LWZhbWlseTogLWFwcGxlLXN5c3RlbSwgQmxpbmtNYWNGb250LCAiU2Vnb2UgVUkiLCBSb2JvdG8sIE94eWdlbiwgVWJ1bnR1LCBzYW5zLXNlcmlmOwogIG1hcmdpbjogMDsKICBwYWRkaW5nOiAyMHB4OwogIGJhY2tncm91bmQ6ICNmOGZhZmM7CiAgY29sb3I6ICMxZTI5M2I7CiAgbGluZS1oZWlnaHQ6IDEuNjsKfQpjaGFubmVsIHsKICBkaXNwbGF5OiBibG9jazsKICBtYXgtd2lkdGg6IDgwMHB4OwogIG1hcmdpbjogMCBhdXRvOwogIGJhY2tncm91bmQ6IHdoaXRlOwogIGJvcmRlci1yYWRpdXM6IDEycHg7CiAgYm94LXNoYWRvdzogMCAxcHggM3B4IHJnYmEoMCwwLDAsMC4xKTsKICBvdmVyZmxvdzogaGlkZGVuOwp9CnRpdGxlIHsKICBkaXNwbGF5OiBibG9jazsKICBmb250LXNpemU6IDI0cHg7CiAgZm9udC13ZWlnaHQ6IDcwMDsKICBwYWRkaW5nOiAyMHB4IDI0cHg7CiAgYmFja2dyb3VuZDogbGluZWFyLWdyYWRpZW50KDEzNWRlZywgIzI1NjNlYiwgIzFkNGVkOCk7CiAgY29sb3I6IHdoaXRlOwogIG1hcmdpbjogMDsKfQpkZXNjcmlwdGlvbiB7CiAgZGlzcGxheTogYmxvY2s7CiAgcGFkZGluZzogMTJweCAyNHB4OwogIGJhY2tncm91bmQ6ICNmMWY1Zjk7CiAgZm9udC1zaXplOiAxNHB4OwogIGNvbG9yOiAjNjQ3NDhiOwogIGJvcmRlci1ib3R0b206IDFweCBzb2xpZCAjZTJlOGYwOwp9CmxpbmssIGxhbmd1YWdlLCBsYXN0QnVpbGREYXRlIHsKICBkaXNwbGF5OiBub25lOwp9Cml0ZW0gewogIGRpc3BsYXk6IGJsb2NrOwogIHBhZGRpbmc6IDIwcHggMjRweDsKICBib3JkZXItYm90dG9tOiAxcHggc29saWQgI2UyZThmMDsKfQppdGVtOmxhc3QtY2hpbGQgewogIGJvcmRlci1ib3R0b206IG5vbmU7Cn0KaXRlbSA+IHRpdGxlIHsKICBkaXNwbGF5OiBibG9jazsKICBmb250LXNpemU6IDE4cHg7CiAgZm9udC13ZWlnaHQ6IDYwMDsKICBjb2xvcjogIzBmMTcyYTsKICBtYXJnaW46IDAgMCA4cHggMDsKICBwYWRkaW5nOiAwOwogIGJhY2tncm91bmQ6IHRyYW5zcGFyZW50Owp9Cml0ZW0gPiB0aXRsZSBhIHsKICBjb2xvcjogIzI1NjNlYjsKICB0ZXh0LWRlY29yYXRpb246IG5vbmU7Cn0KaXRlbSA+IHRpdGxlIGE6aG92ZXIgewogIHRleHQtZGVjb3JhdGlvbjogdW5kZXJsaW5lOwp9Cml0ZW0gPiBkZXNjcmlwdGlvbiB7CiAgZGlzcGxheTogYmxvY2s7CiAgZm9udC1zaXplOiAxNHB4OwogIGNvbG9yOiAjNDc1NTY5OwogIG1hcmdpbjogMCAwIDEycHggMDsKICBwYWRkaW5nOiAwOwogIGJhY2tncm91bmQ6IHRyYW5zcGFyZW50OwogIHdoaXRlLXNwYWNlOiBwcmUtd3JhcDsKfQppdGVtID4gcHViRGF0ZSB7CiAgZGlzcGxheTogYmxvY2s7CiAgZm9udC1zaXplOiAxMnB4OwogIGNvbG9yOiAjOTRhM2I4OwogIG1hcmdpbi10b3A6IDhweDsKfQppdGVtID4gbGluayB7CiAgZGlzcGxheTogYmxvY2s7CiAgZm9udC1zaXplOiAxM3B4OwogIGNvbG9yOiAjMjU2M2ViOwogIG1hcmdpbi10b3A6IDRweDsKfQo=" ?>
<rss version="2.0">
    <channel>
        <title>LLM24 - Latest AI Model Releases</title>
        <description>Stay updated with the latest AI/LLM model releases. Compare prices, benchmarks, and provider availability.</description>
        <link>https://llm24.net/</link>
        <language>en-us</language>
        <lastBuildDate>Wed, 11 Mar 2026 01:41:21 +0000</lastBuildDate>
                <item>
            <title><![CDATA[GPT-5.4 - OpenAI]]></title>
            <description><![CDATA[GPT-5.4 is an AI Model by OpenAI. Available at 35 providers. Pricing with free options: from $0.00/1M input tokens, $0.00/1M output tokens
Available providers: GitHub Copilot, CometAPI, Poe, Kilo Code, OpenRouter, OpenCode Zen, Roo Code, Requesty, Cloudflare AI Gateway, OpenAI, AIHubMix, ZenMUX, CommonStack, 302.AI, FastRouter, Helicone, Firmware, Glama, Blackbox AI, Azure AI Services, GMI Cloud, Abacus, Ampcode, Nano-GPT, Windsurf, Airforce API, LLM Stats, ApiYI, Arena AI, Writingmate, ZO Computer, Cursor, ValorGPT, Yupp, Warp

Model Description:
GPT-5.4 is OpenAI’s latest frontier model, unifying the Codex and GPT lines into a single system. It features a 1M+ token context window (922K input, 128K output) with support for text and image inputs, enabling high-context reasoning, coding, and multimodal analysis within the same workflow.

The model delivers improved performance in coding, document understanding, tool use, and instruction following. It is designed as a strong default for both general-purpose tasks and software engineering,...]]></description>
            <link>https://llm24.net/model/gpt-5-4</link>
            <pubDate>Thu, 05 Mar 2026 00:00:00 +0000</pubDate>
        </item>
                <item>
            <title><![CDATA[GPT-5.4 Pro - OpenAI]]></title>
            <description><![CDATA[GPT-5.4 Pro is an AI Model by OpenAI. Available at 19 providers. Pricing: from $24.00/1M input tokens, $144.00/1M output tokens
Available providers: CometAPI, Poe, Kilo Code, OpenRouter, Roo Code, Requesty, OpenAI, OpenCode Zen, ZenMUX, CommonStack, FastRouter, 302.AI, Blackbox AI, Azure AI Services, GMI Cloud, Nano-GPT, Writingmate, Airforce API, ValorGPT

Model Description:
GPT-5.4 Pro is OpenAI’s most advanced model, building on GPT-5.4’s unified architecture with enhanced reasoning capabilities for complex, high-stakes tasks. It features a 1M+ token context window (922K input, 128K output) with support for text and image inputs. Optimized for step-by-step reasoning, instruction following, and accuracy, GPT-5.4 Pro excels at agentic coding, long-context workflows, and multi-step problem solving.]]></description>
            <link>https://llm24.net/model/gpt-5-4-pro</link>
            <pubDate>Thu, 05 Mar 2026 00:00:00 +0000</pubDate>
        </item>
                <item>
            <title><![CDATA[GPT-5.3 Chat - OpenAI]]></title>
            <description><![CDATA[GPT-5.3 Chat is an AI Model by OpenAI. Available at 17 providers. Pricing: from $1.40/1M input tokens, $11.20/1M output tokens
Available providers: CometAPI, Kilo Code, OpenRouter, WaveSpeed AI, ZenMUX, FastRouter, Azure AI Services, AIHubMix, Abacus, 302.AI, Nano-GPT, Writingmate, ValorGPT, ApiYI, Arena AI, LLM Stats, Yupp

Model Description:
GPT-5.3 Chat is an update to ChatGPT’s most-used model that makes everyday conversations smoother, more useful, and more directly helpful. It delivers more accurate answers with better contextualization and significantly reduces unnecessary refusals, caveats, and overly cautious phrasing that can interrupt conversational flow.]]></description>
            <link>https://llm24.net/model/gpt-5-3-chat</link>
            <pubDate>Wed, 04 Mar 2026 00:00:00 +0000</pubDate>
        </item>
                <item>
            <title><![CDATA[Gemini 3.1 Flash-Lite - Google]]></title>
            <description><![CDATA[Gemini 3.1 Flash-Lite is an AI Model by Google. Available at 26 providers. Pricing: from $0.20/1M input tokens, $1.20/1M output tokens
Available providers: CometAPI, MegaNova, Kilo Code, OpenRouter, Poe, Requesty, 302.AI, AIHubMix, WaveSpeed AI, CommonStack, ZenMUX, NetMind, Helicone, Abacus, FastRouter, Glama, Google Gemini, Google Vertex AI, Yupp, Airforce API, Nano-GPT, Arena AI, LLM Stats, Writingmate, ApiYI, ValorGPT

Model Description:
Gemini 3.1 Flash Lite Preview is Google’s high-efficiency model optimized for high-volume use cases. It outperforms Gemini 2.5 Flash Lite on overall quality and approaches Gemini 2.5 Flash performance across key capabilities. Improvements span audio input/ASR, RAG snippet ranking, translation, data extraction, and code completion. Supports full thinking levels (minimal, low, medium, high) for fine-grained cost/performance trade-offs. Priced at half the cost of Gemini 3 Flash.]]></description>
            <link>https://llm24.net/model/gemini-3-1-flash-lite</link>
            <pubDate>Tue, 03 Mar 2026 00:00:00 +0000</pubDate>
        </item>
                <item>
            <title><![CDATA[Qwen3.5-4B - Qwen]]></title>
            <description><![CDATA[Qwen3.5-4B is an AI Model by Qwen. Available at 1 provider. Pricing with free options: from $0.00/1M input tokens, $0.00/1M output tokens
Available providers: SiliconFlow (China)]]></description>
            <link>https://llm24.net/model/qwen3-5-4b</link>
            <pubDate>Mon, 02 Mar 2026 00:00:00 +0000</pubDate>
        </item>
                <item>
            <title><![CDATA[Qwen3.5-9B - Qwen]]></title>
            <description><![CDATA[Qwen3.5-9B is an AI Model by Qwen. Available at 4 providers. Pricing: from $0.10/1M input tokens, $0.15/1M output tokens
Available providers: Together AI, Kilo Code, OpenRouter, SiliconFlow (China)

Model Description:
Qwen3.5-9B is a multimodal foundation model from the Qwen3.5 family, designed to deliver strong reasoning, coding, and visual understanding in an efficient 9B-parameter architecture. It uses a unified vision-language design with early fusion of multimodal tokens, allowing the model to process and reason across text and images within the same context.]]></description>
            <link>https://llm24.net/model/qwen3-5-9b</link>
            <pubDate>Mon, 02 Mar 2026 00:00:00 +0000</pubDate>
        </item>
                <item>
            <title><![CDATA[Nano Banana 2 (Gemini 3.1 Flash Image Preview) - Google]]></title>
            <description><![CDATA[Nano Banana 2 (Gemini 3.1 Flash Image Preview) is an AI Model by Google. Available at 15 providers. Pricing with free options: from $0.00/1M input tokens, $0.00/1M output tokens
Available providers: AIHubMix, FastRouter, Google Gemini, CometAPI, Kilo Code, OpenRouter, Requesty, ZenMUX, WaveSpeed AI, MegaNova, ApiYI, Google Vertex AI, LLM Stats, ValorGPT, Yupp

Model Description:
Gemini 3.1 Flash Image Preview, a.k.a. “Nano Banana 2,” is Google’s latest state-of-the-art image generation and editing model, delivering Pro-level visual quality at Flash speed. It combines advanced contextual understanding with fast, cost-efficient inference, making complex image generation and iterative edits significantly more accessible. Aspect ratios can be controlled with the [image_config API Parameter](https://openrouter.ai/docs/features/multimodal/image-generation#image-aspect-ratio...]]></description>
            <link>https://llm24.net/model/gemini-3-1-flash-image</link>
            <pubDate>Thu, 26 Feb 2026 00:00:00 +0000</pubDate>
        </item>
                <item>
            <title><![CDATA[Seed-2.0-Mini - ByteDance Seed]]></title>
            <description><![CDATA[Seed-2.0-Mini is an AI Model by ByteDance Seed. Available at 8 providers. Pricing: from $0.10/1M input tokens, $0.40/1M output tokens
Available providers: DeepInfra, Kilo Code, OpenRouter, WaveSpeed AI, Poe, ApiYI, Yupp, Writingmate

Model Description:
Seed-2.0-mini targets latency-sensitive, high-concurrency, and cost-sensitive scenarios, emphasizing fast response and flexible inference deployment. It delivers performance comparable to ByteDance-Seed-1.6, supports 256k context, four reasoning effort modes (minimal/low/medium/high), multimodal understanding, and is optimized for lightweight tasks where cost and speed take priority.]]></description>
            <link>https://llm24.net/model/seed-2-0-mini</link>
            <pubDate>Thu, 26 Feb 2026 00:00:00 +0000</pubDate>
        </item>
                <item>
            <title><![CDATA[Gemini 3.1 Pro Preview Custom Tools - google]]></title>
            <description><![CDATA[Gemini 3.1 Pro Preview Custom Tools is an AI Model by google. Available at 10 providers. Pricing: from $2.00/1M input tokens, $12.00/1M output tokens
Available providers: Google Vertex AI, Google Gemini, AIHubMix, Kilo Code, OpenRouter, WaveSpeed AI, ApiYI, Writingmate, Nano-GPT, ValorGPT

Model Description:
Gemini 3.1 Pro Preview Custom Tools is a variant of Gemini 3.1 Pro that improves tool selection behavior by preventing overuse of a general bash tool when more efficient third-party or user-defined functions are available. This specialized preview endpoint significantly increases function calling reliability and ensures the model selects the most appropriate tool in coding agents and complex, multi-tool workflows.

It retains the core strengths of Gemini 3.1 Pro, including multimodal reasoning a...]]></description>
            <link>https://llm24.net/model/gemini-3-1-pro-preview-customtools</link>
            <pubDate>Wed, 25 Feb 2026 00:00:00 +0000</pubDate>
        </item>
                <item>
            <title><![CDATA[LFM2-24B-A2B - Liquid AI]]></title>
            <description><![CDATA[LFM2-24B-A2B is an AI Model by Liquid AI. Available at 8 providers. Pricing: from $0.03/1M input tokens, $0.12/1M output tokens
Available providers: Kilo Code, OpenRouter, Together AI, WaveSpeed AI, Yupp, Nano-GPT, Writingmate, ValorGPT

Model Description:
LFM2-24B-A2B is the largest model in the LFM2 family of hybrid architectures designed for efficient on-device deployment. Built as a 24B parameter Mixture-of-Experts model with only 2B active parameters per token, it delivers high-quality generation while maintaining low inference costs. The model fits within 32 GB of RAM, making it practical to run on consumer laptops and desktops without sacrificing capability.]]></description>
            <link>https://llm24.net/model/lfm-2-24b-a2b</link>
            <pubDate>Wed, 25 Feb 2026 00:00:00 +0000</pubDate>
        </item>
                <item>
            <title><![CDATA[Qwen3.5-35B-A3B - Qwen]]></title>
            <description><![CDATA[Qwen3.5-35B-A3B is an AI Model by Qwen. Available at 17 providers. Pricing: from $0.00/1M input tokens, $0.00/1M output tokens
Available providers: Together AI, AIHubMix, 302.AI, Kilo Code, OpenRouter, WaveSpeed AI, Routeway, SiliconFlow (China), Novita AI, GMI Cloud, Venice, ApiYI, Arena AI, Nano-GPT, Yupp, Writingmate, ValorGPT

Model Description:
The Qwen3.5 Series 35B-A3B is a native vision-language model designed with a hybrid architecture that integrates linear attention mechanisms and a sparse mixture-of-experts model, achieving higher inference efficiency. Its overall performance is comparable to that of the Qwen3.5-27B.]]></description>
            <link>https://llm24.net/model/qwen3-5-35b-a3b</link>
            <pubDate>Wed, 25 Feb 2026 00:00:00 +0000</pubDate>
        </item>
                <item>
            <title><![CDATA[Qwen3.5-27B - Qwen]]></title>
            <description><![CDATA[Qwen3.5-27B is an AI Model by Qwen. Available at 15 providers. Pricing: from $0.08/1M input tokens, $0.68/1M output tokens
Available providers: AIHubMix, 302.AI, Kilo Code, OpenRouter, WaveSpeed AI, SiliconFlow (China), Routeway, Novita AI, GMI Cloud, ApiYI, Arena AI, Nano-GPT, Yupp, Writingmate, ValorGPT

Model Description:
The Qwen3.5 27B native vision-language Dense model incorporates a linear attention mechanism, delivering fast response times while balancing inference speed and performance. Its overall capabilities are comparable to those of the Qwen3.5-122B-A10B.]]></description>
            <link>https://llm24.net/model/qwen3-5-27b</link>
            <pubDate>Wed, 25 Feb 2026 00:00:00 +0000</pubDate>
        </item>
                <item>
            <title><![CDATA[Qwen3.5-122B-A10B - Qwen]]></title>
            <description><![CDATA[Qwen3.5-122B-A10B is an AI Model by Qwen. Available at 17 providers. Pricing: from $0.11/1M input tokens, $0.90/1M output tokens
Available providers: AIHubMix, 302.AI, Kilo Code, OpenRouter, WaveSpeed AI, SiliconFlow (China), Routeway, Novita AI, Near AI, GMI Cloud, ApiYI, Arena AI, Nano-GPT, Yupp, Writingmate, ValorGPT, Nvidia

Model Description:
The Qwen3.5 122B-A10B native vision-language model is built on a hybrid architecture that integrates a linear attention mechanism with a sparse mixture-of-experts model, achieving higher inference efficiency. In terms of overall performance, this model is second only to Qwen3.5-397B-A17B. Its text capabilities significantly outperform those of Qwen3-235B-2507, and its visual capabilities surpass those of Qwen3-VL-235B.]]></description>
            <link>https://llm24.net/model/qwen3-5-122b-a10b</link>
            <pubDate>Wed, 25 Feb 2026 00:00:00 +0000</pubDate>
        </item>
                <item>
            <title><![CDATA[Qwen3.5-Flash - Qwen]]></title>
            <description><![CDATA[Qwen3.5-Flash is an AI Model by Qwen. Available at 14 providers. Pricing: from $0.03/1M input tokens, $0.28/1M output tokens
Available providers: AIHubMix, 302.AI, Routeway, Kilo Code, OpenRouter, ZenMUX, WaveSpeed AI, Alibaba (China), ApiYI, Nano-GPT, Poe, Writingmate, Arena AI, ValorGPT

Model Description:
The Qwen3.5 native vision-language Flash models are built on a hybrid architecture that integrates a linear attention mechanism with a sparse mixture-of-experts model, achieving higher inference efficiency. Compared to the 3 series, these models deliver a leap forward in performance for both pure text and multimodal tasks, offering fast response times while balancing inference speed and overall performance.]]></description>
            <link>https://llm24.net/model/qwen3-5-flash</link>
            <pubDate>Wed, 25 Feb 2026 00:00:00 +0000</pubDate>
        </item>
                <item>
            <title><![CDATA[Mercury 2 - Inception]]></title>
            <description><![CDATA[Mercury 2 is an AI Model by Inception. Available at 9 providers. Pricing: from $0.25/1M input tokens, $0.75/1M output tokens
Available providers: Kilo Code, OpenRouter, Inception, LLM Stats, Nano-GPT, Yupp, Writingmate, Arena AI, ValorGPT

Model Description:
Mercury 2 is an extremely fast reasoning LLM, and the first reasoning diffusion LLM (dLLM).
Instead of generating tokens sequentially, Mercury 2 produces and refines multiple tokens in parallel, achieving >1,000 tokens/sec on standard GPUs. Mercury 2 is 5x+ faster than leading speed-optimized LLMs like Claude 4.5 Haiku and GPT-5 Mini, at a fraction of the cost.
Mercury 2 supports tunable reasoning levels, 128K context, native tool use, and schema-aligned JSON output. Built for coding workflows ...]]></description>
            <link>https://llm24.net/model/mercury-2</link>
            <pubDate>Tue, 24 Feb 2026 00:00:00 +0000</pubDate>
        </item>
                <item>
            <title><![CDATA[Aion-2.0 - AionLabs]]></title>
            <description><![CDATA[Aion-2.0 is an AI Model by AionLabs. Available at 6 providers. Pricing: from $0.80/1M input tokens, $1.60/1M output tokens
Available providers: Kilo Code, OpenRouter, WaveSpeed AI, Nano-GPT, Writingmate, Yupp

Model Description:
Aion-2.0 is a variant of DeepSeek V3.2 optimized for immersive roleplaying and storytelling. It is particularly strong at introducing tension, crises, and conflict into stories, making narratives feel more engaging. It also handles mature and darker themes with more nuance and depth.]]></description>
            <link>https://llm24.net/model/aion-2-0</link>
            <pubDate>Mon, 23 Feb 2026 00:00:00 +0000</pubDate>
        </item>
                <item>
            <title><![CDATA[Gemini 3.1 Pro - Google]]></title>
            <description><![CDATA[Gemini 3.1 Pro is an AI Model by Google. Available at 45 providers. Pricing with free options: from $0.00/1M input tokens, $0.00/1M output tokens
Available providers: GitHub Copilot, CometAPI, MegaNova, Routeway, AIHubMix, Kilo Code, OpenRouter, Poe, Vercel AI Gateway, ZenMUX, Google Vertex AI, Google Gemini, Requesty, 302.AI, Roo Code, GMI Cloud, Helicone, CommonStack, FastRouter, Glama, Perplexity AI, WaveSpeed AI, Perplexity Agent, NetMind, Firmware, Abacus, Zed, Venice, Mammouth AI, OpenCode Zen, Cursor, Arena AI, Cline, Nano-GPT, ApiYI, Blackbox AI, Airforce API, Writingmate, JetBrains AI, Factory.ai, AIMLAPI, LLM Stats, ZO Computer, ValorGPT, Yupp

Model Description:
Gemini 3.1 Pro Preview is Google’s frontier reasoning model, delivering enhanced software engineering performance, improved agentic reliability, and more efficient token usage across complex workflows. Building on the multimodal foundation of the Gemini 3 series, it combines high-precision reasoning across text, image, video, audio, and code with a 1M-token context window. Reasoning Details must be preserved when using multi-turn tool calling, see our docs here: https://openrouter.ai/docs/use-...]]></description>
            <link>https://llm24.net/model/gemini-3-1-pro</link>
            <pubDate>Thu, 19 Feb 2026 00:00:00 +0000</pubDate>
        </item>
                <item>
            <title><![CDATA[Claude Sonnet 4.6 - Anthropic]]></title>
            <description><![CDATA[Claude Sonnet 4.6 is an AI Model by Anthropic. Available at 47 providers. Pricing with free options: from $0.00/1M input tokens, $0.00/1M output tokens
Available providers: GitHub Copilot, CometAPI, MegaNova, Poe, Routeway, AIHubMix, Anthropic, Kilo Code, OpenRouter, Requesty, Vercel AI Gateway, Roo Code, Glama, 302.AI, Cloudflare AI Gateway, Firmware, ZenMUX, FastRouter, GMI Cloud, CommonStack, Helicone, Perplexity AI, RedPill, Mammouth AI, WaveSpeed AI, Perplexity Agent, Abacus, Zed, Venice, OpenCode Zen, Arena AI, Nano-GPT, Windsurf, Warp, Ampcode, Google Vertex AI, AIMLAPI, ApiYI, Cline, Blackbox AI, JetBrains AI, Airforce API, Writingmate, LLM Stats, Factory.ai, ValorGPT, Yupp

Model Description:
Sonnet 4.6 is Anthropic’s most capable Sonnet-class model yet, with frontier performance across coding, agents, and professional work. It excels at iterative development, complex codebase navigation, end-to-end project management with memory, polished document creation, and confident computer use for web QA and workflow automation.]]></description>
            <link>https://llm24.net/model/claude-sonnet-4-6</link>
            <pubDate>Tue, 17 Feb 2026 00:00:00 +0000</pubDate>
        </item>
                <item>
            <title><![CDATA[Qwen3.5 Plus 2026-02-15 - Qwen]]></title>
            <description><![CDATA[Qwen3.5 Plus 2026-02-15 is an AI Model by Qwen. Available at 20 providers. Pricing with free options: from $0.00/1M input tokens, $0.00/1M output tokens
Available providers: Alibaba Coding Plan (China), AIHubMix, 302.AI, OpenRouter, WaveSpeed AI, CometAPI, Kilo Code, ZenMUX, Vercel AI Gateway, Alibaba, MegaNova, Alibaba (China), Nano-GPT, Yupp, Poe, LangDB, AIMLAPI, ApiYI, Writingmate, ValorGPT

Model Description:
The Qwen3.5 native vision-language series Plus models are built on a hybrid architecture that integrates linear attention mechanisms with sparse mixture-of-experts models, achieving higher inference efficiency. In a variety of task evaluations, the 3.5 series consistently demonstrates performance on par with state-of-the-art leading models. Compared to the 3 series, these models show a leap forward in both pure-text and multimodal capabilities.]]></description>
            <link>https://llm24.net/model/qwen3-5-plus</link>
            <pubDate>Mon, 16 Feb 2026 00:00:00 +0000</pubDate>
        </item>
                <item>
            <title><![CDATA[Qwen3.5-397B-A17B - Qwen]]></title>
            <description><![CDATA[Qwen3.5-397B-A17B is an AI Model by Qwen. Available at 28 providers. Pricing with free options: from $0.00/1M input tokens, $0.00/1M output tokens
Available providers: Nvidia, AIHubMix, 302.AI, SiliconFlow (China), OpenRouter, WaveSpeed AI, Alibaba (China), CometAPI, RedPill, Kilo Code, Novita AI, Alibaba, Poe, Together AI, Hugging Face, Parasail, CommonStack, Synthetic.new, GMI Cloud, Yupp, Nano-GPT, Arena AI, LangDB, Writingmate, ApiYI, LLM Stats, ValorGPT, Qiniu

Model Description:
The Qwen3.5 series 397B-A17B native vision-language model is built on a hybrid architecture that integrates a linear attention mechanism with a sparse mixture-of-experts model, achieving higher inference efficiency. It delivers state-of-the-art performance comparable to leading-edge models across a wide range of tasks, including language understanding, logical reasoning, code generation, agent-based tasks, image understanding, video understanding, and graphical user interface (GUI) interactions....]]></description>
            <link>https://llm24.net/model/qwen3-5-397b-a17b</link>
            <pubDate>Mon, 16 Feb 2026 00:00:00 +0000</pubDate>
        </item>
            </channel>
</rss>
