Model Information
| Slug | llama-3-2-11b-vision-instruct |
|---|---|
| Release Date | September 25, 2024 |
Organization
| Name | Nvidia |
|---|---|
| Website | https://www.nvidia.com/en-us/ai/ |
Model Description
Llama 3.2 11B Vision is a multimodal model with 11 billion parameters, designed to handle tasks that combine visual and textual data. It excels at tasks such as image captioning and visual question answering, bridging the gap between language generation and visual reasoning. Pre-trained on a large dataset of image-text pairs, it performs well on complex image-analysis tasks that demand high accuracy.
Its ability to integrate visual understanding with language processing makes it an ideal solution for industries requiring comprehensive visual-linguistic AI applications, such as content creation, AI-driven customer service, and research.
See the [original model card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/MODEL_CARD_VISION.md).
Usage of this model is subject to [Meta's Acceptable Use Policy](https://www.llama.com/llama3/use-policy/).
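Most providers in the table below expose this model through an OpenAI-compatible chat-completions API. The sketch below builds a multimodal request payload that pairs a text prompt with an image URL; the endpoint URL, authentication, and exact model identifier vary by provider (the `meta-llama/llama-3.2-11b-vision-instruct` identifier shown here is the one used by OpenRouter and several others), so check your provider's documentation before sending it.

```python
import json

# Model identifier as listed by several providers below; other providers
# use different identifiers (see the "Original Model" column in the table).
MODEL_ID = "meta-llama/llama-3.2-11b-vision-instruct"

def build_vision_request(prompt: str, image_url: str) -> dict:
    """Build an OpenAI-style chat-completion payload that pairs text with an image."""
    return {
        "model": MODEL_ID,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_vision_request(
    "Describe this image in one sentence.",
    "https://example.com/photo.jpg",  # placeholder image URL
)
print(json.dumps(payload, indent=2))
```

This payload would then be POSTed to the provider's chat-completions endpoint with your API key; only the base URL and model string change between providers.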
Available at 20 Providers
| Provider | Type | Model Name | Original Model | Input ($/1M) | Output ($/1M) |
|---|---|---|---|---|---|
| Nvidia | | llama-3.2-11b-vision-instruct | meta/llama-3.2-11b-vision-instruct | $0.00 | $0.00 |
| GitHub Models | | Llama-3.2-11B-Vision-Instruct | meta/llama-3.2-11b-vision-instruct | $0.00 | $0.00 |
| Together AI | | nim/meta/llama-3.2-11b-vision-instruct | nim/meta/llama-3.2-11b-vision-instruct | $0.00 | $0.00 |
| AIHubMix | | llama-3.2-11b-vision-instruct | meta-llama/llama-3.2-11b-vision-instruct:free | $0.02 | $0.02 |
| Cloudflare AI Gateway | | Llama 3.2 11B Vision Instruct | workers-ai/@cf/meta/llama-3.2-11b-vision-instruct | $0.05 | $0.68 |
| OpenRouter | Chat, Code | Llama 3.2 11B Vision Instruct | meta-llama/llama-3.2-11b-vision-instruct | $0.05 | $0.05 |
| DeepInfra | | Llama-3.2-11B-Vision-Instruct | meta-llama/Llama-3.2-11B-Vision-Instruct | $0.05 | $0.05 |
| Kilo Code | Code | Meta: Llama 3.2 11B Vision Instruct | meta-llama/llama-3.2-11b-vision-instruct | $0.05 | $0.05 |
| Cloudflare Workers AI | | Llama 3.2 11B Vision Instruct | @cf/meta/llama-3.2-11b-vision-instruct | $0.05 | $0.68 |
| Blackbox AI | Code | Meta: Llama 3.2 11B Vision Instruct | meta-llama/llama-3.2-11b-vision-instruct | $0.05 | $0.05 |
| WaveSpeed AI | Chat, Code | llama-3.2-11b-vision-instruct | meta-llama/llama-3.2-11b-vision-instruct | $0.05 | $0.05 |
| Inference | | Llama 3.2 11B Vision Instruct | meta/llama-3.2-11b-vision-instruct | $0.06 | $0.06 |
| Fireworks AI | | Llama 3.2 11B Vision Instruct | llama-v3p2-11b-vision-instruct | $0.20 | $0.20 |
| Azure OpenAI | | Llama-3.2-11B-Vision-Instruct | llama-3.2-11b-vision-instruct | $0.37 | $0.37 |
| Azure AI Services | | Llama-3.2-11B-Vision-Instruct | llama-3.2-11b-vision-instruct | $0.37 | $0.37 |
| ValorGPT | | Llama 3.2 11B Vision Instruct | meta-llama-llama-3.2-11b-vision-instruct | - | - |
| Yupp | Chat | Llama 3.2 11B Vision Instruct (Azure) | Llama-3.2-11B-Vision-Instruct | - | - |
| Yupp | Chat | Llama 3.2 11B Vision Instruct (OpenRouter) | meta-llama/llama-3.2-11b-vision-instruct | - | - |
| LangDB | | llama-3.2-11b-vision-instruct | llama-3.2-11b-vision-instruct | - | - |
| Writingmate | Chat, Code | Meta: Llama 3.2 11B Vision Instruct | meta-llama/llama-3.2-11b-vision-instruct | - | - |