Meta: Llama 3.2 11B Vision Instruct

by meta-llama

Context: 131K tokens
Modalities: Text, Image → Text
Max Output: 16,384 tokens
Input Price: $0.05 / million tokens
Output Price: $0.05 / million tokens

Overview

Llama 3.2 11B Vision is a multimodal model with 11 billion parameters, designed to handle tasks that combine visual and textual data. It excels at tasks such as image captioning and visual question answering, bridging the gap between language generation and visual reasoning. Pre-trained on a large dataset of image-text pairs, it is well suited to complex image-analysis tasks that demand high accuracy. Its ability to integrate visual understanding with language processing makes it a strong fit for industries requiring visual-linguistic AI applications, such as content creation, AI-driven customer service, and research. See the [original model card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/MODEL_CARD_VISION.md) for details. Usage of this model is subject to [Meta's Acceptable Use Policy](https://www.llama.com/llama3/use-policy/).

Key Features

  • Multimodal capabilities (Text, Image → Text)
  • 131K-token context window
  • Up to 16,384 output tokens
  • API access available (see the example sketch after this list)
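
A minimal sketch of a visual question answering request through an OpenAI-compatible chat-completions API. The base URL, API key environment variable, and exact model identifier below are assumptions and will vary by provider; check your provider's documentation.

```python
# Minimal sketch: visual question answering via an OpenAI-compatible
# chat-completions endpoint. The base_url, model identifier, and API key
# environment variable are assumptions, not provider-specific values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",  # hypothetical endpoint
    api_key=os.environ["API_KEY"],          # hypothetical env var
)

response = client.chat.completions.create(
    model="meta-llama/llama-3.2-11b-vision-instruct",  # identifier may differ by provider
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
    max_tokens=512,
)

print(response.choices[0].message.content)
```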

Model Information

Developer: meta-llama
Release Date: September 25, 2024
Context Window: 131K tokens
Modalities: Text, Image → Text

Pricing

Input Tokens: $0.05 / million tokens
Output Tokens: $0.05 / million tokens
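
At these rates, cost scales linearly with token counts. A rough estimate using the listed prices (a sketch, not a provider's billing logic):

```python
# Rough cost estimate at the listed rates: $0.05 per million tokens for both input and output.
INPUT_PRICE = 0.05 / 1_000_000   # USD per input token
OUTPUT_PRICE = 0.05 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the approximate USD cost of a single request."""
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

# Example: 120,000 input tokens and 4,000 output tokens ≈ $0.0062
print(f"${estimate_cost(120_000, 4_000):.4f}")
```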
