Meta: Llama 4 Maverick (free)

by meta-llama

Context 128K tokens
Modalities Text, Image → Text
Max Output 4,028 tokens
Input Price $0.00 / million tokens
Output Price $0.00 / million tokens

Overview

Llama 4 Maverick 17B Instruct (128E) is a high-capacity multimodal language model from Meta, built on a mixture-of-experts (MoE) architecture with 128 experts and 17 billion active parameters per forward pass (400B total). It accepts multilingual text and image input and produces multilingual text and code output in 12 supported languages. Optimized for vision-language tasks, Maverick is instruction-tuned for assistant-like behavior, image reasoning, and general-purpose multimodal interaction. It uses early fusion for native multimodality and supports a context window of up to 1 million tokens, though this listing is capped at 128K. The model was trained on a curated mixture of public, licensed, and Meta-platform data totaling roughly 22 trillion tokens, with a knowledge cutoff of August 2024. Released on April 5, 2025 under the Llama 4 Community License, Maverick is suited for research and commercial applications that require advanced multimodal understanding and high throughput.
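
Below is a minimal sketch of sending mixed text and image input to the model through an OpenAI-compatible chat completions API. The base URL, API key placeholder, and model identifier are assumptions for illustration only; substitute the values documented by your provider.

    from openai import OpenAI

    # Placeholder endpoint and key: substitute your provider's values.
    client = OpenAI(
        base_url="https://api.example.com/v1",
        api_key="YOUR_API_KEY",
    )

    completion = client.chat.completions.create(
        # Hypothetical model identifier for this listing; check your provider's model list.
        model="meta-llama/llama-4-maverick:free",
        max_tokens=1024,
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe what is shown in this image."},
                    {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
                ],
            }
        ],
    )
    print(completion.choices[0].message.content)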

Key Features

  • Multimodal capabilities (Text, Image → Text)
  • 128K tokens context window
  • Up to 4,028 output tokens
  • API access available
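
The 128K-token context window and 4,028-token output cap translate directly into request budgeting. A minimal sketch, assuming a rough 4-characters-per-token heuristic in place of the model's actual tokenizer:

    CONTEXT_WINDOW = 128_000  # listed context window, in tokens
    MAX_OUTPUT = 4_028        # listed maximum output tokens

    def fits_budget(prompt: str, requested_output: int = MAX_OUTPUT) -> bool:
        """Return True if the prompt plus the requested output should fit in the window."""
        approx_prompt_tokens = len(prompt) // 4  # crude heuristic; use a real tokenizer in practice
        return approx_prompt_tokens + min(requested_output, MAX_OUTPUT) <= CONTEXT_WINDOW

    print(fits_budget("Summarize the attached quarterly report in three bullet points."))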

Model Information

Developer: meta-llama
Release Date: April 5, 2025
Context Window: 128K tokens
Modalities: Text, Image → Text
Content Moderation: Enabled

Pricing

Input Tokens $0.00 / million tokens
Output Tokens $0.00 / million tokens
