Qwen: Qwen3 235B A22B Instruct 2507

by Qwen

Context: 262K tokens
Modalities: Text
Input Price: $0.10 / million tokens
Output Price: $0.39 / million tokens

Overview

Qwen3-235B-A22B-Instruct-2507 is a multilingual, instruction-tuned mixture-of-experts language model based on the Qwen3-235B architecture, with 22B active parameters per forward pass. It is optimized for general-purpose text generation, including instruction following, logical reasoning, math, code, and tool usage. The model supports a native 262K-token context length and does not implement "thinking mode" (<think> blocks). Compared to its base variant, this version delivers significant gains in knowledge coverage, long-context reasoning, coding benchmarks, and alignment with user preferences on open-ended tasks. It is particularly strong on multilingual understanding, math reasoning (e.g., AIME, HMMT), and alignment evaluations such as Arena-Hard and WritingBench.

Key Features

  • 262K tokens context window
  • API access available
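Since the model is reachable over an API, a minimal request might look like the sketch below, using an OpenAI-compatible client. The base URL, API key handling, and the model slug are illustrative assumptions, not confirmed values; substitute the identifiers from your provider's documentation.

```python
# Minimal sketch of a chat completion request through an OpenAI-compatible
# endpoint. The base_url and model slug are hypothetical placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.com/api/v1",  # hypothetical endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="qwen/qwen3-235b-a22b-instruct-2507",  # hypothetical slug
    messages=[
        {"role": "user", "content": "Summarize the key ideas of mixture-of-experts models."},
    ],
    max_tokens=512,
)

print(response.choices[0].message.content)
```

Because the model does not emit <think> blocks, the returned message content can be used directly without stripping any reasoning segments.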

Model Information

Developer: Qwen

Release Date: July 21, 2025

Context Window: 262K tokens

Modalities: Text

Pricing

Input Tokens: $0.10 / million tokens
Output Tokens: $0.39 / million tokens
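
As a rough worked example of what these per-million-token rates mean for a single request (the token counts below are made up for illustration):

```python
# Listed rates: $0.10 per million input tokens, $0.39 per million output tokens.
INPUT_RATE = 0.10 / 1_000_000   # USD per input token
OUTPUT_RATE = 0.39 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single request at the listed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical request: 50,000 prompt tokens and a 2,000-token completion.
# 50,000 * $0.10/1M + 2,000 * $0.39/1M = $0.005 + $0.00078 ≈ $0.0058
print(f"${estimate_cost(50_000, 2_000):.4f}")
```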
