
    Qwen: Qwen3 235B A22B Thinking 2507

    qwen/qwen3-235b-a22b-thinking-2507

    Created Jul 25, 2025 · 262,144 context
    $0.11/M input tokens · $0.60/M output tokens

    Qwen3-235B-A22B-Thinking-2507 is a high-performance, open-weight Mixture-of-Experts (MoE) language model optimized for complex reasoning tasks. It activates 22B of its 235B parameters per forward pass and natively supports up to 262,144 tokens of context. This "thinking-only" variant enhances structured logical reasoning, mathematics, science, and long-form generation, showing strong benchmark performance across AIME, SuperGPQA, LiveCodeBench, and MMLU-Redux. It always operates in reasoning mode, closing its internal deliberation with a </think> tag before the final answer, and is designed for high-token outputs (up to 81,920 tokens) in challenging domains.

    The model is instruction-tuned and excels at step-by-step reasoning, tool use, agentic workflows, and multilingual tasks. This release represents the most capable open-source variant in the Qwen3-235B series, surpassing many closed models in structured reasoning use cases.
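    As a rough sketch of how this model is reached programmatically, the example below calls OpenRouter's OpenAI-compatible chat completions endpoint with the model slug shown above. The requests dependency, the OPENROUTER_API_KEY environment variable, the prompt, and the max_tokens value are illustrative assumptions, not details from this page.

    # Minimal sketch: query qwen/qwen3-235b-a22b-thinking-2507 via OpenRouter's
    # chat completions API. Assumes `requests` is installed and an API key is
    # exported as OPENROUTER_API_KEY; prompt and max_tokens are placeholders.
    import os
    import requests

    resp = requests.post(
        "https://openrouter.ai/api/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
        json={
            "model": "qwen/qwen3-235b-a22b-thinking-2507",
            "messages": [
                {"role": "user", "content": "Prove that the sum of two even integers is even."}
            ],
            # The model targets long reasoning traces, so leave generous output room.
            "max_tokens": 4096,
        },
        timeout=300,
    )
    resp.raise_for_status()
    data = resp.json()
    print(data["choices"][0]["message"]["content"])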

    Recent activity on Qwen3 235B A22B Thinking 2507

    Total usage per day on OpenRouter

    Prompt: 26.4M
    Reasoning: 5.97M
    Completion: 1.29M

    Prompt tokens measure input size. Reasoning tokens show internal thinking before a response. Completion tokens reflect total output length.
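
    To make these counters concrete, here is a hedged sketch of reading token usage from the response returned by the request above. The prompt_tokens and completion_tokens fields follow the OpenAI-compatible schema; the nested reasoning-token field is an assumption and may be absent depending on provider and settings.

    # Inspect token accounting on the `data` dict from the earlier request.
    usage = data.get("usage", {})
    print("prompt tokens:    ", usage.get("prompt_tokens"))
    print("completion tokens:", usage.get("completion_tokens"))
    # Reasoning-token detail is an assumed field name and may not be returned.
    reasoning = usage.get("completion_tokens_details", {}).get("reasoning_tokens")
    if reasoning is not None:
        print("reasoning tokens: ", reasoning)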