
DeepSeek: R1 Distill Qwen 32B

deepseek/deepseek-r1-distill-qwen-32b

Created Jan 29, 2025 · 32,768 context
$0.29/M input tokens · $0.29/M output tokens
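The per-token pricing above can be turned into a quick cost estimate. A minimal sketch, using the $0.29-per-million rate from this page; treating reasoning tokens as billed at the output rate is an assumption, and the helper name is ours:

```python
# Sketch: estimating a request's cost from this model's OpenRouter pricing.
# Rates come from this page ($0.29/M tokens, input and output); billing
# reasoning tokens at the output rate is an assumption.

INPUT_RATE = 0.29 / 1_000_000   # USD per prompt token
OUTPUT_RATE = 0.29 / 1_000_000  # USD per output token

def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  reasoning_tokens: int = 0) -> float:
    """Return the estimated USD cost of one request."""
    output_tokens = completion_tokens + reasoning_tokens
    return prompt_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 2,000-token prompt with 500 completion and 1,500 reasoning tokens:
print(f"${estimate_cost(2000, 500, 1500):.6f}")
```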

DeepSeek R1 Distill Qwen 32B is a distilled large language model based on Qwen 2.5 32B, using outputs from DeepSeek R1. It outperforms OpenAI's o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.

Other benchmark results include:

  • AIME 2024 pass@1: 72.6
  • MATH-500 pass@1: 94.3
  • CodeForces rating: 1691

The model leverages fine-tuning from DeepSeek R1's outputs, enabling competitive performance comparable to larger frontier models.

Recent activity on R1 Distill Qwen 32B

Total usage per day on OpenRouter

  • Prompt: 16.9M
  • Reasoning: 11.8M
  • Completion: 36K

Prompt tokens measure input size. Reasoning tokens show internal thinking before a response. Completion tokens reflect total output length.