Meituan: LongCat Flash Chat

meituan/longcat-flash-chat

Created Sep 9, 2025 · 131,072 context
$0.20/M input tokens · $0.80/M output tokens

LongCat-Flash-Chat is a large-scale Mixture-of-Experts (MoE) model with 560B total parameters, of which 18.6B–31.3B (≈27B on average) are dynamically activated per input. It introduces a shortcut-connected MoE design to reduce communication overhead and achieve high throughput while maintaining training stability through advanced scaling strategies such as hyperparameter transfer, deterministic computation, and multi-stage optimization.

This release, LongCat-Flash-Chat, is a non-thinking foundation model optimized for conversational and agentic tasks. It supports long context windows up to 128K tokens and shows competitive performance across reasoning, coding, instruction following, and domain benchmarks, with particular strengths in tool use and complex multi-step interactions.
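The "dynamically activated" figure is a property of Mixture-of-Experts routing: a learned gate selects only a few experts per token, so a small slice of the 560B total parameters runs in any single forward pass. The snippet below is a generic top-k MoE gate in PyTorch, included purely to illustrate that idea; the expert count, layer sizes, and k value are arbitrary, and it does not reproduce LongCat's shortcut-connected design.

```python
# Generic top-k MoE routing sketch (illustrative only; sizes and k are
# arbitrary and this is NOT LongCat-Flash's actual architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=512, n_experts=16, k=2, d_ff=2048):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                      # x: [num_tokens, d_model]
        logits = self.gate(x)                  # router scores for every expert
        top = logits.topk(self.k, dim=-1)      # keep only the k best experts per token
        weights = F.softmax(top.values, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            expert_ids = top.indices[:, slot]
            for e in expert_ids.unique().tolist():
                mask = expert_ids == e         # tokens routed to expert e in this slot
                out[mask] += weights[mask, slot].unsqueeze(-1) * self.experts[e](x[mask])
        return out                             # only k of n_experts run per token

tokens = torch.randn(8, 512)
print(TopKMoE()(tokens).shape)                 # torch.Size([8, 512])
```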

Providers for LongCat Flash Chat

OpenRouter routes requests to the best providers that can handle your prompt size and parameters, with fallbacks to maximize uptime.
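If you want to influence this routing per request, provider preferences can be sent in the request body. A minimal sketch, assuming the provider preference fields (order, allow_fallbacks) from OpenRouter's provider routing docs; the provider names are placeholders:

```python
# Sketch: per-request provider preferences (the "provider" fields are assumed
# from OpenRouter's routing docs; the provider names below are placeholders).
import os
import requests

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "meituan/longcat-flash-chat",
        "messages": [{"role": "user", "content": "Hello!"}],
        # Preferred provider order; fall back to others if one is unavailable.
        "provider": {
            "order": ["example-provider-a", "example-provider-b"],
            "allow_fallbacks": True,
        },
    },
)
print(response.json()["choices"][0]["message"]["content"])
```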

Performance for LongCat Flash Chat

Compare different providers across OpenRouter

Apps using LongCat Flash Chat

Top public apps this month

Recent activity on LongCat Flash Chat

Total usage per day on OpenRouter

Prompt: 81.3M
Completion: 6.59M
Reasoning: 0

Prompt tokens measure input size. Reasoning tokens show internal thinking before a response. Completion tokens reflect total output length.
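These counts come from the usage object returned with each completion. A minimal sketch reading it through the OpenAI SDK; prompt_tokens and completion_tokens are standard OpenAI-compatible fields, and since LongCat-Flash-Chat is a non-thinking model, no reasoning tokens are expected:

```python
# Sketch: reading token usage from a completion response.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)
completion = client.chat.completions.create(
    model="meituan/longcat-flash-chat",
    messages=[{"role": "user", "content": "Give me one sentence about Meituan."}],
)
print("prompt tokens:", completion.usage.prompt_tokens)          # input size
print("completion tokens:", completion.usage.completion_tokens)  # output length
```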

Uptime stats for LongCat Flash Chat

Uptime across all providers

Sample code and API for LongCat Flash Chat

OpenRouter normalizes requests and responses across providers for you.

OpenRouter provides an OpenAI-compatible completion API to 400+ models and providers that you can call directly or through the OpenAI SDK. Some third-party SDKs are also available.
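A minimal direct call to the chat completions endpoint, with this model's slug, looks like the sketch below; the request body is the standard OpenAI-compatible shape:

```python
# Sketch: calling the OpenAI-compatible endpoint directly with this model's slug.
import os
import requests

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "meituan/longcat-flash-chat",
        "messages": [
            {"role": "user", "content": "What tools would you call to plan a delivery route?"}
        ],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```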

In the examples below, the OpenRouter-specific headers are optional. Setting them allows your app to appear on the OpenRouter leaderboards.
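The same call through the OpenAI SDK, with the optional attribution headers set; the site URL and app name are placeholders for your own:

```python
# Sketch: OpenAI SDK against OpenRouter, with the optional attribution headers.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)
completion = client.chat.completions.create(
    model="meituan/longcat-flash-chat",
    extra_headers={
        "HTTP-Referer": "https://example.com",  # placeholder: your app's URL
        "X-Title": "My Example App",            # placeholder: your app's name
    },
    messages=[{"role": "user", "content": "Hello!"}],
)
print(completion.choices[0].message.content)
```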

Using third-party SDKs

For information about using third-party SDKs and frameworks with OpenRouter, please see our frameworks documentation.

See the Request docs for all possible fields, and Parameters for explanations of specific sampling parameters.
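For example, common sampling parameters can be passed directly on the call; the values below are arbitrary, and the Parameters docs describe which ones each provider honors:

```python
# Sketch: passing common sampling parameters (values are arbitrary examples).
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)
completion = client.chat.completions.create(
    model="meituan/longcat-flash-chat",
    messages=[{"role": "user", "content": "Write a haiku about delivery couriers."}],
    temperature=0.7,   # higher = more varied output
    top_p=0.9,         # nucleus sampling cutoff
    max_tokens=256,    # cap on completion length
)
print(completion.choices[0].message.content)
```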