LLM Models#
Every localization engine on Lingo.dev uses LLM models to produce translations. You choose which model handles each locale pair, configure fallback chains for reliability, and use wildcard locales to set defaults — all without managing API keys or provider accounts.
Available models#
Lingo.dev provides access to 400+ models from every major provider through a single platform:
| Provider | Notable models |
|---|---|
| OpenAI | GPT-4o, GPT-4 Turbo, o3, o4-mini |
| Anthropic | Claude Opus, Claude Sonnet, Claude Haiku |
| Google | Gemini 2.0 Flash, Gemini Pro (up to 1M token context) |
| Meta | Llama 3.3 70B, Llama 3.1 405B |
| Mistral | Mistral Large, Mixtral |
| DeepSeek | DeepSeek V3 |
The full model catalog — with context window sizes — is available on the LLM Models research page.
No provider accounts needed
You don't need API keys from individual providers. Lingo.dev handles authentication, billing, and routing to all models through a unified infrastructure.
Model configs#
A model config assigns a specific model to a source-target locale pair within a localization engine.
| Field | Description |
|---|---|
| Provider | The model provider (e.g., openai, anthropic, google) |
| Model | The specific model (e.g., gpt-4o, claude-sonnet-4-5-20250514) |
| Source locale | The source locale, or * for any source |
| Target locale | The target locale, or * for any target |
| Rank | Priority in the fallback chain (0 = primary, 1 = first fallback, ...) |
When the engine receives a translation request, it selects the most specific matching config based on the source and target locales.
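To make the matching rule concrete, here is a minimal sketch of what a model config and its matching predicate could look like. The field names are illustrative, not the actual Lingo.dev API:

```typescript
// Hypothetical shape of a model config; field names mirror the table
// above but are not the real Lingo.dev schema.
interface ModelConfig {
  provider: string;      // e.g. "openai"
  model: string;         // e.g. "gpt-4o"
  sourceLocale: string;  // exact locale or "*"
  targetLocale: string;  // exact locale or "*"
  rank: number;          // 0 = primary, 1 = first fallback, ...
}

// A config matches a request when each locale field is either an
// exact match or the wildcard "*".
function matches(c: ModelConfig, source: string, target: string): boolean {
  return (
    (c.sourceLocale === "*" || c.sourceLocale === source) &&
    (c.targetLocale === "*" || c.targetLocale === target)
  );
}
```

Among all matching configs, the engine then prefers the most specific one, as described under fallback ordering below.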
Defaults and customization#
The Lingo.dev team has been researching which models produce the best translations for each language pair since GPT-3. When you create a new localization engine, it comes pre-configured with sensible model defaults — primary models and fallbacks selected based on that research, optimized for quality across common and low-resource languages alike.
These defaults are a starting point. You can edit any model config, swap providers, add fallbacks, or override specific locale pairs with models you prefer. The engine's model configuration is fully yours to control.
Fallback chains#
LLMs evolve fast — new models ship weekly, capabilities improve with each generation, and pricing drops as competition intensifies. But that velocity comes at a cost: provider outages, rate limits, content filter changes, and model deprecations are routine. A production localization pipeline that depends on a single model is a pipeline that will eventually break.
Lingo.dev's localization engine is purpose-built for production-grade translation workflows. Each locale pair supports a ranked fallback chain — if the primary model fails, the engine automatically and transparently tries the next model in the chain, without any intervention or failed requests reaching your users.
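Conceptually, the fallback behavior is a try-in-order loop. The sketch below is an illustration of that idea, not the engine's internals; `callModel` stands in for a provider request and is an assumption:

```typescript
// Illustrative fallback loop: try each model in ranked order, return
// the first success, surface the last error only if all models fail.
async function translateWithFallback(
  chain: string[],                                             // ranked model IDs
  callModel: (model: string, text: string) => Promise<string>, // assumed provider call
  text: string,
): Promise<string> {
  let lastError: unknown;
  for (const model of chain) {
    try {
      // The first model that succeeds handles the request.
      return await callModel(model, text);
    } catch (err) {
      // Outage, rate limit, content filter — record and try the next model.
      lastError = err;
    }
  }
  // Every model in the chain failed; propagate the last error.
  throw lastError;
}
```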
How fallback ordering works#
The engine sorts available configs by specificity, then by rank:
- Target locale specificity — exact target locale beats wildcard
- Source locale specificity — exact source locale beats wildcard
- Rank — lower rank = higher priority
Example#
Given these configs for an engine:
| Source | Target | Model | Rank |
|---|---|---|---|
| en | de | GPT-4o | 0 |
| en | de | Claude Sonnet | 1 |
| * | de | Gemini Flash | 0 |
| * | * | GPT-4o-mini | 0 |
A request translating en → de tries models in this order:
1. GPT-4o — exact match, rank 0 (primary)
2. Claude Sonnet — exact match, rank 1 (first fallback)
3. Gemini Flash — wildcard source, exact target, rank 0
4. GPT-4o-mini — wildcard both, rank 0 (catch-all)
A request translating fr → de skips the first two (source doesn't match) and starts at Gemini Flash.
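The ordering rules can be expressed as a filter-then-sort over the configs. This is a sketch of the specificity comparison under the assumptions above (the `Config` shape and function names are illustrative):

```typescript
// Illustrative config shape mirroring the example table above.
interface Config { source: string; target: string; model: string; rank: number }

// Filter to matching configs, then sort: exact target beats "*",
// then exact source beats "*", then lower rank first.
function fallbackOrder(configs: Config[], src: string, tgt: string): string[] {
  const spec = (value: string, want: string) => (value === want ? 0 : 1); // 0 = exact
  return configs
    .filter(c => (c.source === src || c.source === "*") &&
                 (c.target === tgt || c.target === "*"))
    .sort((a, b) =>
      spec(a.target, tgt) - spec(b.target, tgt) ||
      spec(a.source, src) - spec(b.source, src) ||
      a.rank - b.rank)
    .map(c => c.model);
}

// The configs from the example table.
const configs: Config[] = [
  { source: "en", target: "de", model: "GPT-4o",        rank: 0 },
  { source: "en", target: "de", model: "Claude Sonnet", rank: 1 },
  { source: "*",  target: "de", model: "Gemini Flash",  rank: 0 },
  { source: "*",  target: "*",  model: "GPT-4o-mini",   rank: 0 },
];

console.log(fallbackOrder(configs, "en", "de"));
// ["GPT-4o", "Claude Sonnet", "Gemini Flash", "GPT-4o-mini"]
console.log(fallbackOrder(configs, "fr", "de"));
// ["Gemini Flash", "GPT-4o-mini"]
```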
Fallback tracking
When a fallback model handles a request, the engine records it in the request log. Monitor fallback usage in Reports to identify unreliable primary models.
Wildcard locales#
Set source or target locale to * to create default configs that apply when no locale-specific config exists.
Common patterns:
| Source | Target | Model | Purpose |
|---|---|---|---|
| * | * | GPT-4o | Catch-all default for any locale pair |
| en | * | Claude Sonnet | Default for all English-source translations |
| * | ja | GPT-4o | Use a specific model for Japanese targets |
| en | de | Mistral Large | Override default for this specific pair |
Specific configs always take priority over wildcard configs. Use wildcards to set sensible defaults, then override for locale pairs that need special handling.
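The priority rule can be demonstrated with the patterns table above. This sketch resolves which single config wins for a given pair; the data mirrors the table, while the `resolve` function and `Cfg` shape are illustrative assumptions:

```typescript
// Illustrative config shape; data mirrors the common-patterns table above.
interface Cfg { source: string; target: string; model: string }

const defaults: Cfg[] = [
  { source: "*",  target: "*",  model: "GPT-4o" },        // catch-all
  { source: "en", target: "*",  model: "Claude Sonnet" }, // English-source default
  { source: "*",  target: "ja", model: "GPT-4o" },        // Japanese-target default
  { source: "en", target: "de", model: "Mistral Large" }, // specific-pair override
];

// Return the most specific matching model: exact target beats "*",
// then exact source beats "*".
function resolve(src: string, tgt: string): string {
  const spec = (value: string, want: string) => (value === want ? 0 : 1);
  const [best] = defaults
    .filter(c => (c.source === src || c.source === "*") &&
                 (c.target === tgt || c.target === "*"))
    .sort((a, b) =>
      spec(a.target, tgt) - spec(b.target, tgt) ||
      spec(a.source, src) - spec(b.source, src));
  return best.model;
}

console.log(resolve("en", "de")); // "Mistral Large" — the pair override wins
console.log(resolve("en", "fr")); // "Claude Sonnet" — English-source default
console.log(resolve("pt", "ja")); // "GPT-4o" — Japanese-target default
```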
Managing model configs via MCP#
If you use the Lingo.dev MCP server, your AI coding assistant can configure models directly:
"Set GPT-4o as the primary model for English to German,
with Claude Sonnet as fallback.""Add a catch-all model config using GPT-4o-mini for
all locale pairs."