
LLM Models#

Every localization engine on Lingo.dev uses LLM models to produce translations. You choose which model handles each locale pair, configure fallback chains for reliability, and use wildcard locales to set defaults — all without managing API keys or provider accounts.


Available models#

Lingo.dev provides access to 400+ models from every major provider through a single platform:

| Provider | Notable models |
| --- | --- |
| OpenAI | GPT-4o, GPT-4 Turbo, o3, o4-mini |
| Anthropic | Claude Opus, Claude Sonnet, Claude Haiku |
| Google | Gemini 2.0 Flash, Gemini Pro (up to 1M token context) |
| Meta | Llama 3.3 70B, Llama 3.1 405B |
| Mistral | Mistral Large, Mixtral |
| DeepSeek | DeepSeek V3 |

The full model catalog — with context window sizes — is available on the LLM Models research page.

No provider accounts needed

You don't need API keys from individual providers. Lingo.dev handles authentication, billing, and routing to all models through a unified infrastructure.


Model configs#

A model config assigns a specific model to a source-target locale pair within a localization engine.

| Field | Description |
| --- | --- |
| Provider | The model provider (e.g., openai, anthropic, google) |
| Model | The specific model (e.g., gpt-4o, claude-sonnet-4-5-20250514) |
| Source locale | The source locale, or * for any source |
| Target locale | The target locale, or * for any target |
| Rank | Priority in the fallback chain (0 = primary, 1 = first fallback, ...) |

When the engine receives a translation request, it selects the most specific matching config based on the source and target locales.
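As a rough sketch, a model config and its matching rule can be modeled like this in Python. The field names, the `ModelConfig` class, and the `matches` helper are illustrative assumptions based on the table above, not the actual Lingo.dev API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelConfig:
    """One row of an engine's model configuration (field names assumed)."""
    provider: str       # e.g. "openai", "anthropic", "google"
    model: str          # e.g. "gpt-4o"
    source_locale: str  # exact locale such as "en", or "*" for any source
    target_locale: str  # exact locale such as "de", or "*" for any target
    rank: int           # 0 = primary, 1 = first fallback, ...

def matches(config: ModelConfig, source: str, target: str) -> bool:
    """A config applies when each locale field equals the request's locale or is '*'."""
    return (config.source_locale in (source, "*")
            and config.target_locale in (target, "*"))
```

With this shape, a config such as `ModelConfig("openai", "gpt-4o", "en", "de", 0)` matches an en → de request, and a wildcard config with `source_locale="*"` matches any source.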


Defaults and customization#

The Lingo.dev team has been researching which models produce the best translations for each language pair since GPT-3. When you create a new localization engine, it comes pre-configured with sensible model defaults — primary models and fallbacks selected based on that research, optimized for quality across common and low-resource languages alike.

These defaults are a starting point. You can edit any model config, swap providers, add fallbacks, or override specific locale pairs with models you prefer. The engine's model configuration is fully yours to control.


Fallback chains#

LLMs evolve fast — new models ship weekly, capabilities improve with each generation, and pricing drops as competition intensifies. But that velocity comes at a cost: provider outages, rate limits, content filter changes, and model deprecations are routine. A production localization pipeline that depends on a single model is a pipeline that will eventually break.

Lingo.dev's localization engine is purpose-built for production-grade translation workflows. Each locale pair supports a ranked fallback chain — if the primary model fails, the engine automatically and transparently tries the next model in the chain, without any intervention or failed requests reaching your users.

How fallback ordering works#

The engine sorts available configs by specificity, then by rank:

  1. Target locale specificity — exact target locale beats wildcard *
  2. Source locale specificity — exact source locale beats wildcard *
  3. Rank — lower rank = higher priority

Example#

Given these configs for an engine:

| Source | Target | Model | Rank |
| --- | --- | --- | --- |
| en | de | GPT-4o | 0 |
| en | de | Claude Sonnet | 1 |
| * | de | Gemini Flash | 0 |
| * | * | GPT-4o-mini | 0 |

A request translating en → de tries models in this order:

  1. GPT-4o — exact match, rank 0 (primary)
  2. Claude Sonnet — exact match, rank 1 (first fallback)
  3. Gemini Flash — wildcard source, exact target, rank 0
  4. GPT-4o-mini — wildcard both, rank 0 (catch-all)

A request translating fr → de skips the first two (source doesn't match) and starts at Gemini Flash.
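The ordering rules above can be sketched as a single sort key. In this sketch, configs are plain `(source, target, model, rank)` tuples, which is an illustrative representation rather than the engine's actual data model:

```python
def fallback_order(configs, source, target):
    """Applicable configs sorted by the documented rules:
    exact target beats '*', then exact source beats '*', then lower rank first."""
    applicable = [c for c in configs
                  if c[0] in (source, "*") and c[1] in (target, "*")]
    # False sorts before True, so exact locale matches come before wildcards
    return sorted(applicable, key=lambda c: (c[1] == "*", c[0] == "*", c[3]))

configs = [
    ("en", "de", "GPT-4o", 0),
    ("en", "de", "Claude Sonnet", 1),
    ("*",  "de", "Gemini Flash", 0),
    ("*",  "*",  "GPT-4o-mini", 0),
]

print([c[2] for c in fallback_order(configs, "en", "de")])
# ['GPT-4o', 'Claude Sonnet', 'Gemini Flash', 'GPT-4o-mini']
print([c[2] for c in fallback_order(configs, "fr", "de")])
# ['Gemini Flash', 'GPT-4o-mini']
```

Note that the fr → de request never sees the two en-source configs: they are filtered out before sorting, so the chain starts at Gemini Flash, exactly as described above.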

Fallback tracking

When a fallback model handles a request, the engine records it in the request log. Monitor fallback usage in Reports to identify unreliable primary models.


Wildcard locales#

Set source or target locale to * to create default configs that apply when no locale-specific config exists.

Common patterns:

| Source | Target | Model | Purpose |
| --- | --- | --- | --- |
| * | * | GPT-4o | Catch-all default for any locale pair |
| en | * | Claude Sonnet | Default for all English-source translations |
| * | ja | GPT-4o | Use a specific model for Japanese targets |
| en | de | Mistral Large | Override default for this specific pair |

Specific configs always take priority over wildcard configs. Use wildcards to set sensible defaults, then override for locale pairs that need special handling.
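A minimal sketch of how these patterns resolve, applying the same specificity rules described under fallback ordering. The `pick` helper and the `(source, target, model)` tuple layout are assumptions for illustration only:

```python
patterns = [
    ("*",  "*",  "GPT-4o"),        # catch-all default
    ("en", "*",  "Claude Sonnet"), # all English-source translations
    ("*",  "ja", "GPT-4o"),        # Japanese targets
    ("en", "de", "Mistral Large"), # override for this specific pair
]

def pick(source: str, target: str):
    """Most specific applicable pattern: exact target, then exact source, beats '*'."""
    applicable = [p for p in patterns
                  if p[0] in (source, "*") and p[1] in (target, "*")]
    return min(applicable, key=lambda p: (p[1] == "*", p[0] == "*"))

print(pick("en", "de"))  # ('en', 'de', 'Mistral Large'), the exact pair wins
print(pick("en", "fr"))  # ('en', '*', 'Claude Sonnet')
print(pick("fr", "ja"))  # ('*', 'ja', 'GPT-4o')
print(pick("fr", "es"))  # ('*', '*', 'GPT-4o'), the catch-all
```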


Managing model configs via MCP#

If you use the Lingo.dev MCP server, your AI coding assistant can configure models directly:

```text
"Set GPT-4o as the primary model for English to German,
with Claude Sonnet as fallback."
```

```text
"Add a catch-all model config using GPT-4o-mini for
all locale pairs."
```

Next steps#

  • Brand Voices: Define overall tone and formality per locale
  • Scorers: Monitor translation quality per model
  • Reports: Track model usage, fallbacks, and token consumption
  • API Reference: Integrate the localization API into your workflow