Playground#
The playground lets you test translations interactively — type text, pick locales, and see results from your localization engines in real time. It's a side-by-side comparison tool for evaluating how different engines and models translate the same input.
Comparison modes#
The playground operates in two modes, switchable via the control bar:
Engine vs Model#
Compares the output of a fully configured localization engine against a raw LLM model. The left panel runs your engine — with its brand voice, glossary, instructions, and model config applied. The right panel sends the same text to a bare model with no engine context.
This is the default mode. Use it to see the value your engine configuration adds on top of the base model's translation ability.
Engine vs Engine#
Compares the output of two different localization engines side by side. Both panels run full engine translations — each applying its own brand voice, glossary, instructions, and model configuration.
Use this to compare engines configured for different strategies: different models, different glossary coverage, or different instruction sets.
How to use#
- Select locales — pick the source and target language from the locale selectors
- Choose what to compare — select an engine (or model) in each panel
- Enter text — type or paste the source text you want to translate
- Translate — click the Translate button on each panel
Each panel translates independently, so you can run one side, adjust the text, and re-run without affecting the other.
Engine translations#
When the playground translates through an engine, the request flows through the full localization pipeline — exactly as it would via the API:
- Brand voice for the target locale is applied
- Glossary items are retrieved via semantic search
- Instructions matching the target locale are included
- The model config for the locale pair determines which LLM processes the request
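The steps above can be sketched in code. This is an illustrative mock of how an engine might assemble context before the LLM call — every name here (`build_engine_request`, the engine dict shape, the substring glossary match standing in for semantic search) is a hypothetical sketch, not the product's actual API:

```python
# Hypothetical sketch of the engine translation pipeline described above.
# All names and data shapes are illustrative, not the product's real API.

def build_engine_request(engine, text, source_locale, target_locale):
    """Assemble the engine context added on top of the raw model call."""
    # Brand voice for the target locale is applied
    brand_voice = engine["brand_voice"].get(target_locale, "")
    # Glossary items are retrieved via semantic search (stubbed here
    # as a simple substring match for illustration)
    glossary = [g for g in engine["glossary"]
                if g["term"].lower() in text.lower()]
    # Instructions matching the target locale are included
    instructions = [i["text"] for i in engine["instructions"]
                    if i["locale"] in (target_locale, "*")]
    # The model config for the locale pair determines which LLM runs
    model = engine["model_config"].get(
        (source_locale, target_locale), engine["model_config"]["default"])
    return {
        "model": model,
        "brand_voice": brand_voice,
        "glossary": glossary,
        "instructions": instructions,
        "text": text,
        "locales": (source_locale, target_locale),
    }

# Example engine configuration (entirely made up for illustration)
engine = {
    "brand_voice": {"de": "Friendly and informal (du-form)."},
    "glossary": [{"term": "dashboard", "de": "Dashboard"}],
    "instructions": [{"locale": "de", "text": "Keep product names in English."}],
    "model_config": {("en", "de"): "model-large", "default": "model-small"},
}

request = build_engine_request(engine, "Open the dashboard", "en", "de")
```

The key point is that the model only ever sees the fully assembled request: the playground and the API build the same structure, so their outputs are directly comparable.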
Playground translations are logged in request logs with trigger type playground, so they appear in Reports alongside API and integration requests.
Raw model translations#
In Engine vs Model mode, the right panel sends text directly to a bare LLM — no brand voice, no glossary, no instructions. The model receives only the source text and locale pair with a basic translation prompt.
This baseline comparison shows what the model produces on its own. The difference between the raw model output and the engine output is the measurable impact of your engine configuration.
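By contrast, the raw-model side can be sketched as a single prompt builder. The prompt wording below is illustrative only (the actual prompt is not documented); the point is that nothing beyond the source text and locale pair is included:

```python
# Hypothetical sketch of the raw-model baseline: only the source text
# and the locale pair reach the model, wrapped in a basic translation
# prompt. The wording is illustrative, not the product's actual prompt.

def build_raw_prompt(text, source_locale, target_locale):
    # No brand voice, no glossary, no instructions -- just the basics
    return (
        f"Translate the following text from {source_locale} "
        f"to {target_locale}. Return only the translation.\n\n{text}"
    )

prompt = build_raw_prompt("Open the dashboard", "en", "de")
```

Diffing the output of this bare call against the engine's output isolates exactly what your engine configuration contributes.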