Ollama
Local AI translation with Ollama and Lingo.dev Compiler
What is Ollama?
Ollama lets you run large language models locally on your machine. Because everything runs locally, you get complete privacy, zero API costs, and offline operation. You can choose from models such as Llama, Mistral, Gemma, and Qwen. Ollama works with Lingo.dev Compiler, allowing you to translate your application content using local models.
Getting Started
Step 1. Install Ollama
macOS
brew install ollama
Linux
curl -fsSL https://ollama.com/install.sh | sh
Windows
- Navigate to ollama.com/download.
- Download and run the installer.
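On any platform, you can verify the installation by checking the version from a terminal:
ollama --version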
Step 2. Start Ollama
If Ollama is not already running, start it with the following command:
ollama serve
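By default, Ollama listens on http://localhost:11434. To confirm it is running, send a quick request; it should respond with a short status message:
curl http://localhost:11434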
Step 3. Download a Model
ollama pull llama3.1
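This guide uses llama3.1 as the example model, but any model from the Ollama library works. To confirm the download, list the models available locally:
ollama list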
Using Ollama
To enable Ollama, set the models property in the compiler options:
import react from "@vitejs/plugin-react";
import lingoCompiler from "lingo.dev/compiler";
import { type UserConfig } from "vite";

// https://vite.dev/config/
const viteConfig: UserConfig = {
  plugins: [react()],
};

const withLingo = lingoCompiler.vite({
  sourceRoot: "src",
  lingoDir: "lingo",
  sourceLocale: "en",
  targetLocales: ["es", "fr", "de", "ja"],
  rsc: false,
  useDirective: false,
  debug: false,
  models: {
    "*:*": "ollama:llama3.1",
  },
});

export default withLingo(viteConfig);
The models property accepts an object where:
- the keys are source:target locale pairs, with * representing any locale
- the values are model identifiers (e.g., ollama:llama3.1)
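For intuition, here is a minimal sketch of how such a lookup could resolve, assuming exact pairs take precedence over single-wildcard patterns, which in turn beat the *:* catch-all (hypothetical code; the compiler's actual resolution order may differ):

type ModelsMap = Record<string, string>;

// Hypothetical resolver: tries the most specific key first,
// then the wildcard patterns, then the "*:*" catch-all.
function resolveModel(
  models: ModelsMap,
  source: string,
  target: string,
): string | undefined {
  const candidates = [`${source}:${target}`, `${source}:*`, `*:${target}`, "*:*"];
  return candidates.map((key) => models[key]).find((model) => model !== undefined);
}

// Example: with only a catch-all entry, "en:ja" resolves to "ollama:llama3.1".
resolveModel({ "*:*": "ollama:llama3.1" }, "en", "ja");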
You can use Ollama to translate:
- between all locales
- from a specific source locale
- to a specific target locale
- between a specific source and target locale
Translating all locales
Use the wildcard pattern *:* to apply the same Ollama model to all translation pairs:
const withLingo = lingoCompiler.vite({
  sourceRoot: "src",
  lingoDir: "lingo",
  sourceLocale: "en",
  targetLocales: ["es", "fr", "de", "ja", "pt", "zh"],
  models: {
    // Use Llama 3.1 for all translation pairs
    "*:*": "ollama:llama3.1",
  },
});
Translating from a specific source locale
Use a specific source locale with a wildcard target (e.g., en:*) to apply a model to all translations from that source:
const withLingo = lingoCompiler.vite({
  sourceRoot: "src",
  lingoDir: "lingo",
  sourceLocale: "en",
  targetLocales: ["es", "fr", "de", "ja", "pt", "zh"],
  models: {
    // Use Llama 3.1 70B for all translations from English
    "en:*": "ollama:llama3.1:70b",
    // Use Mixtral for translations from Spanish to any language
    "es:*": "ollama:mixtral",
    // Fallback for other source languages
    "*:*": "ollama:gemma2",
  },
});
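Note that ollama:llama3.1:70b pins a specific tag in the Ollama library; pull it with ollama pull llama3.1:70b before building. Keep in mind that 70B-parameter models require significantly more memory and disk space than the default 8B tag.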
Translating to a specific target locale
Use a wildcard source with a specific target locale (e.g., *:ja) to apply a model to all translations into that target:
const withLingo = lingoCompiler.vite({
  sourceRoot: "src",
  lingoDir: "lingo",
  sourceLocale: "en",
  targetLocales: ["es", "fr", "de", "ja", "pt", "zh"],
  models: {
    // Use specialized model for translations to Japanese
    "*:ja": "ollama:qwen2",
    // Use Mixtral for translations to Chinese
    "*:zh": "ollama:mixtral",
    // Use Mistral for translations to German
    "*:de": "ollama:mistral",
    // Default for other target languages
    "*:*": "ollama:llama3.1",
  },
});
Translating between specific locales
Define exact source-target pairs for fine-grained control over which model handles specific language combinations:
const withLingo = lingoCompiler.vite({
  sourceRoot: "src",
  lingoDir: "lingo",
  sourceLocale: "en",
  targetLocales: ["es", "fr", "de", "ja", "pt", "zh"],
  models: {
    // Specific pairs with optimal models
    "en:es": "ollama:gemma2", // English to Spanish
    "en:ja": "ollama:qwen2", // English to Japanese
    "en:zh": "ollama:qwen2", // English to Chinese
    "es:en": "ollama:llama3.1", // Spanish to English
    "fr:en": "ollama:mistral", // French to English
    "de:en": "ollama:phi3", // German to English
    // Fallback for unspecified pairs
    "*:*": "ollama:llama3.1",
  },
});
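Whichever patterns you use, Ollama cannot serve a model it has not downloaded, so pull every model named in your configuration first. For the configuration above:
ollama pull gemma2
ollama pull qwen2
ollama pull llama3.1
ollama pull mistral
ollama pull phi3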