Chrome is embedding AI models directly into the browser. Instead of calling cloud APIs from OpenAI or Google, web applications can access AI capabilities through built-in browser APIs. The models run locally on the user's device.
Available APIs
Summarizer API
const summarizer = await ai.summarizer.create({
  type: 'tl;dr',
  length: 'short',
});
const summary = await summarizer.summarize(longArticleText);
Summarize articles, emails, documents: anything text-heavy. No API key needed.
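Availability varies by device, so it is worth probing before creating a session. A minimal sketch, assuming the origin-trial `ai.summarizer.capabilities()` shape (which may change as the API stabilizes); `summarizerAvailability` is our own helper name:

```javascript
// Hedged sketch: probe Summarizer availability before creating a session.
// The ai.summarizer.capabilities() shape comes from the origin trial and may change.
async function summarizerAvailability() {
  if (!globalThis.ai?.summarizer?.capabilities) return 'unavailable';
  const caps = await globalThis.ai.summarizer.capabilities();
  return caps.available; // e.g. 'readily', 'after-download', or 'no'
}
```

On browsers without the API, the helper reports 'unavailable' without ever touching it.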
Translator API
const translator = await ai.translator.create({
  sourceLanguage: 'en',
  targetLanguage: 'es',
});
const spanish = await translator.translate('Hello, world!');
Real-time translation running entirely on-device. No data leaves the browser.
Writer API
const writer = await ai.writer.create({
  tone: 'formal',
  length: 'medium',
});
const draft = await writer.write('Thank the customer for their feedback');
Generate text with specific tone and length parameters.
Rewriter API
const rewriter = await ai.rewriter.create({
  tone: 'more-casual',
});
const casual = await rewriter.rewrite(formalEmail);
Rephrase content to match different contexts and audiences.
Language Detection API
const detector = await ai.languageDetector.create();
const results = await detector.detect(unknownText);
// [{ detectedLanguage: 'fr', confidence: 0.97 }]
Detect the language of user input for dynamic localization.
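Detection pairs naturally with translation. A hedged sketch chaining the two, assuming the `ai.languageDetector` and `ai.translator` shapes shown above; `toEnglish` is a hypothetical helper that returns the text unchanged when either API is missing:

```javascript
// Hedged sketch: detect the input language, then translate to English.
// Assumes the ai.languageDetector / ai.translator shapes above; returns the
// text unchanged when either API is unavailable or the text is already English.
async function toEnglish(text) {
  const ai = globalThis.ai;
  if (!ai?.languageDetector || !ai?.translator) return { text, translated: false };
  const detector = await ai.languageDetector.create();
  const [top] = await detector.detect(text); // highest-confidence result first
  if (!top || top.detectedLanguage === 'en') return { text, translated: false };
  const translator = await ai.translator.create({
    sourceLanguage: top.detectedLanguage,
    targetLanguage: 'en',
  });
  return { text: await translator.translate(text), translated: true };
}
```

Because the fallback path returns the original text, the page degrades gracefully instead of breaking.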
Why This Matters
Privacy
Data stays on the device. No text sent to external servers. Critical for sensitive applications like healthcare, legal, and finance.
Cost
No per-token API charges. Once the browser downloads the model, usage is free. This makes AI features viable for small businesses with tight budgets.
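That first download can be large, so it helps to show progress. A sketch of surfacing the one-time model download, assuming the `monitor`/`downloadprogress` option shape from Chrome's early previews (which may change); `createSummarizerWithProgress` is our own helper name:

```javascript
// Hedged sketch: surface the one-time model download to the user.
// The monitor/downloadprogress option comes from early Chrome previews
// and may change; the helper returns null when the API is unavailable.
async function createSummarizerWithProgress(onProgress) {
  if (!globalThis.ai?.summarizer) return null;
  return globalThis.ai.summarizer.create({
    monitor(m) {
      // Report a 0..1 fraction so the UI can render a progress bar
      m.addEventListener('downloadprogress', (e) => onProgress(e.loaded / e.total));
    },
  });
}
```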
Latency
No network round-trip. Responses are near-instant. Makes AI features feel native rather than cloud-dependent.
Offline Capability
Works without internet after initial model download. Useful for field workers, travel, and unreliable connections.
Limitations
- Chrome-only for now: Other browsers have not committed to matching APIs
- Model size constraints: On-device models are smaller and less capable than cloud models
- Hardware requirements: Needs modern hardware with sufficient RAM and GPU
- Inconsistent availability: Not all devices will support all APIs
- No fine-tuning: You use the model as-is, no customization
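Given that inconsistency, a simple capability probe helps decide which features to enable. A minimal sketch, using the origin-trial `ai` namespace names from the examples above; `availableAiApis` is a hypothetical helper:

```javascript
// Hedged sketch: list which built-in AI APIs this browser exposes,
// using the origin-trial `ai` namespace names from the examples above.
function availableAiApis() {
  const ai = globalThis.ai;
  if (!ai) return []; // no built-in AI at all
  return ['summarizer', 'translator', 'writer', 'rewriter', 'languageDetector']
    .filter((name) => name in ai);
}
```

An empty array means every feature should route to its fallback.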
Progressive Enhancement Pattern
async function summarizeContent(text) {
  // Try browser-native AI first
  if ('ai' in window && 'summarizer' in window.ai) {
    const summarizer = await ai.summarizer.create();
    return summarizer.summarize(text);
  }
  // Fall back to a cloud API when the built-in one is unavailable
  const response = await fetch('/api/summarize', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text }),
  });
  return response.json(); // the endpoint is expected to return the summary
}
Use browser AI when available, fall back to your cloud API when not. Users get the best experience their device supports.
Practical Applications
- Content tools: Summarize, translate, and rewrite without API costs
- Form assistance: Help users write better form submissions
- Search enhancement: Improve search with language understanding
- Accessibility: Real-time content simplification for reading difficulties
- Customer support: Draft responses for support agents
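The form-assistance case above can be sketched with the Writer API, reusing the `ai.writer` shape shown earlier. `suggestDraft` is a hypothetical helper, not part of any API; it returns null when the browser lacks the Writer API so the form keeps working:

```javascript
// Hedged sketch of form assistance: draft a reply from a short instruction
// using the ai.writer shape shown earlier. suggestDraft is a hypothetical
// helper; it returns null when the API is unavailable so the form still works.
async function suggestDraft(instruction) {
  if (!globalThis.ai?.writer) return null;
  const writer = await globalThis.ai.writer.create({ tone: 'formal', length: 'short' });
  return writer.write(instruction);
}
```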
Our Perspective
Browser-native AI is a significant shift. We are already experimenting with these APIs in client projects, using progressive enhancement to ensure universal compatibility while giving Chrome users an enhanced experience. The privacy and cost benefits make it especially compelling for small business applications.