# LLM Providers
Sentinel works with Gemini, OpenAI, Claude, and Ollama out of the box. Swap the underlying model with a single provider option — no other code changes needed.
## Overview
| Provider | Class | Default model | Peer dependency | Notes |
|---|---|---|---|---|
| Gemini | GeminiProvider | gemini-3-flash-preview | none (bundled) | Default provider. Set GEMINI_VERSION env to override model. |
| OpenAI | OpenAIProvider | gpt-4o | npm install openai | Supports any OpenAI-compatible API via baseURL. |
| Claude | ClaudeProvider | claude-sonnet-4-6 | npm install @anthropic-ai/sdk | All Claude 4.x models supported. |
| Ollama | OllamaProvider | (must be specified) | none (local) | Runs locally — no API key. Vision requires llava or bakllava. |
All providers automatically retry with exponential backoff on rate-limit and service-unavailable errors (HTTP 429/503), connection resets, and timeouts.
## Gemini (built-in, no extra package)
The default provider. No peer dependency — Gemini support is bundled with Sentinel. Get a free API key at aistudio.google.com. The free tier covers thousands of runs.
```ts
import { Sentinel } from '@isoldex/sentinel';

// Shorthand — uses Gemini implicitly
const sentinel = new Sentinel({ apiKey: process.env.GEMINI_API_KEY });
```

Explicit constructor (to pin a specific model):
```ts
import { Sentinel, GeminiProvider } from '@isoldex/sentinel';

const sentinel = new Sentinel({
  apiKey: '',
  provider: new GeminiProvider({
    apiKey: process.env.GEMINI_API_KEY,
    model: 'gemini-3-flash-preview', // or set GEMINI_VERSION in .env
  }),
});
```

Environment variables:

```dotenv
GEMINI_API_KEY=your_key_here
GEMINI_VERSION=gemini-3-flash-preview # optional, this is the default
```

## OpenAI (`npm install openai`)
Supports GPT-4o and any other OpenAI model. Also works with any OpenAI-compatible API by setting `baseURL` — Groq, Together AI, Fireworks, and more.
```ts
import { Sentinel, OpenAIProvider } from '@isoldex/sentinel';

const sentinel = new Sentinel({
  apiKey: '',
  provider: new OpenAIProvider({
    apiKey: process.env.OPENAI_API_KEY,
    model: 'gpt-4o', // any OpenAI-compatible model
  }),
});
```

OpenAI-compatible APIs (Groq example):
```ts
// Works with any OpenAI-compatible API (Groq, Together, Fireworks, etc.)
import { Sentinel, OpenAIProvider } from '@isoldex/sentinel';

const sentinel = new Sentinel({
  apiKey: '',
  provider: new OpenAIProvider({
    apiKey: process.env.GROQ_API_KEY,
    model: 'llama-3.3-70b-versatile',
    baseURL: 'https://api.groq.com/openai/v1',
  }),
});
```

## Claude (`npm install @anthropic-ai/sdk`)
Supports all Claude 4.x models including Opus, Sonnet, and Haiku. Get an API key at console.anthropic.com.
```ts
import { Sentinel, ClaudeProvider } from '@isoldex/sentinel';

const sentinel = new Sentinel({
  apiKey: '',
  provider: new ClaudeProvider({
    apiKey: process.env.ANTHROPIC_API_KEY,
    model: 'claude-sonnet-4-6',
  }),
});
```

Available models: `claude-opus-4-6` · `claude-sonnet-4-6` · `claude-haiku-4-5-20251001`
## Ollama (local, no API key)
Run models locally with Ollama. No API key required. Supports vision via llava or bakllava models.
```ts
import { Sentinel, OllamaProvider } from '@isoldex/sentinel';

// Requires a running Ollama instance — https://ollama.com
const sentinel = new Sentinel({
  apiKey: '',
  provider: new OllamaProvider({
    model: 'llama3.2',
    baseURL: 'http://localhost:11434', // default
  }),
});
```

Pull a vision-capable model for `visionFallback`:

```sh
ollama pull llava
```

## Custom provider
Implement the `LLMProvider` interface to integrate any LLM — Mistral, Cohere, a fine-tuned model, or an internal API. Only `generateStructuredData` and `generateText` are required; `analyzeImage` is optional and enables `visionFallback`.
```ts
import { Sentinel } from '@isoldex/sentinel';
import type { LLMProvider, SchemaInput } from '@isoldex/sentinel';

class MyProvider implements LLMProvider {
  async generateStructuredData<T>(
    prompt: string,
    schema: SchemaInput<T>,
  ): Promise<T> {
    // Call your API, parse the response, and return typed data
    const response = await myApi.complete(prompt, schema);
    return JSON.parse(response.text) as T;
  }

  async generateText(
    prompt: string,
    systemInstruction?: string,
  ): Promise<string> {
    const response = await myApi.complete(prompt, { system: systemInstruction });
    return response.text;
  }

  // Optional — enables vision grounding
  async analyzeImage(
    prompt: string,
    imageBase64: string,
    mimeType?: string,
  ): Promise<string> {
    const response = await myApi.vision(prompt, imageBase64, mimeType);
    return response.text;
  }
}

const sentinel = new Sentinel({ apiKey: '', provider: new MyProvider() });
```

## Retry behavior
All four built-in providers implement automatic retry with exponential backoff. No configuration is required:

- up to 3 attempts
- exponential backoff starting at 1s (1s → 2s → 4s)
- triggered on HTTP 429, 503, `ECONNRESET`, and timeouts

To adjust this behavior, implement a custom provider with your own retry logic.
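For a custom provider, the built-in behavior can be approximated with a small wrapper. The sketch below is illustrative — the helper name `withRetry` and its options are not part of Sentinel's API:

```typescript
// Illustrative retry helper — not part of Sentinel's API.
// Retries a failing async call with exponential backoff (1s → 2s → 4s by default).
async function withRetry<T>(
  fn: () => Promise<T>,
  { attempts = 3, baseDelayMs = 1000 } = {},
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === attempts - 1) break; // out of attempts — rethrow below
      // Double the delay on each failure: base, 2x, 4x, …
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```

A custom provider could wrap each outgoing API call in `withRetry`, optionally inspecting the caught error first so that only retryable failures (429/503, connection resets, timeouts) trigger another attempt.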