# Ollama
Run local models with Ollama — no API key required.
## Setup
Install Ollama and pull a model:
```bash
ollama pull llama3
```

## Usage
```ts
import { createLLM } from '@ahzan-agentforge/core';

const llm = createLLM({
  provider: 'ollama',
  model: 'llama3',
});
```

## Custom Endpoint
By default, Ollama runs at `http://localhost:11434`. Override this with `baseUrl`:
```ts
const llm = createLLM({
  provider: 'ollama',
  model: 'llama3',
  baseUrl: 'http://my-server:11434',
});
```

## Model Selection
Any model pulled into your Ollama instance can be used:
```ts
// Code-focused model
const codeLlm = createLLM({ provider: 'ollama', model: 'codellama' });

// Small, fast model
const fastLlm = createLLM({ provider: 'ollama', model: 'phi3' });
```

## Limitations
- Tool calling support varies by model and is generally less reliable than with cloud providers
- Performance depends on your hardware
- No usage-based cost tracking (tokens are free)
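Even without billing, token counts are still observable: Ollama's generate responses report input and output token counts in the `prompt_eval_count` and `eval_count` fields. A minimal sketch of deriving usage figures from a response (the `localUsage` helper is hypothetical, not part of the library):

```typescript
// Shape of the usage-related fields in an Ollama generate response.
// Both counts may be absent (e.g. on a fully cached prompt), so they
// are optional and default to zero.
interface OllamaResponse {
  prompt_eval_count?: number; // input tokens
  eval_count?: number;        // output tokens
}

// Hypothetical helper: summarize token usage from a response.
function localUsage(res: OllamaResponse) {
  const inputTokens = res.prompt_eval_count ?? 0;
  const outputTokens = res.eval_count ?? 0;
  return { inputTokens, outputTokens, totalTokens: inputTokens + outputTokens };
}
```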
Next Steps
- Streaming — streaming with Ollama
- Custom LLM Provider — implement your own provider
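As a preview of the custom-provider route, a provider can be thought of as anything that maps a prompt to a completion. A purely illustrative sketch, assuming a single `generate()` method; the actual interface is defined by the Custom LLM Provider guide, and the names below are placeholders:

```typescript
// Illustrative only: the real provider contract comes from
// @ahzan-agentforge/core. Here a provider is assumed to be any
// object that asynchronously turns a prompt into a completion.
interface LLMProvider {
  generate(prompt: string): Promise<string>;
}

// A toy provider that echoes its input, standing in for a real backend.
class EchoProvider implements LLMProvider {
  async generate(prompt: string): Promise<string> {
    return `echo: ${prompt}`;
  }
}
```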