# createLLM

Factory function for creating LLM instances — configuration, providers, and options.
## Usage

```typescript
import { createLLM } from '@ahzan-agentforge/core';

const llm = createLLM({
  provider: 'anthropic',
  model: 'claude-sonnet-4-20250514',
  maxTokens: 4096,
  temperature: 0.7,
});
```

```python
from agentforge import create_llm

llm = create_llm(
    provider="anthropic",
    model="claude-sonnet-4-20250514",
    max_tokens=4096,
    temperature=0.7,
)
```

## LLMConfig
```typescript
interface LLMConfig {
  provider: LLMProvider;  // 'anthropic' | 'openai' | 'gemini' | 'ollama'
  model: string;          // Model identifier
  maxTokens?: number;     // Max output tokens
  temperature?: number;   // Sampling temperature (0-1)
  apiKey?: string;        // API key (overrides env var)
  baseUrl?: string;       // Custom API base URL
}
```

## Provider Routing
createLLM() routes to the correct provider implementation:
| Provider | Class | API Key Env Var |
|---|---|---|
| `'anthropic'` | `AnthropicLLM` | `ANTHROPIC_API_KEY` |
| `'openai'` | `OpenAILLM` | `OPENAI_API_KEY` |
| `'gemini'` | `GeminiLLM` | `GOOGLE_AI_API_KEY` |
| `'ollama'` | `OllamaLLM` | None (local) |
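The table above can be read as a dispatch on `provider`. A minimal sketch of that routing, assuming a simple switch internally (the classes here are empty stand-ins, not the package's actual implementation):

```typescript
type LLMProvider = 'anthropic' | 'openai' | 'gemini' | 'ollama';

interface LLMConfig {
  provider: LLMProvider;
  model: string;
  maxTokens?: number;
  temperature?: number;
  apiKey?: string;
  baseUrl?: string;
}

// Stand-in classes for illustration; the real ones wrap each provider's SDK.
class AnthropicLLM { constructor(readonly config: LLMConfig) {} }
class OpenAILLM { constructor(readonly config: LLMConfig) {} }
class GeminiLLM { constructor(readonly config: LLMConfig) {} }
class OllamaLLM { constructor(readonly config: LLMConfig) {} }

function createLLM(config: LLMConfig) {
  switch (config.provider) {
    case 'anthropic': return new AnthropicLLM(config);
    case 'openai':    return new OpenAILLM(config);
    case 'gemini':    return new GeminiLLM(config);
    case 'ollama':    return new OllamaLLM(config); // local: no API key required
  }
}
```

An unrecognized provider is ruled out at compile time by the `LLMProvider` union, so no default case is needed.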
## Custom Base URL

Use `baseUrl` for proxies, self-hosted models, or compatible APIs:

```typescript
const llm = createLLM({
  provider: 'openai',
  model: 'my-custom-model',
  baseUrl: 'https://my-proxy.example.com/v1',
  apiKey: 'my-key',
});
```

## ChatRequest & ChatResponse
```typescript
interface ChatRequest {
  system: string;
  messages: LLMMessage[];
  tools?: ToolDefinition[];
  temperature?: number;
  maxTokens?: number;
}

interface ChatResponse {
  type: 'text' | 'tool_calls';
  content: string;
  toolCalls: ToolCallResponse[];
  usage: { inputTokens: number; outputTokens: number };
}
```

## Type Guard
Check if an LLM supports streaming:

```typescript
import { isStreamingLLM } from '@ahzan-agentforge/core';

if (isStreamingLLM(llm)) {
  for await (const event of llm.chatStream(request)) {
    // ...
  }
}
```
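For non-streaming calls, consumers typically branch on the `type` discriminant of `ChatResponse`. A self-contained sketch (the `ToolCallResponse` fields here are assumptions for illustration, not the package's declared shape):

```typescript
// Assumed shape of a tool call for this example.
interface ToolCallResponse { name: string; input: unknown }

interface ChatResponse {
  type: 'text' | 'tool_calls';
  content: string;
  toolCalls: ToolCallResponse[];
  usage: { inputTokens: number; outputTokens: number };
}

function summarize(response: ChatResponse): string {
  if (response.type === 'tool_calls') {
    // The model asked to invoke tools; a real agent would dispatch each by name.
    return `tools: ${response.toolCalls.map((c) => c.name).join(', ')}`;
  }
  // Plain text completion.
  return response.content;
}
```

Because `type` narrows the union, the `tool_calls` branch is where tool execution belongs; `content` carries the final text otherwise.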