# MockLLM

Create scripted LLM responses for deterministic agent testing.
## createMockLLM

```typescript
import { createMockLLM } from '@ahzan-agentforge/core';

const mockLLM = createMockLLM({
  responses: [
    { text: 'Hello! How can I help?' },
    { toolCalls: [{ name: 'search', input: { query: 'test' } }] },
    { text: 'Here are the results.' },
  ],
});
```

## MockLLMConfig
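A config pairs scripted responses with optional token accounting. As a self-contained illustration (the interfaces are copied inline so the snippet type-checks on its own; the assumption that `defaultUsage` supplies token counts for responses that omit their own `usage` is mine, not stated by the library):

```typescript
// Interfaces reproduced from this page so the example stands alone.
interface MockResponse {
  text?: string;
  toolCalls?: Array<{ name: string; input: unknown }>;
  usage?: { inputTokens: number; outputTokens: number };
}

interface MockLLMConfig {
  responses: MockResponse[];
  defaultUsage?: { inputTokens: number; outputTokens: number };
}

const config: MockLLMConfig = {
  responses: [
    // No usage here: assumed to fall back to defaultUsage.
    { text: 'Hello!' },
    // Explicit usage: assumed to override the default.
    { text: 'Hi!', usage: { inputTokens: 5, outputTokens: 2 } },
  ],
  defaultUsage: { inputTokens: 10, outputTokens: 20 },
};

console.log(config.responses.length); // 2
```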
```typescript
interface MockLLMConfig {
  responses: MockResponse[];
  defaultUsage?: { inputTokens: number; outputTokens: number };
}
```

## MockResponse
```typescript
interface MockResponse {
  text?: string; // Text response (NOT "content")
  toolCalls?: Array<{
    name: string;
    input: unknown;
  }>;
  usage?: {
    inputTokens: number;
    outputTokens: number;
  };
}
```

## Response Sequencing
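Sequencing boils down to consuming a fixed queue that throws once empty. A minimal self-contained stand-in (an illustration of the behavior, not the library's implementation):

```typescript
// Stand-in for MockLLM's sequencing behavior (illustration only).
interface MockResponse {
  text?: string;
  toolCalls?: Array<{ name: string; input: unknown }>;
}

class ResponseQueue {
  private index = 0;
  constructor(private responses: MockResponse[]) {}

  next(): MockResponse {
    if (this.index >= this.responses.length) {
      // Mirrors MockLLM: a call past the last scripted response throws.
      throw new Error('MockLLM exhausted: no responses left');
    }
    return this.responses[this.index++];
  }

  get exhausted(): boolean {
    return this.index >= this.responses.length;
  }
}

const queue = new ResponseQueue([
  { text: 'First call' },
  { toolCalls: [{ name: 'tool1', input: {} }] },
]);

console.log(queue.next().text); // 'First call'
console.log(queue.next().toolCalls); // the scripted tool call
console.log(queue.exhausted); // true
```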
MockLLM returns responses in order; each `chat()` call consumes the next response:
```typescript
const mock = createMockLLM({
  responses: [
    { text: 'First call' },
    { toolCalls: [{ name: 'tool1', input: {} }] },
    { text: 'Third call' },
  ],
});

// Call 1 → "First call"
// Call 2 → tool call to tool1
// Call 3 → "Third call"
// Call 4 → throws (exhausted)
```

## Assertions
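Call recording amounts to appending each incoming request to an array. A minimal self-contained stand-in (not the library's code) shows the idea:

```typescript
// Stand-in showing how call recording can work (illustration only).
interface ChatRequest {
  system: string;
  messages: Array<{ role: string; content: string }>;
}

class RecordingMock {
  calls: ChatRequest[] = [];

  get callCount(): number {
    return this.calls.length;
  }

  async chat(request: ChatRequest): Promise<{ text: string }> {
    this.calls.push(request); // every request is kept for later assertions
    return { text: 'ok' };
  }
}

const recorder = new RecordingMock();
// chat() records synchronously before resolving, so checks can run right away.
recorder.chat({ system: 'test', messages: [{ role: 'user', content: 'hi' }] });
console.log(recorder.callCount); // 1
console.log(recorder.calls[0].system); // 'test'
```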
MockLLM records all calls for assertions:
```typescript
const mock = createMockLLM({ responses: [{ text: 'ok' }] });

await mock.chat({ system: 'test', messages: [{ role: 'user', content: 'hi' }] });

console.log(mock.callCount); // 1
console.log(mock.calls);     // [ChatRequest]
console.log(mock.exhausted); // true
```

## Streaming Support
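Simulated streaming is essentially an async generator that splits scripted text into delta events. A self-contained sketch (the event shape here is an assumption for illustration, not the library's exact types):

```typescript
// Sketch: emit a scripted response as per-character delta events.
interface StreamEvent {
  type: 'text_delta';
  text: string;
}

async function* fakeStream(text: string): AsyncGenerator<StreamEvent> {
  for (const char of text) {
    yield { type: 'text_delta', text: char };
  }
}

async function collect(text: string): Promise<string> {
  let out = '';
  for await (const event of fakeStream(text)) {
    out += event.text; // reassemble the full response from deltas
  }
  return out;
}

collect('Hello!').then((full) => console.log(full)); // prints 'Hello!'
```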
MockLLM implements the `StreamingLLM` interface, simulating streaming by yielding tokens one at a time:
```typescript
for await (const event of mock.chatStream(request)) {
  // Yields text_delta events for each character
}
```

## Reset
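Resetting just rewinds the internal counters so one mock instance can be reused across test cases. A self-contained stand-in (illustration of the semantics, not the library's implementation):

```typescript
// Stand-in: reset() rewinds both the call count and the response index.
class ResettableMock {
  callCount = 0;
  private index = 0;
  constructor(private responses: string[]) {}

  chat(): string {
    this.callCount++;
    if (this.index >= this.responses.length) {
      throw new Error('exhausted');
    }
    return this.responses[this.index++];
  }

  reset(): void {
    // The same scripted responses replay from the start afterwards.
    this.callCount = 0;
    this.index = 0;
  }
}

const shared = new ResettableMock(['ok']);
shared.chat();
shared.reset();
console.log(shared.callCount); // 0
console.log(shared.chat()); // 'ok' (sequence replays from the start)
```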
```typescript
mock.reset(); // Reset call count and response index
```

## Next Steps
- TestHarness — run agents with MockLLM
- Recipes — common patterns