# LLM Streaming

Stream LLM responses token by token — `ChatStreamEvent` types and usage.
## StreamingLLM Interface

```ts
interface StreamingLLM extends LLM {
  chatStream(request: ChatRequest): AsyncGenerator<ChatStreamEvent>;
}
```

All built-in providers implement `StreamingLLM`.
## ChatStreamEvent

```ts
type ChatStreamEvent =
  | { type: 'text_delta'; content: string }
  | { type: 'tool_call_start'; id: string; name: string }
  | { type: 'tool_call_delta'; id: string; content: string }
  | { type: 'tool_call_end'; id: string }
  | { type: 'usage'; inputTokens: number; outputTokens: number }
  | { type: 'done' };
```
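Tool-call arguments arrive as string fragments spread across `tool_call_delta` events, so a consumer typically buffers them per `id` and parses the JSON once `tool_call_end` arrives. A minimal sketch of that pattern — the hand-rolled `mockStream()` generator below stands in for a real `chatStream()` call, and its event contents are made up for illustration:

```ts
// The event union from this page; the mock generator below stands in
// for a real chatStream() call.
type ChatStreamEvent =
  | { type: 'text_delta'; content: string }
  | { type: 'tool_call_start'; id: string; name: string }
  | { type: 'tool_call_delta'; id: string; content: string }
  | { type: 'tool_call_end'; id: string }
  | { type: 'usage'; inputTokens: number; outputTokens: number }
  | { type: 'done' };

async function* mockStream(): AsyncGenerator<ChatStreamEvent> {
  yield { type: 'text_delta', content: 'Let me check.' };
  yield { type: 'tool_call_start', id: 'call_1', name: 'get_weather' };
  yield { type: 'tool_call_delta', id: 'call_1', content: '{"city":' };
  yield { type: 'tool_call_delta', id: 'call_1', content: ' "Paris"}' };
  yield { type: 'tool_call_end', id: 'call_1' };
  yield { type: 'usage', inputTokens: 12, outputTokens: 8 };
  yield { type: 'done' };
}

async function collect(stream: AsyncGenerator<ChatStreamEvent>) {
  let text = '';
  const argBuffers = new Map<string, string>(); // id -> raw JSON so far
  const toolCalls: { id: string; args: unknown }[] = [];

  for await (const event of stream) {
    switch (event.type) {
      case 'text_delta':
        text += event.content;
        break;
      case 'tool_call_start':
        argBuffers.set(event.id, '');
        break;
      case 'tool_call_delta':
        // Append the fragment to this call's buffer.
        argBuffers.set(event.id, (argBuffers.get(event.id) ?? '') + event.content);
        break;
      case 'tool_call_end':
        // The fragments form a complete JSON document only now.
        toolCalls.push({ id: event.id, args: JSON.parse(argBuffers.get(event.id) ?? '{}') });
        break;
    }
  }
  return { text, toolCalls };
}
```

Here `collect(mockStream())` resolves with the accumulated text and one parsed tool call (`{ city: 'Paris' }`).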
## Direct LLM Streaming

You can stream directly from the LLM (outside of an agent):
```ts
const llm = createLLM({ provider: 'anthropic', model: 'claude-sonnet-4-20250514' });

if (isStreamingLLM(llm)) {
  for await (const event of llm.chatStream({
    system: 'You are helpful.',
    messages: [{ role: 'user', content: 'Tell me a story' }],
  })) {
    if (event.type === 'text_delta') {
      process.stdout.write(event.content);
    }
  }
}
```
## Agent-Level Streaming

When streaming through an agent, LLM stream events are wrapped in `AgentStreamEvent`:
```ts
for await (const event of agent.stream({ task: 'Tell me a story' })) {
  if (event.type === 'llm_token') {
    // Simplified text token
    process.stdout.write(event.token);
  }
  if (event.type === 'llm_stream') {
    // Raw ChatStreamEvent
    const rawEvent = event.event;
  }
}
```
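The wrapping can be pictured as a small adapter: every raw `ChatStreamEvent` is forwarded as an `llm_stream` event, and text deltas are additionally re-emitted as simplified `llm_token` events. The sketch below is illustrative only — the trimmed-down types and the `wrapLLMStream` adapter are not the library's actual implementation, and the real `AgentStreamEvent` union has more variants:

```ts
// Illustrative, trimmed-down event shapes.
type ChatStreamEvent =
  | { type: 'text_delta'; content: string }
  | { type: 'done' };

type AgentStreamEvent =
  | { type: 'llm_token'; token: string }
  | { type: 'llm_stream'; event: ChatStreamEvent };

// Re-emit a raw LLM stream as agent-level events.
async function* wrapLLMStream(
  llm: AsyncGenerator<ChatStreamEvent>,
): AsyncGenerator<AgentStreamEvent> {
  for await (const event of llm) {
    yield { type: 'llm_stream', event }; // raw event, always forwarded
    if (event.type === 'text_delta') {
      yield { type: 'llm_token', token: event.content }; // simplified token
    }
  }
}
```

A consumer that only wants text can listen for `llm_token`; one that needs tool-call or usage detail can unwrap `llm_stream`.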
## Type Guard

```ts
import { isStreamingLLM } from '@ahzan-agentforge/core';

const llm = createLLM({ provider: 'anthropic', model: 'claude-sonnet-4-20250514' });

if (isStreamingLLM(llm)) {
  // TypeScript knows llm has chatStream()
}
```
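A guard like this is typically structural: it narrows by probing for a `chatStream` function. The sketch below shows that pattern with minimal stand-in interfaces — they are not the library's real `LLM` interface or guard, and the blocking `chat()` method on the base interface is an assumption made for the fallback example:

```ts
// Minimal stand-ins for illustration; not the library's real interfaces.
interface ChatRequest {
  system?: string;
  messages: { role: string; content: string }[];
}

interface LLM {
  chat(request: ChatRequest): Promise<string>; // assumed blocking API
}

interface StreamingLLM extends LLM {
  chatStream(request: ChatRequest): AsyncGenerator<{ type: string; content?: string }>;
}

// Structural type guard: narrow by probing for a chatStream function.
function hasChatStream(llm: LLM): llm is StreamingLLM {
  return typeof (llm as Partial<StreamingLLM>).chatStream === 'function';
}

// Fallback pattern the guard enables: stream when possible,
// otherwise make a single blocking call.
async function respond(llm: LLM, request: ChatRequest): Promise<string> {
  if (!hasChatStream(llm)) {
    return llm.chat(request);
  }
  let text = '';
  for await (const event of llm.chatStream(request)) {
    if (event.type === 'text_delta' && event.content) text += event.content;
  }
  return text;
}
```

This keeps calling code working even with a hypothetical provider that only implements the base interface.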
## Next Steps

- Agent Streaming — stream agent execution
- Overview — provider comparison