AgentForge

Quickstart

Build your first AI agent with AgentForge in under 5 minutes.

This guide walks you through creating a simple agent that can answer questions using a search tool.

1. Define a Tool

Tools give your agent capabilities. Each tool has typed input/output schemas and an execute function.

TypeScript:

import { defineTool } from '@ahzan-agentforge/core';
import { z } from 'zod';

const lookupTool = defineTool({
  name: 'lookup',
  description: 'Look up a product by name',
  input: z.object({ name: z.string() }),
  output: z.object({ price: z.number(), inStock: z.boolean() }),
  execute: async ({ name }) => {
    // Your lookup logic here
    return { price: 29.99, inStock: true };
  },
});
Python:

from agentforge import define_tool
from pydantic import BaseModel

class LookupInput(BaseModel):
    name: str

class LookupOutput(BaseModel):
    price: float
    in_stock: bool

def lookup_execute(params: LookupInput) -> LookupOutput:
    # Your lookup logic here
    return LookupOutput(price=29.99, in_stock=True)

lookup_tool = define_tool(
    name="lookup",
    description="Look up a product by name",
    input=LookupInput,
    output=LookupOutput,
    execute=lookup_execute,
)

2. Create an LLM

Use createLLM() (create_llm in Python) to instantiate a provider. The agent doesn't know or care which provider you use.

TypeScript:

import { createLLM } from '@ahzan-agentforge/core';

const llm = createLLM({
  provider: 'anthropic',
  model: 'claude-sonnet-4-20250514',
});
Python:

from agentforge import create_llm

llm = create_llm(provider="anthropic", model="claude-sonnet-4-20250514")

3. Define the Agent

Combine tools, LLM, and a system prompt into an agent.

TypeScript:

import { defineAgent } from '@ahzan-agentforge/core';

const agent = defineAgent({
  name: 'shop-assistant',
  description: 'Helps customers find products',
  tools: [lookupTool],
  llm,
  systemPrompt: 'You are a shop assistant. Use the lookup tool to find product information.',
  maxSteps: 10,
});
Python:

from agentforge import define_agent

agent = define_agent(
    name="shop-assistant",
    description="Helps customers find products",
    tools=[lookup_tool],
    llm=llm,
    system_prompt="You are a shop assistant. Use the lookup tool to find product information.",
    max_steps=10,
)

4. Run It

Call run() with a task. The agent loops between the LLM and your tools until it produces a final answer or hits maxSteps.

TypeScript:

const result = await agent.run({
  task: 'How much does the wireless keyboard cost?',
});

console.log(result.status);  // 'completed'
console.log(result.output);  // "The wireless keyboard costs $29.99 and is in stock."
console.log(result.trace.steps.length); // Number of steps taken
Python:

result = await agent.run(task="How much does the wireless keyboard cost?")

print(result.status)   # "completed"
print(result.output)   # "The wireless keyboard costs $29.99 and is in stock."
print(len(result.trace.steps))  # Number of steps taken

5. Stream Responses

For real-time output, use the streaming API:

TypeScript:

for await (const event of agent.stream({ task: 'Find wireless keyboard' })) {
  switch (event.type) {
    case 'llm_token':
      process.stdout.write(event.token);
      break;
    case 'tool_start':
      console.log(`\nCalling tool: ${event.toolName}`);
      break;
    case 'done':
      console.log(`\nDone: ${event.result.output}`);
      break;
  }
}
Python:

async for event in agent.stream(task="Find wireless keyboard"):
    if event.type == "llm_token":
        print(event.token, end="")
    elif event.type == "tool_start":
        print(f"\nCalling tool: {event.tool_name}")
    elif event.type == "done":
        print(f"\nDone: {event.result.output}")
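The event-consumption pattern can be sketched independently of AgentForge with a plain async generator. This is a self-contained illustration of how interleaved events arrive in a typical run (tool call first, then answer tokens, then a final done event); the Event shape and fake_stream are made up for the sketch and are not part of the AgentForge API:

```python
import asyncio
from dataclasses import dataclass

@dataclass
class Event:
    type: str
    payload: str = ""

# Stub stream mimicking a typical run: the tool fires first,
# then answer tokens arrive, then a final done event.
async def fake_stream():
    yield Event("tool_start", "lookup")
    for tok in ["The keyboard ", "costs ", "$29.99."]:
        yield Event("llm_token", tok)
    yield Event("done", "The keyboard costs $29.99.")

async def consume():
    parts = []
    async for event in fake_stream():
        if event.type == "llm_token":
            parts.append(event.payload)           # print incrementally in real use
        elif event.type == "tool_start":
            parts.append(f"[tool:{event.payload}]")
        elif event.type == "done":
            parts.append("[done]")
    return "".join(parts)

rendered = asyncio.run(consume())
print(rendered)  # [tool:lookup]The keyboard costs $29.99.[done]
```

Dispatching on event.type in a single loop, as above, keeps token rendering and tool notifications in arrival order, which is the point of streaming over a blocking run().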

What Happened

Under the hood:

  1. AgentForge created a run with a unique run ID.
  2. It sent the task to the LLM along with the available tools.
  3. The LLM decided to call the lookup tool.
  4. AgentForge validated the tool input against the Zod schema (Pydantic model in Python).
  5. It executed the tool and returned the result to the LLM.
  6. The LLM generated a final text response.
  7. AgentForge recorded every step, token count, and timing.
The whole run is observable, replayable, and recoverable from crashes.
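The steps above can be sketched as a plain-Python loop with a stubbed LLM and tool. This is an illustration of the pattern only, not AgentForge's internals; every name and return shape here is made up for the sketch:

```python
import json

# Stub tool: checks the input shape, then executes (mirrors the lookup tool above).
def lookup(args):
    if not isinstance(args.get("name"), str):
        raise ValueError("invalid tool input: 'name' must be a string")
    return {"price": 29.99, "in_stock": True}

# Stub LLM: requests a tool call on the first turn, answers in text on the second.
def stub_llm(messages):
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "lookup", "args": {"name": "wireless keyboard"}}}
    return {"text": "The wireless keyboard costs $29.99 and is in stock."}

def run(task, tools, max_steps=10):
    messages = [{"role": "user", "content": task}]
    steps = []
    for _ in range(max_steps):
        reply = stub_llm(messages)
        if "tool_call" in reply:
            call = reply["tool_call"]
            output = tools[call["name"]](call["args"])   # validate, then execute
            messages.append({"role": "tool", "content": json.dumps(output)})
            steps.append(("tool", call["name"]))
        else:
            steps.append(("llm", "final"))
            return {"status": "completed", "output": reply["text"], "steps": steps}
    return {"status": "max_steps_exceeded", "output": None, "steps": steps}

result = run("How much does the wireless keyboard cost?", {"lookup": lookup})
print(result["status"], len(result["steps"]))  # completed 2
```

The maxSteps bound from the agent definition corresponds to the loop limit here: it is what guarantees a runaway tool-calling cycle terminates with a non-completed status instead of looping forever.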

Next Steps

  • Agents — full agent configuration options
  • Tools — advanced tool features (retry, caching, rollback)
  • Governance — add budget limits and autonomy policies
  • Testing — test agents deterministically