Tutorial: Build a CLI Agent
Create a working AI agent from scratch in under 10 minutes. Every file, every command, every output shown.
This tutorial builds a working AI agent you can talk to from your terminal. You'll create a project, define a tool, wire up an LLM, and run the whole thing. No hand-waving. Every file shown in full.
What you'll build
A weather assistant agent that:
- Takes a question from the command line
- Uses a weather lookup tool to fetch data
- Returns a natural language answer
- Prints the full execution trace
Prerequisites
- Node.js 22+ installed
- An Anthropic API key (or OpenAI, Gemini, or Ollama — we'll show alternatives)
Step 1: Create the project
mkdir weather-agent && cd weather-agent
npm init -y
npm install @ahzan-agentforge/core zod
npm install -D typescript tsx @types/node

Create tsconfig.json:
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "dist",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  },
  "include": ["src"]
}

Update package.json to add "type": "module" and a run script:
{
  "type": "module",
  "scripts": {
    "start": "npx tsx src/main.ts"
  }
}

Your project structure:
weather-agent/
├── package.json
├── tsconfig.json
└── src/
    └── main.ts

Step 2: Set your API key
For Anthropic:

export ANTHROPIC_API_KEY=sk-ant-your-key-here

For OpenAI:

export OPENAI_API_KEY=sk-your-key-here

For Ollama, no API key is needed. Make sure Ollama is running:
ollama pull llama3
ollama serve

Step 3: Write the agent
Create src/main.ts. This is the entire file:
import { defineTool, defineAgent, createLLM } from '@ahzan-agentforge/core';
import { z } from 'zod';

// --- Tool ---
const getWeather = defineTool({
  name: 'get_weather',
  description: 'Get current weather for a city',
  input: z.object({
    city: z.string().describe('City name'),
  }),
  output: z.object({
    city: z.string(),
    temp_c: z.number(),
    condition: z.string(),
    humidity: z.number(),
  }),
  execute: async ({ city }) => {
    // Simulated weather data — swap with a real API later
    const data: Record<string, { temp_c: number; condition: string; humidity: number }> = {
      london: { temp_c: 12, condition: 'Cloudy', humidity: 78 },
      tokyo: { temp_c: 24, condition: 'Sunny', humidity: 55 },
      new_york: { temp_c: 18, condition: 'Partly cloudy', humidity: 62 },
    };
    const key = city.toLowerCase().replace(/\s+/g, '_');
    const weather = data[key] ?? { temp_c: 20, condition: 'Unknown', humidity: 50 };
    return { city, ...weather };
  },
});

// --- LLM ---
const llm = createLLM({
  provider: 'anthropic',
  model: 'claude-sonnet-4-20250514',
});

// --- Agent ---
const agent = defineAgent({
  name: 'weather-assistant',
  description: 'Answers weather questions using the weather tool',
  tools: [getWeather],
  llm,
  systemPrompt: `You are a weather assistant. When asked about weather, use the get_weather tool to look up the data, then give a short, friendly answer. Include the temperature and conditions.`,
  maxSteps: 5,
});

// --- Run ---
const task = process.argv[2] ?? "What's the weather like in Tokyo?";
console.log(`\nTask: ${task}\n`);

const result = await agent.run({ task });

console.log(`Status: ${result.status}`);
console.log(`Steps: ${result.trace.steps.length}`);
console.log(`Output: ${result.output}\n`);

// Print trace
for (const step of result.trace.steps) {
  if (step.type === 'tool_call') {
    console.log(` [tool] ${step.toolName}(${JSON.stringify(step.toolInput)}) → ${JSON.stringify(step.toolOutput)}`);
  } else {
    console.log(` [llm] ${step.type}`);
  }
}

Using a different provider? Only the createLLM block changes. For OpenAI:

// Swap provider and model — everything else stays the same
const llm = createLLM({
  provider: 'openai',
  model: 'gpt-4o',
});

For local Ollama:

// Local Ollama — no API key, no cost
const llm = createLLM({
  provider: 'ollama',
  model: 'llama3',
});

Step 4: Run it

npm start

Expected output:
Task: What's the weather like in Tokyo?
Status: completed
Steps: 2
Output: It's 24°C and sunny in Tokyo right now, with 55% humidity. Nice day!
 [tool] get_weather({"city":"Tokyo"}) → {"city":"Tokyo","temp_c":24,"condition":"Sunny","humidity":55}
 [llm] llm_call

Try different questions:
npm start -- "Is it cold in London right now?"
npm start -- "Compare weather in Tokyo and New York"

(The -- is needed so npm passes the question through to the script.)

Step 5: Add streaming
Want to see the agent think in real time? Replace the agent.run() block with streaming:
// Replace the run block with:
console.log(`\nTask: ${task}\n`);

for await (const event of agent.stream({ task })) {
  switch (event.type) {
    case 'llm_token':
      process.stdout.write(event.content);
      break;
    case 'tool_start':
      console.log(`\n → calling ${event.toolName}...`);
      break;
    case 'tool_end':
      console.log(` ← ${event.toolName} returned in ${event.duration}ms`);
      break;
    case 'done':
      console.log(`\n\nStatus: ${event.result.status}`);
      break;
  }
}

Now you'll see tokens stream as the agent generates its response.
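The tool so far returns simulated data. As a sketch of the "swap with a real API later" idea, here is what a real lookup could look like using Open-Meteo's free forecast endpoint (no API key required). The endpoint and parameter names are assumptions based on Open-Meteo's public docs; verify them before relying on this:

```typescript
// Pure helper so the URL logic is testable without a network call.
// Parameter names (latitude, longitude, current=...) are assumptions
// from Open-Meteo's docs — double-check before using.
function buildForecastUrl(latitude: number, longitude: number): string {
  const params = new URLSearchParams({
    latitude: String(latitude),
    longitude: String(longitude),
    current: 'temperature_2m,relative_humidity_2m,weather_code',
  });
  return `https://api.open-meteo.com/v1/forecast?${params}`;
}

// Hypothetical replacement for the tool's execute body (the city →
// coordinates geocoding step is omitted here for brevity).
async function fetchWeather(latitude: number, longitude: number) {
  const res = await fetch(buildForecastUrl(latitude, longitude));
  if (!res.ok) throw new Error(`Weather API returned ${res.status}`);
  const body = await res.json();
  return {
    temp_c: body.current.temperature_2m,
    humidity: body.current.relative_humidity_2m,
  };
}
```

Because the output Zod schema still validates whatever execute returns, swapping in a real API changes nothing else in the agent.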
What just happened
Here's the full execution flow, step by step:
- defineAgent() created an agent with one tool and an LLM
- agent.run() started a new run with a unique ID (like run_1710000000000_abc123)
- AgentForge sent the task + tool definitions to the LLM
- The LLM decided to call get_weather with { city: "Tokyo" }
- AgentForge validated the input against the Zod schema, ran the tool, validated the output
- The tool result went back to the LLM
- The LLM generated a text response — that means "done"
- AgentForge recorded every step, token count, and timing in the trace
All of this is automatic. You wrote the tool logic and the system prompt. The framework handled validation, execution, tracing, and the agent loop.
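That loop is worth seeing in miniature. Here is a framework-free sketch of the same algorithm with deliberately simplified types; it illustrates the shape of the loop described above, not AgentForge's actual source:

```typescript
// One LLM turn either requests a tool call or produces final text.
type LLMReply =
  | { kind: 'tool_call'; toolName: string; input: unknown }
  | { kind: 'text'; content: string };

interface Tool {
  name: string;
  run: (input: unknown) => Promise<unknown>; // validate + execute
}

async function runLoop(
  callLLM: (history: unknown[]) => Promise<LLMReply>,
  tools: Tool[],
  task: string,
  maxSteps: number,
): Promise<string> {
  const history: unknown[] = [{ role: 'user', content: task }];
  for (let step = 0; step < maxSteps; step++) {
    const reply = await callLLM(history);
    // A plain text reply means the agent is done.
    if (reply.kind === 'text') return reply.content;
    const tool = tools.find((t) => t.name === reply.toolName);
    if (!tool) throw new Error(`Unknown tool: ${reply.toolName}`);
    // Run the tool and feed its result back to the LLM.
    const output = await tool.run(reply.input);
    history.push({ role: 'tool', name: reply.toolName, content: output });
  }
  throw new Error('maxSteps exceeded');
}
```

maxSteps is the safety valve: if the LLM never produces a final text answer, the loop stops instead of calling tools forever.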
Next steps
- Add Tools and Memory — give the agent more tools and persistent memory
- Test and Debug — write deterministic tests with MockLLM
- Agents reference — full configuration options