AgentForge

Installation

Install AgentForge and set up your development environment.

Requirements

  • Node.js 22+ (for the TypeScript SDK)
  • Python 3.10+ (for the Python SDK)
  • Redis (optional — for persistent state and queues)
  • PostgreSQL + pgvector (optional — for long-term memory)
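
The Python requirement can be checked programmatically before installing the SDK; a minimal sketch:

```python
import sys

# The Python SDK requires Python 3.10+; fail early if the interpreter is older.
MIN_VERSION = (3, 10)
ok = sys.version_info[:2] >= MIN_VERSION
print("Python", ".".join(map(str, sys.version_info[:2])), "ok" if ok else "too old")
```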

Install

TypeScript:

npm install @ahzan-agentforge/core

Python:

pip install agentforge

LLM Provider Setup

AgentForge supports multiple LLM providers. Set the API key for whichever provider you plan to use:

export ANTHROPIC_API_KEY=your-key-here
export OPENAI_API_KEY=your-key-here
export GOOGLE_AI_API_KEY=your-key-here
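
Before making a first call, it can help to confirm the relevant key is actually set. A minimal sketch (the `missing_keys` helper is ours, not part of AgentForge):

```python
import os

def missing_keys(*names):
    """Return the environment variables from `names` that are unset or empty."""
    return [n for n in names if not os.environ.get(n)]

# Example: check the key for the provider you intend to use
unset = missing_keys("ANTHROPIC_API_KEY")
if unset:
    print("Missing:", ", ".join(unset))
```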

Alternatively, Ollama runs models locally and requires no API key.

# Install Ollama, then pull a model
ollama pull llama3
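
Independent of AgentForge, Ollama exposes a local HTTP API (by default on http://localhost:11434), and a pulled model can be exercised via its /api/chat endpoint. The sketch below only builds the request body, since actually sending it requires a running Ollama instance:

```python
import json

# Request body for Ollama's /api/chat endpoint
# (POST http://localhost:11434/api/chat on a running Ollama instance).
payload = {
    "model": "llama3",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": False,  # return a single response instead of a token stream
}
body = json.dumps(payload)
```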

Optional Dependencies

For persistent state (Redis):

# Start Redis locally
docker run -d -p 6379:6379 redis
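
If your deployment reads the Redis location from the environment, a connection URL for the container above would look like this (the REDIS_URL variable name is an assumption, not a documented AgentForge setting):

```shell
# Default Redis port, no auth (matches the docker run command above)
export REDIS_URL=redis://localhost:6379
```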

For long-term memory (PostgreSQL + pgvector):

# Start PostgreSQL with pgvector
docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=postgres pgvector/pgvector:pg16
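
Similarly, a connection URL for the pgvector container (DATABASE_URL is an assumed variable name; adjust the credentials and database to match your container):

```shell
# Matches the docker run command above (user postgres, password postgres, default port)
export DATABASE_URL=postgres://postgres:postgres@localhost:5432/postgres
```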

Scaffold a Project

Use the CLI to scaffold a new AgentForge project:

npx @ahzan-agentforge/core init my-agent
cd my-agent
npm install

This creates a project with a basic agent configuration, a sample tool, and a development script.

Verify Installation

TypeScript:

import { createLLM } from '@ahzan-agentforge/core';

const llm = createLLM({ provider: 'anthropic', model: 'claude-sonnet-4-20250514' });
const response = await llm.chat({
  system: 'You are a helpful assistant.',
  messages: [{ role: 'user', content: 'Hello!' }],
});
console.log(response.content); // "Hello! How can I help you today?"

Python:

from agentforge import create_llm

llm = create_llm(provider="anthropic", model="claude-sonnet-4-20250514")
response = llm.chat(
    system="You are a helpful assistant.",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.content)  # "Hello! How can I help you today?"

Next Steps