# Installation
Install AgentForge and set up your development environment.
## Requirements
- Node.js 22+ (for TypeScript)
- Python 3.10+ (for Python SDK)
- Redis (optional — for persistent state and queues)
- PostgreSQL + pgvector (optional — for long-term memory)
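If you are unsure whether your interpreter meets the Python requirement, a small preflight check covers it; this sketch mirrors the 3.10+ minimum listed above:

```python
import sys

# Preflight check mirroring the requirement above: Python 3.10+ for the SDK.
REQUIRED = (3, 10)

def meets_requirement(version=sys.version_info, required=REQUIRED):
    """Return True if (major, minor) satisfies the required minimum."""
    return (version[0], version[1]) >= required

print("Python OK:", meets_requirement())
```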
## Install
TypeScript:

```shell
npm install @ahzan-agentforge/core
```

Python:

```shell
pip install agentforge
```

## LLM Provider Setup
AgentForge supports multiple LLM providers. Set the API key for your provider:
```shell
export ANTHROPIC_API_KEY=your-key-here
export OPENAI_API_KEY=your-key-here
export GOOGLE_AI_API_KEY=your-key-here
```

No API key is needed for Ollama — it runs locally:
```shell
# Install Ollama, then pull a model
ollama pull llama3
```

## Optional Dependencies
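Before pointing AgentForge at either optional service, it helps to confirm the container is actually reachable. A minimal sketch using only the standard library (the ports are the defaults from the docker commands below):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Defaults used by the docker commands below: Redis 6379, PostgreSQL 5432.
print("Redis:", port_open("localhost", 6379))
print("PostgreSQL:", port_open("localhost", 5432))
```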
For persistent state (Redis):
```shell
# Start Redis locally
docker run -d -p 6379:6379 redis
```

For long-term memory (PostgreSQL + pgvector):
```shell
# Start PostgreSQL with pgvector
docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=postgres pgvector/pgvector:pg16
```

## Scaffold a Project
Use the CLI to scaffold a new AgentForge project:
```shell
npx @ahzan-agentforge/core init my-agent
cd my-agent
npm install
```

This creates a project with a basic agent configuration, a sample tool, and a development script.
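The exact layout depends on the CLI version; as a rough illustration only (the file names here are hypothetical — check your generated project for the real ones):

```
my-agent/
├── agent.config.ts     # basic agent configuration (illustrative name)
├── tools/
│   └── sample-tool.ts  # the sample tool (illustrative name)
└── package.json        # includes the development script
```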
## Verify Installation

Run a quick chat call to confirm the SDK can reach your provider:
TypeScript:

```typescript
import { createLLM } from '@ahzan-agentforge/core';

const llm = createLLM({ provider: 'anthropic', model: 'claude-sonnet-4-20250514' });
const response = await llm.chat({
  system: 'You are a helpful assistant.',
  messages: [{ role: 'user', content: 'Hello!' }],
});
console.log(response.content); // "Hello! How can I help you today?"
```

Python:

```python
from agentforge import create_llm

llm = create_llm(provider="anthropic", model="claude-sonnet-4-20250514")
response = llm.chat(
    system="You are a helpful assistant.",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.content)  # "Hello! How can I help you today?"
```

## Next Steps
- Quickstart — build your first agent
- Project Structure — understand the file layout
- Configuration — customize AgentForge settings