# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
This is a character development system implementing Stanford's "Generative Agents" memory architecture for believable AI characters with dynamic personality evolution. The project uses a custom LLM connector that supports any OpenAI-compatible API endpoint, allowing flexible backend configuration.
## Key Architecture Components

### Agent System (`living_agents/`)
- Memory Stream: Stanford's memory architecture with observations, reflections, and plans
- Smart Retrieval: Combines recency (exponential decay), importance (1-10 scale), and relevance (cosine similarity)
- Auto-Reflection: Generates insights when importance threshold (150) is reached
- Character Components: Character, CharacterAgent, MemoryStream
- Trait Development: Dynamic personality evolution based on experiences
- Uses llm_connector for flexible backend support
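
The retrieval scoring described above can be sketched as follows. This is an illustrative outline of the Stanford-style scoring, not the actual `living_agents` API; the function name, signature, and equal weighting are assumptions:

```python
def retrieval_score(hours_since_access: float, importance: int,
                    relevance: float, decay: float = 0.995) -> float:
    """Combine the three retrieval signals, each normalized to [0, 1].

    - recency: exponential decay per hour since last access
    - importance: the 1-10 score, scaled to [0.1, 1.0]
    - relevance: cosine similarity of the query and memory embeddings
    """
    recency = decay ** hours_since_access
    importance_norm = importance / 10
    return recency + importance_norm + relevance  # equal-weight sum

# A fresh, important, relevant memory outscores a stale peripheral one:
fresh = retrieval_score(hours_since_access=1, importance=8, relevance=0.9)
stale = retrieval_score(hours_since_access=240, importance=3, relevance=0.2)
assert fresh > stale
```

The auto-reflection trigger works on the same importance scores: when the running sum of importance for recent memories crosses the threshold (150), the agent generates reflections and the counter resets.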
### LLM Connector Package
- Custom LLM abstraction that supports any OpenAI-compatible API
- Streaming support with both reasoning and content chunks
- Type definitions: LLMBackend (base_url, api_token, model) and LLMMessage
- Environment variables: BACKEND_BASE_URL, BACKEND_API_TOKEN, BACKEND_MODEL
### Character Explorer CLI
- CLI Testing Tool: Interactive character development and testing interface
- Character Loading: YAML template system for character initialization
- Real-time Development: Direct testing of memory, traits, and personality evolution
- Located in `character_explorer.py` for easy development iteration
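
A character template might look like the following. This is a hypothetical sketch; the actual field names in `character_templates/` may differ:

```yaml
# character_templates/example.yaml (illustrative; field names are assumptions)
name: Ada
background: >
  A patient archivist who recently moved to the city
  and is slowly learning to trust strangers.
traits:
  - inquisitive
  - reserved
seed_memories:
  - "Moved to the city last spring."
  - "Spent years cataloguing a private library."
```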
## Development Commands

```bash
# Install dependencies
uv sync

# Run the character explorer CLI
uv run python character_explorer.py

# Add new dependencies
uv add <package-name>

# Python environment management
uv python pin 3.12  # Pin to Python 3.12
```
## Important Development Notes

### Character Development Focus
The current focus is on perfecting single-agent character development:
- Characters evolve through experiences and interactions
- The memory system drives realistic, experience-grounded personality development
- CLI tool allows rapid testing and iteration
### LLM Integration

The project uses a flexible LLM connector supporting any OpenAI-compatible API. When implementing new features, use the `llm_connector` package:
```python
import os
from typing import List

from llm_connector import get_response, LLMBackend, LLMMessage

backend: LLMBackend = {
    'base_url': os.environ['BACKEND_BASE_URL'],
    'api_token': os.environ['BACKEND_API_TOKEN'],
    'model': os.environ['BACKEND_MODEL'],
}

messages: List[LLMMessage] = [
    {'role': 'system', 'content': 'You are...'},
    {'role': 'user', 'content': 'Hello'},
]

# Non-streaming
response = await get_response(backend, messages, stream=False)

# Streaming
async for chunk in await get_response(backend, messages, stream=True):
    if 'content' in chunk:
        ...  # Handle content
    if 'reasoning' in chunk:
        ...  # Handle reasoning (if supported)
```
## Project Structure

- `character_explorer.py`: CLI tool for character development and testing
- `living_agents/`: Core agent system with memory, traits, and prompt management
- `character_templates/`: YAML files defining character backgrounds
- `llm_connector/`: Custom LLM integration package for flexible backend support
## Environment Variables

Required in `.env`:

- `BACKEND_BASE_URL`: LLM API endpoint
- `BACKEND_API_TOKEN`: API authentication token
- `BACKEND_MODEL`: Model identifier
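
A minimal `.env` might look like this (all values are placeholders):

```
BACKEND_BASE_URL=https://api.example.com/v1
BACKEND_API_TOKEN=your-api-token
BACKEND_MODEL=your-model-name
```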
## Current Development Status
The system currently focuses on single-agent character development:
- Character agents with dynamic personality evolution
- Stanford-inspired memory architecture
- CLI testing tool for rapid iteration
- Flexible LLM backend configuration
Future plans include multi-agent interactions and web interface integration.