init
1
.python-version
Normal file
@@ -0,0 +1 @@
3.13
107
CLAUDE.md
Normal file
@@ -0,0 +1,107 @@
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Project Overview

This is a multi-agent roleplay system implementing Stanford's "Generative Agents" memory architecture for believable AI characters with emergent behaviors. The agent system currently uses OpenAI's API directly but is transitioning to a custom LLM connector that supports any OpenAI-compatible API endpoint.

## Key Architecture Components

### Agent System (agents.py)
- **Memory Stream**: Stanford's memory architecture with observations, reflections, and plans
- **Smart Retrieval**: Combines recency (exponential decay), importance (1-10 scale), and relevance (cosine similarity)
- **Auto-Reflection**: Generates insights when the importance threshold (150) is reached
- **Core Classes**: Character, CharacterAgent, MemoryStream, SceneManager
- Currently uses the OpenAI API directly but should be migrated to llm_connector

### LLM Connector Package
- **Custom LLM abstraction** that supports any OpenAI-compatible API
- **Streaming support** with both reasoning and content chunks
- **Type definitions**: LLMBackend (base_url, api_token, model) and LLMMessage
- Environment variables: BACKEND_BASE_URL, BACKEND_API_TOKEN, BACKEND_MODEL

### UI Framework
- **NiceGUI** for the web interface (async components)
- **AsyncElement base class**: Never override __init__; use the create() factory method and implement build()
- **Dialog support**: Elements can be created as dialogs with as_dialog()
- Pages live in the pages/ directory; the main page is MainPage

## Development Commands

```bash
# Install dependencies
uv sync

# Run the application
uv run python main.py
# Application runs on http://localhost:8080

# Add new dependencies
uv add <package-name>

# Python environment management
uv python pin 3.13  # Pin the Python version (this repo pins 3.13 in .python-version)
```

## Important Development Notes

### AsyncElement Usage
When creating UI components that extend AsyncElement:
- NEVER override the __init__ method
- Always use the `create()` factory method: `await MyComponent.create(params)`
- Implement the `build()` method for initialization logic
- Pass parameters through build(), not __init__ (see the sketch below)
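A minimal sketch of the pattern, using the AsyncElement base class from components/async_element.py; the `GreetingCard` component and its `name` parameter are hypothetical:

```python
from nicegui import ui

from components import AsyncElement


class GreetingCard(AsyncElement):
    """Hypothetical component demonstrating the create()/build() pattern."""

    # No __init__ override: AsyncElement.__init__ stays untouched.
    async def build(self, name: str) -> None:  # parameters go here, not in __init__
        with self:
            ui.label(f'Hello, {name}!')


# Usage inside an async page handler:
#   card = await GreetingCard.create('Alice')
# Or as a dialog (returns whatever the component submits via self.submit(...)):
#   result = await GreetingCard.as_dialog(card_classes='p-4', name='Alice')
```
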
### LLM Integration
The project has two LLM integration approaches:
1. **Legacy** (in agents.py): Direct OpenAI client usage
2. **Current** (llm_connector): Flexible backend supporting any OpenAI-compatible API

When implementing new features, use the llm_connector package. Note that `get_response` is an async generator in both modes; even with `stream=False` it yields a single `{'content': ...}` chunk:

```python
import os
from typing import List

from llm_connector import get_response, LLMBackend, LLMMessage

backend: LLMBackend = {
    'base_url': os.environ['BACKEND_BASE_URL'],
    'api_token': os.environ['BACKEND_API_TOKEN'],
    'model': os.environ['BACKEND_MODEL']
}

messages: List[LLMMessage] = [
    {'role': 'system', 'content': 'You are...'},
    {'role': 'user', 'content': 'Hello'}
]

# Non-streaming: yields one chunk containing the full response
async for chunk in get_response(backend, messages, stream=False):
    full_response = chunk.get('content', '')

# Streaming: yields incremental chunks
async for chunk in get_response(backend, messages, stream=True):
    if 'content' in chunk:
        ...  # handle content
    if 'reasoning' in chunk:
        ...  # handle reasoning (if the model supports it)
```

### Project Structure
- `main.py`: Entry point, NiceGUI app configuration
- `agents.py`: Stanford memory architecture implementation (to be integrated)
- `llm_connector/`: Custom LLM integration package
- `components/`: Reusable UI components with the AsyncElement base
- `pages/`: UI pages (currently only MainPage)

### Environment Variables
Required in `.env`:
- `BACKEND_BASE_URL`: LLM API endpoint
- `BACKEND_API_TOKEN`: API authentication token
- `BACKEND_MODEL`: Model identifier
- `OPENAI_API_KEY`: Currently needed for agents.py (to be removed)
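
A sample `.env` with placeholder values (the endpoint URL and model name below are only illustrative; any OpenAI-compatible server works):

```bash
BACKEND_BASE_URL=http://localhost:11434/v1   # OpenAI-compatible endpoint (illustrative)
BACKEND_API_TOKEN=your-token-here            # may be empty; llm_connector only sends it if set
BACKEND_MODEL=your-model-name
OPENAI_API_KEY=sk-your-openai-key            # legacy, only for agents.py
```
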
## Next Steps for Integration

The agents.py system needs to be:
1. Modified to use llm_connector instead of the direct OpenAI client (see the sketch below)
2. Integrated into the NiceGUI web interface
3. Extended with UI components for character interaction, memory viewing, and scene management
4. Wired up for real-time streaming of agent responses in the UI
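A sketch of what step 1 could look like: a chat helper backed by llm_connector instead of the OpenAI client. The `AsyncLLMAgent` name and its interface are hypothetical; only `get_response` and the environment variables come from the existing code:

```python
import os
from typing import List

from llm_connector import get_response, LLMBackend, LLMMessage


class AsyncLLMAgent:
    """Hypothetical replacement direction for agents.LLMAgent, backed by llm_connector."""

    def __init__(self) -> None:
        self.backend: LLMBackend = {
            'base_url': os.environ['BACKEND_BASE_URL'],
            'api_token': os.environ['BACKEND_API_TOKEN'],
            'model': os.environ['BACKEND_MODEL'],
        }

    async def chat(self, messages: List[LLMMessage]) -> str:
        # Collect the single non-streaming chunk into a plain string,
        # matching the str return of the legacy LLMAgent.chat
        parts = [chunk.get('content', '')
                 async for chunk in get_response(self.backend, messages, stream=False)]
        return ''.join(parts).strip()
```

Note that this makes `chat` a coroutine, so MemoryStream and CharacterAgent call sites would need to become async as well.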
604
agents.py
Normal file
@@ -0,0 +1,604 @@
import os
from datetime import datetime, timedelta
from typing import Dict, List, Optional
from dataclasses import dataclass, field
from openai import OpenAI
from sklearn.metrics.pairwise import cosine_similarity

# Initialize OpenAI client
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))


@dataclass
class Memory:
    """A single memory object with Stanford's architecture"""
    description: str
    creation_time: datetime
    last_accessed: Optional[datetime]
    importance_score: int  # 1-10 scale
    embedding: Optional[List[float]] = None
    memory_type: str = "observation"  # observation, reflection, plan
    related_memories: List[int] = field(default_factory=list)  # IDs of supporting memories

    def __post_init__(self):
        if self.last_accessed is None:
            self.last_accessed = self.creation_time


class LLMAgent:
    def __init__(self, model: str = "gpt-3.5-turbo", temperature: float = 0.8):
        self.model = model
        self.temperature = temperature

    def chat(self, messages: List[Dict[str, str]], max_tokens: int = 200) -> str:
        try:
            response = client.chat.completions.create(
                model=self.model,
                messages=messages,
                temperature=self.temperature,
                max_tokens=max_tokens
            )
            return response.choices[0].message.content.strip()
        except Exception as e:
            return f"[LLM Error: {str(e)}]"

    def get_embedding(self, text: str) -> List[float]:
        """Get embedding for memory relevance scoring"""
        try:
            response = client.embeddings.create(
                model="text-embedding-ada-002",
                input=text
            )
            return response.data[0].embedding
        except Exception as e:
            print(f"Embedding error: {e}")
            return [0.0] * 1536  # Default embedding size

@dataclass
class Character:
    name: str
    age: int
    personality: str
    occupation: str
    location: str
    relationships: Dict[str, str] = field(default_factory=dict)
    goals: List[str] = field(default_factory=list)


class MemoryStream:
    """Stanford's memory architecture with observation, reflection, and planning"""

    def __init__(self, llm_agent: LLMAgent):
        self.memories: List[Memory] = []
        self.memory_counter = 0
        self.llm = llm_agent
        self.importance_threshold = 150  # Reflection trigger threshold
        self.recent_importance_sum = 0

    def add_observation(self, description: str) -> int:
        """Add a new observation with importance scoring"""
        importance = self._score_importance(description)

        memory = Memory(
            description=description,
            creation_time=datetime.now(),
            last_accessed=datetime.now(),
            importance_score=importance,
            memory_type="observation"
        )

        # Get embedding for retrieval
        memory.embedding = self.llm.get_embedding(description)

        memory_id = self.memory_counter
        self.memories.append(memory)
        self.memory_counter += 1

        # Track for reflection trigger
        self.recent_importance_sum += importance

        # Trigger reflection if threshold exceeded
        if self.recent_importance_sum >= self.importance_threshold:
            self._generate_reflections()
            self.recent_importance_sum = 0

        return memory_id

    def _score_importance(self, description: str) -> int:
        """Use LLM to score memory importance (Stanford approach)"""
        prompt = f"""On the scale of 1 to 10, where 1 is purely mundane (e.g., brushing teeth, making bed) and 10 is extremely poignant (e.g., a break up, college acceptance), rate the likely poignancy of the following piece of memory.

Memory: {description}
Rating: """

        try:
            response = self.llm.chat([{"role": "user", "content": prompt}], max_tokens=5)
            # Extract the first digit from the response, defaulting to 5
            score = int(''.join(filter(str.isdigit, response))[:1] or "5")
            return max(1, min(10, score))
        except Exception:
            return 5  # Default moderate importance

    def _generate_reflections(self):
        """Generate high-level reflections from recent memories"""
        # Get recent observations
        recent_memories = [m for m in self.memories[-20:] if m.memory_type == "observation"]

        if len(recent_memories) < 3:
            return

        # Generate questions for reflection
        memory_descriptions = "\n".join([f"{i+1}. {m.description}" for i, m in enumerate(recent_memories)])

        questions_prompt = f"""Given only the information above, what are 3 most salient high-level questions we can answer about the subjects in the statements?

{memory_descriptions}

Questions:"""

        try:
            questions_response = self.llm.chat([{"role": "user", "content": questions_prompt}])

            # For each question, generate insights
            insight_prompt = f"""Statements:
{memory_descriptions}

What 5 high-level insights can you infer from the above statements?
Format: insight (because of 1, 3, 5)"""

            insights_response = self.llm.chat([{"role": "user", "content": insight_prompt}])

            # Parse insights and create reflection memories
            for line in insights_response.split('\n'):
                if '(' in line and ')' in line:
                    insight = line.split('(')[0].strip()
                    if insight and len(insight) > 10:
                        # Create reflection memory
                        reflection = Memory(
                            description=f"Reflection: {insight}",
                            creation_time=datetime.now(),
                            last_accessed=datetime.now(),
                            importance_score=7,  # Reflections are generally important
                            memory_type="reflection",
                            embedding=self.llm.get_embedding(insight)
                        )
                        self.memories.append(reflection)
                        self.memory_counter += 1

        except Exception as e:
            print(f"Reflection generation error: {e}")

    def retrieve_memories(self, query: str, k: int = 10) -> List[Memory]:
        """Retrieve relevant memories using recency, importance, relevance"""
        if not self.memories:
            return []

        query_embedding = self.llm.get_embedding(query)
        current_time = datetime.now()
        scores = []

        for i, memory in enumerate(self.memories):
            # Calculate recency (exponential decay) from the previous access time,
            # then mark the memory as accessed
            hours_since_accessed = (current_time - memory.last_accessed).total_seconds() / 3600
            recency = 0.995 ** hours_since_accessed
            memory.last_accessed = current_time

            # Importance (already scored 1-10)
            importance = memory.importance_score / 10.0

            # Relevance (cosine similarity)
            if memory.embedding and query_embedding:
                relevance = cosine_similarity([query_embedding], [memory.embedding])[0][0]
            else:
                relevance = 0.0

            # Combined score (equal weighting as in Stanford paper)
            score = recency + importance + relevance
            scores.append((score, i, memory))

        # Sort by score and return top k
        scores.sort(reverse=True, key=lambda x: x[0])
        return [memory for _, _, memory in scores[:k]]

class CharacterAgent:
    """Enhanced agent with Stanford's memory architecture"""

    def __init__(self, character: Character, llm: LLMAgent):
        self.character = character
        self.llm = llm
        self.memory_stream = MemoryStream(llm)
        self.current_plan: List[str] = []

        # Initialize with character background
        self._initialize_memories()

    def _initialize_memories(self):
        """Initialize agent with background memories"""
        background_facts = [
            f"My name is {self.character.name} and I am {self.character.age} years old",
            f"My personality: {self.character.personality}",
            f"My occupation: {self.character.occupation}",
            f"I live in {self.character.location}"
        ]

        for fact in background_facts:
            self.memory_stream.add_observation(fact)

        for person, relationship in self.character.relationships.items():
            self.memory_stream.add_observation(f"My relationship with {person}: {relationship}")

    def perceive(self, observation: str) -> None:
        """Add new observation to memory stream"""
        self.memory_stream.add_observation(observation)

    def plan_day(self) -> List[str]:
        """Generate high-level daily plan"""
        # Retrieve relevant memories about goals, habits, schedule
        relevant_memories = self.memory_stream.retrieve_memories(
            f"{self.character.name} daily routine goals schedule", k=5
        )

        memory_context = "\n".join([m.description for m in relevant_memories])

        plan_prompt = f"""You are {self.character.name}.
Background: {self.character.personality}
Occupation: {self.character.occupation}

Relevant memories:
{memory_context}

Plan your day in broad strokes (5-8 activities with times):
1)"""

        try:
            response = self.llm.chat([{"role": "user", "content": plan_prompt}], max_tokens=300)
            # The prompt ends with "1)", so the response continues the numbered list
            plan_steps = [f"1){response}"] if response else ["1) Go about my daily routine"]

            # Add plan to memory
            plan_description = f"Daily plan: {'; '.join(plan_steps)}"
            self.memory_stream.add_observation(plan_description)

            return plan_steps
        except Exception:
            return ["1) Go about my daily routine"]

    def react_to_situation(self, situation: str) -> str:
        """Generate reaction based on memory and character"""
        # Retrieve relevant memories
        relevant_memories = self.memory_stream.retrieve_memories(situation, k=8)
        memory_context = "\n".join([f"- {m.description}" for m in relevant_memories])

        reaction_prompt = f"""You are {self.character.name}.
Age: {self.character.age}
Personality: {self.character.personality}
Current location: {self.character.location}

Relevant memories from your past:
{memory_context}

Current situation: {situation}

How do you react? Stay completely in character and be specific about what you would do or say."""

        try:
            response = self.llm.chat([{"role": "user", "content": reaction_prompt}])

            # Add reaction to memory
            self.memory_stream.add_observation(f"I reacted to '{situation}' by: {response}")

            return response
        except Exception:
            return "I'm not sure how to respond to that."

    def get_summary(self) -> str:
        """Generate current summary based on memories and reflections"""
        reflections = [m for m in self.memory_stream.memories if m.memory_type == "reflection"]
        recent_observations = self.memory_stream.memories[-10:]

        summary_memories = reflections[-3:] + recent_observations[-5:]
        memory_context = "\n".join([m.description for m in summary_memories])

        summary_prompt = f"""Based on the following memories and reflections, provide a brief summary of who {self.character.name} is and what they care about:

{memory_context}

Summary:"""

        try:
            return self.llm.chat([{"role": "user", "content": summary_prompt}], max_tokens=150)
        except Exception:
            return f"{self.character.name} is a {self.character.age}-year-old {self.character.occupation}."

class SceneManager:
    """Enhanced scene manager with better context filtering"""

    def __init__(self, main_llm: LLMAgent):
        self.main_llm = main_llm
        self.characters: Dict[str, Character] = {}
        self.agents: Dict[str, CharacterAgent] = {}
        self.scene_state = {
            "location": "cozy coffee shop",
            "time": "afternoon",
            "atmosphere": "quiet and peaceful",
            "active_conversations": [],
            "events": []
        }
        self.global_time = datetime.now()

    def add_character(self, character: Character):
        self.characters[character.name] = character
        agent = CharacterAgent(character, LLMAgent("gpt-3.5-turbo", temperature=0.9))
        self.agents[character.name] = agent
        print(f"✓ Added {character.name} to the scene")

    def advance_time(self, hours: int = 1):
        """Advance scene time; each agent perceives the new time"""
        self.global_time += timedelta(hours=hours)
        self.scene_state["time"] = self.global_time.strftime("%I:%M %p")

        # Each agent observes the passage of time (planning is triggered by the caller)
        for name, agent in self.agents.items():
            agent.perceive(f"Time is now {self.scene_state['time']}")

    def character_interaction(self, char1_name: str, char2_name: str, context: str) -> Dict[str, str]:
        """Handle interaction between two characters"""
        if char1_name not in self.agents or char2_name not in self.agents:
            return {"error": "Character not found"}

        char1_agent = self.agents[char1_name]
        char2_agent = self.agents[char2_name]

        # Both characters observe the interaction context
        char1_agent.perceive(f"Interacting with {char2_name}: {context}")
        char2_agent.perceive(f"Interacting with {char1_name}: {context}")

        # Generate responses
        char1_response = char1_agent.react_to_situation(f"You are talking with {char2_name}. Context: {context}")
        char2_response = char2_agent.react_to_situation(f"{char1_name} said: '{char1_response}'")

        # Both remember the conversation
        char1_agent.perceive(f"Conversation with {char2_name}: I said '{char1_response}', they replied '{char2_response}'")
        char2_agent.perceive(f"Conversation with {char1_name}: They said '{char1_response}', I replied '{char2_response}'")

        return {
            char1_name: char1_response,
            char2_name: char2_response
        }

class EnhancedRoleplaySystem:
    def __init__(self):
        self.scene_manager = SceneManager(LLMAgent("gpt-4o-mini", temperature=0.7))
        self.setup_characters()

    def setup_characters(self):
        # Create characters with rich backgrounds for testing memory
        alice = Character(
            name="Alice",
            age=23,
            personality="Introverted literature student who loves mystery novels and gets nervous in social situations but is very observant",
            occupation="Graduate student studying Victorian literature",
            location="coffee shop",
            relationships={
                "Professor Wilson": "My thesis advisor - supportive but demanding",
                "Emma": "Friendly barista I have a secret crush on"
            },
            goals=["Finish thesis chapter", "Work up courage to talk to Emma", "Find rare book for research"]
        )

        bob = Character(
            name="Bob",
            age=28,
            personality="Confident software developer, outgoing and helpful, loves solving technical problems",
            occupation="Senior fullstack developer at local startup",
            location="coffee shop",
            relationships={
                "Alice": "Quiet regular I've seen around - seems nice",
                "Emma": "Friendly barista, always remembers my order"
            },
            goals=["Launch new feature this week", "Ask someone interesting on a date", "Learn more about AI"]
        )

        emma = Character(
            name="Emma",
            age=25,
            personality="Energetic art student working as barista, cheerful and social, dreams of opening gallery",
            occupation="Barista and art student",
            location="coffee shop counter",
            relationships={
                "Alice": "Sweet regular who seems shy - orders same drink daily",
                "Bob": "Tech guy regular - always friendly and tips well"
            },
            goals=["Save money for art supplies", "Organize local art show", "Connect with more creative people"]
        )

        for character in [alice, bob, emma]:
            self.scene_manager.add_character(character)

    def get_character_response(self, character_name: str, user_input: str) -> str:
        if character_name not in self.scene_manager.agents:
            return f"❌ Character {character_name} not found!"

        print(f"🧠 {character_name} accessing memories...")
        agent = self.scene_manager.agents[character_name]

        # Agent perceives user interaction
        agent.perceive(f"Someone asked me: '{user_input}'")

        # Generate response
        response = agent.react_to_situation(user_input)
        return response

    def character_chat(self, char1: str, char2: str, context: str) -> str:
        """Make two characters interact with each other"""
        interaction = self.scene_manager.character_interaction(char1, char2, context)

        if "error" in interaction:
            return interaction["error"]

        result = f"\n💬 **{char1}**: {interaction[char1]}\n💬 **{char2}**: {interaction[char2]}\n"
        return result

    def advance_scene_time(self, hours: int = 1):
        """Advance time and let characters plan"""
        self.scene_manager.advance_time(hours)
        return f"⏰ Advanced time by {hours} hour(s). Current time: {self.scene_manager.scene_state['time']}"

    def get_character_memories(self, character_name: str, memory_type: str = "all") -> str:
        """Show character's memory stream for debugging"""
        if character_name not in self.scene_manager.agents:
            return f"Character {character_name} not found"

        agent = self.scene_manager.agents[character_name]
        memories = agent.memory_stream.memories

        if memory_type != "all":
            memories = [m for m in memories if m.memory_type == memory_type]

        result = f"\n🧠 {character_name}'s {memory_type} memories ({len(memories)} total):\n"
        for i, memory in enumerate(memories[-10:]):  # Show last 10
            result += f"{i+1}. [{memory.memory_type}] {memory.description} (importance: {memory.importance_score})\n"

        return result

    def get_character_summary(self, character_name: str) -> str:
        """Get AI-generated summary of character based on their memories"""
        if character_name not in self.scene_manager.agents:
            return f"Character {character_name} not found"

        agent = self.scene_manager.agents[character_name]
        summary = agent.get_summary()

        return f"\n📝 Current summary of {character_name}:\n{summary}\n"

def main():
    print("🎭 Advanced Multi-Agent Roleplay with Stanford Memory Architecture")
    print("=" * 70)
    print("This implements Stanford's proven memory system:")
    print("• Memory Stream: observations, reflections, plans")
    print("• Smart Retrieval: recency + importance + relevance")
    print("• Auto Reflection: generates insights when importance threshold hit")
    print("• Natural Forgetting: older memories become less accessible")
    print()
    print("🎯 COMMANDS:")
    print("  talk <character> <message>     - Character responds using their memories")
    print("  chat <char1> <char2> <context> - Two characters interact")
    print("  time <hours>                   - Advance time, triggers planning")
    print("  memories <character> [type]    - Show character's memories")
    print("  summary <character>            - AI summary of character")
    print("  status                         - Show scene status")
    print("  quit                           - Exit")
    print()

    if not os.getenv("OPENAI_API_KEY"):
        print("⚠️ Set OPENAI_API_KEY environment variable to use real LLMs")
        print()

    system = EnhancedRoleplaySystem()

    # Give agents some initial experiences
    print("🌱 Setting up initial memories...")
    system.scene_manager.agents["Alice"].perceive("I spilled coffee on my notes yesterday - so embarrassing")
    system.scene_manager.agents["Alice"].perceive("Emma helped me clean up and was really sweet about it")
    system.scene_manager.agents["Bob"].perceive("Shipped a major feature at work - feeling accomplished")
    system.scene_manager.agents["Emma"].perceive("A shy regular (Alice) has been coming in every day this week")
    print("✓ Initial memories established")
    print()

    print("🧪 TRY THESE EXPERIMENTS:")
    print("1. talk Alice How are you feeling today?")
    print("2. time 2                    (advance time to trigger reflection)")
    print("3. memories Alice reflection (see generated insights)")
    print("4. chat Alice Emma You both seem to be here often")
    print("5. summary Alice             (see how memories shaped character)")
    print()

    while True:
        try:
            command = input("> ").strip()

            if command == "quit":
                print("👋 Goodbye!")
                break
            elif command == "status":
                print(f"\n📍 Scene: {system.scene_manager.scene_state['location']}")
                print(f"⏰ Time: {system.scene_manager.scene_state['time']}")
                print(f"👥 Characters: {', '.join(system.scene_manager.characters.keys())}")
                for name, agent in system.scene_manager.agents.items():
                    mem_count = len(agent.memory_stream.memories)
                    reflections = len([m for m in agent.memory_stream.memories if m.memory_type == "reflection"])
                    print(f"  {name}: {mem_count} memories ({reflections} reflections)")
                print()

            elif command.startswith("talk "):
                parts = command.split(" ", 2)
                if len(parts) >= 3:
                    character, message = parts[1], parts[2]
                    print(f"\n🗣️ You to {character}: {message}")
                    response = system.get_character_response(character, message)
                    print(f"💬 {character}: {response}\n")
                else:
                    print("❓ Usage: talk <character> <message>")

            elif command.startswith("chat "):
                parts = command.split(" ", 3)
                if len(parts) >= 4:
                    char1, char2, context = parts[1], parts[2], parts[3]
                    print(f"\n🎬 Setting up interaction: {context}")
                    result = system.character_chat(char1, char2, context)
                    print(result)
                else:
                    print("❓ Usage: chat <character1> <character2> <context>")

            elif command.startswith("time "):
                try:
                    hours = int(command.split()[1])
                    result = system.advance_scene_time(hours)
                    print(result)
                    # Show what characters are planning
                    for name, agent in system.scene_manager.agents.items():
                        plan = agent.plan_day()
                        print(f"📅 {name}'s plan: {plan[0] if plan else 'No specific plans'}")
                except (IndexError, ValueError):
                    print("❓ Usage: time <hours>")

            elif command.startswith("memories "):
                parts = command.split()
                character = parts[1] if len(parts) > 1 else ""
                memory_type = parts[2] if len(parts) > 2 else "all"
                if character:
                    result = system.get_character_memories(character, memory_type)
                    print(result)
                else:
                    print("❓ Usage: memories <character> [observation/reflection/plan/all]")

            elif command.startswith("summary "):
                character = command.split()[1] if len(command.split()) > 1 else ""
                if character:
                    result = system.get_character_summary(character)
                    print(result)
                else:
                    print("❓ Usage: summary <character>")

            else:
                print("❓ Commands: talk, chat, time, memories, summary, status, quit")

        except KeyboardInterrupt:
            print("\n👋 Goodbye!")
            break
        except Exception as e:
            print(f"💥 Error: {e}")


if __name__ == "__main__":
    main()
3
components/__init__.py
Normal file
@@ -0,0 +1,3 @@
from .async_element import AsyncElement

__all__ = ['AsyncElement']
45
components/async_element.py
Normal file
@@ -0,0 +1,45 @@
from abc import ABC, abstractmethod
from typing import Self, Any, Optional

from nicegui import ui


class AsyncElement(ui.element, ABC):
    """Base class for UI elements with async initialization"""
    dialog: ui.dialog | None

    def __init__(self, tag: str = 'div', dialog: Optional[ui.dialog] = None) -> None:
        super().__init__(tag)
        self.dialog = dialog

    @abstractmethod
    async def build(self, *args, **kwargs) -> None:
        """Build/set up the element - must be implemented by subclasses"""

    @classmethod
    async def create(cls, *args, **kwargs) -> Self:
        """Factory method to create and build an element instance"""
        instance = cls()
        await instance.build(*args, **kwargs)
        return instance

    @classmethod
    async def as_dialog(cls, dialog_classes: str = '', card_classes: str = '', *args, **kwargs) -> Any:
        """Create the element inside a dialog and return the submitted result"""
        with ui.dialog().classes(dialog_classes) as dialog:
            with ui.card().classes(card_classes):
                instance = cls(dialog=dialog)
                await instance.build(*args, **kwargs)

        result = await dialog
        dialog.clear()
        return result

    def submit(self, result: Any) -> None:
        """Submit a result and close the dialog"""
        if self.dialog:
            self.dialog.submit(result)

    def close_dialog(self) -> None:
        """Close the dialog without submitting a result"""
        if self.dialog:
            self.dialog.close()
4
llm_connector/__init__.py
Normal file
@@ -0,0 +1,4 @@
from .llm import get_response
from .datatypes import LLMBackend, LLMMessage

__all__ = ['get_response', 'LLMBackend', 'LLMMessage']
12
llm_connector/datatypes.py
Normal file
@@ -0,0 +1,12 @@
from typing import TypedDict, Literal


class LLMBackend(TypedDict):
    base_url: str
    api_token: str
    model: str


class LLMMessage(TypedDict):
    role: Literal["system", "assistant", "user"]
    content: str
118
llm_connector/llm.py
Normal file
@@ -0,0 +1,118 @@
import json
import logging
from typing import AsyncGenerator, Dict, List

import httpx

from .datatypes import LLMBackend, LLMMessage

logger = logging.getLogger(__name__)


async def get_response(backend: LLMBackend, messages: List[LLMMessage], stream: bool = False) -> AsyncGenerator[Dict[str, str], None]:
    """Async generator yielding {'content': ...} and {'reasoning': ...} chunks.

    With stream=False, a single {'content': ...} chunk carries the full response.
    """
    try:
        # Prepare the request parameters
        request_params = {
            "model": backend["model"],
            "messages": messages,
            "stream": stream,
        }
        # Prepare headers; only send Authorization if a token is configured
        headers = {
            "Content-Type": "application/json"
        }
        if backend["api_token"]:
            headers['Authorization'] = f"Bearer {backend['api_token']}"

        # Log the request (avoid logging headers, which contain the bearer token)
        logger.debug("LLM request params: %s", request_params)

        # Create httpx client
        async with httpx.AsyncClient(timeout=30.0) as client:
            url = f"{backend['base_url']}/chat/completions"

            if stream:
                # Stream the response as server-sent events
                async with client.stream(
                    "POST",
                    url,
                    headers=headers,
                    json=request_params,
                ) as response:
                    response.raise_for_status()

                    async for line in response.aiter_lines():
                        line = line.strip()

                        # Skip empty lines and non-data lines
                        if not line or not line.startswith("data: "):
                            continue

                        # Remove "data: " prefix
                        data = line[6:]

                        # Check for stream end
                        if data == "[DONE]":
                            break

                        try:
                            # Parse JSON chunk
                            chunk_data = json.loads(data)

                            if "choices" in chunk_data and chunk_data["choices"]:
                                choice = chunk_data["choices"][0]
                                delta = choice.get("delta", {})

                                # Handle reasoning content (for models that support it)
                                if "reasoning_content" in delta and delta["reasoning_content"]:
                                    yield {'reasoning': delta["reasoning_content"]}

                                # Handle regular content
                                if "content" in delta and delta["content"]:
                                    yield {'content': delta["content"]}

                        except json.JSONDecodeError:
                            # Skip malformed JSON chunks
                            continue
            else:
                # Non-streaming response
                response = await client.post(
                    url,
                    headers=headers,
                    json=request_params,
                )
                response.raise_for_status()

                response_data = response.json()
                content = ""

                if "choices" in response_data and response_data["choices"]:
                    message = response_data["choices"][0].get("message", {})
                    content = message.get("content", "")

                # Yield as a dictionary to match the streaming format
                if content:
                    yield {'content': content}

    # On errors, log and end the generator; callers simply see no further chunks
    except httpx.HTTPStatusError as e:
        logger.error("HTTP error getting LLM response: %s - %s", e.response.status_code, e.response.text)

    except httpx.RequestError as e:
        logger.error("Request error getting LLM response: %s", str(e))

    except Exception as e:
        logger.error("Error getting LLM response: %s", str(e))


async def _empty_async_generator() -> AsyncGenerator[str, None]:
    """Helper for an empty async generator (currently unused)"""
    if False:
        yield ""
22
main.py
Normal file
@@ -0,0 +1,22 @@
#!/usr/bin/env python3
from dotenv import load_dotenv

from nicegui import ui

from pages.page_main import MainPage

load_dotenv()

# Run the application
if __name__ in {"__main__", "__mp_main__"}:
    @ui.page('/')
    async def _():
        await MainPage.create()

    ui.run(
        title='LivingAgents',
        favicon='🔒',
        show=False,
        dark=False,
        port=8080
    )
367
pages/page_main.py
Normal file
@@ -0,0 +1,367 @@
import os
from typing import Optional
from nicegui import ui
from components import AsyncElement
from llm_connector import LLMBackend


class MainPage(AsyncElement):

    backend: LLMBackend
    scene_manager = None  # Will hold SceneManager instance
    selected_character: Optional[str] = None
    memory_viewer = None
    chat_container = None
    scene_info_container = None
    character_summary = None

    async def build(self):  # pylint: disable=W0221

        backend: LLMBackend = {'base_url': os.environ['BACKEND_BASE_URL'],
                               'api_token': os.environ['BACKEND_API_TOKEN'],
                               'model': os.environ['BACKEND_MODEL']}

        self.backend = backend

        # Initialize mock scene manager (will be replaced with real one)
        await self._initialize_scene()

        # Header
        with ui.header().classes('bg-gradient-to-r from-purple-600 to-indigo-600 text-white'):
            ui.label('🎭 Living Agents').classes('text-2xl font-bold')
            ui.label('Multi-Agent Roleplay with Stanford Memory Architecture').classes(
                'text-sm opacity-90')

        self.classes('w-full')
        with self:

            # Main container with three columns
            with ui.row().classes('w-full p-4 gap-4'):

                # Left Panel - Scene Control & Characters
                with ui.column().classes('w-1/4 gap-4'):

                    # Scene Information Card
                    with ui.card().classes('w-full'):
                        ui.label('📍 Scene Control').classes('text-lg font-bold mb-2')

                        self.scene_info_container = ui.column().classes('w-full gap-2')
                        with self.scene_info_container:
                            self._create_scene_info()

                        ui.separator()

                        # Time controls
                        with ui.row().classes('w-full gap-2 mt-2'):
                            ui.button('⏰ +1 Hour', on_click=lambda: self._advance_time(1)).classes('flex-1')
                            ui.button('📅 +1 Day', on_click=lambda: self._advance_time(24)).classes('flex-1')

                    # Characters List
                    with ui.card().classes('w-full'):
                        ui.label('👥 Characters').classes('text-lg font-bold mb-2')

                        # Character cards
                        with ui.column().classes('w-full gap-2'):
                            # Alice
                            with ui.card().classes('w-full p-3 cursor-pointer hover:bg-gray-50').on(
                                    'click', lambda: self._select_character('Alice')):
                                with ui.row().classes('items-center gap-2'):
                                    ui.icon('person', size='sm').classes('text-purple-500')
                                    with ui.column().classes('flex-1'):
                                        ui.label('Alice').classes('font-semibold')
                                        ui.label('Literature Student, 23').classes(
                                            'text-xs text-gray-500')
                                        with ui.row().classes('gap-1 mt-1'):
                                            ui.badge('📚 10 memories', color='purple').classes('text-xs')
                                            ui.badge('💭 0 reflections', color='indigo').classes('text-xs')

                            # Bob
                            with ui.card().classes('w-full p-3 cursor-pointer hover:bg-gray-50').on(
                                    'click', lambda: self._select_character('Bob')):
                                with ui.row().classes('items-center gap-2'):
                                    ui.icon('person', size='sm').classes('text-blue-500')
                                    with ui.column().classes('flex-1'):
                                        ui.label('Bob').classes('font-semibold')
                                        ui.label('Software Developer, 28').classes(
                                            'text-xs text-gray-500')
                                        with ui.row().classes('gap-1 mt-1'):
                                            ui.badge('📚 8 memories', color='purple').classes('text-xs')
                                            ui.badge('💭 0 reflections', color='indigo').classes('text-xs')

                            # Emma
                            with ui.card().classes('w-full p-3 cursor-pointer hover:bg-gray-50').on(
                                    'click', lambda: self._select_character('Emma')):
                                with ui.row().classes('items-center gap-2'):
                                    ui.icon('person', size='sm').classes('text-pink-500')
                                    with ui.column().classes('flex-1'):
                                        ui.label('Emma').classes('font-semibold')
                                        ui.label('Barista & Artist, 25').classes(
                                            'text-xs text-gray-500')
                                        with ui.row().classes('gap-1 mt-1'):
                                            ui.badge('📚 7 memories', color='purple').classes('text-xs')
                                            ui.badge('💭 0 reflections', color='indigo').classes('text-xs')

                    # Character Summary - placed under Characters
                    with ui.card().classes('w-full'):
                        ui.label('📝 Character Summary').classes('text-lg font-bold mb-2')
                        self.character_summary = ui.column().classes('w-full')
                        with self.character_summary:
                            ui.label('Select a character to see their summary').classes(
                                'text-sm text-gray-500 italic')

                # Middle Panel - Interaction & Chat
                with ui.column().classes('w-1/2 gap-4'):

                    # Interaction Controls
                    with ui.card().classes('w-full'):
                        ui.label('💬 Interactions').classes('text-lg font-bold mb-2')

                        # Character-to-User interaction
                        with ui.column().classes('w-full gap-2'):
                            ui.label('Talk to Character').classes('font-semibold text-sm')
                            with ui.row().classes('w-full gap-2'):
                                self.user_input = ui.input(
                                    placeholder='Say something to the selected character...'
                                ).classes('flex-1')
                                ui.button('Send', on_click=self._send_to_character).props(
                                    'icon=send color=primary')

                        ui.separator()

                        # Character-to-Character interaction
                        with ui.column().classes('w-full gap-2 mt-2'):
                            ui.label('Character Interaction').classes('font-semibold text-sm')
                            with ui.row().classes('w-full gap-2'):
                                self.char1_select = ui.select(
                                    ['Alice', 'Bob', 'Emma'],
                                    label='Character 1',
                                    value='Alice'
                                ).classes('flex-1')
                                self.char2_select = ui.select(
                                    ['Alice', 'Bob', 'Emma'],
                                    label='Character 2',
                                    value='Bob'
                                ).classes('flex-1')
                            self.interaction_context = ui.input(
                                placeholder='Context for interaction...'
                            ).classes('w-full')
                            ui.button(
                                'Make them interact',
                                on_click=self._character_interaction
                            ).props('icon=forum color=secondary').classes('w-full')

                    # Chat History
                    with ui.card().classes('w-full flex-1'):
                        with ui.row().classes('w-full items-center mb-2'):
                            ui.label('🗨️ Conversation History').classes('text-lg font-bold')
                            ui.space()
                            ui.button(icon='delete', on_click=self._clear_chat).props('flat round size=sm')

                        # Scrollable chat container
                        with ui.scroll_area().classes('w-full h-96 border rounded p-2'):
                            self.chat_container = ui.column().classes('w-full gap-2')
                            with self.chat_container:
                                # Welcome message
                                with ui.chat_message(name='System', sent=False).classes('w-full'):
                                    ui.label(
                                        'Welcome to the Living Agents roleplay system! '
                                        'Select a character and start interacting.'
                                    ).classes('text-sm')

                # Right Panel - Memory Stream
                with ui.column().classes('w-1/4 gap-4'):

                    # Memory Stream Viewer
                    with ui.card().classes('w-full flex-1'):
                        with ui.row().classes('w-full items-center mb-2'):
                            ui.label('🧠 Memory Stream').classes('text-lg font-bold')
                            ui.space()
                            # Memory type filter
                            self.memory_filter = ui.select(
                                ['all', 'observation', 'reflection', 'plan'],
                                value='all',
                                on_change=self._update_memory_view
                            ).props('dense outlined').classes('w-24')

                        # Scrollable memory list
                        with ui.scroll_area().classes('w-full h-96 border rounded p-2'):
                            self.memory_viewer = ui.column().classes('w-full gap-2')
                            with self.memory_viewer:
                                ui.label('Select a character to view memories').classes('text-sm text-gray-500 italic')

        # Footer with stats
        with ui.footer().classes('bg-gray-100 text-gray-600 text-sm'):
            with ui.row().classes('w-full justify-center items-center gap-4'):
                ui.label('🎯 Stanford Memory Architecture')
                ui.label('|')
                self.stats_label = ui.label('Total Memories: 0 | Reflections: 0')
                ui.label('|')
                ui.label('⚡ Powered by Custom LLM Connector')

    def _create_scene_info(self):
        """Create scene information display"""
        with ui.row().classes('w-full justify-between'):
            ui.label('Location:').classes('text-sm font-semibold')
            ui.label('Cozy Coffee Shop').classes('text-sm')
        with ui.row().classes('w-full justify-between'):
            ui.label('Time:').classes('text-sm font-semibold')
            ui.label('2:30 PM').classes('text-sm')
        with ui.row().classes('w-full justify-between'):
            ui.label('Atmosphere:').classes('text-sm font-semibold')
            ui.label('Quiet and peaceful').classes('text-sm')

    async def _initialize_scene(self):
        """Initialize the scene with mock data (will be replaced with real SceneManager)"""
        # This will be replaced with actual SceneManager initialization
        ui.notify('🎬 Scene initialized with 3 characters', type='positive')

    async def _select_character(self, character_name: str):
        """Select a character and update UI"""
        self.selected_character = character_name
        ui.notify(f'Selected: {character_name}', type='info')

        # Update character summary
        self.character_summary.clear()
        with self.character_summary:
            ui.label(f'{character_name}').classes('font-bold text-lg')
            ui.separator()

            if character_name == 'Alice':
                ui.label('Age: 23').classes('text-sm')
                ui.label('Occupation: Graduate student').classes('text-sm')
                ui.label('Personality: Introverted, observant, loves mystery novels').classes('text-sm mt-2')
                ui.label('Current Goal: Finish thesis chapter').classes('text-sm text-blue-600 mt-2')
            elif character_name == 'Bob':
                ui.label('Age: 28').classes('text-sm')
                ui.label('Occupation: Senior Developer').classes('text-sm')
                ui.label('Personality: Confident, helpful, technical').classes('text-sm mt-2')
                ui.label('Current Goal: Launch new feature').classes('text-sm text-blue-600 mt-2')
            elif character_name == 'Emma':
                ui.label('Age: 25').classes('text-sm')
                ui.label('Occupation: Barista & Art Student').classes('text-sm')
                ui.label('Personality: Energetic, social, creative').classes('text-sm mt-2')
                ui.label('Current Goal: Organize art show').classes('text-sm text-blue-600 mt-2')

        # Update memory viewer
        await self._update_memory_view()

    async def _update_memory_view(self):
        """Update the memory stream viewer"""
        if not self.selected_character:
            return

        self.memory_viewer.clear()
        with self.memory_viewer:
            # Mock memories for demonstration
            memories = [
                ('observation', 'Arrived at the coffee shop', 8, '10:00 AM'),
                ('observation', 'Ordered my usual latte', 3, '10:05 AM'),
                ('observation', 'Saw a familiar face by the window', 6, '10:30 AM'),
                ('reflection', 'I seem to come here when I need to focus', 7, '11:00 AM'),
                ('plan', 'Work on thesis for 2 hours', 5, '11:30 AM'),
            ]

            filter_type = self.memory_filter.value
            for mem_type, description, importance, time in memories:
                if filter_type == 'all' or filter_type == mem_type:
                    with ui.card().classes('w-full p-2'):
                        with ui.row().classes('w-full items-start gap-2'):
                            # Memory type icon
                            if mem_type == 'observation':
                                ui.icon('visibility', size='xs').classes('text-blue-500 mt-1')
                            elif mem_type == 'reflection':
                                ui.icon('psychology', size='xs').classes('text-purple-500 mt-1')
                            else:
                                ui.icon('event', size='xs').classes('text-green-500 mt-1')

                            with ui.column().classes('flex-1'):
                                ui.label(description).classes('text-sm')
                                with ui.row().classes('gap-2 mt-1'):
                                    ui.badge(f'⭐ {importance}', color='orange').classes('text-xs')
                                    ui.label(time).classes('text-xs text-gray-500')

    async def _send_to_character(self):
        """Send message to selected character"""
        if not self.selected_character:
            ui.notify('Please select a character first', type='warning')
            return

        if not self.user_input.value:
            return

        message = self.user_input.value
        self.user_input.value = ''

        # Add user message to chat
        with self.chat_container:
            with ui.chat_message(name='You', sent=True).classes('w-full'):
                ui.label(message).classes('text-sm')

        # Mock response (will be replaced with actual agent response)
        with self.chat_container:
            with ui.chat_message(name=self.selected_character, sent=False).classes('w-full'):
                spinner = ui.spinner('dots')

        # Simulate thinking
        await ui.run_javascript('window.scrollTo(0, document.body.scrollHeight)')
        ui.notify(f'🧠 {self.selected_character} is thinking...', type='info')

        # Mock response after a short delay (one-shot timer; ui.timer is not awaitable)
        ui.timer(1.5, lambda: self._add_character_response(spinner), once=True)

    def _add_character_response(self, spinner):
        """Add character response to chat"""
        spinner.delete()
        parent = spinner.parent_slot.parent
        with parent:
            if self.selected_character == 'Alice':
                ui.label("*nervously adjusts glasses* Oh, um, hello there. I was just working on my thesis chapter about Victorian gothic literature. The coffee here helps me concentrate.").classes('text-sm')
            elif self.selected_character == 'Bob':
                ui.label("Hey! Yeah, I'm actually debugging some code right now. This new feature is giving me some trouble, but I think I'm close to solving it. How's your day going?").classes('text-sm')
            else:
                ui.label("Hi! Welcome to our little coffee shop! I just finished a new sketch during my break - been trying to capture the afternoon light through the windows. Can I get you anything?").classes('text-sm')

    async def _character_interaction(self):
        """Make two characters interact"""
        char1 = self.char1_select.value
        char2 = self.char2_select.value
        context = self.interaction_context.value or "meeting at the coffee shop"

        if char1 == char2:
            ui.notify("Characters can't interact with themselves", type='warning')
            return

        # Add interaction to chat
        with self.chat_container:
            with ui.chat_message(name='Scene', sent=False).classes('w-full'):
                ui.label(f'🎬 {char1} and {char2} interact: {context}').classes('text-sm italic text-gray-600')

            # Mock interaction
            with ui.chat_message(name=char1, sent=False).classes('w-full'):
                ui.label("Oh, hi there! I didn't expect to see you here today.").classes('text-sm')

            with ui.chat_message(name=char2, sent=False).classes('w-full'):
                ui.label("Hey! Yeah, this is my usual spot. How have you been?").classes('text-sm')

        ui.notify(f'💬 {char1} and {char2} had an interaction', type='positive')

    def _advance_time(self, hours: int):
        """Advance scene time"""
        ui.notify(f'⏰ Advanced time by {hours} hour(s)', type='info')

        # Update scene info
        self.scene_info_container.clear()
        with self.scene_info_container:
            self._create_scene_info()

        # Add time advancement to chat
        with self.chat_container:
            with ui.chat_message(name='System', sent=False).classes('w-full'):
                ui.label(f'⏰ Time advanced by {hours} hour(s). Characters update their plans...').classes('text-sm italic text-gray-600')

    def _clear_chat(self):
        """Clear chat history"""
        self.chat_container.clear()
        with self.chat_container:
            with ui.chat_message(name='System', sent=False).classes('w-full'):
                ui.label('Chat history cleared. Ready for new interactions!').classes('text-sm')
        ui.notify('Chat cleared', type='info')
37
project_description.md
Normal file
@@ -0,0 +1,37 @@
# Multi-Agent Roleplay System with Stanford Memory Architecture

This is a Python-based multi-agent roleplay system that implements Stanford's proven "Generative Agents" memory architecture for creating believable AI characters with long-term memory, reflection, and emergent behaviors.

## Core Architecture

### Memory System (Stanford's Approach)
- **Memory Stream**: Each agent maintains observations, reflections, and plans
- **Smart Retrieval**: Combines recency (exponential decay), importance (1-10 scale), and relevance (cosine similarity) - see the scoring sketch below
- **Auto-Reflection**: When the importance threshold (150) is hit, generates higher-level insights
- **Natural Forgetting**: Older memories become less accessible over time
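
A condensed sketch of the retrieval scoring, mirroring `MemoryStream.retrieve_memories` in agents.py (the three components are equally weighted there; the standalone `retrieval_score` helper is illustrative, not part of the codebase):

```python
from datetime import datetime

from sklearn.metrics.pairwise import cosine_similarity


def retrieval_score(memory, query_embedding, now: datetime) -> float:
    """Score = recency + importance + relevance, as in agents.py."""
    # Recency: exponential decay per hour since the memory was last accessed
    hours = (now - memory.last_accessed).total_seconds() / 3600
    recency = 0.995 ** hours
    # Importance: the LLM-assigned 1-10 score, normalized
    importance = memory.importance_score / 10.0
    # Relevance: cosine similarity between query and memory embeddings
    relevance = cosine_similarity([query_embedding], [memory.embedding])[0][0]
    return recency + importance + relevance
```
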
### Agent Components
- **Character**: Core personality, background, relationships, goals
- **CharacterAgent**: Handles memory, planning, and reactions based on the Stanford architecture
- **MemoryStream**: Implements the full memory/reflection/planning system
- **SceneManager**: Orchestrates multi-agent interactions and scene state

## Key Features
- Real-time character interactions with persistent memory
- Automatic insight generation (reflections) from accumulated experiences
- Time advancement that triggers planning and memory decay
- Character-to-character conversations with relationship memory
- Emergent behaviors through memory-driven decision making

## Technology Stack
- Python 3.12+ (the project pins Python 3.13; pyproject.toml requires >=3.12)
- OpenAI GPT API (gpt-3.5-turbo for agents, gpt-4o-mini for scene management)
- OpenAI Embeddings API for memory relevance scoring
- scikit-learn for cosine similarity calculations
- Rich character personalities with background relationships and goals

## Research Basis
Based on Stanford's 2023 "Generative Agents" paper, which created 25 AI agents in a virtual town who formed relationships, spread information, and coordinated group activities entirely through emergent behavior.

## Development Focus
The system emphasizes psychological realism over game mechanics - agents should behave like real people with genuine memory limitations, emotional consistency, and relationship development over time.
9
pyproject.toml
Normal file
@@ -0,0 +1,9 @@
[project]
name = "livingagents"
version = "0.1.0"
description = "Multi-agent roleplay system with Stanford's Generative Agents memory architecture"
readme = "README.md"
requires-python = ">=3.12"
dependencies = [
    "nicegui>=2.23.3",
    "httpx",           # used by llm_connector
    "python-dotenv",   # used by main.py
    "openai",          # used by the legacy agents.py
    "scikit-learn",    # cosine similarity in agents.py
]