# Memory Architecture

The memory system is inspired by Stanford's "Generative Agents" research, implementing a sophisticated memory model that enables realistic long-term character development.

## 🧠 Memory Types

### Observations

- **What**: Direct experiences and perceptions
- **Examples**: "I spilled coffee", "Emma smiled at me", "It's raining outside"
- **Importance**: Usually 1-5 for mundane events, 8-10 for significant experiences
- **Purpose**: Raw building blocks of character experience

### Reflections

- **What**: Higher-level insights generated from observation patterns
- **Examples**: "I have romantic feelings for Emma", "I'm naturally shy in social situations"
- **Importance**: Usually 6-10 (insights are more valuable than raw observations)
- **Purpose**: Character self-understanding and behavioral consistency

### Plans

- **What**: Future intentions and goals
- **Examples**: "I want to ask Emma about her art", "I should finish my thesis chapter"
- **Importance**: 3-10 depending on goal significance
- **Purpose**: Drive future behavior and maintain character consistency

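The three memory types above can be modeled as a simple enum. A minimal sketch (the `MemoryType` name and the example tuples are illustrative, not taken from the codebase):

```python
from enum import Enum

class MemoryType(Enum):
    """The three kinds of memory described above."""
    OBSERVATION = "observation"  # direct experiences, importance usually 1-5
    REFLECTION = "reflection"    # generated insights, importance usually 6-10
    PLAN = "plan"                # future intentions, importance 3-10

# Hypothetical example memories, one of each type: (type, description, importance)
examples = [
    (MemoryType.OBSERVATION, "I spilled coffee", 2),
    (MemoryType.REFLECTION, "I'm naturally shy in social situations", 8),
    (MemoryType.PLAN, "I want to ask Emma about her art", 6),
]
```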
## 🔍 Memory Retrieval

### Smart Retrieval Algorithm

Memories are scored using three factors:

1. **Recency** - Recent memories are more accessible

   ```python
   recency = 0.995 ** hours_since_last_accessed
   ```

2. **Importance** - Significant events stay memorable longer

   ```python
   importance = memory.importance_score / 10.0
   ```

3. **Relevance** - Contextually similar memories surface together

   ```python
   relevance = cosine_similarity(query_embedding, memory_embedding)
   ```

### Final Score

```python
score = recency + importance + relevance
```

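Putting the three factors together, the retrieval score can be computed like this. A sketch assuming the equal weighting shown above; the function and parameter names are illustrative:

```python
def retrieval_score(hours_since_last_accessed: float,
                    importance_score: int,
                    similarity: float) -> float:
    """Combine recency, importance, and relevance into a single score.

    `similarity` stands in for cosine_similarity(query_embedding,
    memory_embedding), assumed to lie in [0, 1].
    """
    recency = 0.995 ** hours_since_last_accessed
    importance = importance_score / 10.0
    relevance = similarity
    return recency + importance + relevance

# A just-accessed, maximally important, perfectly relevant memory scores 3.0
print(retrieval_score(0, 10, 1.0))  # → 3.0
```

Each factor is normalized to [0, 1], so the final score falls in [0, 3] and no single factor can dominate the others.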
## 🎯 Automatic Reflection Generation

When the accumulated importance of recent memories exceeds a threshold (150):

1. **Analyze Recent Experiences**: Get the last 20 observations
2. **Generate Insights**: Use an LLM to identify patterns and higher-level understanding
3. **Create Reflections**: Store insights as new reflection memories
4. **Link Evidence**: Connect reflections to supporting observations

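The four steps above can be sketched as follows. This is a hedged illustration, not the project's actual code: `generate_insights` stands in for the LLM call, and memories are plain dicts for brevity:

```python
REFLECTION_THRESHOLD = 150  # accumulated importance that triggers reflection

def maybe_reflect(recent_memories, generate_insights):
    """Run steps 1-4 when recent importance passes the threshold.

    `recent_memories` is a list of memory dicts; `generate_insights` is a
    stand-in for the LLM call that returns a list of insight strings.
    """
    if sum(m["importance_score"] for m in recent_memories) < REFLECTION_THRESHOLD:
        return []
    # Step 1: analyze the last 20 observations
    observations = [m for m in recent_memories
                    if m["memory_type"] == "observation"][-20:]
    # Step 2: ask the LLM for patterns and higher-level understanding
    insights = generate_insights(observations)
    # Steps 3-4: store insights as reflections linked to their evidence
    return [
        {"memory_type": "reflection",
         "description": text,
         "related_memories": [m["description"] for m in observations]}
        for text in insights
    ]
```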
## 💾 Memory Storage

Each memory contains:

- `description`: Natural language content
- `creation_time`: When the memory was formed
- `last_accessed`: When it was last retrieved (affects recency)
- `importance_score`: 1-10 significance rating
- `embedding`: Vector representation for similarity matching
- `memory_type`: observation/reflection/plan
- `related_memories`: Links to supporting evidence

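These fields map naturally onto a dataclass. A minimal sketch; the concrete field types (e.g. `list[float]` for the embedding, `int` IDs for links) are assumptions, not the project's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Memory:
    description: str                   # natural language content
    creation_time: datetime            # when the memory was formed
    last_accessed: datetime            # updated on retrieval; drives recency
    importance_score: int              # 1-10 significance rating
    embedding: list[float]             # vector for similarity matching
    memory_type: str                   # "observation" | "reflection" | "plan"
    related_memories: list[int] = field(default_factory=list)  # evidence links

m = Memory("I spilled coffee", datetime.now(), datetime.now(),
           2, [0.1, 0.2], "observation")
```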
## 🔄 Memory Lifecycle

1. **Creation**: New experience becomes observation
2. **Scoring**: LLM rates importance 1-10
3. **Storage**: Added to memory stream with embedding
4. **Retrieval**: Accessed during relevant conversations
5. **Reflection**: Patterns trigger insight generation
6. **Evolution**: Older memories naturally fade unless repeatedly accessed

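Step 6 falls out of the recency factor: with no access, the 0.995-per-hour decay compounds steadily, and each retrieval resets the clock. A quick illustration:

```python
def recency(hours_since_last_accessed: float) -> float:
    """Recency factor from the retrieval formula: 0.995 per hour unaccessed."""
    return 0.995 ** hours_since_last_accessed

# Unaccessed memories fade: roughly 89% recency after a day, 43% after a week
print(round(recency(24), 2), round(recency(24 * 7), 2))  # → 0.89 0.43
```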
This creates realistic, human-like memory behavior where important experiences remain accessible while mundane details naturally fade over time.