# Prompt Management System
A centralized system for managing LLM prompts with templating, structured outputs, and easy editing capabilities.
## 🎯 System Goals
- **Centralized Management**: All prompts in one organized location
- **Easy Editing**: Modify prompts without touching code
- **Template Support**: Dynamic variable substitution
- **Structured Outputs**: JSON schemas for reliable responses
- **Type Safety**: Validation of required variables
## 📁 File Organization
```
living_agents/prompts/
├── react_to_situation.md                  # Character response generation
├── score_observation_importance.md        # Observation memory scoring
├── score_reflection_importance.md         # Reflection memory scoring
├── score_plan_importance.md               # Plan memory scoring
├── extract_character_from_memories.md     # Character data extraction
├── extract_character_from_memories.json   # Character data schema
├── generate_reflection.md                 # Reflection generation prompt
├── generate_reflection.json               # Reflection schema
├── assess_trait_impact.md                 # Trait analysis prompt
├── assess_trait_impact.json               # Trait update schema
└── character_summary.md                   # Character summary generation
```
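Templates and schemas are paired by filename stem (for example, `assess_trait_impact.md` with `assess_trait_impact.json`). A minimal sketch of how that pairing could be discovered with a `pathlib`-based loader; the function name and directory constant are illustrative, not the actual `PromptManager` internals:

```python
import json
from pathlib import Path

# Assumed location, matching the tree above.
PROMPTS_DIR = Path("living_agents/prompts")

def discover_prompts(prompts_dir: Path = PROMPTS_DIR) -> dict:
    """Pair each .md template with the .json schema sharing its stem, if one exists."""
    prompts = {}
    for md_file in sorted(prompts_dir.glob("*.md")):
        schema_file = md_file.with_suffix(".json")
        prompts[md_file.stem] = {
            "template": md_file.read_text(encoding="utf-8"),
            "schema": json.loads(schema_file.read_text(encoding="utf-8"))
            if schema_file.exists() else None,
        }
    return prompts
```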
## 📝 Prompt Templates
### Template Syntax
Use `{{variable_name}}` for dynamic substitution:
```markdown
You are {{character_name}}.
Age: {{character_age}}
Personality: {{character_personality}}

Relevant memories:
{{memory_context}}

Current situation: {{situation}}

Respond as {{character_name}} in first person past tense.
```
### Variable Extraction
The system automatically detects the variables each template requires (see the sketch after this list):

- Parses `{{variable}}` patterns
- Validates that all required variables are provided
- Warns about missing or extra variables
- Ensures template completeness
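
A minimal sketch of that detection and substitution, assuming a simple regex over `{{variable}}` placeholders; the helper names are illustrative rather than the actual `PromptManager` internals:

```python
import re

VARIABLE_PATTERN = re.compile(r"\{\{(\w+)\}\}")

def extract_variables(template: str) -> set[str]:
    """Return the set of {{variable}} names referenced in a template."""
    return set(VARIABLE_PATTERN.findall(template))

def render(template: str, variables: dict) -> str:
    """Substitute variables, failing on missing ones and warning about extras."""
    required = extract_variables(template)
    missing = required - set(variables.keys())
    extra = set(variables.keys()) - required
    if missing:
        raise ValueError(f"Missing template variables: {sorted(missing)}")
    if extra:
        print(f"Warning: unused variables provided: {sorted(extra)}")
    return VARIABLE_PATTERN.sub(lambda m: str(variables[m.group(1)]), template)
```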
## 🏗️ JSON Schemas
### Structured Output Support
Pair `.md` prompts with `.json` schemas for reliable structured responses:

**Example Schema** (`assess_trait_impact.json`):

```json
{
  "type": "object",
  "properties": {
    "trait_updates": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "trait_name": {
            "type": "string",
            "pattern": "^[a-zA-Z]+$"
          },
          "action": {
            "type": "string",
            "enum": ["create", "strengthen", "weaken"]
          },
          "new_strength": {
            "type": "integer",
            "minimum": 1,
            "maximum": 10
          },
          "description": {
            "type": "string"
          },
          "reasoning": {
            "type": "string"
          }
        }
      }
    }
  }
}
```
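To make the constraints concrete, a response that satisfies this schema would parse into Python as something like the following (the values are purely illustrative):

```python
# Illustrative only: a parsed LLM response conforming to assess_trait_impact.json.
example_response = {
    "trait_updates": [
        {
            "trait_name": "shy",          # matches the ^[a-zA-Z]+$ pattern
            "action": "weaken",           # one of "create", "strengthen", "weaken"
            "new_strength": 6,            # integer between 1 and 10
            "description": "Less hesitant in one-on-one conversations",
            "reasoning": "The conversation went better than expected",
        }
    ]
}
```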
### Schema Benefits
- **Guaranteed Structure**: Always get the expected JSON format
- **Type Validation**: Ensures correct data types
- **Field Requirements**: Specify required vs optional fields
- **Value Constraints**: Set min/max values, string lengths
- **Consistent Parsing**: No more JSON parsing errors
## 🔧 API Usage
### Basic Prompt Retrieval
```python
from living_agents.prompt_manager import PromptManager

# Simple template substitution
prompt = PromptManager.get_prompt('react_to_situation', {
    'character_name': 'Alice',
    'character_age': 23,
    'situation': 'Someone asks how you are feeling'
})
```
### Structured Output
```python
from living_agents.prompt_manager import PromptManager

# Get both prompt and schema
prompt, schema = PromptManager.get_prompt_with_schema('assess_trait_impact', {
    'observation': 'I felt nervous talking to Emma',
    'traits_summary': 'shy (8/10), romantic (7/10)'
})

# Use with LLM structured output
if schema:
    response = await llm.get_structured_response(messages, schema)
else:
    response = await llm.chat(messages)
```
### Development Helpers
```python
# List all available prompts
prompts = PromptManager.list_prompts()
print(prompts)  # Shows prompt names and required variables

# Get prompt details
info = PromptManager.get_prompt_info('react_to_situation')
print(f"Variables needed: {info['variables']}")

# Reload during development
PromptManager.reload_prompts()  # Refresh from files
```
## 🎯 Prompt Design Guidelines
### Template Best Practices
- **Clear Instructions**: Specify exactly what you want
- **Consistent Formatting**: Use standard variable naming
- **Context Provision**: Give the LLM the necessary background
- **Output Specification**: Define the expected response format
- **Example Inclusion**: Show the desired output style
### Schema Design
- **Minimal Required Fields**: Only require truly essential data
- **Reasonable Constraints**: Set realistic min/max values
- **Clear Descriptions**: Help the LLM understand each field's purpose
- **Flexible Structure**: Allow for natural language variation
- **Error Prevention**: Design to minimize parsing failures
## 🔄 System Benefits
### For Developers
- **Easy Maintenance**: Edit prompts without code changes
- **Type Safety**: Automatic variable validation
- **Consistent Structure**: Standardized prompt format
- **Debugging Support**: Clear error messages for missing variables
### For LLM Performance
- **Structured Outputs**: Eliminates JSON parsing errors
- **Consistent Prompting**: Reduces response variance
- **Context Optimization**: Templates ensure complete context
- **Schema Guidance**: Helps the LLM generate the correct format

This system makes prompt management scalable, maintainable, and reliable across the entire roleplay system.