# Tool Creation Guide

This guide walks you through creating custom tools for the ArchGPU Frontend platform. Tools are plugins that extend the application with custom AI testing capabilities.

## 🎯 Overview

Tools in ArchGPU Frontend are:

- **Auto-discovered** from the `src/tools/` directory (see the sketch after this list)
- **Self-contained** with their own routing and pages
- **Context-aware** with access to system monitors
- **Easily toggleable** via an `enabled` property
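
The loader itself lives in the application core; conceptually, auto-discovery works like the sketch below. The function name and the use of `pkgutil` are illustrative assumptions, not the actual implementation:

```python
# Hypothetical sketch of tool auto-discovery, assuming each package in
# src/tools/ exposes a BaseTool subclass in its tool.py module.
import importlib
import pkgutil

import tools  # the src/tools package
from tools.base_tool import BaseTool


def discover_tools() -> list[BaseTool]:
    discovered = []
    for module_info in pkgutil.iter_modules(tools.__path__):
        if not module_info.ispkg:
            continue  # only tool packages, not loose modules
        module = importlib.import_module(f'tools.{module_info.name}.tool')
        for obj in vars(module).values():
            if isinstance(obj, type) and issubclass(obj, BaseTool) and obj is not BaseTool:
                tool = obj()
                if tool.enabled:  # disabled tools are skipped entirely
                    discovered.append(tool)
    return discovered
```

Only tools whose `enabled` property returns `True` end up in the sidebar, which is what the enable/disable toggle relies on.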

## 🚀 Quick Start

### 1. Create Tool Structure

```bash
mkdir src/tools/my_tool
touch src/tools/my_tool/__init__.py
touch src/tools/my_tool/tool.py
```

### 2. Basic Tool Implementation

Create `src/tools/my_tool/tool.py`:

```python
from typing import Dict, Callable, Awaitable

from nicegui import ui

from tools.base_tool import BaseTool, BasePage


class MyTool(BaseTool):
    @property
    def name(self) -> str:
        return "My Testing Tool"

    @property
    def description(self) -> str:
        return "A custom tool for AI testing"

    @property
    def icon(self) -> str:
        return "science"  # Material Design icon name

    @property
    def enabled(self) -> bool:
        return True  # Set to False to disable

    @property
    def routes(self) -> Dict[str, Callable[[], Awaitable]]:
        return {
            '': lambda: MainPage().create(self),
        }


class MainPage(BasePage):
    async def content(self):
        ui.label(f"Welcome to {self.tool.name}!").classes('text-2xl font-bold text-white')

        # Access system monitors via context
        cpu_usage = self.tool.context.system_monitor.cpu_percent
        ui.label(f"Current CPU usage: {cpu_usage:.1f}%").classes('text-white')
```

### 3. Run and Test

Start the application:

```bash
APP_PORT=8081 uv run python src/main.py
```

Your tool will automatically appear in the sidebar under the "TOOLS" section!

## 📚 Detailed Guide

### Tool Base Class

Every tool must inherit from `BaseTool` and implement these required properties:

```python
class MyTool(BaseTool):
    @property
    def name(self) -> str:
        """Display name in the sidebar"""
        return "My Tool"

    @property
    def description(self) -> str:
        """Tool description for documentation"""
        return "What this tool does"

    @property
    def icon(self) -> str:
        """Material Design icon name"""
        return "build"

    @property
    def routes(self) -> Dict[str, Callable[[], Awaitable]]:
        """Route mapping for sub-pages"""
        return {'': lambda: MainPage().create(self)}
```

Optional properties:

```python
    @property
    def enabled(self) -> bool:
        """Enable/disable the tool (default: True)"""
        return True  # or False to disable
```

### Route System

Tools can have multiple pages using sub-routes:

```python
    @property
    def routes(self):
        return {
            '': lambda: MainPage().create(self),               # /my-tool
            '/settings': lambda: SettingsPage().create(self),  # /my-tool/settings
            '/results': lambda: ResultsPage().create(self),    # /my-tool/results
            '/history': lambda: HistoryPage().create(self),    # /my-tool/history
        }
```

**Route naming**: Tool directory `my_tool` becomes route `/my-tool` (underscores → hyphens).
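
The conversion is handled by the platform; a one-line sketch of the assumed rule (the helper name is hypothetical):

```python
def tool_route(directory_name: str) -> str:
    """Hypothetical sketch: derive a tool's base route from its directory name."""
    return '/' + directory_name.replace('_', '-')


assert tool_route('my_tool') == '/my-tool'
```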

### Page Classes

All pages should inherit from `BasePage`, which provides:

- Standard layout structure
- `main-content` CSS class
- Access to the tool instance via `self.tool`

```python
class MainPage(BasePage):
    async def content(self):
        # This method contains your page content
        ui.label("Page title").classes('text-2xl font-bold text-white')

        with ui.card().classes('metric-card p-6'):
            ui.label("Card content").classes('text-white')
```
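
You never write `BasePage` yourself, but for orientation, here is a minimal sketch of what it plausibly does. This is an assumption about the internals, not the actual source:

```python
# Hypothetical sketch of BasePage, assuming create() attaches the page to its
# tool and wraps content() in the standard layout container.
from nicegui import ui


class BasePage:
    async def create(self, tool) -> None:
        self.tool = tool  # pages access their tool via self.tool
        with ui.column().classes('main-content'):  # standard layout wrapper
            await self.content()

    async def content(self) -> None:
        raise NotImplementedError  # subclasses provide the page body
```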

### Accessing System Data

Use the tool context to access system monitors:

```python
class MainPage(BasePage):
    async def content(self):
        # Access system monitors
        sys_mon = self.tool.context.system_monitor
        gpu_mon = self.tool.context.gpu_monitor
        ollama_mon = self.tool.context.ollama_monitor

        # Display live data
        ui.label().classes('text-white').bind_text_from(
            sys_mon, 'cpu_percent',
            backward=lambda x: f'CPU: {x:.1f}%',
        )

        ui.label().classes('text-white').bind_text_from(
            gpu_mon, 'temperature',
            backward=lambda x: f'GPU: {x:.0f}°C',
        )

        ui.label().classes('text-white').bind_text_from(
            ollama_mon, 'active_models',
            backward=lambda x: f'Models: {len(x)}',
        )
```

### Navigation Between Pages

Create navigation buttons to move between tool pages:

```python
class MainPage(BasePage):
    async def content(self):
        ui.label("Main Page").classes('text-2xl font-bold text-white')

        # Navigation buttons
        with ui.row().classes('gap-2'):
            ui.button('Settings', icon='settings',
                      on_click=lambda: ui.navigate.to(f'{self.tool.baseroute}/settings'))
            ui.button('Results', icon='analytics',
                      on_click=lambda: ui.navigate.to(f'{self.tool.baseroute}/results'))


class SettingsPage(BasePage):
    async def content(self):
        # Back button
        with ui.row().classes('items-center gap-4 mb-4'):
            ui.button(icon='arrow_back',
                      on_click=lambda: ui.navigate.to(self.tool.baseroute)).props('flat round')
            ui.label("Settings").classes('text-2xl font-bold text-white')
```

## 🛠️ Advanced Features

### Dynamic Content with Refreshable

Use `@ui.refreshable` for content that updates periodically:

```python
class MainPage(BasePage):
    async def content(self):
        ui.label("Live Model Status").classes('text-xl font-bold text-white mb-4')

        @ui.refreshable
        def model_status():
            models = self.tool.context.ollama_monitor.active_models
            if not models:
                ui.label("No models running").classes('text-gray-400')
            else:
                for model in models:
                    with ui.row().classes('items-center gap-2'):
                        ui.icon('circle', color='green', size='sm')
                        ui.label(model.get('name', 'Unknown')).classes('text-white')

        model_status()
        ui.timer(2.0, model_status.refresh)  # Update every 2 seconds
```

### Form Handling

Create interactive forms for user input:

```python
class SettingsPage(BasePage):
    async def content(self):
        ui.label("Tool Settings").classes('text-xl font-bold text-white mb-4')

        with ui.card().classes('metric-card p-6'):
            with ui.column().classes('gap-4'):
                # Text input
                prompt_input = ui.input('Custom Prompt').props('outlined')

                # Number input
                batch_size = ui.number('Batch Size', value=10, min=1, max=100).props('outlined')

                # Select dropdown
                model_select = ui.select(
                    options=['gpt-3.5-turbo', 'gpt-4', 'claude-3'],
                    value='gpt-3.5-turbo',
                ).props('outlined')

                # Checkbox
                enable_logging = ui.checkbox('Enable Logging', value=True)

                # Save button
                ui.button('Save Settings', icon='save',
                          on_click=lambda: self.save_settings(
                              prompt_input.value,
                              batch_size.value,
                              model_select.value,
                              enable_logging.value,
                          )).props('color=primary')

    def save_settings(self, prompt, batch_size, model, logging_enabled):
        # Handle form submission
        ui.notify(f'Settings saved: {prompt}, {batch_size}, {model}, {logging_enabled}')
```

### File Operations

Handle file uploads and downloads:

```python
import os


class MainPage(BasePage):
    async def content(self):
        ui.label("File Operations").classes('text-xl font-bold text-white mb-4')

        with ui.card().classes('metric-card p-6'):
            # File upload
            ui.upload(
                label='Upload Test Data',
                on_upload=self.handle_upload,
                max_file_size=10_000_000,  # 10 MB
            ).props('accept=.txt,.json,.csv')

            # Download button
            ui.button('Download Results', icon='download',
                      on_click=self.download_results)

    def handle_upload(self, e):
        """Handle file upload"""
        os.makedirs('uploads', exist_ok=True)  # ensure the target directory exists
        with open(f'uploads/{e.name}', 'wb') as f:
            f.write(e.content.read())
        ui.notify(f'Uploaded {e.name}')

    def download_results(self):
        """Generate and download results"""
        content = "Sample results data..."
        ui.download(content.encode(), 'results.txt')
```

### Working with Ollama Models

Interact with Ollama models in your tool:

```python
import html

from utils import ollama


class MainPage(BasePage):
    async def content(self):
        ui.label("Model Testing").classes('text-xl font-bold text-white mb-4')

        with ui.card().classes('metric-card p-6'):
            # Model selection
            models = await ollama.available_models()
            model_names = [m['name'] for m in models]
            selected_model = ui.select(model_names, label='Select Model').props('outlined')

            # Prompt input
            prompt_input = ui.textarea('Enter prompt').props('outlined')

            # Test button
            ui.button('Test Model', icon='play_arrow',
                      on_click=lambda: self.test_model(selected_model.value, prompt_input.value))

            # Results display
            self.results_area = ui.html()

    async def test_model(self, model_name, prompt):
        """Test a model with the given prompt"""
        if not model_name or not prompt:
            ui.notify('Please select a model and enter a prompt', type='warning')
            return

        try:
            # Call the Ollama API; escape the response before embedding it in HTML
            response = await ollama.generate(model_name, prompt)
            self.results_area.content = f'<pre class="text-white">{html.escape(response)}</pre>'
        except Exception as e:
            ui.notify(f'Error: {str(e)}', type='negative')
```

## 🎨 Styling Guidelines

### CSS Classes

Use these standard classes for consistent styling:

```python
# Text styles
ui.label("Title").classes('text-2xl font-bold text-white')
ui.label("Subtitle").classes('text-lg font-bold text-white')
ui.label("Body text").classes('text-sm text-white')
ui.label("Muted text").classes('text-xs text-grey-5')

# Cards and containers
ui.card().classes('metric-card p-6')
ui.row().classes('items-center gap-4')
ui.column().classes('gap-4')

# Buttons
ui.button('Primary').props('color=primary')
ui.button('Secondary').props('color=secondary')
ui.button('Icon', icon='icon_name').props('round flat')
```

### Color Scheme

The application uses a dark theme with these accent colors (a sketch of registering them follows the list):

- **Primary**: Cyan (`#06b6d4`)
- **Success**: Green (`#10b981`)
- **Warning**: Orange (`#f97316`)
- **Error**: Red (`#ef4444`)
- **Purple**: (`#e879f9`)
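
If you need these colors programmatically, NiceGUI exposes `ui.colors()`. Whether the app maps the palette onto NiceGUI's theme slots exactly this way is an assumption:

```python
from nicegui import ui

# Hypothetical mapping of the palette onto NiceGUI's theme slots;
# 'positive' and 'negative' are NiceGUI's names for success and error.
ui.colors(
    primary='#06b6d4',   # cyan
    positive='#10b981',  # green (success)
    warning='#f97316',   # orange
    negative='#ef4444',  # red (error)
    accent='#e879f9',    # purple
)
```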

## 🔧 Tool Configuration

### Environment-Based Enabling

Enable tools based on environment variables:

```python
import os


class MyTool(BaseTool):
    @property
    def enabled(self) -> bool:
        return os.getenv('ENABLE_EXPERIMENTAL_TOOLS', 'false').lower() == 'true'
```

### Configuration Files

Store tool-specific configuration:

```python
import json
import os


class MyTool(BaseTool):
    def __init__(self):
        super().__init__()  # keep BaseTool's own initialization intact
        self.config_file = 'config/my_tool.json'
        self.config = self.load_config()

    def load_config(self):
        if os.path.exists(self.config_file):
            with open(self.config_file, 'r') as f:
                return json.load(f)
        return {'enabled': True, 'max_batch_size': 100}

    def save_config(self):
        os.makedirs(os.path.dirname(self.config_file), exist_ok=True)
        with open(self.config_file, 'w') as f:
            json.dump(self.config, f, indent=2)
```
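
A settings page can then read and persist these values. A minimal sketch, assuming the `max_batch_size` key from the defaults above and the usual `ui`/`BasePage` imports:

```python
class ConfigPage(BasePage):
    async def content(self):
        cfg = self.tool.config
        batch = ui.number('Max Batch Size', value=cfg.get('max_batch_size', 100),
                          min=1).props('outlined')

        def persist():
            self.tool.config['max_batch_size'] = int(batch.value)
            self.tool.save_config()  # write config/my_tool.json back to disk
            ui.notify('Configuration saved')

        ui.button('Save', icon='save', on_click=persist).props('color=primary')
```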

## 🐛 Debugging

### Logging

Add logging to your tools:

```python
import logging

logger = logging.getLogger(__name__)


class MainPage(BasePage):
    async def content(self):
        logger.info("MainPage loaded")

        try:
            # Your code here
            pass
        except Exception as e:
            logger.error(f"Error in MainPage: {e}")
            ui.notify(f"Error: {str(e)}", type='negative')
```

### Development Tips

1. **Use port 8081** for development to avoid conflicts
2. **Check the browser console** for JavaScript errors
3. **Monitor server logs** for Python exceptions
4. **Use `ui.notify()`** for user feedback
5. **Test with different screen sizes** for responsiveness

## 📖 Examples

### Complete Tool Example

Here's a complete example of a model comparison tool:

```python
import asyncio
import html
from typing import Dict, Callable, Awaitable

from nicegui import ui

from tools.base_tool import BaseTool, BasePage
from utils import ollama


class ModelCompareTool(BaseTool):
    @property
    def name(self) -> str:
        return "Model Compare"

    @property
    def description(self) -> str:
        return "Compare responses from different AI models"

    @property
    def icon(self) -> str:
        return "compare"

    @property
    def routes(self) -> Dict[str, Callable[[], Awaitable]]:
        return {
            '': lambda: ComparePage().create(self),
            '/history': lambda: HistoryPage().create(self),
        }


class ComparePage(BasePage):
    async def content(self):
        ui.label("Model Comparison").classes('text-2xl font-bold text-white mb-4')

        # Get available models
        models = await ollama.available_models()
        model_names = [m['name'] for m in models]

        with ui.row().classes('w-full gap-4'):
            # Left panel: inputs
            with ui.card().classes('metric-card p-6 flex-1'):
                ui.label("Setup").classes('text-lg font-bold text-white mb-4')

                self.model1 = ui.select(model_names, label='Model 1').props('outlined')
                self.model2 = ui.select(model_names, label='Model 2').props('outlined')
                self.prompt = ui.textarea('Prompt', placeholder='Enter your prompt here...').props('outlined')

                ui.button('Compare Models', icon='play_arrow',
                          on_click=self.compare_models).props('color=primary')

            # Right panel: results
            with ui.card().classes('metric-card p-6 flex-1'):
                ui.label("Results").classes('text-lg font-bold text-white mb-4')
                self.results_container = ui.column().classes('gap-4')

    async def compare_models(self):
        if not all([self.model1.value, self.model2.value, self.prompt.value]):
            ui.notify('Please fill all fields', type='warning')
            return

        self.results_container.clear()
        with self.results_container:
            ui.label("Comparing models...").classes('text-white')

        # Run both models concurrently
        tasks = [
            ollama.generate(self.model1.value, self.prompt.value),
            ollama.generate(self.model2.value, self.prompt.value),
        ]

        try:
            results = await asyncio.gather(*tasks)

            # Display results side by side, inside the results container
            self.results_container.clear()
            with self.results_container, ui.row().classes('w-full gap-4'):
                for model_name, result in zip([self.model1.value, self.model2.value], results):
                    with ui.card().classes('metric-card p-4 flex-1'):
                        ui.label(model_name).classes('text-lg font-bold text-white mb-2')
                        ui.html(f'<pre class="text-white text-sm">{html.escape(result)}</pre>')

        except Exception as e:
            ui.notify(f'Error: {str(e)}', type='negative')


class HistoryPage(BasePage):
    async def content(self):
        with ui.row().classes('items-center gap-4 mb-4'):
            ui.button(icon='arrow_back',
                      on_click=lambda: ui.navigate.to(self.tool.baseroute)).props('flat round')
            ui.label("Comparison History").classes('text-2xl font-bold text-white')

        # History implementation here
        ui.label("History feature coming soon...").classes('text-grey-5')
```

## 🚀 Publishing Your Tool

### Code Quality

- Follow the Python PEP 8 style guidelines
- Add type hints to your methods
- Include docstrings for complex functions
- Handle errors gracefully with try/except blocks (a short example follows this list)
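
A helper that follows all four points at once; the function and its error format are illustrative only:

```python
import logging

from utils import ollama  # the project's Ollama helper used throughout this guide

logger = logging.getLogger(__name__)


async def run_batch(model_name: str, prompts: list[str]) -> list[str]:
    """Hypothetical example: run prompts sequentially, failing gracefully.

    Demonstrates type hints, a docstring, and try/except error handling.
    """
    results: list[str] = []
    for prompt in prompts:
        try:
            results.append(await ollama.generate(model_name, prompt))
        except Exception as e:
            logger.error(f"Batch item failed: {e}")
            results.append(f"[error: {e}]")  # keep going instead of aborting the batch
    return results
```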

### Testing

- Test your tool with different models
- Verify responsive design on different screen sizes
- Test enable/disable functionality
- Ensure proper cleanup of resources

### Documentation

- Add comments to complex logic
- Create usage examples
- Document any configuration options
- Include screenshots if helpful

Your tool is now ready to be shared with the ArchGPU Frontend community!

---

**Happy tool building! 🛠️**