tooling and docs
This commit is contained in:
172
CLAUDE.md
@@ -9,22 +9,26 @@ This is a NiceGUI-based web platform for testing and managing AI models through

A streamlined interface for testing AI models locally, managing Ollama models, and running various AI-related testing tools.

### Main Features:

1. **Comprehensive System Monitoring** - Real-time resource tracking for AI workloads
   - Live dashboard with GPU, CPU, memory, disk, and network monitoring
   - Process monitoring with a real-time top-processes display
   - Enhanced header with critical metrics (GPU load, VRAM, RAM, disk space)
   - Detailed tooltips showing active Ollama models

2. **Model Manager** - Complete Ollama model management interface
   - Download, delete, create, and test models
   - Support for Hugging Face models via Ollama pull syntax
   - Rich model metadata display with size, quantization, and context length
   - Quick in-app chat testing interface

3. **Plugin-Based Tool System** - Extensible framework for AI testing tools
   - Auto-discovery of tools from the `src/tools/` directory
   - Each tool can have multiple sub-pages with routing
   - Tools have access to system monitors via `ToolContext`
   - Enable/disable tools via a simple property override

4. **External Integrations** - Quick access to related services
   - Direct link to Open WebUI for advanced model interactions

## Development Commands
@@ -70,38 +74,56 @@ uv sync

```
src/
├── main.py                      # Entry point, NiceGUI app configuration with all routes
├── pages/                       # Core page components
│   ├── dashboard.py             # Comprehensive system monitoring dashboard
│   └── ollama_manager.py        # Ollama model management interface (AsyncColumn)
├── components/                  # Reusable UI components
│   ├── circular_progress.py     # Circular progress indicators
│   ├── header.py                # Enhanced header with critical metrics and tooltips
│   ├── sidebar.py               # Navigation sidebar with auto-populated tools
│   ├── bottom_nav.py            # Mobile bottom navigation
│   ├── ollama_downloader.py     # Ollama model downloader component (AsyncCard)
│   ├── ollama_model_creation.py # Model creation component (AsyncCard)
│   └── ollama_quick_test.py     # Model testing component (AsyncCard)
├── tools/                       # Plugin system for extensible tools
│   ├── __init__.py              # Auto-discovery and tool registry
│   ├── base_tool.py             # BaseTool and BasePage classes, ToolContext
│   └── example_tool/            # Example tool demonstrating the plugin system
│       ├── __init__.py
│       └── tool.py              # ExampleTool with main, settings, history pages
├── utils/                       # Utility modules
│   ├── gpu_monitor.py           # GPU monitoring (AMD/NVIDIA auto-detect)
│   ├── system_monitor.py        # Comprehensive system resource monitoring
│   ├── ollama_monitor.py        # Ollama status and active models monitoring
│   └── ollama.py                # Ollama API client functions
└── static/                      # Static assets (CSS, images)
    └── style.css                # Custom dark theme styles
```
### Key Design Patterns

1. **Plugin Architecture**: Extensible tool system with auto-discovery
   - Tools are auto-discovered from the `src/tools/` directory
   - Each tool inherits from `BaseTool` and defines routes for sub-pages
   - Tools can be enabled/disabled via a simple property override
   - Sub-route support: tools can have multiple pages (main, settings, etc.)

2. **Async Components**: Uses the custom `niceguiasyncelement` framework
   - `BasePage(AsyncColumn)` for consistent tool page structure
   - `AsyncCard` base class for complex components
   - All tool pages inherit from `BasePage` to eliminate boilerplate

3. **Context Pattern**: Shared resource access via `ToolContext`
   - `ToolContext` provides access to system monitors from any tool
   - Global context initialized in main.py and accessible via `tool.context`
   - Clean separation between tools and system resources
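A minimal sketch of this context pattern (illustrative only — the real `ToolContext` in `src/tools/base_tool.py` holds `SystemMonitor`, `GPUMonitor`, and `OllamaMonitor` instances and may differ in detail):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SystemMonitor:
    """Stand-in for the real monitor class; only one field shown."""
    cpu_percent: float = 0.0

@dataclass
class ToolContext:
    """Shared resources handed to every tool."""
    system_monitor: SystemMonitor

# Module-level context, initialized once at startup (main.py in this app).
_context: Optional[ToolContext] = None

def init_context() -> ToolContext:
    global _context
    _context = ToolContext(system_monitor=SystemMonitor())
    return _context

def get_context() -> ToolContext:
    # Tools reach monitors through this accessor instead of importing them.
    assert _context is not None, "init_context() must run first"
    return _context
```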
4. **Bindable Dataclasses**: Monitor classes use `@binding.bindable_dataclass`
   - Real-time UI updates with 2-second refresh intervals
   - `SystemMonitor`, `GPUMonitor`, `OllamaMonitor` for live data

5. **Enhanced Header**: Critical metrics display with detailed tooltips
   - GPU load, VRAM usage, system RAM, and disk space badges
   - Active-model tooltip with detailed model information
   - Clean metric formatting with proper units
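The bindable-dataclass idea — mutate a field and every bound UI element updates — can be approximated outside NiceGUI with a tiny observer (a toy sketch; NiceGUI's `binding` module does this generically and these class/method names are invented for illustration):

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class BindableMonitor:
    """Toy stand-in for a bindable monitor: notifies listeners on update."""
    cpu_percent: float = 0.0
    _listeners: List[Callable[[float], None]] = field(default_factory=list, repr=False)

    def bind(self, callback: Callable[[float], None]) -> None:
        # In NiceGUI this is what bind_text_from() does under the hood, roughly.
        self._listeners.append(callback)

    def update(self, value: float) -> None:
        self.cpu_percent = value
        for cb in self._listeners:  # push the new value to every bound target
            cb(value)
```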
## Component Architecture

@@ -148,7 +170,7 @@ The Ollama API client (`src/utils/ollama.py`) provides async functions:
- `model_info()`: Get detailed model information and Modelfile
- `stream_chat()`: Stream chat responses
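Ollama's streaming endpoints return newline-delimited JSON, so a helper like `stream_chat()` typically accumulates the `message.content` fragments. A sketch of just the parsing step (field names follow the public Ollama API; the real client in `src/utils/ollama.py` may be structured differently):

```python
import json
from typing import Iterable

def collect_stream(lines: Iterable[str]) -> str:
    """Join the content fragments of an Ollama /api/chat NDJSON stream.

    Each line is a JSON object like {"message": {"content": "Hel"}, "done": false};
    the final line carries "done": true.
    """
    parts = []
    for line in lines:
        if not line.strip():
            continue  # skip keep-alive blank lines
        chunk = json.loads(line)
        parts.append(chunk.get("message", {}).get("content", ""))
        if chunk.get("done"):
            break
    return "".join(parts)
```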
## Tools Plugin System

The application features an extensible plugin system for AI testing tools:

### Creating a New Tool

1. **Create tool directory**: `src/tools/my_tool/`
2. **Create tool class**: `src/tools/my_tool/tool.py`

```python
from typing import Dict, Callable, Awaitable
from nicegui import ui
from tools.base_tool import BaseTool, BasePage

class MyTool(BaseTool):
    @property
    def name(self) -> str:
        return "My Tool"

    @property
    def description(self) -> str:
        return "Description of what this tool does"

    @property
    def icon(self) -> str:
        return "build"  # Material icon name

    @property
    def enabled(self) -> bool:
        return True  # Set to False to disable

    @property
    def routes(self) -> Dict[str, Callable[[], Awaitable]]:
        return {
            '': lambda: MainPage().create(self),
            '/settings': lambda: SettingsPage().create(self),
        }

class MainPage(BasePage):
    async def content(self):
        # Access system monitors via context
        cpu_usage = self.tool.context.system_monitor.cpu_percent
        active_models = self.tool.context.ollama_monitor.active_models

        # Your tool UI here
        ui.label(f"CPU: {cpu_usage}%")
```

### Tool Features:
- **Auto-discovery**: Tools are automatically found and loaded
- **Sub-routes**: Tools can have multiple pages (/, /settings, /history, etc.)
- **Context Access**: Access to system monitors via `self.tool.context`
- **Enable/Disable**: Control tool visibility via the `enabled` property
- **Consistent Layout**: `BasePage` handles the standard layout structure

### AI Model Testing Features:
- **Model Discovery & Management**:
  - Browse and pull models from the Ollama library
  - Support for Hugging Face models via Ollama syntax
@@ -219,17 +241,31 @@ Custom dark theme with:
- Live data binding for all metrics
- Smooth transitions and animations

## Enhanced Dashboard Features

The dashboard provides comprehensive real-time monitoring specifically designed for AI workload testing:

### Primary Monitoring Sections:
- **GPU Performance**: Large circular progress for GPU load, VRAM usage bar, temperature & power draw
- **CPU & Memory**: Dual circular progress with detailed specs and frequency info
- **Ollama Service**: Live status, version, and grid display of active models with metadata
- **Storage & Network**: Disk usage bars and real-time network I/O monitoring
- **Process Monitoring**: Live table of top processes with CPU%, memory usage, and status
- **System Information**: OS details, uptime, load average, hardware specifications

### Header Enhancements:
- **Critical Metrics Badges**: GPU load, VRAM usage, system RAM, disk space with live updates
- **Active Models Tooltip**: Detailed grid showing running models with context length, size, VRAM usage
- **Live Status Indicators**: Ollama service status with version information
## NiceGUI Patterns

- **Plugin-Based Routing**: Tools auto-register their routes with sub-page support
- **Context Pattern**: Shared monitor access via `tool.context` for all plugins
- **BasePage Pattern**: Consistent tool page structure with `BasePage(AsyncColumn)`
- **Data Binding**: Reactive UI updates with `bind_text_from()` and `bind_value_from()`
- **Async Components**: `niceguiasyncelement` framework with `@ui.refreshable` decorators
- **Timer Updates**: 2-second intervals for real-time monitoring data
- **Dark Mode**: Comprehensive dark theme with custom metric colors
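The `backward=` callables passed to `bind_text_from()` are plain formatters over raw monitor values; for example, a human-readable bytes helper for the RAM/VRAM badges might look like this (an illustrative helper, not taken from the codebase):

```python
def fmt_bytes(n: float) -> str:
    """Format a raw byte count for a header badge, e.g. 17179869184 -> '16.0 GB'."""
    for unit in ("B", "KB", "MB", "GB", "TB"):
        if abs(n) < 1024 or unit == "TB":
            return f"{n:.1f} {unit}"
        n /= 1024.0
    return f"{n:.1f} TB"  # unreachable; keeps the return type total
```

Used as `label.bind_text_from(monitor, 'vram_used', backward=fmt_bytes)`.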
## Environment Variables
Configured in `.env`:

@@ -246,25 +282,39 @@ Configured in `.env`:
- Use browser DevTools for WebSocket debugging
## Current Route Structure

### Core Application Routes:
- `/` - Comprehensive system monitoring dashboard
- `/ollama` - Advanced model manager (download, test, create, manage)
- `/settings` - Application configuration and monitoring intervals

### Plugin System Routes (Auto-Generated):
- `/example-tool` - Example tool demonstrating plugin capabilities
- `/example-tool/settings` - Tool-specific settings page
- `/example-tool/history` - Tool-specific history page
- **Dynamic Discovery**: Additional tool routes are auto-discovered from the `src/tools/` directory

### External Integrations:
- Direct link to Open WebUI for advanced model interactions
## Tool Development Guide

### Quick Start:
1. Create a `src/tools/my_tool/` directory
2. Add `tool.py` with a class inheriting from `BaseTool`
3. Define a routes dictionary mapping paths to page classes
4. Create page classes inheriting from `BasePage`
5. The tool automatically appears in the sidebar and its routes are registered

### Advanced Features:
- **Context Access**: Access system monitors via `self.tool.context.system_monitor`
- **Sub-routing**: Multiple pages per tool (main, settings, config, etc.)
- **Enable/Disable**: Control tool visibility via the `enabled` property
- **Live Data**: Bind to real-time system metrics and Ollama status
## Future Enhancements
- Enhanced model chat interface with conversation history
- Model performance benchmarking tools
- Batch testing capabilities for multiple models
- Output comparison tools between different models
- Integration with more AI model formats
- Advanced prompt testing and optimization tools
- Model fine-tuning interface
- Local AI model testing capabilities that prioritize privacy and security
- Tools for testing model behaviors that external providers might restrict
- Advanced local prompt engineering and safety testing frameworks
- Private data processing and analysis tools using local models
- Additional testing capabilities as needs are discovered through usage
245
README.md
@@ -1,2 +1,247 @@

# ArchGPU Frontend

A comprehensive web-based platform for local AI model testing and system monitoring. Built with NiceGUI and designed for privacy-focused AI experimentation on Arch Linux systems with GPU support.

## 🎯 Core Purpose

**Local AI Testing Environment** - Test AI models locally with complete privacy and security, enabling experimentation that external providers might restrict or monitor.

### Key Features

- **🖥️ Comprehensive System Monitoring** - Real-time tracking of AI workloads
- **🤖 Advanced Ollama Integration** - Complete model management and testing
- **🔧 Extensible Plugin System** - Add custom testing tools easily
- **🔒 Privacy-First Design** - All processing happens locally
- **⚡ Real-Time Performance Tracking** - Monitor resource usage during inference

## 🚀 Quick Start

### Prerequisites

- Python 3.13+
- The uv package manager
- Ollama installed and running on port 11434
- GPU drivers (AMD or NVIDIA)

### Installation

```bash
# Clone the repository
git clone <repository-url>
cd ArchGPUFrontend

# Install dependencies
uv sync

# Run the application
APP_PORT=8081 uv run python src/main.py
```

Open your browser to `http://localhost:8081` (or 8080 for production).
## 📊 System Monitoring

### Dashboard Features

The dashboard provides real-time monitoring specifically designed for AI workload analysis:

#### Primary Metrics
- **GPU Performance**: Load percentage, VRAM usage, temperature, power draw
- **CPU & Memory**: Usage percentages, frequency, detailed specifications
- **Ollama Service**: Status, version, active models with metadata
- **Storage & Network**: Disk usage, real-time I/O monitoring

#### Enhanced Header
- **Critical Metrics Badges**: GPU load, VRAM, RAM, disk space
- **Active Models Tooltip**: Detailed model information on hover
- **Live Status Indicators**: Service health and version info

#### Process Monitoring
- Real-time table of top processes
- CPU and memory usage per process
- Process status and PID tracking
## 🤖 Ollama Integration

### Model Management
- **Browse & Download**: Pull models from the Ollama library and Hugging Face
- **Rich Metadata**: View size, quantization, parameters, context length
- **Quick Testing**: In-app chat interface for immediate model testing
- **Custom Models**: Create models from custom Modelfiles
- **Performance Tracking**: Monitor VRAM usage and inference speed

### Supported Operations
- Model discovery and installation
- Real-time active model monitoring
- Model deletion and management
- Custom model creation
- Chat testing interface
## 🔧 Plugin System

The application features an extensible plugin architecture for creating custom AI testing tools.

### Available Tools
- **Example Tool** - Demonstrates plugin capabilities with sub-pages

### Creating Tools
See our [Tool Creation Guide](docs/TOOL_CREATION.md) for detailed instructions on building custom tools.

Quick example:
```python
from nicegui import ui
from tools.base_tool import BaseTool, BasePage

class MyTool(BaseTool):
    @property
    def name(self) -> str:
        return "My Testing Tool"

    @property
    def routes(self):
        return {'': lambda: MainPage().create(self)}

class MainPage(BasePage):
    async def content(self):
        # Access system monitors
        cpu = self.tool.context.system_monitor.cpu_percent
        models = self.tool.context.ollama_monitor.active_models

        # Build your testing interface
        ui.label(f"CPU: {cpu}% | Models: {len(models)}")
```
## 🏗️ Architecture

### Technology Stack
- **Frontend**: NiceGUI (FastAPI + Vue.js)
- **Backend**: Python 3.13 with async/await
- **System Monitoring**: psutil
- **GPU Monitoring**: rocm-smi / nvidia-smi
- **AI Integration**: Ollama API
- **Package Manager**: uv

### Project Structure
```
src/
├── main.py                  # Application entry point
├── pages/                   # Core application pages
│   ├── dashboard.py         # System monitoring dashboard
│   └── ollama_manager.py    # Model management interface
├── components/              # Reusable UI components
│   ├── header.py            # Enhanced header with metrics
│   └── sidebar.py           # Navigation with auto-populated tools
├── tools/                   # Plugin system
│   ├── base_tool.py         # BaseTool and BasePage classes
│   └── example_tool/        # Example plugin implementation
├── utils/                   # System monitoring utilities
│   ├── system_monitor.py    # CPU, memory, disk monitoring
│   ├── gpu_monitor.py       # GPU performance tracking
│   └── ollama_monitor.py    # Ollama service monitoring
└── static/                  # CSS and assets
```

### Key Design Patterns
- **Plugin Architecture**: Auto-discovery of tools from `src/tools/`
- **Context Pattern**: Shared resource access via `ToolContext`
- **Async Components**: Custom `niceguiasyncelement` framework
- **Real-time Binding**: Live data updates with the NiceGUI binding system
## ⚙️ Configuration

### Environment Variables
Create a `.env` file in the project root:

```env
# Application settings
APP_PORT=8080
APP_TITLE=ArchGPU Frontend
APP_SHOW=false
APP_STORAGE_SECRET=your-secret-key

# Monitoring settings
MONITORING_UPDATE_INTERVAL=2
```
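After python-dotenv loads `.env`, reading these values reduces to `os.getenv` with sensible defaults. A sketch (variable names from the sample above; the defaults and the `load_settings` helper are assumptions, not the app's actual loader):

```python
import os
from typing import Mapping

def load_settings(env: Mapping[str, str] = os.environ) -> dict:
    """Read app settings, falling back to the defaults the sample .env suggests."""
    return {
        "port": int(env.get("APP_PORT", "8080")),
        "title": env.get("APP_TITLE", "ArchGPU Frontend"),
        "show": env.get("APP_SHOW", "false").lower() == "true",
        "update_interval": float(env.get("MONITORING_UPDATE_INTERVAL", "2")),
    }
```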
### GPU Support
The application automatically detects and supports:
- **AMD GPUs**: Via rocm-smi or a sysfs fallback
- **NVIDIA GPUs**: Via nvidia-smi
- **Multi-GPU**: Supports monitoring multiple GPUs
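Vendor detection can be as simple as probing for the vendor CLIs on `PATH`; a sketch of the idea (the real logic in `gpu_monitor.py` may differ, e.g. in how the AMD sysfs fallback is chosen):

```python
import shutil
from typing import Optional

def detect_gpu_vendor() -> Optional[str]:
    """Pick a monitoring backend by probing for vendor CLIs on PATH."""
    if shutil.which("rocm-smi"):
        return "amd"
    if shutil.which("nvidia-smi"):
        return "nvidia"
    return None  # caller may still fall back to sysfs on AMD
```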
## 🔒 Privacy & Security

### Local-First Design
- All AI processing happens on your local machine
- No data is sent to external providers
- Complete control over model interactions
- Secure testing of sensitive data

### Use Cases
- Testing model behaviors that providers restrict
- Private data analysis and processing
- Security research and safety testing
- Custom prompt engineering without external logging
- Unrestricted local AI experimentation
## 🛠️ Development

### Running in Development
```bash
# Development server (port 8081 to avoid conflicts)
APP_PORT=8081 uv run python src/main.py

# Production server
uv run python src/main.py
```

### Adding Dependencies
```bash
# Add a runtime dependency
uv add package-name

# Add a development dependency
uv add --dev package-name

# Sync all dependencies
uv sync
```

### Creating Tools
1. Create the tool directory: `src/tools/my_tool/`
2. Implement a tool class inheriting from `BaseTool`
3. Define routes and page classes
4. The tool automatically appears in the navigation

See the [Tool Creation Guide](docs/TOOL_CREATION.md) for detailed instructions.
## 🤝 Contributing

We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.

### Development Setup
1. Fork the repository
2. Create a feature branch
3. Install dependencies with `uv sync`
4. Make your changes
5. Test with `APP_PORT=8081 uv run python src/main.py`
6. Submit a pull request

## 📄 License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## 🆘 Support

- **Issues**: Create GitHub issues for bugs and feature requests
- **Documentation**: Check the `docs/` directory for detailed guides

## 🙏 Acknowledgments

- [NiceGUI](https://nicegui.io/) - Excellent Python web framework
- [Ollama](https://ollama.ai/) - Local AI model serving
- [psutil](https://psutil.readthedocs.io/) - System monitoring
- The open-source AI community

---

**Built for privacy-focused AI experimentation and local model testing.**
565
docs/TOOL_CREATION.md
Normal file
@@ -0,0 +1,565 @@

# Tool Creation Guide

This guide walks you through creating custom tools for the ArchGPU Frontend platform. Tools are plugins that extend the application with custom AI testing capabilities.

## 🎯 Overview

Tools in ArchGPU Frontend are:
- **Auto-discovered** from the `src/tools/` directory
- **Self-contained** with their own routing and pages
- **Context-aware** with access to system monitors
- **Easily toggleable** via enable/disable properties

## 🚀 Quick Start

### 1. Create Tool Structure

```bash
mkdir src/tools/my_tool
touch src/tools/my_tool/__init__.py
touch src/tools/my_tool/tool.py
```

### 2. Basic Tool Implementation

Create `src/tools/my_tool/tool.py`:
```python
from typing import Dict, Callable, Awaitable
from nicegui import ui
from tools.base_tool import BaseTool, BasePage

class MyTool(BaseTool):
    @property
    def name(self) -> str:
        return "My Testing Tool"

    @property
    def description(self) -> str:
        return "A custom tool for AI testing"

    @property
    def icon(self) -> str:
        return "science"  # Material Design icon name

    @property
    def enabled(self) -> bool:
        return True  # Set to False to disable

    @property
    def routes(self) -> Dict[str, Callable[[], Awaitable]]:
        return {
            '': lambda: MainPage().create(self),
        }

class MainPage(BasePage):
    async def content(self):
        ui.label(f"Welcome to {self.tool.name}!").classes('text-2xl font-bold text-white')

        # Access system monitors via context
        cpu_usage = self.tool.context.system_monitor.cpu_percent
        ui.label(f"Current CPU usage: {cpu_usage:.1f}%").classes('text-white')
```

### 3. Run and Test

Start the application:
```bash
APP_PORT=8081 uv run python src/main.py
```

Your tool will automatically appear in the sidebar under the "TOOLS" section!
## 📚 Detailed Guide

### Tool Base Class

Every tool must inherit from `BaseTool` and implement these required properties:

```python
class MyTool(BaseTool):
    @property
    def name(self) -> str:
        """Display name in the sidebar"""
        return "My Tool"

    @property
    def description(self) -> str:
        """Tool description for documentation"""
        return "What this tool does"

    @property
    def icon(self) -> str:
        """Material Design icon name"""
        return "build"

    @property
    def routes(self) -> Dict[str, Callable[[], Awaitable]]:
        """Route mapping for sub-pages"""
        return {'': lambda: MainPage().create(self)}
```

Optional properties:

```python
@property
def enabled(self) -> bool:
    """Enable/disable tool (default: True)"""
    return True  # or False to disable
```
### Route System

Tools can have multiple pages using sub-routes:

```python
@property
def routes(self):
    return {
        '': lambda: MainPage().create(self),               # /my-tool
        '/settings': lambda: SettingsPage().create(self),  # /my-tool/settings
        '/results': lambda: ResultsPage().create(self),    # /my-tool/results
        '/history': lambda: HistoryPage().create(self),    # /my-tool/history
    }
```

**Route naming**: Tool directory `my_tool` becomes route `/my-tool` (underscores → hyphens)
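That directory-to-route mapping is a one-liner; sketched here as a hypothetical helper (the actual registry code may name or place this differently):

```python
def tool_route(directory_name: str) -> str:
    """Map a tool directory name to its base route (underscores become hyphens)."""
    return "/" + directory_name.replace("_", "-")
```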
### Page Classes

All pages should inherit from `BasePage`, which provides:
- A standard layout structure
- The `main-content` CSS class
- Access to the tool instance via `self.tool`

```python
class MainPage(BasePage):
    async def content(self):
        # This method contains your page content
        ui.label("Page title").classes('text-2xl font-bold text-white')

        with ui.card().classes('metric-card p-6'):
            ui.label("Card content").classes('text-white')
```
### Accessing System Data

Use the tool context to access system monitors:

```python
class MainPage(BasePage):
    async def content(self):
        # Access system monitors
        sys_mon = self.tool.context.system_monitor
        gpu_mon = self.tool.context.gpu_monitor
        ollama_mon = self.tool.context.ollama_monitor

        # Display live data
        ui.label().classes('text-white').bind_text_from(
            sys_mon, 'cpu_percent',
            backward=lambda x: f'CPU: {x:.1f}%'
        )

        ui.label().classes('text-white').bind_text_from(
            gpu_mon, 'temperature',
            backward=lambda x: f'GPU: {x:.0f}°C'
        )

        ui.label().classes('text-white').bind_text_from(
            ollama_mon, 'active_models',
            backward=lambda x: f'Models: {len(x)}'
        )
```
### Navigation Between Pages

Create navigation buttons to move between tool pages:

```python
class MainPage(BasePage):
    async def content(self):
        ui.label("Main Page").classes('text-2xl font-bold text-white')

        # Navigation buttons
        with ui.row().classes('gap-2'):
            ui.button('Settings', icon='settings',
                      on_click=lambda: ui.navigate.to(f'{self.tool.baseroute}/settings'))
            ui.button('Results', icon='analytics',
                      on_click=lambda: ui.navigate.to(f'{self.tool.baseroute}/results'))

class SettingsPage(BasePage):
    async def content(self):
        # Back button
        with ui.row().classes('items-center gap-4 mb-4'):
            ui.button(icon='arrow_back',
                      on_click=lambda: ui.navigate.to(self.tool.baseroute)).props('flat round')
            ui.label("Settings").classes('text-2xl font-bold text-white')
```
## 🛠️ Advanced Features

### Dynamic Content with Refreshable

Use `@ui.refreshable` for content that updates periodically:

```python
class MainPage(BasePage):
    async def content(self):
        ui.label("Live Model Status").classes('text-xl font-bold text-white mb-4')

        @ui.refreshable
        def model_status():
            models = self.tool.context.ollama_monitor.active_models
            if not models:
                ui.label("No models running").classes('text-gray-400')
            else:
                for model in models:
                    with ui.row().classes('items-center gap-2'):
                        ui.icon('circle', color='green', size='sm')
                        ui.label(model.get('name', 'Unknown')).classes('text-white')

        model_status()
        ui.timer(2.0, model_status.refresh)  # Update every 2 seconds
```
### Form Handling

Create interactive forms for user input:

```python
class SettingsPage(BasePage):
    async def content(self):
        ui.label("Tool Settings").classes('text-xl font-bold text-white mb-4')

        with ui.card().classes('metric-card p-6'):
            with ui.column().classes('gap-4'):
                # Text input
                prompt_input = ui.input('Custom Prompt').props('outlined')

                # Number input
                batch_size = ui.number('Batch Size', value=10, min=1, max=100).props('outlined')

                # Select dropdown
                model_select = ui.select(
                    options=['gpt-3.5-turbo', 'gpt-4', 'claude-3'],
                    value='gpt-3.5-turbo'
                ).props('outlined')

                # Checkbox
                enable_logging = ui.checkbox('Enable Logging', value=True)

                # Save button
                ui.button('Save Settings', icon='save',
                          on_click=lambda: self.save_settings(
                              prompt_input.value,
                              batch_size.value,
                              model_select.value,
                              enable_logging.value
                          )).props('color=primary')

    def save_settings(self, prompt, batch_size, model, logging):
        # Handle form submission
        ui.notify(f'Settings saved: {prompt}, {batch_size}, {model}, {logging}')
```
### File Operations

Handle file uploads and downloads:

```python
import os

class MainPage(BasePage):
    async def content(self):
        ui.label("File Operations").classes('text-xl font-bold text-white mb-4')

        with ui.card().classes('metric-card p-6'):
            # File upload
            ui.upload(
                label='Upload Test Data',
                on_upload=self.handle_upload,
                max_file_size=10_000_000  # 10 MB
            ).props('accept=.txt,.json,.csv')

            # Download button
            ui.button('Download Results', icon='download',
                      on_click=self.download_results)

    def handle_upload(self, e):
        """Handle file upload"""
        os.makedirs('uploads', exist_ok=True)  # ensure the target directory exists
        with open(f'uploads/{e.name}', 'wb') as f:
            f.write(e.content.read())
        ui.notify(f'Uploaded {e.name}')

    def download_results(self):
        """Generate and download results"""
        content = "Sample results data..."
        ui.download(content.encode(), 'results.txt')
```
### Working with Ollama Models

Interact with Ollama models in your tool:

```python
import html

from utils import ollama


class MainPage(BasePage):
    async def content(self):
        ui.label("Model Testing").classes('text-xl font-bold text-white mb-4')

        with ui.card().classes('metric-card p-6'):
            # Model selection
            models = await ollama.available_models()
            model_names = [m['name'] for m in models]
            selected_model = ui.select(model_names, label='Select Model').props('outlined')

            # Prompt input
            prompt_input = ui.textarea('Enter prompt').props('outlined')

            # Test button
            ui.button('Test Model', icon='play_arrow',
                      on_click=lambda: self.test_model(selected_model.value, prompt_input.value))

            # Results display
            self.results_area = ui.html()

    async def test_model(self, model_name, prompt):
        """Test a model with the given prompt"""
        if not model_name or not prompt:
            ui.notify('Please select a model and enter a prompt', type='warning')
            return

        try:
            # Call the Ollama API and escape the response before embedding it in HTML
            response = await ollama.generate(model_name, prompt)
            self.results_area.content = f'<pre class="text-white">{html.escape(response)}</pre>'
        except Exception as e:
            ui.notify(f'Error: {str(e)}', type='negative')
```

## 🎨 Styling Guidelines

### CSS Classes

Use these standard classes for consistent styling:

```python
# Text styles
ui.label("Title").classes('text-2xl font-bold text-white')
ui.label("Subtitle").classes('text-lg font-bold text-white')
ui.label("Body text").classes('text-sm text-white')
ui.label("Muted text").classes('text-xs text-grey-5')

# Cards and containers
ui.card().classes('metric-card p-6')
ui.row().classes('items-center gap-4')
ui.column().classes('gap-4')

# Buttons
ui.button('Primary').props('color=primary')
ui.button('Secondary').props('color=secondary')
ui.button('Icon', icon='icon_name').props('round flat')
```

### Color Scheme

The application uses a dark theme with these accent colors:

- **Primary**: Cyan (`#06b6d4`)
- **Success**: Green (`#10b981`)
- **Warning**: Orange (`#f97316`)
- **Error**: Red (`#ef4444`)
- **Accent**: Purple (`#e879f9`)

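To keep these hex values consistent across tools, they can live in a single palette module. A minimal sketch, assuming nothing about the existing codebase — the `COLORS` dict and `status_color()` helper are illustrative names, not existing code:

```python
# Illustrative palette constants; not part of the existing codebase.
COLORS = {
    'primary': '#06b6d4',   # cyan
    'success': '#10b981',   # green
    'warning': '#f97316',   # orange
    'error': '#ef4444',     # red
    'accent': '#e879f9',    # purple
}


def status_color(percent: float) -> str:
    """Map a utilization percentage to a palette color."""
    if percent >= 90:
        return COLORS['error']
    if percent >= 75:
        return COLORS['warning']
    return COLORS['success']
```

A widget could then pick its color from the palette, e.g. `ui.icon('circle', color=status_color(gpu_load))`, instead of repeating hex literals.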
## 🔧 Tool Configuration

### Environment-Based Enabling

Enable tools based on environment variables:

```python
import os

class MyTool(BaseTool):
    @property
    def enabled(self) -> bool:
        return os.getenv('ENABLE_EXPERIMENTAL_TOOLS', 'false').lower() == 'true'
```

### Configuration Files

Store tool-specific configuration:

```python
import json
import os

class MyTool(BaseTool):
    def __init__(self):
        super().__init__()  # keep the BaseTool initialization intact
        self.config_file = 'config/my_tool.json'
        self.config = self.load_config()

    def load_config(self):
        if os.path.exists(self.config_file):
            with open(self.config_file, 'r') as f:
                return json.load(f)
        return {'enabled': True, 'max_batch_size': 100}

    def save_config(self):
        os.makedirs(os.path.dirname(self.config_file), exist_ok=True)
        with open(self.config_file, 'w') as f:
            json.dump(self.config, f, indent=2)
```

## 🐛 Debugging

### Logging

Add logging to your tools:

```python
import logging

logger = logging.getLogger(__name__)

class MainPage(BasePage):
    async def content(self):
        logger.info("MainPage loaded")

        try:
            # Your code here
            pass
        except Exception as e:
            logger.error(f"Error in MainPage: {e}")
            ui.notify(f"Error: {str(e)}", type='negative')
```

### Development Tips

1. **Use port 8081** for development to avoid conflicts
2. **Check the browser console** for JavaScript errors
3. **Monitor the server logs** for Python exceptions
4. **Use `ui.notify()`** for user feedback
5. **Test with different screen sizes** to verify responsive layouts

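Tip 1 can be wired through an environment variable so the development port never collides with a production instance. A small sketch — the `dev_port()` helper and the `APP_PORT` variable name are assumptions for illustration; the resulting value would typically be passed to NiceGUI's `ui.run(port=...)`:

```python
import os


def dev_port(default: int = 8081) -> int:
    """Pick the development port, allowing an override via APP_PORT.

    Falls back to the default when the variable is unset or not a number.
    """
    try:
        return int(os.getenv('APP_PORT', default))
    except ValueError:
        return default

# In main.py this would typically be used as:
# ui.run(port=dev_port())
```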
## 📖 Examples

### Complete Tool Example

Here's a complete example of a model comparison tool:

```python
import asyncio
import html
from typing import Awaitable, Callable, Dict

from nicegui import ui
from tools.base_tool import BaseTool, BasePage
from utils import ollama


class ModelCompareTool(BaseTool):
    @property
    def name(self) -> str:
        return "Model Compare"

    @property
    def description(self) -> str:
        return "Compare responses from different AI models"

    @property
    def icon(self) -> str:
        return "compare"

    @property
    def routes(self) -> Dict[str, Callable[[], Awaitable]]:
        return {
            '': lambda: ComparePage().create(self),
            '/history': lambda: HistoryPage().create(self),
        }


class ComparePage(BasePage):
    async def content(self):
        ui.label("Model Comparison").classes('text-2xl font-bold text-white mb-4')

        # Get available models
        models = await ollama.available_models()
        model_names = [m['name'] for m in models]

        with ui.row().classes('w-full gap-4'):
            # Left panel - inputs
            with ui.card().classes('metric-card p-6 flex-1'):
                ui.label("Setup").classes('text-lg font-bold text-white mb-4')

                self.model1 = ui.select(model_names, label='Model 1').props('outlined')
                self.model2 = ui.select(model_names, label='Model 2').props('outlined')
                self.prompt = ui.textarea('Prompt', placeholder='Enter your prompt here...').props('outlined')

                ui.button('Compare Models', icon='play_arrow',
                          on_click=self.compare_models).props('color=primary')

            # Right panel - results
            with ui.card().classes('metric-card p-6 flex-1'):
                ui.label("Results").classes('text-lg font-bold text-white mb-4')
                self.results_container = ui.column().classes('gap-4')

    async def compare_models(self):
        if not all([self.model1.value, self.model2.value, self.prompt.value]):
            ui.notify('Please fill all fields', type='warning')
            return

        self.results_container.clear()
        with self.results_container:
            ui.label("Comparing models...").classes('text-white')

        # Run both models concurrently
        tasks = [
            ollama.generate(self.model1.value, self.prompt.value),
            ollama.generate(self.model2.value, self.prompt.value)
        ]

        try:
            results = await asyncio.gather(*tasks)

            # Replace the placeholder and display the results side by side
            self.results_container.clear()
            with self.results_container:
                with ui.row().classes('w-full gap-4'):
                    for model_name, result in zip([self.model1.value, self.model2.value], results):
                        with ui.card().classes('metric-card p-4 flex-1'):
                            ui.label(model_name).classes('text-lg font-bold text-white mb-2')
                            ui.html(f'<pre class="text-white text-sm">{html.escape(result)}</pre>')

        except Exception as e:
            ui.notify(f'Error: {str(e)}', type='negative')


class HistoryPage(BasePage):
    async def content(self):
        with ui.row().classes('items-center gap-4 mb-4'):
            ui.button(icon='arrow_back',
                      on_click=lambda: ui.navigate.to(self.tool.baseroute)).props('flat round')
            ui.label("Comparison History").classes('text-2xl font-bold text-white')

        # History implementation here
        ui.label("History feature coming soon...").classes('text-grey-5')
```

## 🚀 Publishing Your Tool

### Code Quality
- Follow the PEP 8 style guidelines
- Add type hints to your methods
- Include docstrings for complex functions
- Handle errors gracefully with try/except blocks

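The checklist above can be seen in one small function. A sketch only — `format_bytes` is an illustrative helper, not an existing utility in this repository:

```python
def format_bytes(num_bytes: int, precision: int = 1) -> str:
    """Return a human-readable size string, e.g. 2147483648 -> '2.0 GB'.

    Raises ValueError for negative input instead of failing silently.
    """
    if num_bytes < 0:
        raise ValueError('num_bytes must be non-negative')
    units = ['B', 'KB', 'MB', 'GB', 'TB']
    size = float(num_bytes)
    for unit in units:
        # stop dividing once the value fits the unit (or we run out of units)
        if size < 1024 or unit == units[-1]:
            return f'{size:.{precision}f} {unit}'
        size /= 1024
```

It is type-hinted, documented, and raises a clear error on bad input rather than returning a misleading string.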
### Testing
- Test your tool with different models
- Verify the responsive design on different screen sizes
- Test the enable/disable functionality
- Ensure proper cleanup of resources

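The structural parts of these checks can be automated. A minimal pytest-style sketch, assuming the tool contract described earlier in this guide — `DummyTool` is a purely illustrative stand-in, not a class in this repository:

```python
# DummyTool mimics the BaseTool property contract for testing purposes only.
class DummyTool:
    @property
    def name(self) -> str:
        return 'Dummy'

    @property
    def enabled(self) -> bool:
        return True

    @property
    def routes(self) -> dict:
        return {'': lambda: None, '/history': lambda: None}


def test_tool_contract():
    tool = DummyTool()
    assert tool.name            # every tool needs a display name
    assert tool.enabled is True
    assert '' in tool.routes    # the root route must exist
```

Running such a test on each real tool class catches missing properties before the auto-discovery in `src/tools/` trips over them at startup.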
### Documentation
- Add comments to complex logic
- Create usage examples
- Document any configuration options
- Include screenshots if helpful

Your tool is now ready to be shared with the ArchGPU Frontend community!

---

**Happy tool building! 🛠️**

@@ -66,14 +66,11 @@ class Header(ui.header):
         value_label.bind_text_from(monitor, attr, backward=formatter)

     def _format_memory(self, used: int, total: int, base=3) -> str:
-        print(f"{used} / {total}")
         """Format RAM usage in GB"""
         if total == 0:
             return "N/A"
         used_gb = used / (1024**base)
         total_gb = total / (1024**base)
-        formatted = f"{used_gb:.1f}/{total_gb:.0f}GB"
-        print(formatted)
         return f"{used_gb:.1f}/{total_gb:.0f}GB"

     def _format_disk(self, used: int, total: int) -> str:
@@ -101,18 +98,15 @@ class Header(ui.header):
             if not models:
                 ui.label('No models loaded').classes('text-xs text-gray-400')
             else:
-                with ui.column().classes('gap-2 p-2'):
-                    ui.label('Active Models').classes('text-sm font-bold text-white mb-1')
-                    for model in models:
-                        with ui.row().classes('items-center gap-2'):
-                            ui.icon('circle', size='xs').props('color=green')
-                            with ui.column().classes('gap-0'):
-                                ui.label(model.get('name', 'Unknown')).classes('text-xs text-white font-medium')
-                                vram_gb = model.get('size_vram', 0) / (1024**3)
-                                ui.label(f'VRAM: {vram_gb:.2f} GB').classes('text-xs text-gray-400')
-                                if 'size' in model:
-                                    size_gb = model.get('size', 0) / (1024**3)
-                                    ui.label(f'Size: {size_gb:.2f} GB').classes('text-xs text-gray-400')
+                header = ['Name', 'Model', 'Context', 'Size', 'VRAM']
+                with ui.grid(columns=len(header)).classes('items-center gap-2 w-full'):
+                    [ui.label(item) for item in header]
+                    for model in ollama_monitor.active_models:
+                        ui.label(model.get('name', 'Unknown')).classes('text-xs text-white')
+                        ui.label(model.get('model', 'Unknown')).classes('text-xs text-white')
+                        ui.label(f"{model.get('context_length', 'Unknown')}ctx").classes('text-xs text-white')
+                        ui.label(f"{model.get('size', 0) / (1024**3):.1f}GB").classes('text-xs text-grey-6')
+                        ui.label(f"{model.get('size_vram', 0) / (1024**3):.1f}GB").classes('text-xs text-grey-6')

         # Display initial content
         tooltip_content()
@@ -1,4 +1,5 @@
 from nicegui import ui
+from tools import TOOLS


 class Sidebar:
@@ -10,14 +11,18 @@ class Sidebar:

         with ui.column().classes('gap-1 mb-6'):
             self._nav_item('Dashboard', 'dashboard', '/', active=(current_route == '/'))
-            self._nav_item('System Overview', 'monitor', '/system', active=(current_route == '/system'))

         ui.label('TOOLS').classes('text-xs text-grey-5 font-bold tracking-wide mb-2')

         with ui.column().classes('gap-1 mb-6'):
-            self._nav_item('Censor', 'description', '/censor', active=(current_route == '/censor'))
+            for tool in TOOLS.values():
+                self._nav_item(tool.name, tool.icon, tool.baseroute, active=(current_route == tool.baseroute))

         ui.space()

+        ui.label('EXTERNAL').classes('text-xs text-grey-5 font-bold tracking-wide mb-2')
+        self._nav_item_external('Open WebUI', 'view_in_ar', 'https://webui.project-insanity.de/', active=(current_route == '/ollama'))

+        ui.separator().classes('my-4')
+        self._nav_item('Model Manager', 'view_in_ar', '/ollama', active=(current_route == '/ollama'))
         # Bottom section
         ui.separator().classes('my-4')
@@ -31,6 +36,14 @@ class Sidebar:
         text_color = 'text-cyan' if active else 'text-grey-5 hover:text-white'
         icon_color = 'cyan' if active else 'grey-5'

-        with ui.row().classes(f'w-full items-center gap-3 px-3 py-2 rounded-lg cursor-pointer {bg_class}').on('click', navigate):
+        with ui.row().classes(f'w-full items-center gap-3 px-3 py-2 rounded-lg cursor-pointer {bg_class} hover:bg-cyan-600/30').on('click', navigate):
             ui.icon(icon, size='sm', color=icon_color)
             ui.label(label).classes(f'text-sm {text_color}')
+
+    def _nav_item_external(self, label: str, icon: str, url: str, active: bool = False):
+        def navigate():
+            ui.navigate.to(url, new_tab=True)
+
+        with ui.row().classes('w-full items-center gap-3 px-3 py-2 rounded-lg cursor-pointer hover:bg-cyan-600/30').on('click', navigate):
+            ui.icon(icon, size='sm')
+            ui.label(label).classes('text-sm')

90  src/main.py
@@ -8,6 +8,9 @@ from pages import DashboardPage, OllamaManagerPage
 from utils import GPUMonitor, SystemMonitor, OllamaMonitor
 import logging

+from tools import TOOLS
+from tools.base_tool import ToolContext, set_tool_context
+
 logging.basicConfig(
     level=logging.INFO,
     format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
@@ -28,6 +31,14 @@ app.timer(2.0, system_monitor.update)
 app.timer(2.0, gpu_monitor.update)
 app.timer(2.0, ollama_monitor.update)

+# Initialize tool context
+tool_context = ToolContext(
+    system_monitor=system_monitor,
+    gpu_monitor=gpu_monitor,
+    ollama_monitor=ollama_monitor
+)
+set_tool_context(tool_context)
+

 def create_layout(current_route='/'):
     # Force dark mode
@@ -41,22 +52,31 @@ def create_layout(current_route='/'):
     Sidebar(current_route)


+# Create tool routes with sub-pages support
+for tool_baseroute, tool in TOOLS.items():
+    # Register all routes defined by the tool
+    for sub_path, handler in tool.routes.items():
+        # Construct full route path
+        full_route = tool.baseroute + sub_path if sub_path else tool.baseroute
+
+        # Create a closure to capture the current handler and route
+        def create_route_handler(route, handler_func):
+            @ui.page(route)
+            async def tool_page():
+                create_layout(route)
+                await handler_func()
+            return tool_page
+
+        # Register the route
+        create_route_handler(full_route, handler)
+
+
 @ui.page('/')
 async def index_page():
     create_layout('/')
     DashboardPage(system_monitor, gpu_monitor, ollama_monitor)


-@ui.page('/system')
-async def system_page():
-    create_layout('/system')
-    with ui.element('div').classes('main-content w-full'):
-        with ui.column().classes('w-full max-w-4xl mx-auto p-6 gap-6'):
-            ui.label('System Overview').classes('text-2xl font-bold text-white mb-4')
-            with ui.card().classes('metric-card p-6'):
-                ui.label('Detailed system information will be displayed here...').classes('text-grey-5')
-
-
 @ui.page('/ollama')
 async def ollama_page():
     create_layout('/ollama')
@@ -65,56 +85,6 @@ async def ollama_page():
     # await page._load_models()


-@ui.page('/processes')
-async def processes_page():
-    create_layout('/processes')
-    with ui.element('div').classes('main-content w-full'):
-        with ui.column().classes('w-full max-w-4xl mx-auto p-6 gap-6'):
-            ui.label('Process Manager').classes('text-2xl font-bold text-white')
-            with ui.card().classes('metric-card p-6'):
-                ui.label('Process management coming soon...').classes('text-grey-5')
-
-
-@ui.page('/network')
-async def network_page():
-    create_layout('/network')
-    with ui.element('div').classes('main-content w-full'):
-        with ui.column().classes('w-full max-w-4xl mx-auto p-6 gap-6'):
-            ui.label('Network Monitor').classes('text-2xl font-bold text-white')
-            with ui.card().classes('metric-card p-6'):
-                ui.label('Network monitoring coming soon...').classes('text-grey-5')
-
-
-@ui.page('/packages')
-async def packages_page():
-    create_layout('/packages')
-    with ui.element('div').classes('main-content w-full'):
-        with ui.column().classes('w-full max-w-4xl mx-auto p-6 gap-6'):
-            ui.label('Package Manager').classes('text-2xl font-bold text-white')
-            with ui.card().classes('metric-card p-6'):
-                ui.label('Package management coming soon...').classes('text-grey-5')
-
-
-@ui.page('/logs')
-async def logs_page():
-    create_layout('/logs')
-    with ui.element('div').classes('main-content w-full'):
-        with ui.column().classes('w-full max-w-4xl mx-auto p-6 gap-6'):
-            ui.label('Log Viewer').classes('text-2xl font-bold text-white')
-            with ui.card().classes('metric-card p-6'):
-                ui.label('Log viewing coming soon...').classes('text-grey-5')
-
-
-@ui.page('/info')
-async def info_page():
-    create_layout('/info')
-    with ui.element('div').classes('main-content w-full'):
-        with ui.column().classes('w-full max-w-4xl mx-auto p-6 gap-6'):
-            ui.label('System Information').classes('text-2xl font-bold text-white')
-            with ui.card().classes('metric-card p-6'):
-                ui.label('Detailed system information coming soon...').classes('text-grey-5')
-
-
 @ui.page('/settings')
 async def settings_page():
     create_layout('/settings')
@@ -1,6 +1,5 @@
 from .welcome import WelcomePage
-from .system_overview import SystemOverviewPage
 from .ollama_manager import OllamaManagerPage
 from .dashboard import DashboardPage

-__all__ = ['WelcomePage', 'SystemOverviewPage', 'OllamaManagerPage', 'DashboardPage']
+__all__ = ['WelcomePage', 'OllamaManagerPage', 'DashboardPage']
@@ -1,50 +1,7 @@
 from typing import Literal
 from nicegui import ui
 from components.circular_progress import MetricCircle, LargeMetricCircle, ColorfulMetricCard, MetricCircleAdv
 from utils import SystemMonitor, GPUMonitor, OllamaMonitor

-"""
-with ui.element('div').classes('main-content w-full'):
-    with ui.column().classes('w-full max-w-4xl mx-auto p-6 gap-6'):
-        ui.label('Ollama Manager').classes('text-2xl font-bold text-white mb-4')
-
-        # Status cards
-        with ui.row().classes('w-full gap-4 mb-6'):
-            with ui.card().classes('metric-card flex-grow p-4'):
-                with ui.row().classes('items-center gap-2'):
-                    ui.icon('check_circle', color='green')
-                    ui.label('Status: Online').classes('font-medium text-white')
-
-            with ui.card().classes('metric-card flex-grow p-4'):
-                ui.label('Version: 0.11.11').classes('font-medium text-white')
-
-        # Models list
-        with ui.card().classes('metric-card p-6'):
-            ui.label('Installed Models').classes('text-lg font-bold text-white mb-4')
-
-            models = [
-                ('llama3.2:3b', '2.0 GB', 'Q4_0'),
-                ('mistral:7b', '4.1 GB', 'Q4_0'),
-                ('codellama:13b', '7.4 GB', 'Q4_K_M'),
-                ('phi3:mini', '2.3 GB', 'Q4_0'),
-            ]
-
-            for name, size, quant in models:
-                with ui.card().classes('metric-card p-4 mb-2'):
-                    with ui.row().classes('w-full items-center'):
-                        with ui.column().classes('gap-1'):
-                            ui.label(name).classes('font-bold text-white')
-                            with ui.row().classes('gap-2'):
-                                ui.chip(size, icon='storage').props('outline dense color=cyan')
-                                ui.chip(quant, icon='memory').props('outline dense color=orange')
-
-                        ui.space()
-
-                        with ui.row().classes('gap-2'):
-                            ui.button(icon='play_arrow').props('round flat color=green').tooltip('Run')
-                            ui.button(icon='info').props('round flat color=blue').tooltip('Info')
-                            ui.button(icon='delete').props('round flat color=red').tooltip('Delete')
-"""
 from pprint import pprint


 class DashboardPage(ui.column):
@@ -53,204 +10,291 @@ class DashboardPage(ui.column):
         super().__init__(wrap=wrap, align_items=align_items)
         self.system_monitor = system_monitor
         self.gpu_monitor = gpu_monitor
         self.ollama_monitor = ollama_monitor

-        self.classes('main-content w-full')
+        # Main content area with proper viewport handling
+        self.classes('main-content')
         with self:
             with ui.column().classes('w-full max-w-6xl mx-auto p-6 gap-6'):
-                with ui.grid(columns=4).classes('w-full gap-4'):
-                    MetricCircleAdv('CPU', system_monitor, 'cpu_percent', '', icon='memory', formatting='percent', color='#e879f9')
-                # Top stats grid
-                with ui.grid(columns=4).classes('w-full gap-4'):
-                    # CPU metric with binding
-                    with ui.card().classes('metric-card p-4 text-center'):
-                        with ui.column().classes('items-center gap-2'):
-                            ui.icon('memory', size='md', color='#e879f9')
-                            ui.label('CPU').classes('text-sm text-grey-5 font-medium')
-                            ui.circular_progress(size='60px', color='#e879f9').bind_value_from(
-                                system_monitor, 'cpu_percent', lambda x: x / 100)
-                            ui.label().classes('text-lg font-bold text-white').bind_text_from(
-                                system_monitor, 'cpu_percent', lambda x: f'{x:.1f}%')
+                # Page title
+                ui.label('System Monitor').classes('text-2xl font-bold text-white mb-2')

-                    # Memory metric with binding
-                    with ui.card().classes('metric-card p-4 text-center'):
-                        with ui.column().classes('items-center gap-2'):
-                            ui.icon('storage', size='md', color='#10b981')
-                            ui.label('Memory').classes('text-sm text-grey-5 font-medium')
-                            ui.circular_progress(size='60px', color='#10b981').bind_value_from(
-                                system_monitor, 'memory_percent', lambda x: x / 100)
-                            ui.label().classes('text-lg font-bold text-white').bind_text_from(
-                                system_monitor, 'memory_used', lambda x: f'{x / (1024**3):.1f}GB')
-
-                    # GPU metric with conditional rendering
-                    with ui.card().classes('metric-card p-4 text-center'):
-                        with ui.column().classes('items-center gap-2'):
-                            ui.icon('gpu_on', size='md', color='#f97316')
-                            ui.label('GPU').classes('text-sm text-grey-5 font-medium')
-                            ui.circular_progress(size='60px', color='#f97316').bind_value_from(
-                                gpu_monitor, 'usage', lambda x: x / 100 if gpu_monitor.available else 0)
-                            ui.label().classes('text-lg font-bold text-white').bind_text_from(
-                                gpu_monitor, 'usage', lambda x: f'{x:.1f}%' if gpu_monitor.available else 'N/A')
-
-                    # Temperature metric
-                    with ui.card().classes('metric-card p-4 text-center'):
-                        with ui.column().classes('items-center gap-2'):
-                            ui.icon('thermostat', size='md', color='#06b6d4')
-                            ui.label('Temp').classes('text-sm text-grey-5 font-medium')
-                            ui.circular_progress(size='60px', color='#06b6d4').bind_value_from(
-                                gpu_monitor, 'temperature', lambda x: x / 100 if gpu_monitor.available else 0)
-                            ui.label().classes('text-lg font-bold text-white').bind_text_from(
-                                gpu_monitor, 'temperature', lambda x: f'{x:.1f}°C' if gpu_monitor.available else 'N/A')
-
-                # Main dashboard content
-                with ui.row().classes('w-full gap-6'):
-                    # Left column - charts and graphs
-                    with ui.column().classes('flex-grow gap-4'):
-                        # Performance chart card
-                        with ui.card().classes('chart-area p-6'):
-                            ui.label('System Performance').classes('text-lg font-bold text-white mb-4')
-
-                            # Simulated chart area
-                            with ui.element('div').classes('h-48 w-full relative').style('background: linear-gradient(45deg, #1a1d2e 0%, #2a2d3e 100%); border-radius: 8px'):
-                                # Chart lines simulation
-                                with ui.element('svg').classes('absolute inset-0 w-full h-full'):
-                                    ui.html('''
-                                        <svg viewBox="0 0 400 200" class="w-full h-full">
-                                            <defs>
-                                                <linearGradient id="gradient1" x1="0%" y1="0%" x2="0%" y2="100%">
-                                                    <stop offset="0%" style="stop-color:#e879f9;stop-opacity:0.3" />
-                                                    <stop offset="100%" style="stop-color:#e879f9;stop-opacity:0" />
-                                                </linearGradient>
-                                                <linearGradient id="gradient2" x1="0%" y1="0%" x2="0%" y2="100%">
-                                                    <stop offset="0%" style="stop-color:#10b981;stop-opacity:0.3" />
-                                                    <stop offset="100%" style="stop-color:#10b981;stop-opacity:0" />
-                                                </linearGradient>
-                                            </defs>
-                                            <path d="M 20 100 Q 100 50 200 80 T 380 60" stroke="#e879f9" stroke-width="2" fill="none"/>
-                                            <path d="M 20 100 Q 100 50 200 80 T 380 60 L 380 180 L 20 180 Z" fill="url(#gradient1)"/>
-                                            <path d="M 20 120 Q 100 90 200 110 T 380 100" stroke="#10b981" stroke-width="2" fill="none"/>
-                                            <path d="M 20 120 Q 100 90 200 110 T 380 100 L 380 180 L 20 180 Z" fill="url(#gradient2)"/>
-                                        </svg>
-                                    ''')
-
-                            # Chart legend
-                            with ui.row().classes('gap-6 mt-4'):
-                                with ui.row().classes('items-center gap-2'):
-                                    ui.element('div').classes('w-3 h-3 rounded-full').style('background: #e879f9')
-                                    ui.label('CPU Usage').classes('text-sm text-grey-5')
-                                with ui.row().classes('items-center gap-2'):
-                                    ui.element('div').classes('w-3 h-3 rounded-full').style('background: #10b981')
-                                    ui.label('Memory Usage').classes('text-sm text-grey-5')
-
-                        # Quick actions
+                # PRIMARY METRICS ROW - Large cards for critical monitoring
                 with ui.row().classes('w-full gap-4'):
-                    ColorfulMetricCard('Process Manager', 'terminal', '#e879f9')
-                    ColorfulMetricCard('Network Monitor', 'router', '#10b981')
-                    ColorfulMetricCard('Log Viewer', 'description', '#f97316')
+                    # System Details
+                    self._create_system_details_section()
+                    self._create_ollama_section()

-                # Right column - system info and GPU details
-                with ui.column().classes('w-80 gap-4'):
-                    # Large GPU usage circle with binding
-                    with ui.card().classes('metric-card p-6 text-center'):
-                        with ui.column().classes('items-center gap-3'):
-                            ui.label('GPU Usage').classes('text-sm text-grey-5 font-medium uppercase tracking-wide')
-                            ui.circular_progress(size='120px', color='#f97316').bind_value_from(
-                                gpu_monitor, 'usage', lambda x: x / 100 if gpu_monitor.available else 0)
+                with ui.row().classes('w-full gap-4'):
+                    # GPU Section - Most important for AI workloads
+                    self._create_gpu_section()
+
+                    # CPU & Memory Section
+                    self._create_cpu_memory_section()
+                # Storage & Network
+                self._create_storage_network_section()
+
+                # PROCESS MONITORING
+                self._create_process_section()
+
+    def _create_gpu_section(self):
+        """Create GPU monitoring section"""
+        with ui.card().classes('metric-card p-6 flex-1'):
+            ui.label('GPU Performance').classes('text-lg font-bold text-white mb-4')
+
+            with ui.row().classes('gap-6 items-center'):
+                # GPU Load Circle
+                with ui.column().classes('items-center gap-2'):
+                    with ui.circular_progress(size='100px', color='#f97316', show_value=False).bind_value_from(
+                            self.gpu_monitor, 'usage', lambda x: x / 100 if self.gpu_monitor.available else 0):
+                        ui.label().classes('text-2xl font-bold text-white').bind_text_from(
-                            gpu_monitor, 'usage', lambda x: f'{int(x)}%' if gpu_monitor.available else '0%')
-                        ui.label().classes('text-xs text-grey-5').bind_text_from(
-                            gpu_monitor, 'gpu_name', lambda x: x if gpu_monitor.available else 'No GPU Detected')
+                            self.gpu_monitor, 'usage', lambda x: f'{x:.0f}%' if self.gpu_monitor.available else 'N/A')
+                    ui.label('GPU Load').classes('text-xs text-grey-5 uppercase')

-            # System info card with bindings
-            with ui.card().classes('metric-card p-4'):
-                ui.label('System Info').classes('text-sm font-bold text-white mb-3')
+                # GPU Metrics
+                with ui.column().classes('flex-1 gap-3'):
+                    # GPU Name
+                    ui.label().classes('text-sm text-white font-medium').bind_text_from(
+                        self.gpu_monitor, 'gpu_name', lambda x: x if self.gpu_monitor.available else 'No GPU Detected')
+
+                    # VRAM Bar
+                    with ui.column().classes('gap-1'):
+                        with ui.row().classes('justify-between'):
+                            ui.label('VRAM').classes('text-xs text-grey-5')
+                            ui.label().classes('text-xs text-white').bind_text_from(
+                                self.gpu_monitor, 'memory_used',
+                                lambda x: f'{x / 1024:.1f} / {self.gpu_monitor.memory_total / 1024:.1f} GB')
+                        ui.linear_progress(size='4px', color='purple', show_value=False).bind_value_from(
+                            self.gpu_monitor, 'memory_percent', lambda x: x / 100)
+
+                    # Temperature & Power
+                    with ui.row().classes('gap-4'):
+                        with ui.row().classes('items-center gap-2'):
+                            ui.icon('thermostat', size='xs', color='red')
+                            ui.label().classes('text-sm text-white').bind_text_from(
+                                self.gpu_monitor, 'temperature', lambda x: f'{x:.0f}°C' if x > 0 else 'N/A')
+
+                        with ui.row().classes('items-center gap-2'):
+                            ui.icon('bolt', size='xs', color='yellow')
+                            ui.label().classes('text-sm text-white').bind_text_from(
+                                self.gpu_monitor, 'power_draw', lambda x: f'{x:.0f}W' if x > 0 else 'N/A')
+
+    def _create_cpu_memory_section(self):
+        """Create CPU and Memory monitoring section"""
+        with ui.card().classes('metric-card p-6 flex-1'):
+            ui.label('CPU & Memory').classes('text-lg font-bold text-white mb-4')
+
+            with ui.row().classes('gap-6'):
+                # CPU Section
+                with ui.column().classes('flex-1 gap-3'):
+                    # CPU Usage Circle
+                    with ui.column().classes('items-center gap-4'):
+                        with ui.circular_progress(size='80px', color='#e879f9', show_value=False).bind_value_from(
+                                self.system_monitor, 'cpu_percent', lambda x: x / 100):
+                            ui.label().classes('text-xl font-bold text-white').bind_text_from(
+                                self.system_monitor, 'cpu_percent', lambda x: f'{x:.1f}%')
+                        ui.label('CPU Usage').classes('text-xs text-grey-5')
+
+                    # CPU Details
+                    with ui.column().classes('gap-2'):
+                        ui.label().classes('text-xs text-grey-5').bind_text_from(
+                            self.system_monitor, 'cpu_model', lambda x: x[:30] + '...' if len(x) > 30 else x)
+                        with ui.row().classes('gap-3'):
+                            ui.label().classes('text-xs text-white').bind_text_from(
+                                self.system_monitor, 'cpu_count', lambda x: f'{x} cores')
+                            ui.label().classes('text-xs text-white').bind_text_from(
+                                self.system_monitor, 'cpu_frequency', lambda x: f'{x:.1f} GHz' if x else 'N/A')
+
+                # Memory Section
+                with ui.column().classes('flex-1 gap-3'):
+                    # RAM Usage Circle
+                    with ui.column().classes('items-center gap-4'):
+                        with ui.circular_progress(size='80px', color='#10b981', show_value=False).bind_value_from(
+                                self.system_monitor, 'memory_percent', lambda x: x / 100):
+                            ui.label().classes('text-xl font-bold text-white').bind_text_from(
+                                self.system_monitor, 'memory_percent', lambda x: f'{x:.1f}%')
+                        ui.label('Memory').classes('text-xs text-grey-5')
+
+                    # Memory Details
+                    with ui.column().classes('gap-2'):
+                        ui.label().classes('text-xs text-white').bind_text_from(
+                            self.system_monitor, 'memory_used',
+                            lambda x: f'{x / (1024**3):.1f} / {self.system_monitor.memory_total / (1024**3):.0f} GB')
+
+                        # Swap if available
+                        ui.label().classes('text-xs text-grey-5').bind_text_from(
+                            self.system_monitor, 'swap_used',
+                            lambda x: f'Swap: {x / (1024**3):.1f} / {self.system_monitor.swap_total / (1024**3):.0f} GB' if self.system_monitor.swap_total > 0 else '')
+
    def _create_ollama_section(self):
        """Create Ollama status section"""
        with ui.card().classes('metric-card p-6'):
            ui.label('Ollama Service').classes('text-lg font-bold text-white mb-4')

            # Status indicator
            with ui.row().classes('items-center gap-3 mb-4'):
                # Status icon with conditional color
                @ui.refreshable
                def status_icon():
                    color = 'green' if self.ollama_monitor.status else 'red'
                    ui.icon('circle', size='sm', color=color)

                status_icon()
                ui.timer(2.0, status_icon.refresh)
                ui.label().classes('text-sm text-white font-medium').bind_text_from(
                    self.ollama_monitor, 'status', lambda x: 'Online' if x else 'Offline')
                ui.label().classes('text-xs text-grey-5').bind_text_from(
                    self.ollama_monitor, 'version', lambda x: f'v{x}' if x != 'Unknown' else '')

            # Active models
            with ui.column().classes('gap-2'):
                ui.label('Active Models').classes('text-sm text-grey-5 mb-1')

                @ui.refreshable
                def model_list():
                    if not self.ollama_monitor.active_models:
                        ui.label('No models loaded').classes('text-xs text-grey-6 italic')
                    else:
                        header = ['Name', 'Model', 'Context', 'Size', 'VRAM']
                        with ui.grid(columns=len(header)).classes('items-center gap-2 w-full'):
                            for item in header:
                                ui.label(item)
                            for model in self.ollama_monitor.active_models:
                                ui.label(model.get('name', 'Unknown')).classes('text-xs text-white')
                                ui.label(model.get('model', 'Unknown')).classes('text-xs text-white')
                                ui.label(f"{model.get('context_length', 'Unknown')}ctx").classes('text-xs text-white')
                                ui.label(f"{model.get('size', 0) / (1024**3):.1f}GB").classes('text-xs text-grey-6')
                                ui.label(f"{model.get('size_vram', 0) / (1024**3):.1f}GB").classes('text-xs text-grey-6')

                model_list()
                ui.timer(2.0, model_list.refresh)

    def _create_storage_network_section(self):
        """Create storage and network monitoring section"""
        with ui.card().classes('metric-card p-6 flex-1'):
            ui.label('Storage & Network').classes('text-lg font-bold text-white mb-4')

            with ui.row().classes('gap-6'):
                # Disk Usage
                with ui.column().classes('flex-1 gap-3'):
                    ui.label('Primary Disk').classes('text-sm text-grey-5')

                    # Disk usage bar
                    with ui.column().classes('gap-1'):
                        with ui.row().classes('justify-between'):
                            ui.label().classes('text-sm text-white font-medium').bind_text_from(
                                self.system_monitor, 'disk_percent', lambda x: f'{x:.0f}% Used')
                            ui.label().classes('text-xs text-grey-5').bind_text_from(
                                self.system_monitor, 'disk_free',
                                lambda x: f'{x / (1024**3):.0f} GB Free')
                        ui.linear_progress(size='8px', color='green', show_value=False).bind_value_from(
                            self.system_monitor, 'disk_percent', lambda x: x / 100)

                    ui.label().classes('text-xs text-grey-6').bind_text_from(
                        self.system_monitor, 'disk_used',
                        lambda x: f'{x / (1024**3):.0f} / {self.system_monitor.disk_total / (1024**3):.0f} GB')

                # Network I/O
                with ui.column().classes('flex-1 gap-3'):
                    ui.label('Network I/O').classes('text-sm text-grey-5')

                    with ui.column().classes('gap-2'):
                        # Download
                        with ui.row().classes('items-center gap-2'):
                            ui.icon('download', size='xs', color='cyan')
                            ui.label().classes('text-sm text-white').bind_text_from(
                                self.system_monitor, 'network_bytes_recv',
                                lambda x: f'↓ {self._format_bytes(x)}')

                        # Upload
                        with ui.row().classes('items-center gap-2'):
                            ui.icon('upload', size='xs', color='orange')
                            ui.label().classes('text-sm text-white').bind_text_from(
                                self.system_monitor, 'network_bytes_sent',
                                lambda x: f'↑ {self._format_bytes(x)}')

    def _create_system_details_section(self):
        """Create system information section"""
        with ui.card().classes('metric-card p-6 w-96'):
            ui.label('System Information').classes('text-lg font-bold text-white mb-4')

            with ui.column().classes('gap-3'):
                # OS Info
                with ui.row().classes('justify-between'):
                    ui.label('Operating System').classes('text-xs text-grey-5')
                    ui.label().classes('text-xs text-white').bind_text_from(
                        self.system_monitor, 'os_name')

                # Kernel
                with ui.row().classes('justify-between'):
                    ui.label('Kernel').classes('text-xs text-grey-5')
                    ui.label().classes('text-xs text-white font-medium').bind_text_from(
                        self.system_monitor, 'kernel')

                # CPU
                with ui.row().classes('justify-between'):
                    ui.label('CPU').classes('text-xs text-grey-5')
                    ui.label().classes('text-xs text-white font-medium').bind_text_from(
                        self.system_monitor, 'cpu_model')

                # GPU
                with ui.row().classes('justify-between'):
                    ui.label('GPU').classes('text-xs text-grey-5')
                    ui.label().classes('text-xs text-white font-medium').bind_text_from(
                        self.gpu_monitor, 'gpu_name', lambda x: x if self.gpu_monitor.available else 'No GPU')

                with ui.row().classes('justify-between'):
                    ui.label('Hostname').classes('text-xs text-grey-5')
                    ui.label().classes('text-xs text-white').bind_text_from(
                        self.system_monitor, 'hostname')

                with ui.row().classes('justify-between'):
                    ui.label('Architecture').classes('text-xs text-grey-5')
                    ui.label().classes('text-xs text-white').bind_text_from(
                        self.system_monitor, 'architecture')

                ui.separator().classes('my-2')

                # Uptime
                with ui.row().classes('justify-between'):
                    ui.label('Uptime').classes('text-xs text-grey-5')
                    ui.label().classes('text-xs text-white font-medium').bind_text_from(
                        self.system_monitor, 'uptime')

                # Load Average
                with ui.row().classes('justify-between'):
                    ui.label('Load Average').classes('text-xs text-grey-5')
                    ui.label().classes('text-xs text-white').bind_text_from(
                        self.system_monitor, 'load_avg',
                        lambda x: f'{x[0]:.2f}, {x[1]:.2f}, {x[2]:.2f}' if x else 'N/A')

        # Bottom metrics row with bindings
        with ui.grid(columns=5).classes('w-full gap-4 mt-4'):
            # Processes
            with ui.card().classes('metric-card p-3 text-center'):
                with ui.column().classes('items-center gap-1'):
                    ui.icon('dashboard', size='sm', color='grey-5')
                    ui.label().classes('text-lg font-bold text-white').bind_text_from(
                        self.system_monitor, 'process_count', str)
                    ui.label('Processes').classes('text-xs text-grey-5')

            # Network
            with ui.card().classes('metric-card p-3 text-center'):
                with ui.column().classes('items-center gap-1'):
                    ui.icon('wifi', size='sm', color='grey-5')
                    ui.label().classes('text-lg font-bold text-white').bind_text_from(
                        self.system_monitor, 'network_bytes_recv',
                        lambda x: self._format_network(self.system_monitor.network_bytes_recv + self.system_monitor.network_bytes_sent))
                    ui.label('Network').classes('text-xs text-grey-5')

            # Disk
            with ui.card().classes('metric-card p-3 text-center'):
                with ui.column().classes('items-center gap-1'):
                    ui.icon('storage', size='sm', color='grey-5')
                    ui.label().classes('text-lg font-bold text-white').bind_text_from(
                        self.system_monitor, 'disk_percent', lambda x: f'{x:.0f}%')
                    ui.label('Disk').classes('text-xs text-grey-5')

            # CPU Cores
            with ui.card().classes('metric-card p-3 text-center'):
                with ui.column().classes('items-center gap-1'):
                    ui.icon('settings', size='sm', color='grey-5')
                    ui.label().classes('text-lg font-bold text-white').bind_text_from(
                        self.system_monitor, 'cpu_count', str)
                    ui.label('CPU Cores').classes('text-xs text-grey-5')

            # Total RAM
            with ui.card().classes('metric-card p-3 text-center'):
                with ui.column().classes('items-center gap-1'):
                    ui.icon('memory', size='sm', color='grey-5')
                    ui.label().classes('text-lg font-bold text-white').bind_text_from(
                        self.system_monitor, 'memory_total', lambda x: f'{x / (1024**3):.0f}GB')
                    ui.label('Total RAM').classes('text-xs text-grey-5')

    def _create_process_section(self):
        """Create process monitoring section"""
        with ui.card().classes('metric-card p-6 w-full'):
            ui.label('Top Processes').classes('text-lg font-bold text-white mb-4')

            # Process table header
            with ui.row().classes('w-full px-2 pb-2 border-b border-gray-700'):
                ui.label('Process').classes('text-xs text-grey-5 font-medium w-64')
                ui.label('PID').classes('text-xs text-grey-5 font-medium w-20')
                ui.label('CPU %').classes('text-xs text-grey-5 font-medium w-20')
                ui.label('Memory').classes('text-xs text-grey-5 font-medium w-24')
                ui.label('Status').classes('text-xs text-grey-5 font-medium w-20')

            # Process list
            @ui.refreshable
            def process_list():
                processes = self.system_monitor.top_processes[:8]  # Show top 8 processes
                if not processes:
                    ui.label('No process data available').classes('text-xs text-grey-6 italic p-2')
                else:
                    for proc in processes:
                        with ui.row().classes('w-full px-2 py-1 hover:bg-gray-800 hover:bg-opacity-30'):
                            ui.label(proc.get('name', 'Unknown')[:30]).classes('text-xs text-white w-64 truncate')
                            ui.label(str(proc.get('pid', 0))).classes('text-xs text-grey-5 w-20')
                            ui.label(f"{proc.get('cpu_percent', 0):.1f}%").classes('text-xs text-white w-20')
                            mem_mb = proc.get('memory_info', {}).get('rss', 0) / (1024 * 1024)
                            ui.label(f'{mem_mb:.0f} MB').classes('text-xs text-white w-24')
                            status_color = 'green' if proc.get('status') == 'running' else 'grey-5'
                            ui.label(proc.get('status', 'unknown')).classes(f'text-xs text-{status_color} w-20')

            process_list()
            ui.timer(2.0, process_list.refresh)

    def _format_network(self, total_bytes: int) -> str:
        """Format network bytes to human readable format"""
        mb = total_bytes / (1024 * 1024)
        if mb > 1024:
            return f"{mb / 1024:.1f}GB"
        return f"{mb:.0f}MB"

    def _format_bytes(self, bytes_val: int) -> str:
        """Format bytes to human readable format"""
        if bytes_val < 1024:
            return f"{bytes_val} B"
        elif bytes_val < 1024 * 1024:
            return f"{bytes_val / 1024:.1f} KB"
        elif bytes_val < 1024 * 1024 * 1024:
            return f"{bytes_val / (1024 * 1024):.1f} MB"
        else:
            return f"{bytes_val / (1024 * 1024 * 1024):.1f} GB"

@@ -1,45 +0,0 @@
from nicegui import ui


class SystemOverviewPage(ui.column):
    def __init__(self):
        super().__init__()

        with self.classes('w-full gap-6 p-6'):
            ui.label('System Overview').classes('text-h4 font-bold')

            with ui.row().classes('w-full gap-4 flex-wrap'):
                self._create_stat_card('CPU Usage', '45%', 'memory', 'blue')
                self._create_stat_card('Memory', '8.2 / 16 GB', 'storage', 'green')
                self._create_stat_card('GPU Usage', '78%', 'gpu_on', 'orange')
                self._create_stat_card('GPU Memory', '6.1 / 8 GB', 'memory_alt', 'purple')

            with ui.card().classes('w-full'):
                ui.label('System Information').classes('text-h6 font-bold mb-4')
                with ui.grid(columns=2).classes('w-full gap-4'):
                    self._info_row('Operating System', 'Arch Linux')
                    self._info_row('Kernel', '6.16.7-arch1-1')
                    self._info_row('GPU', 'AMD Radeon RX 6700 XT')
                    self._info_row('Driver', 'amdgpu')
                    self._info_row('CPU', 'AMD Ryzen 7 5800X')
                    self._info_row('Uptime', '2 days, 14:32:15')

            with ui.card().classes('w-full'):
                ui.label('GPU Temperature').classes('text-h6 font-bold mb-4')
                with ui.row().classes('items-center gap-4'):
                    ui.icon('thermostat', size='lg')
                    ui.label('65°C').classes('text-h5')
                    ui.linear_progress(value=0.65, show_value=False).classes('flex-grow')

    def _create_stat_card(self, title: str, value: str, icon: str, color: str):
        with ui.card().classes('flex-grow min-w-[200px]'):
            with ui.row().classes('items-center gap-4'):
                ui.icon(icon, size='lg').classes(f'text-{color}')
                with ui.column().classes('gap-1'):
                    ui.label(title).classes('text-caption text-gray-500')
                    ui.label(value).classes('text-h6 font-bold')

    def _info_row(self, label: str, value: str):
        with ui.row().classes('w-full'):
            ui.label(label).classes('font-medium')
            ui.label(value).classes('text-gray-600 dark:text-gray-400')

src/tools/__init__.py (new file, 36 lines)
@@ -0,0 +1,36 @@
import os
import importlib
from typing import Dict

from .base_tool import BaseTool


def discover_tools() -> Dict[str, BaseTool]:
    """Auto-discover and load all tools"""
    tools = {}
    tools_dir = os.path.dirname(__file__)

    for item in os.listdir(tools_dir):
        tool_path = os.path.join(tools_dir, item)
        if os.path.isdir(tool_path) and not item.startswith('_'):
            try:
                # Import the tool module
                module = importlib.import_module(f'tools.{item}.tool')

                # Find the Tool class (named like CensorTool, ExampleTool, etc.)
                for attr_name in dir(module):
                    attr = getattr(module, attr_name)
                    if (isinstance(attr, type) and
                            issubclass(attr, BaseTool) and
                            attr is not BaseTool):
                        tool_instance = attr()
                        # Only register enabled tools
                        if tool_instance.enabled:
                            tools[tool_instance.baseroute] = tool_instance
                        break
            except ImportError as e:
                print(f"Failed to load tool {item}: {e}")

    return tools


TOOLS = discover_tools()

src/tools/base_tool.py (new file, 105 lines)
@@ -0,0 +1,105 @@
from abc import ABC, abstractmethod
from typing import Dict, Callable, Awaitable, Optional

from nicegui import ui
from niceguiasyncelement import AsyncColumn
import inspect


class ToolContext:
    """Global context providing access to system monitors and shared resources"""

    def __init__(self, system_monitor=None, gpu_monitor=None, ollama_monitor=None):
        self.system_monitor = system_monitor
        self.gpu_monitor = gpu_monitor
        self.ollama_monitor = ollama_monitor


# Global context instance
_tool_context: Optional[ToolContext] = None


def get_tool_context() -> ToolContext:
    """Get the global tool context"""
    if _tool_context is None:
        raise RuntimeError("Tool context not initialized. Call set_tool_context() first.")
    return _tool_context


def set_tool_context(context: ToolContext):
    """Set the global tool context"""
    global _tool_context
    _tool_context = context


class BaseTool(ABC):
    @property
    def context(self) -> ToolContext:
        """Access to shared system monitors and resources"""
        return get_tool_context()

    @property
    def baseroute(self) -> str:
        """Auto-generate route from module name"""
        # Get the module path, e.g. tools.example_tool.tool
        module = inspect.getmodule(self)
        if module:
            module_name = module.__name__
        else:
            raise ValueError("no module name specified.")
        # Extract the package name, e.g. example_tool
        package_name = module_name.split('.')[-2]
        # Convert to a route, e.g. /example-tool
        return f"/{package_name.replace('_', '-')}"

    @property
    @abstractmethod
    def name(self) -> str:
        """Tool name for display"""
        pass

    @property
    @abstractmethod
    def description(self) -> str:
        """Tool description"""
        pass

    @property
    @abstractmethod
    def icon(self) -> str:
        """Material icon name"""
        pass

    @property
    def enabled(self) -> bool:
        """Whether this tool is enabled (default: True)"""
        return True

    @property
    @abstractmethod
    def routes(self) -> Dict[str, Callable[[], Awaitable]]:
        """Define sub-routes relative to baseroute

        Returns: Dict of {sub_path: handler_method}
        Example: {
            '': lambda: MainPage().create(self),
            '/settings': lambda: SettingsPage().create(self),
            '/history': lambda: HistoryPage().create(self)
        }
        """
        pass


class BasePage(AsyncColumn):
    """Base class for all tool pages - handles common setup"""
    tool: 'BaseTool'

    async def build(self, tool: 'BaseTool'):
        """Common setup for all pages"""
        self.classes('main-content')
        self.tool = tool

        with self:
            await self.content()

    @abstractmethod
    async def content(self):
        """Override this to provide page-specific content"""
        pass

src/tools/example_tool/__init__.py (new file, empty)

src/tools/example_tool/tool.py (new file, 135 lines)
@@ -0,0 +1,135 @@
from typing import Dict, Callable, Awaitable

from nicegui import ui

from tools.base_tool import BaseTool, BasePage


class ExampleTool(BaseTool):
    @property
    def name(self) -> str:
        return "Example Tool"

    @property
    def description(self) -> str:
        return "Shows how to build a tool with multiple pages."

    @property
    def icon(self) -> str:
        return "extension"

    @property
    def enabled(self) -> bool:
        """Enable/disable this tool (set to False to hide it from the menu and disable its routes)"""
        return True  # Set to False to disable this tool

    @property
    def routes(self) -> Dict[str, Callable[[], Awaitable]]:
        """Define the routes for this tool"""
        return {
            '': lambda: MainPage().create(self),
            '/settings': lambda: SettingsPage().create(self),
            '/history': lambda: HistoryPage().create(self),
        }


class MainPage(BasePage):
    """Main page of the example tool"""

    async def content(self):
        ui.label(self.tool.name).classes('text-2xl font-bold text-white mb-4')

        # Description
        with ui.card().classes('metric-card p-4'):
            ui.label('Main Page').classes('text-lg font-bold text-white mb-2')
            ui.label(self.tool.description).classes('text-sm text-grey-5')

        # Navigation to sub-pages
        ui.label('Navigate to:').classes('text-sm text-grey-5 mt-4')
        with ui.row().classes('gap-2'):
            ui.button('Settings', icon='settings',
                      on_click=lambda: ui.navigate.to(f'{self.tool.baseroute}/settings')).props('color=primary')
            ui.button('History', icon='history',
                      on_click=lambda: ui.navigate.to(f'{self.tool.baseroute}/history')).props('color=secondary')

        # Example content
        with ui.card().classes('metric-card p-6 mt-4'):
            ui.label('Example Content').classes('text-lg font-bold text-white mb-2')
            ui.label('This is the main page of the example tool.').classes('text-sm text-grey-5')
            ui.label('Tools can have multiple pages using sub-routes!').classes('text-sm text-grey-5')

        # Demonstrate context access
        with ui.card().classes('metric-card p-6 mt-4'):
            ui.label('Context Demo').classes('text-lg font-bold text-white mb-2')

            # Access system monitors through context
            ui.label().classes('text-sm text-white').bind_text_from(
                self.tool.context.system_monitor, 'cpu_percent',
                backward=lambda x: f'CPU Usage: {x:.1f}%'
            )

            ui.label().classes('text-sm text-white').bind_text_from(
                self.tool.context.gpu_monitor, 'temperature',
                backward=lambda x: f'GPU Temperature: {x:.0f}°C' if x > 0 else 'GPU Temperature: N/A'
            )

            ui.label().classes('text-sm text-white').bind_text_from(
                self.tool.context.ollama_monitor, 'active_models',
                backward=lambda x: f'Active Models: {len(x)}'
            )


class SettingsPage(BasePage):
    """Settings sub-page"""

    async def content(self):
        # Header with back button
        with ui.row().classes('items-center gap-4 mb-4'):
            ui.button(icon='arrow_back', on_click=lambda: ui.navigate.to(self.tool.baseroute)).props('flat round')
            ui.label(f'{self.tool.name} - Settings').classes('text-2xl font-bold text-white')

        # Settings content
        with ui.card().classes('metric-card p-6'):
            ui.label('Tool Settings').classes('text-lg font-bold text-white mb-4')

            with ui.column().classes('gap-4'):
                # Example settings
                with ui.row().classes('items-center justify-between'):
                    ui.label('Enable feature').classes('text-sm text-white')
                    ui.switch(value=True).props('color=cyan')

                with ui.row().classes('items-center justify-between'):
                    ui.label('Update interval').classes('text-sm text-white')
                    ui.select(['1s', '2s', '5s', '10s'], value='2s').props('outlined dense color=cyan')

                with ui.row().classes('items-center justify-between'):
                    ui.label('Max items').classes('text-sm text-white')
                    ui.number(value=100, min=1, max=1000).props('outlined dense')


class HistoryPage(BasePage):
    """History sub-page"""

    async def content(self):
        # Header with back button
        with ui.row().classes('items-center gap-4 mb-4'):
            ui.button(icon='arrow_back', on_click=lambda: ui.navigate.to(self.tool.baseroute)).props('flat round')
            ui.label(f'{self.tool.name} - History').classes('text-2xl font-bold text-white')

        # History content
        with ui.card().classes('metric-card p-6'):
            ui.label('Activity History').classes('text-lg font-bold text-white mb-4')

            # Example history items
            history_items = [
                ('Action performed', '2 minutes ago', 'check_circle', 'green'),
                ('Settings updated', '15 minutes ago', 'settings', 'cyan'),
                ('Process started', '1 hour ago', 'play_arrow', 'orange'),
                ('Error occurred', '2 hours ago', 'error', 'red'),
            ]

            with ui.column().classes('gap-2'):
                for action, time, icon, color in history_items:
                    with ui.row().classes('items-center gap-3 p-2 hover:bg-gray-800 hover:bg-opacity-30 rounded'):
                        ui.icon(icon, size='sm', color=color)
                        with ui.column().classes('flex-1 gap-0'):
                            ui.label(action).classes('text-sm text-white')
                            ui.label(time).classes('text-xs text-grey-5')