# ArchGPU Frontend

A comprehensive web-based platform for local AI model testing and system monitoring. Built with NiceGUI and designed for privacy-focused AI experimentation on Arch Linux systems with GPU support.

## 🎯 Core Purpose

**Local AI Testing Environment** - Test AI models locally with complete privacy and security, enabling experimentation that external providers might restrict or monitor.

### Key Features

- **🖥️ Comprehensive System Monitoring** - Real-time tracking of AI workloads
- **🤖 Advanced Ollama Integration** - Complete model management and testing
- **🔧 Extensible Plugin System** - Add custom testing tools easily
- **🔒 Privacy-First Design** - All processing happens locally
- **⚡ Real-Time Performance Tracking** - Monitor resource usage during inference

## 🚀 Quick Start

### Prerequisites

- Python 3.13+
- uv package manager
- Ollama installed and running on port 11434
- GPU drivers (AMD or NVIDIA)

### Installation

```bash
# Clone the repository
git clone <repository-url>
cd ArchGPUFrontend

# Install dependencies
uv sync

# Run the application
APP_PORT=8081 uv run python src/main.py
```

Open your browser to `http://localhost:8081` (or 8080 for production).

## 📊 System Monitoring

### Dashboard Features

The dashboard provides real-time monitoring specifically designed for AI workload analysis:

#### Primary Metrics

- **GPU Performance**: Load percentage, VRAM usage, temperature, power draw
- **CPU & Memory**: Usage percentages, frequency, detailed specifications
- **Ollama Service**: Status, version, active models with metadata
- **Storage & Network**: Disk usage, real-time I/O monitoring

#### Enhanced Header

- **Critical Metrics Badges**: GPU load, VRAM, RAM, disk space
- **Active Models Tooltip**: Detailed model information on hover
- **Live Status Indicators**: Service health and version info
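
The header badges above boil raw byte counts down to compact strings. A minimal sketch of such a formatter (`format_badge` is a hypothetical helper for illustration, not a function from this codebase):

```python
def format_badge(used_bytes: int, total_bytes: int) -> str:
    """Render a used/total pair as a compact GiB badge, e.g. for VRAM or RAM."""
    gib = 2 ** 30
    return f"{used_bytes / gib:.1f}/{total_bytes / gib:.1f} GiB"

print(format_badge(8 * 2 ** 30, 16 * 2 ** 30))  # → 8.0/16.0 GiB
```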

#### Process Monitoring

- Real-time table of top processes
- CPU and memory usage per process
- Process status and PID tracking
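
The metrics above are collected with psutil in the real application; the following stdlib-only sketch shows the general shape of one monitoring tick (field names are illustrative, and `os.getloadavg` is POSIX-only):

```python
import os
import shutil

def snapshot() -> dict:
    """Collect a minimal system snapshot, roughly what a dashboard tick gathers."""
    total, used, _free = shutil.disk_usage("/")
    return {
        "cpu_count": os.cpu_count(),        # logical CPUs
        "load_avg_1m": os.getloadavg()[0],  # 1-minute load average (POSIX)
        "disk_used_pct": round(used / total * 100, 1),
    }

print(snapshot())
```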

## 🤖 Ollama Integration

### Model Management

- **Browse & Download**: Pull models from the Ollama library and Hugging Face
- **Rich Metadata**: View size, quantization, parameters, context length
- **Quick Testing**: In-app chat interface for immediate model testing
- **Custom Models**: Create models from custom Modelfiles
- **Performance Tracking**: Monitor VRAM usage and inference speed

### Supported Operations

- Model discovery and installation
- Real-time active model monitoring
- Model deletion and management
- Custom model creation
- Chat testing interface
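
These operations go through Ollama's HTTP API on the port listed under Prerequisites. A sketch that builds (but does not send) a request against Ollama's documented `/api/generate` endpoint:

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434"  # default Ollama port

def build_generate_request(model: str, prompt: str) -> request.Request:
    """Build a non-streaming generation request for Ollama's /api/generate."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("llama3", "Say hello")
# Sending it (request.urlopen(req)) requires a running Ollama instance.
```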

## 🔧 Plugin System

The application features an extensible plugin architecture for creating custom AI testing tools.

### Available Tools

- **Example Tool** - Demonstrates plugin capabilities with sub-pages

### Creating Tools

See our [Tool Creation Guide](docs/TOOL_CREATION.md) for detailed instructions on building custom tools.

Quick example:

```python
from nicegui import ui

from tools.base_tool import BasePage, BaseTool


class MyTool(BaseTool):
    @property
    def name(self) -> str:
        return "My Testing Tool"

    @property
    def routes(self):
        return {'': lambda: MainPage().create(self)}


class MainPage(BasePage):
    async def content(self):
        # Access system monitors
        cpu = self.tool.context.system_monitor.cpu_percent
        models = self.tool.context.ollama_monitor.active_models

        # Build your testing interface
        ui.label(f"CPU: {cpu}% | Models: {len(models)}")
```

## 🏗️ Architecture

### Technology Stack

- **Frontend**: NiceGUI (FastAPI + Vue.js)
- **Backend**: Python 3.13 with async/await
- **System Monitoring**: psutil
- **GPU Monitoring**: rocm-smi / nvidia-smi
- **AI Integration**: Ollama API
- **Package Manager**: uv

### Project Structure

```
src/
├── main.py               # Application entry point
├── pages/                # Core application pages
│   ├── dashboard.py      # System monitoring dashboard
│   └── ollama_manager.py # Model management interface
├── components/           # Reusable UI components
│   ├── header.py         # Enhanced header with metrics
│   └── sidebar.py        # Navigation with auto-populated tools
├── tools/                # Plugin system
│   ├── base_tool.py      # BaseTool and BasePage classes
│   └── example_tool/     # Example plugin implementation
├── utils/                # System monitoring utilities
│   ├── system_monitor.py # CPU, memory, disk monitoring
│   ├── gpu_monitor.py    # GPU performance tracking
│   └── ollama_monitor.py # Ollama service monitoring
└── static/               # CSS and assets
```

### Key Design Patterns

- **Plugin Architecture**: Auto-discovery of tools from `src/tools/`
- **Context Pattern**: Shared resource access via `ToolContext`
- **Async Components**: Custom async element classes built on NiceGUI
- **Real-Time Binding**: Live data updates with NiceGUI's binding system
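
One common way to realize the auto-discovery pattern above is a subclass registry: merely defining a `BaseTool` subclass makes it visible to the navigation. This is a self-contained simplification for illustration, not the project's actual `BaseTool` (which reportedly scans `src/tools/` for modules):

```python
class BaseTool:
    """Registry sketch: every subclass is auto-registered at definition time."""
    registry: list = []

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        BaseTool.registry.append(cls)  # the sidebar can iterate this list


class ExampleTool(BaseTool):
    name = "Example Tool"


print([tool.name for tool in BaseTool.registry])  # → ['Example Tool']
```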

## ⚙️ Configuration

### Environment Variables

Create a `.env` file in the project root:

```env
# Application settings
APP_PORT=8080
APP_TITLE=ArchGPU Frontend
APP_SHOW=false
APP_STORAGE_SECRET=your-secret-key

# Monitoring settings
MONITORING_UPDATE_INTERVAL=2
```
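
A sketch of how these variables might be read at startup. The variable names match the `.env` example above; the defaults and type conversions are assumptions, not the app's actual loader:

```python
import os

def load_config(env=os.environ) -> dict:
    """Read settings using the names from the .env example; defaults assumed."""
    return {
        "port": int(env.get("APP_PORT", "8080")),
        "title": env.get("APP_TITLE", "ArchGPU Frontend"),
        "show": env.get("APP_SHOW", "false").lower() == "true",
        "update_interval": float(env.get("MONITORING_UPDATE_INTERVAL", "2")),
    }

print(load_config({"APP_PORT": "8081", "APP_SHOW": "true"}))
```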

### GPU Support

The application automatically detects and supports:

- **AMD GPUs**: Via rocm-smi or a sysfs fallback
- **NVIDIA GPUs**: Via nvidia-smi
- **Multi-GPU**: Monitoring across multiple GPUs
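
The detection described above can be sketched as a probe for the vendor CLIs with sysfs as the AMD fallback. The probe order, the `which` injection (used only to make the function testable), and the sysfs path in the comment are all assumptions for illustration:

```python
import shutil

def detect_gpu_backend(which=shutil.which) -> str:
    """Pick a GPU monitoring backend: vendor CLI first, sysfs as AMD fallback."""
    if which("nvidia-smi"):
        return "nvidia-smi"
    if which("rocm-smi"):
        return "rocm-smi"
    return "sysfs"  # e.g. read /sys/class/drm/card*/device/* directly

print(detect_gpu_backend(which=lambda _name: None))  # → sysfs
```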

## 🔒 Privacy & Security

### Local-First Design

- All AI processing happens on your local machine
- No data sent to external providers
- Complete control over model interactions
- Secure testing of sensitive data

### Use Cases

- Testing model behaviors that providers restrict
- Private data analysis and processing
- Security research and safety testing
- Custom prompt engineering without external logging
- Unrestricted local AI experimentation

## 🛠️ Development

### Running in Development

```bash
# Development server (port 8081 to avoid conflicts)
APP_PORT=8081 uv run python src/main.py

# Production server
uv run python src/main.py
```

### Adding Dependencies

```bash
# Add a runtime dependency
uv add package-name

# Add a development dependency
uv add --dev package-name

# Sync the environment with the lockfile
uv sync
```

### Creating Tools

1. Create a tool directory: `src/tools/my_tool/`
2. Implement a tool class inheriting from `BaseTool`
3. Define routes and page classes
4. The tool automatically appears in the navigation

See the [Tool Creation Guide](docs/TOOL_CREATION.md) for detailed instructions.
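
A tool's routes dict maps URL suffixes to page factories, as in the plugin example earlier. One way such suffixes could be mounted under a tool-specific prefix is sketched below; the `/tools/<slug>` URL scheme and the `tool_routes` helper are hypothetical, not this project's actual routing:

```python
def tool_routes(tool_name: str, routes: dict) -> dict:
    """Mount a tool's routes dict under a slug derived from its name."""
    slug = tool_name.lower().replace(" ", "-")
    return {f"/tools/{slug}{suffix}": page for suffix, page in routes.items()}

urls = tool_routes("My Testing Tool", {"": "main_page", "/settings": "settings_page"})
print(sorted(urls))  # → ['/tools/my-testing-tool', '/tools/my-testing-tool/settings']
```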

## 🤝 Contributing

We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.

### Development Setup

1. Fork the repository
2. Create a feature branch
3. Install dependencies with `uv sync`
4. Make your changes
5. Test with `APP_PORT=8081 uv run python src/main.py`
6. Submit a pull request

## 📄 License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## 🆘 Support

- **Issues**: Create GitHub issues for bugs and feature requests
- **Documentation**: Check the `docs/` directory for detailed guides

## 🙏 Acknowledgments

- [NiceGUI](https://nicegui.io/) - Excellent Python web framework
- [Ollama](https://ollama.ai/) - Local AI model serving
- [psutil](https://psutil.readthedocs.io/) - System monitoring
- The open-source AI community

---

**Built for privacy-focused AI experimentation and local model testing.**