# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Project Overview

An Actix-web REST API for serving images and videos from a filesystem, with automatic thumbnail generation, EXIF extraction, tag organization, and a memories feature for browsing photos by date. Uses SQLite with the Diesel ORM for persistence and ffmpeg for video processing.

## Development Commands

### Building & Running

```bash
# Build for development
cargo build

# Build for release (uses thin LTO optimization)
cargo build --release

# Run the server (requires a .env file with DATABASE_URL, BASE_PATH, THUMBNAILS, VIDEO_PATH, BIND_URL, SECRET_KEY)
cargo run

# Run with a specific log level
RUST_LOG=debug cargo run
```

### Testing

```bash
# Run all tests (requires BASE_PATH in .env)
cargo test

# Run a specific test
cargo test test_name

# Run tests with output
cargo test -- --nocapture
```

### Database Migrations

```bash
# Install the diesel CLI (one-time setup)
cargo install diesel_cli --no-default-features --features sqlite

# Create a new migration
diesel migration generate migration_name

# Run migrations (also run automatically on app startup)
diesel migration run

# Revert the last migration
diesel migration revert

# Regenerate schema.rs after manual migration changes
diesel print-schema > src/database/schema.rs
```

### Code Quality

```bash
# Format code
cargo fmt

# Run the clippy linter
cargo clippy

# Apply automatically fixable suggestions
cargo fix
```

### Utility Binaries

```bash
# Two-phase cleanup: resolve missing files and validate file types
cargo run --bin cleanup_files -- --base-path /path/to/media --database-url ./database.db

# Batch-extract EXIF for existing files
cargo run --bin migrate_exif
```

## Architecture Overview

### Core Components

**Layered Architecture:**

- **HTTP Layer** (`main.rs`): Route handlers for images, videos, metadata, tags, favorites, and memories
- **Auth Layer** (`auth.rs`): JWT token validation; `Claims` extraction via the `FromRequest` trait
- **Service Layer** (`files.rs`, `exif.rs`, `memories.rs`): Business logic for file operations and EXIF extraction
- **DAO Layer** (`database/mod.rs`): Trait-based data access (`ExifDao`, `UserDao`, `FavoriteDao`, `TagDao`)
- **Database Layer**: Diesel ORM with SQLite; schema in `database/schema.rs`

**Async Actor System (Actix):**

- `StreamActor`: Manages the ffmpeg video-processing lifecycle
- `VideoPlaylistManager`: Scans directories and queues videos
- `PlaylistGenerator`: Creates HLS playlists for video streaming

### Database Schema & Patterns

**Tables:**

- `users`: Authentication (id, username, password_hash)
- `favorites`: User-specific favorites (userid, path)
- `tags`: Custom labels with timestamps
- `tagged_photo`: Many-to-many photo-tag relationships
- `image_exif`: Rich metadata (file_path plus 16 EXIF fields: camera, GPS, dates, exposure settings)

**DAO Pattern:**

All database access goes through trait-based DAOs (e.g., `ExifDao`, `SqliteExifDao`). Access to the SQLite connection is shared and serialized through an `Arc<Mutex<SqliteConnection>>`. All DB operations are traced with OpenTelemetry in release builds.

**Key DAO Methods:**

- `store_exif()`, `get_exif()`, `get_exif_batch()`: EXIF CRUD operations
- `query_by_exif()`: Complex filtering by camera, GPS bounds, and date ranges
- Batch operations minimize DB hits during file watching
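
The shape of the DAO pattern can be sketched with an in-memory stand-in. Everything here is illustrative: the real `ImageExif` model has ~16 fields and the real implementation (`SqliteExifDao`) runs Diesel queries; only the trait name and method names come from this document.

```rust
use std::collections::HashMap;

// Simplified stand-in for the Diesel model; the real `image_exif` row has ~16 fields.
#[derive(Clone, Debug, PartialEq)]
struct ImageExif {
    file_path: String,
    camera_make: Option<String>,
}

// Trait-based DAO: handlers depend on the trait, not on SQLite specifics.
trait ExifDao {
    fn store_exif(&mut self, exif: ImageExif);
    fn get_exif(&self, path: &str) -> Option<ImageExif>;
    fn get_exif_batch(&self, paths: &[&str]) -> Vec<ImageExif>;
}

// In-memory implementation standing in for `SqliteExifDao` (illustration only).
#[derive(Default)]
struct InMemoryExifDao {
    rows: HashMap<String, ImageExif>,
}

impl ExifDao for InMemoryExifDao {
    fn store_exif(&mut self, exif: ImageExif) {
        self.rows.insert(exif.file_path.clone(), exif);
    }

    fn get_exif(&self, path: &str) -> Option<ImageExif> {
        self.rows.get(path).cloned()
    }

    fn get_exif_batch(&self, paths: &[&str]) -> Vec<ImageExif> {
        // One pass for many paths instead of N separate queries, mirroring
        // how batch methods minimize DB hits during file watching.
        paths.iter().filter_map(|p| self.rows.get(*p).cloned()).collect()
    }
}

fn main() {
    let mut dao = InMemoryExifDao::default();
    dao.store_exif(ImageExif {
        file_path: "a.jpg".into(),
        camera_make: Some("Canon".into()),
    });
    assert!(dao.get_exif("a.jpg").is_some());
    assert_eq!(dao.get_exif_batch(&["a.jpg", "b.jpg"]).len(), 1);
}
```

Because handlers hold a `dyn ExifDao` (or a generic bound on the trait), tests can swap in an in-memory implementation like this one without touching SQLite.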

### File Processing Pipeline

**Thumbnail Generation:**

1. Startup scan: Rayon-parallel walk of `BASE_PATH`
2. Creates 200x200 thumbnails in the `THUMBNAILS` directory (mirrors the source structure)
3. Videos: extracts a frame at the 3-second mark via ffmpeg
4. Images: uses the `image` crate for JPEG/PNG processing

**File Watching:**

Runs in a background thread with a two-tier strategy:

- **Quick scan** (default 60s): Recently modified files only
- **Full scan** (default 3600s): Comprehensive directory check
- Batch-queries the EXIF DB to detect new files
- Configurable via `WATCH_QUICK_INTERVAL_SECONDS` and `WATCH_FULL_INTERVAL_SECONDS`
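
The two-tier decision can be sketched as a pure function. This is a hypothetical model of the watcher loop (the document only specifies the two intervals, not the thread's internals): a full scan takes priority when its interval has elapsed, otherwise a quick scan fires on its own shorter interval.

```rust
use std::time::Duration;

#[derive(Debug, PartialEq)]
enum Scan {
    None,
    Quick, // recently modified files only
    Full,  // comprehensive directory check
}

// Decide which scan tier is due, given time elapsed since each last ran.
fn next_scan(
    since_quick: Duration,
    since_full: Duration,
    quick_interval: Duration,
    full_interval: Duration,
) -> Scan {
    if since_full >= full_interval {
        Scan::Full
    } else if since_quick >= quick_interval {
        Scan::Quick
    } else {
        Scan::None
    }
}

fn main() {
    let (q, f) = (Duration::from_secs(60), Duration::from_secs(3600));
    // 61s since the last quick scan, full interval not yet reached.
    assert_eq!(next_scan(Duration::from_secs(61), Duration::from_secs(61), q, f), Scan::Quick);
    // Full interval elapsed: the comprehensive check wins.
    assert_eq!(next_scan(Duration::from_secs(10), Duration::from_secs(3700), q, f), Scan::Full);
}
```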

**EXIF Extraction:**

- Uses the `kamadak-exif` crate
- Supports: JPEG, TIFF, RAW (NEF, CR2, CR3), HEIF/HEIC, PNG, WebP
- Extracts: camera make/model, lens, dimensions, GPS coordinates, focal length, aperture, shutter speed, ISO, and date taken
- Triggered on upload and during file watching

**File Upload Behavior:**

If a file already exists, a timestamp is appended to the filename (`photo_1735124234.jpg`) to preserve history without overwrites.
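
The rename can be sketched with `std::path` alone. The function name is hypothetical; the real handler would derive the timestamp from the system clock at upload time, which is passed in here for determinism.

```rust
use std::path::{Path, PathBuf};

// Build "photo_1735124234.jpg" from "photo.jpg" plus a Unix timestamp,
// keeping the parent directory and extension intact.
fn deduplicated_name(original: &Path, unix_ts: u64) -> PathBuf {
    let stem = original.file_stem().and_then(|s| s.to_str()).unwrap_or("file");
    let name = match original.extension().and_then(|e| e.to_str()) {
        Some(ext) => format!("{stem}_{unix_ts}.{ext}"),
        None => format!("{stem}_{unix_ts}"),
    };
    original.with_file_name(name)
}

fn main() {
    let renamed = deduplicated_name(Path::new("/media/photo.jpg"), 1735124234);
    assert_eq!(renamed, PathBuf::from("/media/photo_1735124234.jpg"));
}
```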

### Authentication Flow

**Login:**

1. POST `/login` with username/password
2. Verify with `bcrypt::verify()` against `password_hash`
3. Generate a JWT with claims: `{ sub: user_id, exp: 5_days_from_now }`
4. Sign with HS256 using the `SECRET_KEY` environment variable
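
Step 3 amounts to computing an expiry five days out. A minimal sketch using only `std` (the actual signing is done with the `jsonwebtoken` crate and is omitted; field types here are assumptions):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Claim names `sub` and `exp` come from the login flow above;
// the concrete field types are illustrative.
#[derive(Debug)]
struct Claims {
    sub: i32, // user id
    exp: u64, // expiry, Unix seconds
}

const FIVE_DAYS_SECS: u64 = 5 * 24 * 60 * 60;

fn claims_for(user_id: i32) -> Claims {
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock before Unix epoch")
        .as_secs();
    Claims { sub: user_id, exp: now + FIVE_DAYS_SECS }
}

fn main() {
    let c = claims_for(42);
    assert_eq!(c.sub, 42);
    // exp is at least five days in the future.
    assert!(c.exp > FIVE_DAYS_SECS);
    println!("claims: {c:?}");
}
```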

**Authorization:**

All protected endpoints extract `Claims` via a `FromRequest` trait implementation. The token is passed in the `Authorization: Bearer <token>` header.

### API Structure

**Key Endpoint Patterns:**

```
// Image serving & upload
GET  /image?path=...&size=...&format=...
POST /image (multipart file upload)

// Metadata & EXIF
GET  /image/metadata?path=...

// Advanced search with filters
GET  /photos?path=...&recursive=true&sort=DateTakenDesc&camera_make=Canon&gps_lat=...&gps_lon=...&gps_radius_km=10&date_from=...&date_to=...&tag_ids=1,2,3&media_type=Photo

// Video streaming (HLS)
POST /video/generate (creates .m3u8 playlist + .ts segments)
GET  /video/stream?path=... (serves playlist)

// Tags
GET    /image/tags/all
POST   /image/tags (add tag to file)
DELETE /image/tags (remove tag from file)
POST   /image/tags/batch (bulk tag updates)

// Memories (week-based grouping)
GET /memories?path=...&recursive=true
```

**Request Types:**

- `FilesRequest`: Supports complex filtering (tags, EXIF fields, GPS radius, date ranges)
- `SortType`: `Shuffle`, `NameAsc`/`Desc`, `TagCountAsc`/`Desc`, `DateTakenAsc`/`Desc`

### Important Patterns

**Service Builder Pattern:**

Routes are registered via the composable `ServiceBuilder` trait in `service.rs`, allowing modular feature addition.

**Path Validation:**

Always use `is_valid_full_path(&base_path, &requested_path, check_exists)` to prevent directory traversal attacks.
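
The core of such a check can be sketched lexically with `std::path` (the real `is_valid_full_path` also optionally verifies existence; `lexically_normalize` and `is_within_base` are illustrative names, not the repo's API). `..` components are resolved without touching the filesystem, then containment is tested with `starts_with`:

```rust
use std::path::{Component, Path, PathBuf};

// Resolve `.` and `..` components purely lexically.
fn lexically_normalize(path: &Path) -> PathBuf {
    let mut out = PathBuf::new();
    for comp in path.components() {
        match comp {
            Component::ParentDir => {
                out.pop(); // step back; no-op at the root
            }
            Component::CurDir => {}
            other => out.push(other),
        }
    }
    out
}

// True iff `base/requested` stays inside `base` after normalization.
fn is_within_base(base: &Path, requested: &Path) -> bool {
    lexically_normalize(&base.join(requested)).starts_with(lexically_normalize(base))
}

fn main() {
    let base = Path::new("/media");
    assert!(is_within_base(base, Path::new("albums/2024/photo.jpg")));
    // Traversal attempt escapes the base directory and is rejected.
    assert!(!is_within_base(base, Path::new("../etc/passwd")));
}
```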

**File Type Detection:**

Centralized in `file_types.rs` with the constants `IMAGE_EXTENSIONS` and `VIDEO_EXTENSIONS`. Provides both `Path` and `DirEntry` variants for performance.
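
A minimal sketch of the `Path`-based check, assuming case-insensitive extension matching (the constant contents and helper name below are illustrative, not the exact `file_types.rs` definitions):

```rust
use std::path::Path;

// Illustrative extension lists; the real constants live in `file_types.rs`.
const IMAGE_EXTENSIONS: &[&str] = &["jpg", "jpeg", "png", "webp", "heic"];
const VIDEO_EXTENSIONS: &[&str] = &["mp4", "mkv", "mov", "avi"];

fn has_extension_in(path: &Path, exts: &[&str]) -> bool {
    path.extension()
        .and_then(|e| e.to_str())
        // Case-insensitive, so "IMG_001.JPG" still matches "jpg".
        .map(|e| exts.iter().any(|x| x.eq_ignore_ascii_case(e)))
        .unwrap_or(false)
}

fn main() {
    let p = Path::new("/media/IMG_001.JPG");
    assert!(has_extension_in(p, IMAGE_EXTENSIONS));
    assert!(!has_extension_in(p, VIDEO_EXTENSIONS));
}
```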

**OpenTelemetry Tracing:**

All database operations and HTTP handlers are wrapped in spans. Release builds export to the OTLP endpoint configured via `OTLP_OTLS_ENDPOINT`; debug builds use a basic logger.

**Memory Exclusion:**

`PathExcluder` in `memories.rs` filters directories out of the memories API via the `EXCLUDED_DIRS` environment variable (comma-separated paths or substring patterns).
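
A sketch of parsing `EXCLUDED_DIRS` and substring matching, under the assumption that an entry excludes any path containing it (the real `PathExcluder` API is not shown in this file, so the method names here are hypothetical):

```rust
struct PathExcluder {
    patterns: Vec<String>,
}

impl PathExcluder {
    // Parse the comma-separated EXCLUDED_DIRS value, ignoring empty entries.
    fn from_env_value(raw: &str) -> Self {
        let patterns = raw
            .split(',')
            .map(str::trim)
            .filter(|p| !p.is_empty())
            .map(String::from)
            .collect();
        PathExcluder { patterns }
    }

    // A path is excluded if it contains any configured substring.
    fn is_excluded(&self, path: &str) -> bool {
        self.patterns.iter().any(|p| path.contains(p.as_str()))
    }
}

fn main() {
    let ex = PathExcluder::from_env_value("/private,/archive");
    assert!(ex.is_excluded("/media/private/2020/photo.jpg"));
    assert!(!ex.is_excluded("/media/vacation/photo.jpg"));
}
```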

### Startup Sequence

1. Load the `.env` file
2. Run embedded Diesel migrations
3. Spawn the file watcher thread
4. Create initial thumbnails (parallel scan)
5. Generate video GIF thumbnails
6. Initialize `AppState` with Actix actors
7. Set up Prometheus metrics (`imageserver_image_total`, `imageserver_video_total`)
8. Scan directories for videos and queue HLS processing
9. Start the HTTP server on `BIND_URL` + localhost:8088

## Testing Patterns

Tests require the `BASE_PATH` environment variable. Many integration tests create temporary directories and files.

When testing database code:

- Use in-memory SQLite: `DATABASE_URL=":memory:"`
- Run migrations in the test setup
- Clean up with `DROP TABLE`, or use `#[serial]` from the `serial_test` crate if parallel tests conflict

## Common Gotchas

**EXIF Date Parsing:**

Multiple formats are supported (EXIF DateTime, ISO 8601, Unix timestamp). A fallback chain attempts each parser in turn.
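
The fallback-chain idea can be sketched with `Option::or_else`. This is illustrative only: the parser list, function names, and normalized-string output below are assumptions (the real code also handles Unix timestamps, omitted here), not the repo's actual parsers.

```rust
// EXIF format "2024:06:01 12:30:00" -> normalize the date separators.
fn from_exif(s: &str) -> Option<String> {
    let b = s.as_bytes();
    if b.len() == 19 && b[4] == b':' && b[7] == b':' && b[10] == b' ' {
        let mut out = s.to_string();
        out.replace_range(4..5, "-");
        out.replace_range(7..8, "-");
        return Some(out);
    }
    None
}

// ISO 8601 "2024-06-01T12:30:00" -> replace the 'T' separator.
fn from_iso8601(s: &str) -> Option<String> {
    let b = s.as_bytes();
    if b.len() == 19 && b[4] == b'-' && b[10] == b'T' {
        return Some(s.replace('T', " "));
    }
    None
}

// Try each parser in turn; the first success wins.
fn parse_date_taken(s: &str) -> Option<String> {
    from_exif(s).or_else(|| from_iso8601(s))
}

fn main() {
    assert_eq!(parse_date_taken("2024:06:01 12:30:00").as_deref(), Some("2024-06-01 12:30:00"));
    assert_eq!(parse_date_taken("2024-06-01T12:30:00").as_deref(), Some("2024-06-01 12:30:00"));
    assert_eq!(parse_date_taken("garbage"), None);
}
```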

**Video Processing:**

ffmpeg processes run asynchronously via actors. Use `StreamActor` to track completion. HLS segments are written to `VIDEO_PATH`.

**File Extensions:**

Extension detection is case-insensitive. Use the `file_types.rs` helpers rather than manual string matching.

**Migration Workflow:**

After creating a migration, manually edit the SQL, then regenerate `schema.rs` with `diesel print-schema`. Migrations auto-run on startup via the `embedded_migrations!()` macro.

**Path Absolutization:**

Use the `path-absolutize` crate's `.absolutize()` method when converting user-provided paths, to ensure they stay within `BASE_PATH`.

## Required Environment Variables

```bash
DATABASE_URL=./database.db              # SQLite database path
BASE_PATH=/path/to/media                # Root media directory
THUMBNAILS=/path/to/thumbnails          # Thumbnail storage
VIDEO_PATH=/path/to/video/hls           # HLS playlist output
GIFS_DIRECTORY=/path/to/gifs            # Video GIF thumbnails
BIND_URL=0.0.0.0:8080                   # Server binding
CORS_ALLOWED_ORIGINS=http://localhost:3000
SECRET_KEY=your-secret-key-here         # JWT signing secret
RUST_LOG=info                           # Log level
EXCLUDED_DIRS=/private,/archive         # Comma-separated paths to exclude from memories
```

Optional:

```bash
WATCH_QUICK_INTERVAL_SECONDS=60         # Quick scan interval
WATCH_FULL_INTERVAL_SECONDS=3600        # Full scan interval
OTLP_OTLS_ENDPOINT=http://...           # OpenTelemetry collector (release builds)

# AI Insights Configuration
OLLAMA_PRIMARY_URL=http://desktop:11434   # Primary Ollama server (e.g., desktop)
OLLAMA_FALLBACK_URL=http://server:11434   # Fallback Ollama server (optional, always-on)
OLLAMA_PRIMARY_MODEL=nemotron-3-nano:30b  # Model for the primary server (default: nemotron-3-nano:30b)
OLLAMA_FALLBACK_MODEL=llama3.2:3b         # Model for the fallback server (optional; uses the primary model if unset)
SMS_API_URL=http://localhost:8000         # SMS message API endpoint (default: localhost:8000)
SMS_API_TOKEN=your-api-token              # SMS API authentication token (optional)
```

**AI Insights Fallback Behavior:**

- The primary server is tried first with its configured model (5-second connection timeout)
- On connection failure, requests automatically fall back to the secondary server and its model (if configured)
- If `OLLAMA_FALLBACK_MODEL` is not set, the fallback uses the same model as the primary server
- The total request timeout is 120 seconds, to accommodate slow LLM inference
- Logs indicate which server and model were used (info level) and any failover attempts (warn level)
- Backwards compatible: `OLLAMA_URL` and `OLLAMA_MODEL` are still supported as fallbacks
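
The failover control flow reduces to "try primary, then fallback if configured". A generic sketch with the HTTP requests abstracted as fallible closures (the function name and signature are illustrative, not the repo's client API):

```rust
// Try the primary operation; on failure, run the fallback if one is configured,
// otherwise surface the primary's error.
fn with_fallback<T, E, P, F>(primary: P, fallback: Option<F>) -> Result<T, E>
where
    P: Fn() -> Result<T, E>,
    F: Fn() -> Result<T, E>,
{
    match primary() {
        Ok(v) => Ok(v),
        Err(e) => match fallback {
            Some(f) => f(),   // failover to the secondary server
            None => Err(e),   // no fallback configured
        },
    }
}

fn main() {
    // Primary "server" refuses the connection; the fallback answers.
    let out = with_fallback(
        || Err::<&str, &str>("connection refused"),
        Some(|| Ok("insight from fallback")),
    );
    assert_eq!(out, Ok("insight from fallback"));
}
```

In the real client the closures would wrap HTTP calls to the two Ollama URLs, each with its own model and the 5-second connect timeout described above.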

**Model Discovery:**

The `OllamaClient` provides methods to query available models:

- `OllamaClient::list_models(url)`: Returns a list of all models on a server
- `OllamaClient::is_model_available(url, model_name)`: Checks whether a specific model exists

This allows runtime verification of model availability before generating insights.

## Dependencies of Note

- **actix-web**: HTTP framework
- **diesel**: ORM for SQLite
- **jsonwebtoken**: JWT implementation
- **kamadak-exif**: EXIF parsing
- **image**: Thumbnail generation
- **walkdir**: Directory traversal
- **rayon**: Parallel processing
- **opentelemetry**: Distributed tracing
- **bcrypt**: Password hashing
- **infer**: Magic-number file type detection