# Image API

This is an Actix-web server for serving images and videos from a filesystem. Upon first run it will generate thumbnails for all images and videos at `BASE_PATH`.

## Features

- Automatic thumbnail generation for images and videos
- EXIF data extraction and storage for photos
- File watching with NFS support (polling-based)
- Video streaming with HLS
- Tag-based organization
- Memories API for browsing photos by date
- **AI-Powered Photo Insights** - Generate contextual insights from photos using LLMs
- **RAG-based Context Retrieval** - Semantic search over daily conversation summaries
- **Automatic Daily Summaries** - LLM-generated summaries of daily conversations with embeddings

## Environment

A handful of environment variables are required for the API to run. Define them in an `.env` file in the directory where the binary is located, or in a directory above it (a sample `.env` appears at the end of this README). You must have `ffmpeg` installed for streaming video and generating video thumbnails.

- `DATABASE_URL` is a path or URL to a database (currently only SQLite is tested)
- `BASE_PATH` is the root from which you want to serve images and videos
- `THUMBNAILS` is a path where generated thumbnails should be stored
- `VIDEO_PATH` is a path where HLS playlists and video parts should be stored
- `BIND_URL` is the URL and port to bind to (typically your own IP address)
- `SECRET_KEY` is the *hopefully* random string to sign tokens with
- `RUST_LOG` is one of `off, error, warn, info, debug, trace`, from least to most noisy [default: `error`]
- `EXCLUDED_DIRS` is a comma-separated list of directories to exclude from the Memories API
- `WATCH_QUICK_INTERVAL_SECONDS` (optional) is the interval in seconds for quick file scans [default: 60]
- `WATCH_FULL_INTERVAL_SECONDS` (optional) is the interval in seconds for full file scans [default: 3600]

### AI Insights Configuration (Optional)

The following environment variables configure AI-powered photo insights and daily conversation summaries:

#### Ollama Configuration

- `OLLAMA_PRIMARY_URL` - Primary Ollama server URL [default: `http://localhost:11434`]
  - Example: `http://desktop:11434` (your main/powerful server)
- `OLLAMA_FALLBACK_URL` - Fallback Ollama server URL (optional)
  - Example: `http://server:11434` (always-on backup server)
- `OLLAMA_PRIMARY_MODEL` - Model to use on the primary server [default: `nemotron-3-nano:30b`]
  - Example: `nemotron-3-nano:30b`, `llama3.2:3b`, etc.
- `OLLAMA_FALLBACK_MODEL` - Model to use on the fallback server (optional)
  - If not set, `OLLAMA_PRIMARY_MODEL` is used on the fallback server

**Legacy Variables** (still supported):

- `OLLAMA_URL` - Used if `OLLAMA_PRIMARY_URL` is not set
- `OLLAMA_MODEL` - Used if `OLLAMA_PRIMARY_MODEL` is not set

#### SMS API Configuration

- `SMS_API_URL` - URL of the SMS message API [default: `http://localhost:8000`]
  - Used to fetch conversation data for context in insights
- `SMS_API_TOKEN` - Authentication token for the SMS API (optional)

#### Fallback Behavior

- The primary server is tried first, with a 5-second connection timeout
- On failure, requests automatically fall back to the secondary server (if configured)
- The total request timeout is 120 seconds, to accommodate LLM inference
- Logs indicate which server/model was used and any failover attempts

A rough sketch of this failover logic is shown at the end of this README.

#### Daily Summary Generation

Daily conversation summaries are generated automatically on server startup. Configure in `src/main.rs`:

- Date range for summary generation
- Contacts to process
- Model version used for embeddings: `nomic-embed-text:v1.5`
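
## Examples

A minimal `.env` covering the required variables might look like the following. All values are placeholders; substitute your own paths, bind address, and key:

```env
# Path or URL to the database (only SQLite is tested)
DATABASE_URL=images.db
# Root directory to serve images and videos from
BASE_PATH=/mnt/photos
THUMBNAILS=/var/cache/image-api/thumbnails
VIDEO_PATH=/var/cache/image-api/video
BIND_URL=192.168.1.10:8080
SECRET_KEY=replace-with-a-long-random-string
RUST_LOG=info
EXCLUDED_DIRS=private,screenshots

# Optional file-watcher tuning (defaults shown)
WATCH_QUICK_INTERVAL_SECONDS=60
WATCH_FULL_INTERVAL_SECONDS=3600
```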
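
The optional AI-insights variables go in the same file. The values below just reuse the examples from the configuration section above; add `SMS_API_TOKEN` if your SMS API requires authentication:

```env
# Ollama servers and models (primary tried first, then fallback)
OLLAMA_PRIMARY_URL=http://desktop:11434
OLLAMA_PRIMARY_MODEL=nemotron-3-nano:30b
OLLAMA_FALLBACK_URL=http://server:11434
OLLAMA_FALLBACK_MODEL=llama3.2:3b

# SMS API used to fetch conversation context for insights
SMS_API_URL=http://localhost:8000
```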
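
The failover behavior described above amounts to roughly the following. This is a self-contained sketch, not the server's actual code: the names `OllamaEndpoint` and `generate_with_failover` are illustrative, and it assumes the `reqwest` (with the `json` feature), `serde_json`, and `log` crates:

```rust
use std::time::Duration;

/// Illustrative sketch only; names and structure are assumptions,
/// not this server's actual implementation.
struct OllamaEndpoint {
    url: String,   // OLLAMA_PRIMARY_URL / OLLAMA_FALLBACK_URL
    model: String, // OLLAMA_PRIMARY_MODEL / OLLAMA_FALLBACK_MODEL
}

async fn generate_with_failover(
    endpoints: &[OllamaEndpoint],
    prompt: &str,
) -> Result<String, reqwest::Error> {
    // 5-second connection timeout per attempt, 120-second total
    // request budget to accommodate slow LLM inference.
    let client = reqwest::Client::builder()
        .connect_timeout(Duration::from_secs(5))
        .timeout(Duration::from_secs(120))
        .build()?;

    let mut last_err = None;
    for ep in endpoints {
        // Ollama's /api/generate endpoint, non-streaming for simplicity.
        let body = serde_json::json!({
            "model": ep.model,
            "prompt": prompt,
            "stream": false,
        });
        match client
            .post(format!("{}/api/generate", ep.url))
            .json(&body)
            .send()
            .await
            .and_then(|resp| resp.error_for_status())
        {
            Ok(resp) => {
                let v: serde_json::Value = resp.json().await?;
                // Log which server/model actually served the request.
                log::info!("generated with {} ({})", ep.url, ep.model);
                return Ok(v["response"].as_str().unwrap_or_default().to_owned());
            }
            Err(e) => {
                // Failover: log the attempt and try the next endpoint.
                log::warn!("{} failed ({e}); trying next endpoint", ep.url);
                last_err = Some(e);
            }
        }
    }
    // Every configured endpoint failed; surface the last error.
    Err(last_err.expect("no Ollama endpoints configured"))
}
```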
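
Likewise, a hypothetical shape for the daily-summary startup configuration. The field names and example values here are made up for illustration; the real settings live in `src/main.rs`, and only the embedding model name comes from this README:

```rust
use chrono::NaiveDate;

// Hypothetical illustration; the actual configuration in src/main.rs differs.
struct DailySummaryConfig {
    /// Date range for summary generation
    start_date: NaiveDate,
    end_date: NaiveDate,
    /// Contacts to process
    contacts: Vec<String>,
    /// Model used to embed the generated summaries
    embedding_model: String,
}

fn summary_config() -> DailySummaryConfig {
    DailySummaryConfig {
        start_date: NaiveDate::from_ymd_opt(2024, 1, 1).expect("valid date"),
        end_date: NaiveDate::from_ymd_opt(2024, 12, 31).expect("valid date"),
        contacts: vec!["Alice".to_owned(), "Bob".to_owned()],
        embedding_model: "nomic-embed-text:v1.5".to_owned(),
    }
}
```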