Cameron c01a0479b7 fix: honor library param in /image, /photos, /memories
The Phase 3 plumbing accepted `library=` but didn't actually route
requests through the scoped library once it was resolved. Three
concrete bugs surfaced when testing against a second mounted library:

- `/image` always resolved paths against AppState.base_path (primary),
  so thumbnails for non-primary libraries 400'd when their rel_paths
  didn't exist under primary. Now resolves against the scoped library
  and defaults to primary when the param is omitted.

- `/memories` walked the scoped library correctly but its helper
  functions hardcoded `library_id: PRIMARY_LIBRARY_ID` on every
  MemoryItem, causing clients to route thumbnails back to primary
  regardless of which library the memory actually came from.

- `/photos` non-recursive listing delegated to a `RealFileSystem`
  constructed from AppState.base_path at startup, so walks always
  hit primary even when `library=2` was passed. The non-primary
  path now uses list_files against the scoped library's root;
  primary still goes through FileSystemAccess to preserve the
  existing test mock plumbing.

Also adds `library` to ThumbnailRequest so the /image query param
is actually parsed.
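
A minimal sketch of the scoped resolution described above. The `Library` struct, field names, and `resolve_scoped_path` are illustrative stand-ins for the real AppState types, not the actual implementation:

```rust
use std::path::{Path, PathBuf};

// Hypothetical stand-in for a mounted library record.
struct Library {
    id: i64,
    root: PathBuf,
}

const PRIMARY_LIBRARY_ID: i64 = 1;

/// Resolve a request's rel_path against the scoped library's root,
/// defaulting to the primary library when `library=` is omitted.
/// Returns None for an unknown library id (surfaced as a 400 upstream).
fn resolve_scoped_path(
    libraries: &[Library],
    library: Option<i64>,
    rel_path: &Path,
) -> Option<PathBuf> {
    let id = library.unwrap_or(PRIMARY_LIBRARY_ID);
    libraries
        .iter()
        .find(|l| l.id == id)
        .map(|l| l.root.join(rel_path))
}
```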

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-21 01:55:07 +00:00

Image API

This is an Actix-web server for serving images and videos from a filesystem. On first run it generates thumbnails for all images and videos under BASE_PATH.

Features

  • Automatic thumbnail generation for images and videos
  • EXIF data extraction and storage for photos
  • File watching with NFS support (polling-based)
  • Video streaming with HLS
  • Tag-based organization
  • Memories API for browsing photos by date
  • Video Wall - Auto-generated short preview clips for videos, served via a grid view
  • AI-Powered Photo Insights - Generate contextual insights from photos using LLMs
  • RAG-based Context Retrieval - Semantic search over daily conversation summaries
  • Automatic Daily Summaries - LLM-generated summaries of daily conversations with embeddings

Environment

Several environment variables are required for the API to run. Define them in a .env file in the directory containing the binary or in a parent directory. ffmpeg must be installed for video streaming and video thumbnail generation.

  • DATABASE_URL is a path or URL to a database (currently only SQLite is tested)
  • BASE_PATH is the root directory from which images and videos are served
  • THUMBNAILS is the path where generated thumbnails are stored
  • VIDEO_PATH is the path where HLS playlists and video segments are stored
  • GIFS_DIRECTORY is the path where generated video GIF thumbnails are stored
  • BIND_URL is the URL and port to bind to (typically your own IP address)
  • SECRET_KEY is a (hopefully random) secret string used to sign tokens
  • RUST_LOG is one of off, error, warn, info, debug, trace, from least to most noisy [default: error]
  • EXCLUDED_DIRS is a comma-separated list of directories to exclude from the Memories API
  • PREVIEW_CLIPS_DIRECTORY (optional) is a path where generated video preview clips should be stored [default: preview_clips]
  • WATCH_QUICK_INTERVAL_SECONDS (optional) is the interval in seconds for quick file scans [default: 60]
  • WATCH_FULL_INTERVAL_SECONDS (optional) is the interval in seconds for full file scans [default: 3600]
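
A minimal example .env tying the required variables together. All paths and values here are illustrative, not defaults shipped with the project:

```shell
# Example .env — adjust paths and values for your system
DATABASE_URL=images.db
BASE_PATH=/mnt/photos
THUMBNAILS=/var/lib/image-api/thumbnails
VIDEO_PATH=/var/lib/image-api/video
GIFS_DIRECTORY=/var/lib/image-api/gifs
BIND_URL=192.168.1.10:8080
SECRET_KEY=change-me-to-a-long-random-string
RUST_LOG=info
EXCLUDED_DIRS=private,tmp
```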

AI Insights Configuration (Optional)

The following environment variables configure AI-powered photo insights and daily conversation summaries:

Ollama Configuration

  • OLLAMA_PRIMARY_URL - Primary Ollama server URL [default: http://localhost:11434]
    • Example: http://desktop:11434 (your main/powerful server)
  • OLLAMA_FALLBACK_URL - Fallback Ollama server URL (optional)
    • Example: http://server:11434 (always-on backup server)
  • OLLAMA_PRIMARY_MODEL - Model to use on primary server [default: nemotron-3-nano:30b]
    • Example: nemotron-3-nano:30b, llama3.2:3b, etc.
  • OLLAMA_FALLBACK_MODEL - Model to use on fallback server (optional)
    • If not set, uses OLLAMA_PRIMARY_MODEL on fallback server

Legacy Variables (still supported):

  • OLLAMA_URL - Used if OLLAMA_PRIMARY_URL not set
  • OLLAMA_MODEL - Used if OLLAMA_PRIMARY_MODEL not set
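
The precedence above can be sketched as follows; the function name is illustrative, not the project's actual API:

```rust
use std::env;

/// Read `primary` (e.g. OLLAMA_PRIMARY_URL), falling back to the legacy
/// variable (e.g. OLLAMA_URL), then to a built-in default.
fn env_with_legacy(primary: &str, legacy: &str, default: &str) -> String {
    env::var(primary)
        .or_else(|_| env::var(legacy))
        .unwrap_or_else(|_| default.to_string())
}
```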

SMS API Configuration

  • SMS_API_URL - URL to SMS message API [default: http://localhost:8000]
    • Used to fetch conversation data for context in insights
  • SMS_API_TOKEN - Authentication token for SMS API (optional)

Agentic Insight Generation

  • AGENTIC_MAX_ITERATIONS - Maximum tool-call iterations per agentic insight request [default: 10]
    • Controls how many times the model can invoke tools before being forced to produce a final answer
    • Increase for more thorough context gathering; decrease to limit response time
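
A sketch of the bounded loop this variable controls; the `step` closure stands in for a real model/tool round-trip, and the names are illustrative:

```rust
/// Run up to `max_iterations` tool-call rounds. `step` returns
/// Some(answer) when the model produces a final answer, or None to
/// request another tool call. When the budget is exhausted, a final
/// answer is forced.
fn run_agentic_loop<F>(max_iterations: usize, mut step: F) -> String
where
    F: FnMut(usize) -> Option<String>,
{
    for i in 0..max_iterations {
        if let Some(answer) = step(i) {
            return answer;
        }
    }
    // Budget exhausted: force a final answer instead of looping forever.
    "forced final answer".to_string()
}
```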

Fallback Behavior

  • Primary server is tried first with 5-second connection timeout
  • On failure, automatically falls back to secondary server (if configured)
  • Total request timeout is 120 seconds to accommodate LLM inference
  • Logs indicate which server/model was used and any failover attempts

Daily Summary Generation

Daily conversation summaries are generated automatically on server startup. Configure in src/main.rs:

  • Date range for summary generation
  • Contacts to process
  • Model version used for embeddings: nomic-embed-text:v1.5