Branch C of the multi-library data-model rollout. Implements the
operational maintenance pipeline pinned in CLAUDE.md → "Multi-library
data model" / "Library availability and safety". Branches A and B
land first; this branch builds on top.
New module: src/library_maintenance.rs
Three idempotent passes the watcher runs every tick: the first inside
the per-library ingest loop, the other two after it:
1. Missing-file scan (per online library)
For each Online library, load one page of image_exif rows
(IMAGE_EXIF_MISSING_SCAN_PAGE_SIZE, default 500), stat() each one,
and delete rows whose source file is NotFound. Permission/IO
errors are skipped, never deleted. Capped at
IMAGE_EXIF_MISSING_DELETE_CAP_PER_TICK (default 200) per library
per tick — so a pathological mount that returns NotFound for
everything can't wipe the table in one cycle. Cursor advances
across ticks, wraps on partial-page returns, and naturally cycles
through the entire library over many minutes. Skipped wholesale
for Stale libraries via the existing probe gate.
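The delete policy of pass 1 can be sketched as follows. This is a minimal illustration, not the real module's API: the function name and the `stat` closure (standing in for `std::fs::metadata`) are hypothetical, chosen so the policy is testable without a filesystem.

```rust
use std::io::ErrorKind;

// Sketch of the pass-1 delete policy (names hypothetical). Only a definite
// NotFound retires a row; permission/IO errors are skipped, and the per-tick
// cap bounds how much a pathological mount can delete in one cycle.
fn select_missing_for_delete<F>(page: &[&str], cap: usize, stat: F) -> Vec<String>
where
    F: Fn(&str) -> Result<(), ErrorKind>,
{
    let mut doomed = Vec::new();
    for &rel_path in page {
        if doomed.len() >= cap {
            break; // IMAGE_EXIF_MISSING_DELETE_CAP_PER_TICK reached this tick
        }
        match stat(rel_path) {
            // Only a definite NotFound retires the row.
            Err(ErrorKind::NotFound) => doomed.push(rel_path.to_string()),
            // File exists, or PermissionDenied / any other IO error:
            // skipped, never deleted.
            _ => {}
        }
    }
    doomed
}
```

The cap means a mount that suddenly reports NotFound for everything loses at most `cap` rows per library per tick, leaving time for the probe gate to mark it Stale.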
2. Back-ref refresh (DB-only)
For face_detections / tagged_photo / photo_insights: any
hash-keyed row whose (library_id, rel_path) no longer matches an
image_exif row, but whose content_hash does, is repointed at a
surviving image_exif location. Pure SQL with EXISTS guards so
rows whose hash is fully orphaned are left alone (the orphan GC
handles those). Idempotent; no availability gate needed.
This is what makes a recent → archive move invisible to readers:
when pass 1 retires the lib-A row, pass 2 pivots tags / faces /
insights to lib-B's surviving path before any client notices.
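A DB-free sketch of the repoint rule. The real pass is pure SQL with EXISTS guards; here the same rule is shown over in-memory maps, and the type alias and names are hypothetical.

```rust
use std::collections::HashMap;

// Hypothetical in-memory shape: (library_id, rel_path) identifies a location.
type Loc = (u32, String);

/// Repoint derived rows whose location vanished but whose hash survives.
fn repoint(derived: &mut [(Loc, String)], image_exif: &HashMap<Loc, String>) {
    // Invert image_exif: content_hash -> one surviving location.
    let mut by_hash: HashMap<&str, &Loc> = HashMap::new();
    for (loc, hash) in image_exif {
        by_hash.insert(hash.as_str(), loc);
    }
    for (loc, hash) in derived.iter_mut() {
        // Location no longer matches an image_exif row...
        if !image_exif.contains_key(loc) {
            // ...but the hash still does: pivot to the surviving location.
            if let Some(surviving) = by_hash.get(hash.as_str()) {
                *loc = (*surviving).clone();
            }
            // Fully orphaned hash: no-op here; the orphan GC handles it.
        }
    }
}
```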
3. Orphan GC (destructive)
Hash-keyed derived rows whose content_hash has no image_exif
referent are GC-eligible. Two-tick consensus: a hash must be
observed orphaned on two consecutive ticks AND every library must
be Online for both. A single Stale tick within the window cancels
all pending deletes (they remain marked but won't be promoted) —
they're re-evaluated next tick. The pending set lives in
OrphanGcState (in-memory); a watcher restart resets it, which can
only delay a delete, never cause one. Hashes that re-appear in
image_exif between ticks are "revived" from the pending set
(handles transient share unmount / remount).
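The consensus rule can be sketched as a small state machine. Struct and method names are hypothetical stand-ins for OrphanGcState's real API; this sketch takes the conservative reading in which a Stale tick clears the pending set outright, which can only delay a delete, never cause one.

```rust
use std::collections::HashSet;

#[derive(Default)]
struct OrphanGc {
    pending: HashSet<String>, // hashes observed orphaned on the previous tick
}

impl OrphanGc {
    /// Returns the content_hashes whose derived rows are safe to delete now.
    fn tick(&mut self, orphaned_now: &HashSet<String>, all_online: bool) -> Vec<String> {
        if !all_online {
            // A single Stale tick cancels consensus; everything is
            // re-evaluated from scratch on the next fully-Online tick.
            self.pending.clear();
            return Vec::new();
        }
        // Promote hashes orphaned on two consecutive Online ticks. Hashes that
        // re-appeared in image_exif drop out of the intersection ("revived").
        let promoted: Vec<String> = self.pending.intersection(orphaned_now).cloned().collect();
        self.pending = orphaned_now.clone();
        promoted
    }
}
```

Because the state is in-memory, a watcher restart is equivalent to a Stale tick here: the pending set starts empty and two fresh Online observations are required again.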
New ExifDao method:
- list_rel_paths_for_library_page(library_id, limit, offset) for
the paginated missing-file scan.
- (count_for_library already landed in Branch A.)
Watcher wiring (main.rs)
Per-library: missing-file scan inside the existing per-library
loop, after process_new_files, gated by the same probe check that
already protects ingest. After the loop: reconcile (Branch B),
back-ref refresh, then run_orphan_gc. The maintenance connection is
opened once per tick (image_api::database::connect), used by all
three DB-only passes, and dropped at end of tick.
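The per-tick ordering described above, sketched as a stub that only records the order of operations (all names hypothetical; the real main.rs calls the actual ingest and maintenance passes, each behind its probe gate):

```rust
// Hypothetical skeleton of the watcher tick ordering.
fn run_tick(libraries: &[&str]) -> Vec<String> {
    let mut log = Vec::new();
    for lib in libraries {
        // Per-library: ingest, then the missing-file scan, both behind the
        // same probe check.
        log.push(format!("ingest:{lib}"));
        log.push(format!("missing_scan:{lib}"));
    }
    // One maintenance connection per tick, shared by the DB-only passes
    // and dropped at end of tick.
    log.push("connect".to_string());
    log.push("reconcile".to_string()); // Branch B
    log.push("backref_refresh".to_string());
    log.push("orphan_gc".to_string());
    log.push("drop_connection".to_string());
    log
}
```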
CLAUDE.md gains a "Maintenance pipeline" subsection that describes
the three passes and their interaction with the existing
availability-and-safety policy.
Tests: 225 pass (217 from Branch B + 8 new in library_maintenance
covering back-ref refresh including the fully-orphaned no-op case,
two-tick GC consensus, Stale-tick consensus reset, image_exif
re-appearance revival, multi-table delete, and the
all_libraries_online helper).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Image API
This is an Actix-web server for serving images and videos from a filesystem.
Upon first run it will generate thumbnails for all images and videos at `BASE_PATH`.
Features
- Automatic thumbnail generation for images and videos
- EXIF data extraction and storage for photos
- File watching with NFS support (polling-based)
- Video streaming with HLS
- Tag-based organization
- Memories API for browsing photos by date
- Video Wall - Auto-generated short preview clips for videos, served via a grid view
- AI-Powered Photo Insights - Generate contextual insights from photos using LLMs
- RAG-based Context Retrieval - Semantic search over daily conversation summaries
- Automatic Daily Summaries - LLM-generated summaries of daily conversations with embeddings
External Dependencies
ffmpeg (required)
ffmpeg must be on `PATH`. It is used for:
- HLS video streaming — transcoding/segmenting source videos into `.m3u8` + `.ts` playlists
- Video thumbnails — extracting a frame at the 3-second mark
- Video preview clips — short looping previews for the Video Wall
- HEIC / HEIF thumbnails — decoding Apple's HEIC format (your ffmpeg build must include
libheif; most modern builds do)
Builds used in development: the gyan.dev full build on Windows and distro ffmpeg
packages on Linux both work fine. If HEIC thumbnails silently fail, run
`ffmpeg -formats | grep heif` to confirm your build has HEIF support.
RAW photo thumbnails
RAW formats (ARW, NEF, CR2, CR3, DNG, RAF, ORF, RW2, PEF, SRW, TIFF) are thumbnailed by reading an embedded JPEG preview out of the TIFF container — no external RAW decoder (libraw / dcraw) is involved. The pipeline tries two layers in order and keeps the largest valid JPEG:
- Fast path (no extra dependency) — `kamadak-exif` reads `JPEGInterchangeFormat` from IFD0 / IFD1 directly. Covers older bodies and most DNGs.
- `exiftool` fallback (recommended for RAW-heavy libraries) — shells out to extract `PreviewImage` / `JpgFromRaw` / `OtherImage`, which reaches MakerNote and SubIFD-hosted previews kamadak-exif can't see (e.g. Nikon's `PreviewIFD`, where modern Nikon bodies stash the full-res review JPEG). If `exiftool` isn't on `PATH` this layer is skipped silently and only the fast-path result is used.
Install exiftool via your package manager:
- macOS: `brew install exiftool`
- Linux (Debian/Ubuntu): `apt install libimage-exiftool-perl`
- Windows: `winget install OliverBetz.ExifTool` or `choco install exiftool`
Files where neither layer produces a valid preview fall back to ffmpeg. Anything
that still can't be decoded is marked with a `<thumb>.unsupported` sentinel in
the thumbnail directory so we don't retry it every scan. Delete those sentinels
(and any cached black thumbnails) to force retries after a tooling upgrade.
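The "tries two layers and keeps the largest valid JPEG" selection can be sketched like this. The helper is hypothetical, and "valid" here is reduced to the two-byte JPEG SOI marker check as a stand-in for real validation:

```rust
// Pick the largest candidate that looks like a JPEG; layers that produced
// nothing (None) or non-JPEG bytes are discarded.
fn best_preview(layers: Vec<Option<Vec<u8>>>) -> Option<Vec<u8>> {
    layers
        .into_iter()
        .flatten() // drop layers that produced no candidate
        .filter(|jpeg| jpeg.starts_with(&[0xFF, 0xD8])) // JPEG SOI marker
        .max_by_key(|jpeg| jpeg.len()) // largest candidate wins
}
```

Returning `None` is what routes a file to the ffmpeg fallback, and a failure there produces the `.unsupported` sentinel.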
Environment
A handful of environment variables are required for the API to run.
Define them in a `.env` file located alongside the binary or in a parent directory.
- `DATABASE_URL` is a path or url to a database (currently only SQLite is tested)
- `BASE_PATH` is the root from which you want to serve images and videos
- `THUMBNAILS` is a path where generated thumbnails should be stored. Thumbnails mirror the source tree under `BASE_PATH` and keep the source's original extension (e.g. `foo.arw` or `bar.mp4`), though the file contents are always JPEG bytes — browsers content-sniff. Files that can't be thumbnailed by the `image` crate, ffmpeg, or an embedded RAW preview get a zero-byte `<thumb_path>.unsupported` sentinel in this directory so subsequent scans skip them. Delete the `*.unsupported` files to force retries (for example after upgrading ffmpeg or adding libheif)
- `VIDEO_PATH` is a path where HLS playlists and video parts should be stored
- `GIFS_DIRECTORY` is a path where generated video GIF thumbnails should be stored
- `BIND_URL` is the url and port to bind to (typically your own IP address)
- `SECRET_KEY` is the hopefully random string to sign Tokens with
- `RUST_LOG` is one of `off`, `error`, `warn`, `info`, `debug`, `trace`, from least to most noisy [`error` is default]
- `EXCLUDED_DIRS` is a comma separated list of directories to exclude from the Memories API
- `PREVIEW_CLIPS_DIRECTORY` (optional) is a path where generated video preview clips should be stored [default: `preview_clips`]
- `WATCH_QUICK_INTERVAL_SECONDS` (optional) is the interval in seconds for quick file scans [default: 60]
- `WATCH_FULL_INTERVAL_SECONDS` (optional) is the interval in seconds for full file scans [default: 3600]
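A minimal example `.env` covering the required variables (every value below is illustrative, not a default):

```env
DATABASE_URL=image_api.db
BASE_PATH=/mnt/photos
THUMBNAILS=/var/cache/image-api/thumbnails
VIDEO_PATH=/var/cache/image-api/hls
GIFS_DIRECTORY=/var/cache/image-api/gifs
BIND_URL=192.168.1.10:8080
SECRET_KEY=change-me-to-a-long-random-string
RUST_LOG=info
EXCLUDED_DIRS=screenshots,tmp
```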
AI Insights Configuration (Optional)
The following environment variables configure AI-powered photo insights and daily conversation summaries:
Ollama Configuration
- `OLLAMA_PRIMARY_URL` - Primary Ollama server URL [default: `http://localhost:11434`]
  - Example: `http://desktop:11434` (your main/powerful server)
- `OLLAMA_FALLBACK_URL` - Fallback Ollama server URL (optional)
  - Example: `http://server:11434` (always-on backup server)
- `OLLAMA_PRIMARY_MODEL` - Model to use on primary server [default: `nemotron-3-nano:30b`]
  - Example: `nemotron-3-nano:30b`, `llama3.2:3b`, etc.
- `OLLAMA_FALLBACK_MODEL` - Model to use on fallback server (optional)
  - If not set, uses `OLLAMA_PRIMARY_MODEL` on the fallback server
Legacy Variables (still supported):
- `OLLAMA_URL` - Used if `OLLAMA_PRIMARY_URL` not set
- `OLLAMA_MODEL` - Used if `OLLAMA_PRIMARY_MODEL` not set
OpenRouter Configuration (Hybrid Backend)
The hybrid agentic backend keeps embeddings + vision local (Ollama) while routing
chat + tool-calling to OpenRouter. Enabled per-request when the client sends
`backend=hybrid`.
- `OPENROUTER_API_KEY` - OpenRouter API key. Required to enable the hybrid backend.
- `OPENROUTER_DEFAULT_MODEL` - Model id used when the client doesn't specify one [default: `anthropic/claude-sonnet-4`]
  - Example: `openai/gpt-4o-mini`, `google/gemini-2.5-flash`
- `OPENROUTER_ALLOWED_MODELS` - Comma-separated curated allowlist exposed to clients via `GET /insights/openrouter/models`. The mobile picker shows only these. Empty/unset = no picker, server default is used.
  - Example: `openai/gpt-4o-mini,anthropic/claude-haiku-4-5,google/gemini-2.5-flash`
- `OPENROUTER_BASE_URL` - Override base URL [default: `https://openrouter.ai/api/v1`]
- `OPENROUTER_EMBEDDING_MODEL` - Embedding model for OpenRouter [default: `openai/text-embedding-3-small`]. Only used if/when embeddings are routed through OpenRouter (currently embeddings stay local).
- `OPENROUTER_HTTP_REFERER` - Optional `HTTP-Referer` for OpenRouter attribution
- `OPENROUTER_APP_TITLE` - Optional `X-Title` for OpenRouter attribution
Capability checks are skipped for the curated allowlist — bad model ids surface as a 4xx from the chat call. Pick tool-capable models.
SMS API Configuration
- `SMS_API_URL` - URL to the SMS message API [default: `http://localhost:8000`]
  - Used to fetch conversation data for context in insights
- `SMS_API_TOKEN` - Authentication token for the SMS API (optional)
Agentic Insight Generation
- `AGENTIC_MAX_ITERATIONS` - Maximum tool-call iterations per agentic insight request [default: `10`]
  - Controls how many times the model can invoke tools before being forced to produce a final answer
  - Increase for more thorough context gathering; decrease to limit response time
Insight Chat Continuation
After an agentic insight is generated, the conversation can be continued. Endpoints:
- `POST /insights/chat` — single-turn reply (non-streaming)
- `POST /insights/chat/stream` — SSE variant with live `text` deltas and `tool_call`/`tool_result` events. Mobile client uses this.
- `GET /insights/chat/history?path=...&library=...` — rendered transcript; each assistant message carries a `tools: [{name, arguments, result}]` array
- `POST /insights/chat/rewind` — truncate the transcript at a rendered index (drops that message + any preceding tool scaffolding + later turns). Used for "try again from here" flows. The initial user message is protected.
Amend mode (`amend: true` in the chat request body) regenerates the insight's
title and inserts a new row instead of appending to the existing transcript,
so you can rewrite the saved summary from within chat.
- `AGENTIC_CHAT_MAX_ITERATIONS` - Cap on tool-calling iterations per chat turn [default: `6`]
  - Per-request `max_iterations` (when sent by the client) is clamped to this cap
Fallback Behavior
- Primary server is tried first with 5-second connection timeout
- On failure, automatically falls back to secondary server (if configured)
- Total request timeout is 120 seconds to accommodate LLM inference
- Logs indicate which server/model was used and any failover attempts
Daily Summary Generation
Daily conversation summaries are generated automatically on server startup. Configure in `src/main.rs`:
- Date range for summary generation
- Contacts to process
- Model version used for embeddings: `nomic-embed-text:v1.5`
Apollo + Face Recognition (Optional)
Apollo (sibling project) hosts both the Places API and the local insightface inference service. Both integrations are optional and degrade gracefully when unset.
- `APOLLO_API_BASE_URL` - Base URL of the sibling Apollo backend.
  - When set, photo-insight enrichment folds the user's personal place name (Home, Work, Cabin, ...) into the location string, and the agentic loop gains a `get_personal_place_at` tool. Unset = legacy Nominatim-only path.
- `APOLLO_FACE_API_BASE_URL` - Base URL for the face-detection service.
  - Falls back to `APOLLO_API_BASE_URL` when unset (typical single-Apollo deploy). Both unset = face feature disabled (file-watch hook and manual-face endpoints short-circuit silently).
- `FACE_AUTOBIND_MIN_COS` (Phase 3) - Cosine-sim floor for auto-binding a detected face to an existing same-named person via people-tag bootstrap [default: `0.4`].
- `FACE_DETECT_CONCURRENCY` (Phase 3) - Per-scan-tick concurrent detect calls fired by the file watcher [default: `8`]. Apollo serializes them via its single-worker GPU pool.
- `FACE_DETECT_TIMEOUT_SEC` - reqwest client timeout per detect call [default: `60`]. CPU inference on a backlog can take many seconds.
- `FACE_BACKLOG_MAX_PER_TICK` - Cap on the per-tick backlog drain (photos with a content_hash but no face_detections row) [default: `64`]. Runs every watcher tick regardless of quick-vs-full scan, so the unscanned set drains independently of the file walk.
- `FACE_HASH_BACKFILL_MAX_PER_TICK` - Cap on the per-tick content_hash backfill (photos that were registered before the hash field was populated retroactively) [default: `2000`]. Errors don't burn the cap; only successful hashes count.