Face Recognition / People Integration #61
ImageApi — feature/face-recog-phase3-file-watch → master
Adds the persistence + auto-detection side of the face system. Apollo hosts the inference engine and the management UI; this PR owns the storage, the file-watch trigger, and the data integrity around it.
- Schema + CRUD (Phase 2)
- Auto-detection (Phase 3 + 4: file watch, tag bootstrap, auto-bind)
- Robustness
- Bumped to v1.1.0; docs and .env.example updated.
Land the persistence model and HTTP surface for local face recognition. Inference still lives in Apollo (Phase 1); this side adds the data home plus every endpoint Apollo's UI and FileViewer-React will consume.

Schema (new migration 2026-04-29-000000_add_faces):
- persons: visual identities. Optional entity_id bridges to the existing knowledge-graph entities table; auto-bridging is left to the management UI (we don't muddy LLM provenance from face rows). UNIQUE(name COLLATE NOCASE) so 'alice' / 'Alice' fold to one row.
- face_detections: keyed on content_hash (cross-library dedup), with status='detected' carrying bbox + 512-d embedding BLOB, and 'no_faces' / 'failed' marker rows that tell Phase 3's file watcher not to re-scan. The marker invariant is enforced via CHECK; a partial UNIQUE on content_hash WHERE status='no_faces' guards against double-marks.
- Schema regenerated with `diesel print-schema` against a clean migration run; joinables added for face_detections → libraries / persons and persons → entities.

face_client.rs (sibling of apollo_client.rs):
- reqwest multipart, 60 s timeout (CPU inference on a backlog can be slow; Apollo's bounded threadpool serializes calls anyway).
- FaceDetectError::{Permanent, Transient, Disabled} — Phase 3 keys its marker-row decision on this: 422 → mark failed, 5xx → defer.
- APOLLO_FACE_API_BASE_URL falls back to APOLLO_API_BASE_URL when unset; with both unset, is_enabled() is false and callers no-op.

faces.rs (DAO + handlers):
- SqliteFaceDao implements the full FaceDao trait; person face counts go through sql_query because diesel's BoxedSelectStatement + group_by trips trait-resolver recursion.
- merge_persons re-points face rows in a transaction, copies notes when the target's are empty, then deletes the source.
- Manual POST /image/faces resolves content_hash through image_exif, crops the user-drawn bbox with 10% padding (the detector wants context around ears/jaw), POSTs the crop to face_client.embed for a real ArcFace vector, then inserts source='manual'.
- Cluster-suggest (Phase 6) gets its data from GET /faces/embeddings — base64-encoded paged BLOBs so Apollo's DBSCAN can stream them without ImageApi pre-aggregating.

Endpoints registered alongside add_*_services in main.rs:
- GET /faces/stats?library=
- GET /faces/embeddings?library=&unassigned=&limit=&offset=
- GET /image/faces?path=&library=
- POST /image/faces (manual create via embed)
- PATCH /image/faces/{id}
- DELETE /image/faces/{id}
- GET /persons?library=
- POST /persons
- GET /persons/{id}
- PATCH /persons/{id}
- DELETE /persons/{id}?cascade=set_null|delete (set_null default)
- POST /persons/{id}/merge
- GET /persons/{id}/faces?library=

The file-watch hook (Phase 3) and the rerun-on-one-photo handler (Phase 6) live behind the FaceDao methods marked dead_code today — they're called only when those phases land. The same goes for the trait methods that aren't reached by Phase 2 routes.

Tests: 3 DAO unit tests cover person CRUD + case-insensitive uniqueness, marker-row idempotency (mark_status is a no-op when any row exists), and merge re-pointing faces.

Cargo.toml: reqwest gains the `multipart` feature. cargo build / cargo test --lib / cargo fmt / cargo clippy --all-targets are all clean for the new code; the two pre-existing test_path_excluder failures and the pre-existing sort_by clippy warnings are unrelated and present on master.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

PathExcluder was iterating every component of the absolute path, including the system prefix.
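A minimal sketch of the bug and the fix, assuming a simplified PathExcluder (field and method names here are illustrative, not the shipped API):

```rust
use std::path::{Path, PathBuf};

/// Simplified stand-in for the real PathExcluder.
struct PathExcluder {
    base: PathBuf,          // the library BASE_PATH, now stored so it can be stripped
    patterns: Vec<String>,  // e.g. ["tmp", "@eaDir", ".thumbnails"]
}

impl PathExcluder {
    fn is_excluded(&self, path: &Path) -> bool {
        // The fix: strip base before scanning components. The buggy version ran
        // the component scan on the absolute path, so a pattern like "tmp"
        // matched the system /tmp prefix that tempdir() lives under on Linux.
        let rel = match path.strip_prefix(&self.base) {
            Ok(rel) => rel,
            Err(_) => return false, // outside base: fall through to no-match (defensive)
        };
        rel.components()
            .filter_map(|c| c.as_os_str().to_str())
            .any(|name| self.patterns.iter().any(|p| p == name))
    }
}
```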
Two of the existing memories tests had been failing on master because tempdir() lives under /tmp on Linux, and a pattern like "tmp" then matched the system /tmp component rather than anything the user actually asked to exclude. Phase 3's file-watch hook will use the same code to skip @eaDir / .thumbnails under each library's BASE_PATH, so the bug would hide every photo on a host whose BASE_PATH passes through a directory named the same as a user pattern.

Fix: store base in PathExcluder and strip it before scanning components. A path that lives outside base falls through to the no-match branch (defensive — nothing legit hits that today).

Also extracted the face_client error classification into a pure classify_error_response(status, body) so the marker-row contract with Apollo (422 → Permanent / 'failed', 5xx → Transient / defer) is unit-testable without spinning up an HTTP server.

New tests:
- memories::tests::test_path_excluder_* — the 2 previously failing tests now pass.
- ai::face_client::tests::classify_* — 4 cases: 422 decode_failed → Permanent; 503 cuda_oom → Transient (handles both string and {code:..} detail shapes); 5xx → Transient and other 4xx → Permanent; an unparseable HTML body still classifies on status.
- faces::tests::crop_* — 3 cases: invalid bbox rejected; valid bbox round-trips through JPEG decode; corner crop with 10% padding clamps inside the source.

cargo test --lib: 165 passed / 0 failed (was 156 / 2 failed). cargo fmt and clippy clean on the new code. The remaining sort_by clippy warnings in pre-existing files (memories.rs, files.rs, exif.rs) are unrelated and present on master.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

Wire face detection into ImageApi's existing scan loop so new uploads pick up faces automatically and the initial backlog grinds through on full-scan ticks. No new job system; Phase 2's already_scanned check makes the work implicitly idempotent (one face_detections row per content_hash, including no_faces / failed marker rows).

face_watch.rs (new):
- run_face_detection_pass(library, excluded_dirs, face_client, face_dao, candidates) — sync entry point. Builds a per-pass tokio runtime and fans out detect calls bounded by FACE_DETECT_CONCURRENCY (default 8). The watcher thread itself stays sync.
- filter_excluded — applies the same PathExcluder /memories uses, so @eaDir / .thumbnails / EXCLUDED_DIRS-listed paths skip detection before we burn a detect call (and Apollo's GPU memory) on junk.
- read_image_bytes_for_detect — RAW/HEIC route through extract_embedded_jpeg_preview because opencv-python-headless can't decode either; everything else gets a plain std::fs::read so EXIF orientation reaches Apollo's exif_transpose intact.
- process_one — translates Apollo's response into the Phase 2 marker contract: empty faces[] → no_faces; FaceDetectError::Permanent → failed (don't retry); Transient → no marker (the next scan retries); success with N faces → N detected rows with the embeddings unpacked.

main.rs (process_new_files + watch_files):
- watch_files now also takes face_client + excluded_dirs; the watcher thread builds a SqliteFaceDao the same way it builds ExifDao / PreviewDao.
- After the EXIF write loop, build_face_candidates queries image_exif for the just-walked image paths' content_hashes (covers new uploads and the pre-existing backlog), filters out anything already_scanned, and hands the rest to face_watch::run_face_detection_pass.
- Bypassed wholesale when face_client.is_enabled() is false — keeps the watcher usable on legacy deploys where Apollo isn't configured.

Tests: 5 face_watch unit tests cover the parts that don't need a real Apollo:
- filter_excluded drops dir-component patterns (@eaDir) without matching substring file names (eaDir-not-a-thing.jpg is kept).
- filter_excluded drops absolute-under-base subtrees (/private).
- empty EXCLUDED_DIRS short-circuits cleanly.
- read_image_bytes_for_detect passes JPEG bytes through verbatim (orientation must reach Apollo unmodified).
- read_image_bytes_for_detect falls through to a plain read when a RAW-extension file has no embedded preview, so Apollo gets a chance to 422 and we mark the row failed rather than retrying it forever.

cargo test --lib: 170 / 0; fmt and clippy clean for new code. End-to-end (drop a photo → a face_detections row appears) needs Apollo running and is deferred to deploy-time verification.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

A manual smoke test caught a bug: POST /persons with a duplicate name returned 500 with the body 'insert person Cameron' instead of the intended 409 Conflict. Root cause: the handler keyed on `format!("{}", e).contains("unique")`, but anyhow's plain Display only renders the *outermost* context ("insert person Cameron") and hides the diesel error nested below ('UNIQUE constraint failed: persons.name'). The string check was a false negative on every duplicate.

Fix: walk the source chain and downcast for diesel::result::Error::DatabaseError(UniqueViolation, _) — exposed via a shared `is_unique_violation` helper used by both create_person_handler and update_person_handler. Error bodies for non-unique failures now use `{:#}` so the body actually carries the underlying cause when the user surfaces it. merge_persons_handler also moves to `{:#}` for richer error bodies; its "itself" check was already structural and unaffected.

A regression test (faces::tests::is_unique_violation_walks_chain) pins both the bug shape ({} doesn't surface UNIQUE) and the fix (is_unique_violation correctly downcasts the chain), so a future refactor of error handling can't silently re-bury this. cargo test --lib: 171 / 0; fmt + clippy clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

Wires the existing string people-tags into the new persons table and auto-binds new detections to a same-named person when the photo carries exactly one matching tag. ImageApi has no notion of which tags are people-tags today (it's purely a user mental model), so this is operator-confirmed: the suggester surfaces candidates with a heuristic flag, the operator confirms, then bootstrap creates persons rows. Auto-bind follows on every detection thereafter.

New endpoints:
- GET /tags/people-bootstrap-candidates — per case-insensitive name group: display name (most-frequent capitalization), normalized lowercase, summed usage_count, the looks_like_person heuristic flag, and an already_exists check against the persons table. Sorted persons-likely-first, then by count.
- POST /persons/bootstrap — body: {names: [string]}. Idempotent — pre-fetches the existing-name set so a duplicate request reports per-row "already exists" instead of 409-ing each insert (sketched below). Created rows get created_from_tag=true; failed rows surface in `skipped` with a reason.
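A sketch of that idempotency shape, with the actual row insert elided (the report type and helper names here are illustrative, not the shipped handler):

```rust
use std::collections::HashSet;

struct BootstrapReport {
    created: Vec<String>,
    skipped: Vec<(String, String)>, // (name, reason) — surfaces in `skipped`
}

/// Illustrative loop: `existing_lower` is the pre-fetched lowercase-name set,
/// so duplicates report per-row instead of failing the whole request.
fn bootstrap_persons(existing_lower: HashSet<String>, names: Vec<String>) -> BootstrapReport {
    let mut report = BootstrapReport { created: vec![], skipped: vec![] };
    let mut seen = existing_lower;
    for name in names {
        if !seen.insert(name.to_lowercase()) {
            // Already a persons row (or a duplicate within this body): no 409.
            report.skipped.push((name, "already exists".into()));
        } else {
            // The real handler inserts a persons row with created_from_tag=true
            // here, and pushes any insert failure into `skipped` with a reason.
            report.created.push(name);
        }
    }
    report
}
```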
looks_like_person heuristic — conservative on purpose, because the operator confirms in the UI:
- 1–2 whitespace-separated words
- each word starts uppercase, no digits anywhere
- single-word names must not be on a small denylist (cat, christmas, beach, sunset, untagged, ...); two-word names skip the denylist so "Sarah Smith" is never false-rejected.

FaceDao additions:
- find_persons_by_names_ci — bulk lowercase-name → person_id lookup via sql_query (diesel's BoxedSelectStatement + LOWER() doesn't play well with the type system).
- person_reference_embedding — the L2-normalized mean of a person's detected embeddings, *filtered by model_version* so a future buffalo_xl row can never contaminate an in-flight buffalo_l auto-bind decision. Returns None when the person has no faces yet.
- assign_face_to_person — sets face_detections.person_id and, only when persons.cover_face_id is NULL, claims this face as cover. The UI's hand-picked cover survives later auto-binds.
- decode_embedding_bytes / cosine_similarity helpers — pub(crate) so face_watch can decode the wire bytes once and feed them through the cosine threshold.

Auto-bind in face_watch::process_one: after every successful detect, for each newly-stored auto face we pull the photo's tags, look up which (if any) map to existing persons, and:
- skip when zero or multiple distinct persons are matched (a multi-match is genuinely ambiguous; the cluster suggester handles it)
- on the first face for a person: bind unconditionally, since otherwise a bootstrapped person could never acquire a usable reference
- thereafter: bind iff cosine(new_emb, person_ref) >= FACE_AUTOBIND_MIN_COS (default 0.4, env-tunable within 0..=1)

The reference embedding comes from person_reference_embedding under the same model_version as the candidate, so a model upgrade never silently re-anchors a person's centroid.

Plumbing: watch_files now constructs its own SqliteTagDao alongside the other watcher DAOs and threads it through process_new_files → run_face_detection_pass → process_one. The handler-side TagDao registration in main.rs already covers bootstrap_candidates_handler; no extra app_data wiring needed.

Tests: 8 new (faces.rs):
- looks_like_person accepts / rejects / two-word-skips-denylist (3)
- cosine_similarity on identical / orthogonal / opposite / mismatched / zero / empty inputs
- decode_embedding_bytes round-trip + size validation
- find_persons_by_names_ci groups case + handles empty input
- person_reference_embedding filters by model_version (a buffalo_l reference must not include buffalo_xl rows)
- assign_face_to_person sets cover when unset, doesn't overwrite

cargo test --lib: 179 / 0; fmt + clippy clean for new code.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

Filter <3-char tags and emoji/symbol-bearing tags out of the bootstrap candidate list before grouping. Manual testing surfaced these as noise the operator never ticks — they pushed real candidates lower in the list and made the UI harder to scan. This is a hard filter (dropped from candidates entirely), not a heuristic flag — looks_like_person still governs the default-checked decision for the rows that *do* survive.

is_plausible_name_token rules (sketched below):
- >= 3 chars after trimming (rejects "AB", "OK", whitespace-only)
- each char is alphabetic (any script — covers Renée, José, 田中太郎), whitespace, name punctuation (' - . _ U+2019), or an ASCII digit
- anything else (emoji, symbols, math, arrows, control codes) drops the whole tag

Digits stay allowed at this layer; looks_like_person handles "Trip 2018" on the heuristic side.
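A minimal sketch of those rules (the shipped character classes may differ at the margins):

```rust
/// Hard filter for bootstrap candidates — drop-from-list, not a heuristic flag.
fn is_plausible_name_token(tag: &str) -> bool {
    let trimmed = tag.trim();
    // Length floor after trimming: rejects "AB", "OK", whitespace-only.
    if trimmed.chars().count() < 3 {
        return false;
    }
    trimmed.chars().all(|c| {
        c.is_alphabetic()              // any script: Renée, José, 田中太郎
            || c.is_whitespace()
            || c.is_ascii_digit()      // allowed here; looks_like_person rejects "Trip 2018"
            || matches!(c, '\'' | '-' | '.' | '_' | '\u{2019}')
    })
    // Anything else — emoji, symbols, math, arrows, control codes — fails the
    // `all` above and drops the whole tag.
}
```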
This lets a "Sarah2" alias still appear so the operator can spot and confirm it manually — just unticked by default.

Cargo version bump 1.0.0 → 1.1.0 marks the face-recog feature surface landing: Phase 2's schema + endpoints, Phase 3's file-watch hook, and Phase 4's bootstrap + auto-bind are all behind APOLLO_FACE_API_BASE_URL, so legacy 1.0 deploys without that env see no behavior change.

Tests: 1 new (faces::tests::is_plausible_name_token_filters_short_and_emoji) covers the accept list (Latin / accented / Asian scripts, hyphenated and apostrophe names) and the reject list (length floor, emoji classes, symbols, leading/trailing-whitespace handling). cargo test --lib: 180 / 0; fmt + clippy clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

Manual deploy debugging: 'Saved thumbnail' logs were visible (boot-time thumbnail backfill) but no face_watch logs were appearing, with no obvious way to tell whether the integration was disabled, hadn't reached a full scan yet, or had simply seen no new files. Two log lines:
- watch_files startup: 'Face detection: ENABLED' / 'DISABLED (set APOLLO_FACE_API_BASE_URL or APOLLO_API_BASE_URL to enable)' so you can tell at a glance whether the env wired through.
- process_new_files (debug-level): 'face_watch: scan tick — N image file(s) walked, M candidate(s) (library 'main', modified_since=...)' so an empty-candidate scan is distinguishable from a misconfigured or skipped one without bumping the log level for the rest of the watcher.

No behavior change.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

A single global "Ignored" person row, marked is_ignored=true, that the frontend lazily creates on first use to hold strangers, false detections, and faces the user doesn't want bound to a real person.

Schema (new migration 2026-04-29-000200_add_is_ignored):
- persons.is_ignored BOOLEAN NOT NULL DEFAULT 0
- partial index on (is_ignored) WHERE is_ignored = 1; the small WHERE set means a tiny index that only ever services the bucket lookup.

Why a real persons row instead of a separate table or status enum:
- face_detections.person_id stays a clean foreign key — no special code paths for "ignored faces" anywhere else in the schema.
- The cluster suggester already filters by `person_id IS NULL`, so bound-to-ignored faces are naturally excluded from re-clustering without any change.
- merge / rename / delete all work on it with the existing routes (the management UI just hides it from default views).

DAO additions / changes:
- get_or_create_ignored_person — idempotent; race-safe via the UNIQUE COLLATE NOCASE on persons.name plus a retry-on-conflict fallback (sketched after the routes below).
- list_persons gains an include_ignored parameter, default false, so the management screen hides the bucket unless asked.
- find_persons_by_names_ci filters is_ignored=0 in SQL so the auto-bind path can NEVER target the bucket — even if the user happens to tag photos as "Ignored", the heuristic lookup skips it. Bucket assignment is always an explicit operator action.
- update_person accepts is_ignored: Option<bool> so a person can be moved into / out of the bucket without a delete + recreate.

Routes:
- POST /persons/ignore-bucket — returns the bucket, creating it on first call. The frontend uses this lazily right before binding.
- GET /persons gains ?include_ignored=true; default behavior unchanged.
- PATCH /persons/{id} now accepts is_ignored.
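The race-safe shape, sketched generically (the shipped method lives on SqliteFaceDao and is not closure-based; this is just the pattern):

```rust
use anyhow::Result;

/// Get-or-create that leans on the UNIQUE COLLATE NOCASE index instead of
/// trusting the SELECT-then-INSERT window. Argument names illustrative.
fn get_or_create<P>(
    find: impl Fn() -> Result<Option<P>>,
    insert: impl Fn() -> Result<P>,
    is_unique_violation: impl Fn(&anyhow::Error) -> bool,
) -> Result<P> {
    if let Some(existing) = find()? {
        return Ok(existing); // fast path: bucket already exists
    }
    match insert() {
        Ok(created) => Ok(created),
        // Lost the race: a concurrent caller inserted between find() and
        // insert(), and the unique index rejected ours. Re-read and return
        // the winner's row instead of surfacing the conflict.
        Err(e) if is_unique_violation(&e) => find()?.ok_or(e),
        Err(e) => Err(e),
    }
}
```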
Tests: ignore_bucket_idempotent_and_filters_auto_bind covers the contract: the bucket is idempotent across calls, find_persons_by_names_ci skips it (even on an exact name match), default list_persons hides it, and include_ignored=true surfaces it. All other tests updated to pass the new is_ignored: false / Option<bool> fields explicitly. cargo test --lib: 181 / 0; fmt + clippy clean for new code.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

Phase 2 stored the new bbox on PATCH /image/faces/{id} but logged "embedding now stale (Phase 3 will re-embed)" and moved on. That left the embedding column pointing at the *old* face area while the bbox described a new one — auto-bind cosine similarity and the cluster suggester would silently rank the row as "the same face it was before the edit" forever after, even though the geometry no longer matched.

Now, when the PATCH includes a bbox, the handler:
1. Looks up the row to find its photo (library_id + rel_path).
2. Crops the new bbox region with the same crop_image_to_bbox helper manual-create uses (10% pad on each side so the detector has ear/jaw context).
3. POSTs the crop to face_client.embed for a fresh ArcFace vector.
4. Stores both the new bbox AND the new embedding in one update_face transaction.

Errors map cleanly:
- face_client disabled → 503 (a bbox edit needs Apollo).
- decode failure / no face in crop → 422.
- Apollo CUDA OOM / unavailable → 503 transient.
- underlying row missing → 404.

About 100–500 ms per edit on CPU, dominated by Apollo's inference call. Acceptable for a manual operator action; the alternative (a stale embedding) silently broke the rest of the face stack. This is a prerequisite for the upcoming carousel-side draw/resize bbox UI — without re-embed, every operator-driven bbox tweak would corrode clustering/auto-bind quality. ApiPatchFaceBody on Apollo's side already passes bbox through verbatim, so no Apollo change needed.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

Both create_face_handler and update_face_handler returned the bare FaceDetectionRow, so PATCH /image/faces/{id} (used by both bbox edits and person assignment) replied without person_name. The carousel overlay does an optimistic replace on this row — replacing the joined FaceWithPerson with a row whose person_name is undefined visibly dropped the VFD label off the bbox after every save. Added a small hydrate_face_with_person helper that does the persons lookup and assembles a FaceWithPerson, used by both handlers. The list endpoint already does the join, so the PATCH/POST shape now matches it.

The content-hash backfill capped at 500/tick AND counted errors against that cap. So a pocket of files that errored every time (vanished mid-scan, permission denied, unreadable) at the head of the exif_records iteration order burned the entire budget every tick, and the rest of the backlog never advanced — surfacing as a face scan stuck at e.g. 44% with no progress. Without a content_hash, those photos never become face-detection candidates, so it looks like detection is broken when really it's the prerequisite hash that isn't filling.

Two fixes (the cap shape is sketched below):
- Cap on successes only. Errors still get counted and logged but don't burn the per-tick budget; the loop keeps moving past them to the working files behind. Errors are bounded by the unhashed backlog size (each record is walked at most once per tick), so this can't run away.
- Always log the unhashed backlog count when non-zero.
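The success-capped loop, sketched (the record type and hashing call are stand-ins, not the shipped code):

```rust
/// Per-tick backfill loop: the budget counts successful hashes only, so a
/// pocket of permanently erroring files can't starve the backlog behind it.
fn backfill_tick<R>(
    records: impl IntoIterator<Item = R>,
    cap: usize,
    mut hash_and_store: impl FnMut(&R) -> anyhow::Result<()>,
) -> (usize, usize) {
    let (mut succeeded, mut failed) = (0usize, 0usize);
    for rec in records {
        if succeeded >= cap {
            break; // budget spent on real work
        }
        match hash_and_store(&rec) {
            Ok(()) => succeeded += 1,
            // Counted and (in the real code) logged, but doesn't burn the
            // budget; bounded by the backlog size since each record is
            // walked at most once per tick.
            Err(_) => failed += 1,
        }
    }
    (succeeded, failed)
}
```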
Previously "stuck at 44%" looked silent from the outside; now every tick surfaces "backfilled N/M; K still need backfill" so an operator can tell whether backfill is making progress. Also bumps the default cap from 500 to 2000: hashing is cheap (blake3 + one DB UPDATE), and 500 was conservative for a personal-scale library where 10k+ unhashed files is a normal first-run state.

Apollo's photo-match enrichment fanned out one `GET /image/tags?path=` per record (bounded concurrency 20) — for a 4k-photo time window that meant ~4000 round-trips, each briefly contending the tag-dao mutex. The cost dwarfed the actual SQL. Add a single `POST /image/tags/lookup` with body `{paths: [...]}` returning `{path: [tag, ...]}`, carrying only paths that have at least one tag. SqliteTagDao gains `get_tags_grouped_by_paths`, which JOINs tagged_photo + tags and chunks the IN clause at 500 (safely under SQLite's variable limit). Five queries for a 4k-photo grid is ~800x cheaper than 4k HTTP calls.

Trade-off: the batch matches by rel_path directly and does not do the cross-library content-hash sibling expansion that the per-path `GET /image/tags` does. For Apollo's grid that's a deliberate choice — single-library deploys see no difference; multi-library deploys with rel_path-divergent siblings might miss a tag in the grid badge, but the carousel still resolves full sibling tags via the per-path endpoint when opened. If sibling sharing in the grid becomes load-bearing, extend the handler to JOIN image_exif on content_hash.

That first cut matched by rel_path only — fine for single-library deploys but wrong for multi-library setups where the same content lives under different rel_paths (e.g. a backup mount holding copies of the primary library). A tag applied under library A would silently not appear in the library-B grid badge even though the carousel's per-path /image/tags would resolve it correctly via siblings. The batch handler now does the expansion server-side in three queries regardless of input size:
1. image_exif batch lookup: query path → content_hash
2. image_exif JOIN by content_hash: all sibling rel_paths sharing each hash (paths deduped across libraries)
3. tagged_photo + tags JOIN over the union of (query + sibling) rel_paths

Tags are then aggregated back to query paths via a sibling→originals reverse map, deduped by tag id. Files without a content_hash (just indexed, hash compute pending, etc.) skip step 2 and only get tags from their own rel_path — the same fallback the per-path handler uses. Adds ExifDao::get_rel_paths_for_hashes (the batch counterpart of get_rel_paths_by_hash), chunked at 500 to stay under SQLite's SQLITE_LIMIT_VARIABLE_NUMBER. Five queries for a 4k-photo grid is still ~800x cheaper than per-path HTTP fan-out.

Moving a tagged bbox off-center (to fine-tune position, or onto a back-of-head the operator already manually tagged) made update_face_handler 422, because the re-embed step ran detection on the new crop and found nothing. The frontend's catch then reverted the optimistic update — visible as the bbox snapping back the moment the user released the drag. The re-embed is a soft contract: a fresh ArcFace vector is preferable, but the operator's bbox edit is sacred.
Now:
- empty faces[] → keep the old embedding, apply the bbox, log info
- permanent embed error → keep the old embedding, apply the bbox, log info
- bad-bytes embedding → keep the old embedding, apply the bbox, log warn
- transient failure (cuda_oom, engine unavailable) still 503s so the operator can retry — those are recoverable, and we don't want to silently drift the cluster math on retries that succeed later

Cost: a slightly stale embedding for the row, which marginally affects clustering / auto-bind cosine for files re-detected against this person. Accepted because dropping the user's manual drag every time the new crop happens to lose detection is a much worse UX — especially for the force-create rows (back of head, profile) where re-detection will *always* fail.

Two reasons manually-drawn bboxes were never resolving a face on re-detection:

(1) The bbox arrives in display space (the browser has already applied EXIF orientation when rendering the carousel), but the `image` crate in crop_image_to_bbox opens raw pre-rotation pixels. For any phone photo with Orientation 6/8/etc., applying the bbox without rotating first crops a completely different region of the image — landing on background, hair, or empty pixels. crop_image_to_bbox now reads the EXIF Orientation tag and applies it before indexing into the canonical-oriented dims.

(2) Padding was 10% on each side. A typical 200×250 face bbox + 10% becomes ~240×300; insightface resizes that to det_size=640, so the face fills ~95% of the input. RetinaFace's anchors expect faces at 20–60% of input dimensions; at 95% it routinely returns zero detections. Bumped to 50% padding so the crop is 2× the bbox dims and the face occupies ~50% of the input — anchor-friendly. The bbox is still clamped to image bounds, so edge-of-image cases just get less padding on the clipped side.

Together these explain why bbox-edit re-embed practically always fell into the "no face detected" branch (and why bbox edits reverted, before the recent soft-fallback commit). Per-photo embedding quality also improves slightly — same face, more context, better landmarks for ArcFace.

Symptom: ImageApi restart, then ~60 minutes of silence — no face_watch lines at all. Cause: backfill and the face-detection candidate build were both gated inside process_new_files, which during quick scans (every 60 s) only walks files modified in the last interval. The pre-existing unhashed / unscanned backlog never entered the candidate set, so it only drained on the full-scan path (default once per hour). This surfaced as "scan stuck at 1101/13118" — most of those rows were waiting on the next full scan.

Two new per-tick passes that work directly off the DB:
(1) backfill_unhashed_backlog uses ExifDao::get_rows_missing_hash to pull unhashed rows in id order, capped (FACE_HASH_BACKFILL_MAX_PER_TICK, default 2000), and writes content_hash for each. No filesystem walk — the walk was the gating filter that hid the backlog.
(2) process_face_backlog uses a new FaceDao::list_unscanned_candidates (a LEFT anti-join on content_hash via raw SQL, GROUP BY hash so duplicates fire one detect call — sketched below) to pull a capped batch of hashed-but-unscanned rows (FACE_BACKLOG_MAX_PER_TICK, default 64) and runs the existing face_watch detection pipeline on them.

Both run only when face_client.is_enabled(). The cap on (2) is small because each candidate is a real Apollo round-trip — 64/tick at the 60 s quick interval ≈ 64 detections/min, which paces 8-core CPU inference comfortably while keeping a steady flow visible in logs.
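The anti-join shape behind list_unscanned_candidates, sketched as raw SQL through diesel's sql_query (the exact column names and row struct are assumptions, not the shipped query):

```rust
use diesel::prelude::*;
use diesel::sql_types::{Integer, Text};

#[derive(QueryableByName)]
struct CandidateRow {
    #[diesel(sql_type = Text)]
    content_hash: String,
    #[diesel(sql_type = Text)]
    rel_path: String,
}

/// Hashed-but-unscanned rows for one library. GROUP BY content_hash so N
/// copies of the same bytes fire a single detect call.
fn list_unscanned_candidates(
    conn: &mut SqliteConnection,
    library_id: i32,
    cap: i32,
) -> QueryResult<Vec<CandidateRow>> {
    diesel::sql_query(
        "SELECT ie.content_hash, MIN(ie.rel_path) AS rel_path \
         FROM image_exif ie \
         LEFT JOIN face_detections fd ON fd.content_hash = ie.content_hash \
         WHERE ie.library_id = ? \
           AND ie.content_hash IS NOT NULL \
           AND fd.content_hash IS NULL \
         GROUP BY ie.content_hash \
         LIMIT ?",
    )
    .bind::<Integer, _>(library_id)
    .bind::<Integer, _>(cap)
    .load(conn)
}
```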
process_new_files's own backfill stays in place for the same-tick flow (a brand-new upload gets hashed AND face-scanned in the tick where it's discovered) but is now belt-and-suspenders. A test backstop pins the new DAO method's filter contract: only hashed, unscanned, in-library rows are returned; scanned rows, unhashed rows, and other-library rows are filtered out.
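A sketch of that backstop (the seeding helpers and library constants are illustrative stand-ins for the crate's real test fixtures):

```rust
#[test]
fn list_unscanned_candidates_filters_contract() {
    // Illustrative setup — test_face_dao / seed_exif are hypothetical fixtures.
    let dao = test_face_dao();
    seed_exif(&dao, "a.jpg", Some("hash-a"), LIB_MAIN);  // hashed + unscanned → returned
    seed_exif(&dao, "b.jpg", None, LIB_MAIN);            // unhashed → filtered out
    seed_exif(&dao, "c.jpg", Some("hash-c"), LIB_MAIN);  // scanned (marker row) → filtered out
    dao.mark_status("hash-c", "no_faces").unwrap();
    seed_exif(&dao, "d.jpg", Some("hash-d"), LIB_OTHER); // other library → filtered out

    let got = dao.list_unscanned_candidates(LIB_MAIN, 10).unwrap();
    assert_eq!(got.len(), 1);
    assert_eq!(got[0].content_hash, "hash-a");
}
```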