feature/exif-batch-endpoint for Apollo #58
This endpoint provides library-level Exif data in a single call rather than relying on individual per-file queries, fixing N+1 problems in Apollo.
Adds a single round-trip projection of `image_exif` for every photo whose `date_taken` falls in `[date_from, date_to]`. It wraps the existing `ExifDao::query_by_exif` DAO method, which already handles the SQL filter in one query against the covering index; the only missing piece was the HTTP plumbing.

Designed for window-scoped consumers like Apollo's photo-to-track matcher, which currently does N+1 requests (one `/photos` listing plus one `/image/metadata` call per photo). Because `/image/metadata` serializes on `Data<Mutex<dyn ExifDao>>`, that pattern can take 10s+ for windows with hundreds of photos. The new endpoint takes one mutex acquisition for the whole batch.

Response shape:

```
{
  photos: [
    { file_path, library_id, library_name, camera_model,
      width, height, gps_latitude, gps_longitude, date_taken }
  ],
  total: N
}
```

Two notes on scope:

- Photos with a NULL `date_taken` are excluded by `query_by_exif`'s semantics. Filename-extracted dates are not synthesized here; the rare callers that need that fallback can still hit `/image/metadata`.
- GPS columns are stored as `f32` in `image_exif` to keep row size small; the JSON shape widens them to `f64` so clients don't have to know about the on-disk precision.

Library names are pre-mapped from `app_state.libraries` once and stamped onto each row, avoiding an O(rows × libraries) linear scan.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
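The interaction between the single mutex acquisition, the pre-mapped library names, and the `f32`→`f64` widening can be sketched roughly as below. Everything here is an illustrative stand-in, not the actual handler: `ExifRow`, `exif_batch`, and the tuple return type are hypothetical, the `Mutex<Vec<ExifRow>>` stands in for `Data<Mutex<dyn ExifDao>>`, and the in-memory filter stands in for the SQL predicate that `query_by_exif` pushes down to the covering index.

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Hypothetical row type mirroring the response shape above; the real
// DAO's row struct is not shown in this PR, so this is an assumption.
#[allow(dead_code)]
struct ExifRow {
    file_path: String,
    library_id: i64,
    gps_latitude: Option<f32>, // stored as f32 on disk
    date_taken: String,        // ISO-8601, so string order == date order
}

// Sketch of the batch handler's core: one lock for the whole window,
// library names resolved once, GPS widened to f64 on the way out.
fn exif_batch(
    dao: &Mutex<Vec<ExifRow>>,   // stand-in for Data<Mutex<dyn ExifDao>>
    libraries: &[(i64, String)], // stand-in for app_state.libraries
    date_from: &str,
    date_to: &str,
) -> (Vec<(String, String, Option<f64>)>, usize) {
    // Pre-map library_id -> name: O(libraries) once,
    // not an O(rows × libraries) scan inside the row loop.
    let names: HashMap<i64, &str> =
        libraries.iter().map(|(id, n)| (*id, n.as_str())).collect();

    // Single mutex acquisition for the batch; the real handler calls
    // query_by_exif here and lets SQL apply the date-range filter.
    let rows = dao.lock().unwrap();
    let photos: Vec<_> = rows
        .iter()
        // Rows with no date_taken would already be excluded by the DAO.
        .filter(|r| r.date_taken.as_str() >= date_from && r.date_taken.as_str() <= date_to)
        .map(|r| {
            (
                r.file_path.clone(),
                // Stamp the pre-resolved library name onto the row.
                names.get(&r.library_id).copied().unwrap_or("").to_string(),
                // Widen the on-disk f32 to f64 for the JSON payload.
                r.gps_latitude.map(f64::from),
            )
        })
        .collect();
    let total = photos.len();
    (photos, total)
}
```

The lock is held only for the projection loop, so a batch of hundreds of photos costs one acquisition instead of one per `/image/metadata` call.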