Image API
This is an Actix-web server for serving images and videos from a filesystem.
Upon first run it will generate thumbnails for all images and videos at BASE_PATH.
Features
- Automatic thumbnail generation for images and videos
- EXIF data extraction and storage for photos
- File watching with NFS support (polling-based)
- Video streaming with HLS
- Tag-based organization
- Memories API for browsing photos by date
- AI-Powered Photo Insights - Generate contextual insights from photos using LLMs
- RAG-based Context Retrieval - Semantic search over daily conversation summaries
- Automatic Daily Summaries - LLM-generated summaries of daily conversations with embeddings
Environment
A handful of environment variables are required to run the API.
They should be defined in a .env file located alongside the binary or in a directory above it.
You must have ffmpeg installed for streaming video and generating video thumbnails.
- `DATABASE_URL` is a path or URL to a database (currently only SQLite is tested)
- `BASE_PATH` is the root from which you want to serve images and videos
- `THUMBNAILS` is a path where generated thumbnails should be stored
- `VIDEO_PATH` is a path where HLS playlists and video parts should be stored
- `BIND_URL` is the URL and port to bind to (typically your own IP address)
- `SECRET_KEY` is the hopefully random string to sign tokens with
- `RUST_LOG` is one of `off`, `error`, `warn`, `info`, `debug`, `trace`, from least to most noisy [default: `error`]
- `EXCLUDED_DIRS` is a comma-separated list of directories to exclude from the Memories API
- `WATCH_QUICK_INTERVAL_SECONDS` (optional) is the interval in seconds for quick file scans [default: 60]
- `WATCH_FULL_INTERVAL_SECONDS` (optional) is the interval in seconds for full file scans [default: 3600]
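A minimal `.env` covering the required variables might look like the following (all paths and values below are placeholders, not project defaults):

```shell
# Example .env — adjust every value for your own setup
DATABASE_URL=images.db
BASE_PATH=/mnt/photos
THUMBNAILS=/var/cache/image-api/thumbnails
VIDEO_PATH=/var/cache/image-api/video
BIND_URL=192.168.1.10:8080
SECRET_KEY=change-me-to-something-random
RUST_LOG=info
EXCLUDED_DIRS=screenshots,tmp
```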
AI Insights Configuration (Optional)
The following environment variables configure AI-powered photo insights and daily conversation summaries:
Ollama Configuration
- `OLLAMA_PRIMARY_URL` - Primary Ollama server URL [default: `http://localhost:11434`]
  - Example: `http://desktop:11434` (your main/powerful server)
- `OLLAMA_FALLBACK_URL` - Fallback Ollama server URL (optional)
  - Example: `http://server:11434` (always-on backup server)
- `OLLAMA_PRIMARY_MODEL` - Model to use on the primary server [default: `nemotron-3-nano:30b`]
  - Example: `nemotron-3-nano:30b`, `llama3.2:3b`, etc.
- `OLLAMA_FALLBACK_MODEL` - Model to use on the fallback server (optional)
  - If not set, `OLLAMA_PRIMARY_MODEL` is used on the fallback server
Legacy Variables (still supported):
- `OLLAMA_URL` - Used if `OLLAMA_PRIMARY_URL` is not set
- `OLLAMA_MODEL` - Used if `OLLAMA_PRIMARY_MODEL` is not set
SMS API Configuration
- `SMS_API_URL` - URL of the SMS message API [default: `http://localhost:8000`]
  - Used to fetch conversation data for context in insights
- `SMS_API_TOKEN` - Authentication token for the SMS API (optional)
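Putting the optional AI settings together, this part of the `.env` might look like the following (hostnames and the model name are examples only):

```shell
# Optional AI insights configuration — all values are examples
OLLAMA_PRIMARY_URL=http://desktop:11434
OLLAMA_FALLBACK_URL=http://server:11434
OLLAMA_PRIMARY_MODEL=nemotron-3-nano:30b
SMS_API_URL=http://localhost:8000
```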
Fallback Behavior
- Primary server is tried first with 5-second connection timeout
- On failure, automatically falls back to secondary server (if configured)
- Total request timeout is 120 seconds to accommodate LLM inference
- Logs indicate which server/model was used and any failover attempts
Daily Summary Generation
Daily conversation summaries are generated automatically on server startup. Configure in `src/main.rs`:
- Date range for summary generation
- Contacts to process
- Model version used for embeddings: `nomic-embed-text:v1.5`