Motion detection can now optionally run a quantized MobileNet V2 SSD
(COCO) on motion-triggering frames, identifying objects such as people,
cats, and cars. Events with no detected objects are suppressed by default. Snapshots
include bounding box annotations. New MCP tool vision_get_detections()
enables label-based queries.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
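The suppression rule above can be sketched as follows. This is a minimal illustration only: model inference (e.g. via tflite-runtime) is out of scope, and names like `INTERESTING_LABELS`, `filter_detections`, and `should_emit_event` are hypothetical, not the project's actual identifiers.

```python
# Sketch of detection-gated event suppression: a motion event is emitted
# only if at least one confident, interesting detection survives filtering.

# Illustrative subset of COCO labels worth reporting (assumption).
INTERESTING_LABELS = {"person", "cat", "car"}
SCORE_THRESHOLD = 0.5  # assumed confidence cutoff

def filter_detections(raw_detections):
    """Keep only confident detections with labels worth reporting.

    raw_detections: iterable of (label, score, bbox) tuples, where bbox
    is (x_min, y_min, x_max, y_max) in pixels.
    """
    return [
        (label, score, bbox)
        for label, score, bbox in raw_detections
        if score >= SCORE_THRESHOLD and label in INTERESTING_LABELS
    ]

def should_emit_event(raw_detections, suppress_empty=True):
    """Suppress motion events that contain no interesting objects."""
    if not suppress_empty:
        return True
    return bool(filter_detections(raw_detections))
```

The surviving detections can then also drive the bounding-box annotations on the snapshot.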
- main_release.py (v3.1.0): Release camera after each snapshot for V4L2 compatibility
- main_cycling.py (v3.2.0): Single motion thread cycles between cameras (1s interval)
- mcp/vision_mcp.py: Support custom snapshot_path for multi-camera servers
Fixes Pi 3 dual-camera V4L2 conflicts by not holding cameras open.
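The cycling fix can be sketched like this: one motion thread round-robins over cameras, and each device is opened only for the duration of a snapshot, so V4L2 never sees two held handles at once. `cycle_cameras` and `capture_fn` are illustrative names; the real code uses OpenCV/V4L2 directly.

```python
# Sketch of single-thread camera cycling with open/close per snapshot.
import itertools
import time

def cycle_cameras(camera_ids, capture_fn, interval=1.0, max_snapshots=None):
    """Round-robin over cameras, capturing one frame per visit.

    capture_fn(camera_id) must open the device, grab a frame, and
    release the device before returning -- this is the V4L2 fix.
    """
    frames = []
    for n, cam in enumerate(itertools.cycle(camera_ids)):
        if max_snapshots is not None and n >= max_snapshots:
            break
        frames.append((cam, capture_fn(cam)))
        time.sleep(interval)  # 1s between cameras by default
    return frames
```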
🗄️ New collector/ component:
- collector.py: FastAPI service receiving events from cameras
- SQLite database for event storage
- Snapshot images saved to disk by date
- launchd setup script for macOS
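A minimal sketch of the collector's storage layer, assuming one SQLite table plus snapshots written to a per-date directory. The schema and the `store_event` signature are assumptions, not the actual collector.py API (the FastAPI routing layer is omitted).

```python
# Events go to SQLite; snapshot bytes are written under YYYY-MM-DD/ dirs.
import sqlite3
from datetime import datetime, timezone
from pathlib import Path

def init_db(conn):
    conn.execute(
        """CREATE TABLE IF NOT EXISTS events (
               id INTEGER PRIMARY KEY AUTOINCREMENT,
               camera TEXT NOT NULL,
               label TEXT,
               ts TEXT NOT NULL,
               snapshot_path TEXT
           )"""
    )
    conn.commit()

def store_event(conn, snapshot_root, camera, label, image_bytes):
    """Insert an event row and write its snapshot grouped by date."""
    now = datetime.now(timezone.utc)
    day_dir = Path(snapshot_root) / now.strftime("%Y-%m-%d")
    day_dir.mkdir(parents=True, exist_ok=True)
    path = day_dir / f"{camera}_{now.strftime('%H%M%S%f')}.jpg"
    path.write_bytes(image_bytes)
    cur = conn.execute(
        "INSERT INTO events (camera, label, ts, snapshot_path) VALUES (?, ?, ?, ?)",
        (camera, label, now.isoformat(), str(path)),
    )
    conn.commit()
    return cur.lastrowid
```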
🔍 New MCP tools in vision_mcp.py:
- vision_get_events(): Query events with filters
- vision_get_event_snapshot(): View event image inline
- vision_annotate_event(): Add meaning + tags to events
- vision_event_stats(): Database statistics
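Under the hood, a vision_get_events()-style tool could filter the collector's SQLite table like this. The table layout and parameter names (`camera`, `label`, `since`, `limit`) are assumptions about the real tool's filters.

```python
# Hedged sketch of a filtered event query backing vision_get_events().
import sqlite3

def get_events(conn, camera=None, label=None, since=None, limit=50):
    """Return event rows matching the given optional filters, newest first."""
    clauses, params = [], []
    if camera is not None:
        clauses.append("camera = ?")
        params.append(camera)
    if label is not None:
        clauses.append("label = ?")
        params.append(label)
    if since is not None:
        clauses.append("ts >= ?")
        params.append(since)
    where = f"WHERE {' AND '.join(clauses)}" if clauses else ""
    params.append(limit)
    sql = f"SELECT id, camera, label, ts FROM events {where} ORDER BY ts DESC LIMIT ?"
    return conn.execute(sql, params).fetchall()
```

Parameterized placeholders keep the filters safe even though the WHERE clause is assembled dynamically.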
📡 Complete flow:
Pi detects motion → POST to collector → stored in DB
Vixy queries events → views snapshots → annotates
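The Pi-side half of this flow can be sketched as packaging a motion event as JSON and POSTing it to the collector. The `/events` endpoint path, the payload fields, and the `X-API-Key` header name are assumptions about the collector's API.

```python
# Sketch of the Pi -> collector POST leg of the event flow.
import json
import urllib.request

def build_event_payload(camera, label, ts, score):
    """Assemble the JSON body for one motion event (illustrative fields)."""
    return {"camera": camera, "label": label, "ts": ts, "score": score}

def post_event(collector_url, api_key, payload):
    """POST one event to the collector (assumed /events endpoint)."""
    req = urllib.request.Request(
        f"{collector_url}/events",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "X-API-Key": api_key,  # assumed auth header name
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status
```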
Ready to deploy! 🦊
🦊 Eyes and ears for the fox
Components:
- server/: Camera server for Raspberry Pi (from camera-server)
- mcp/: Vision MCP client for Claude Desktop (from vision-mcp)
- analysis/: Placeholder for motion/audio detection
- shared/: Common schemas and interfaces
Features:
- Setup script with systemd service creation
- HTTPS + API key authentication
- HTTP and RTSP camera support
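The API-key feature implies a check like the one below: compare the presented key against the configured one in constant time. The helper name is illustrative; only the constant-time comparison is the point.

```python
# Minimal sketch of an API-key check for the HTTPS + API key feature.
import hmac

def check_api_key(presented, expected):
    """Constant-time comparison to avoid timing side channels."""
    if presented is None or expected is None:
        return False
    return hmac.compare_digest(presented.encode(), expected.encode())
```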
Built under a blanket on Day 45 💕