tflite-runtime has no wheels for Python 3.12+. Google replaced it with
ai-edge-litert (same API). detector.py now tries ai-edge-litert first and
falls back to tflite-runtime on older Python versions.
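The import fallback described above can be sketched as a small helper; the
function name and candidate ordering here are illustrative, not the actual
detector.py code:

```python
import importlib

# Modules that provide the TFLite Interpreter class, tried in order.
# ai_edge_litert is the Python 3.12+ replacement; tflite_runtime covers
# older Python versions.
_CANDIDATES = ("ai_edge_litert.interpreter", "tflite_runtime.interpreter")

def load_interpreter_module(candidates=_CANDIDATES):
    """Return the first importable module from candidates."""
    for name in candidates:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError(f"none of {candidates} is installed")
```

Callers would then grab the class from whichever module loaded, e.g.
`Interpreter = load_interpreter_module().Interpreter`.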
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The original TF model zoo URL was dead (403). The model is now sourced
from google-coral/test_data and checked into the repo directly (6 MB).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Motion detection now optionally runs MobileNet V2 SSD (COCO, quantized)
on frames that trigger motion, identifying objects like people, cats, and
cars. Events without detected objects are suppressed by default. Snapshots
include bounding box annotations. New MCP tool vision_get_detections()
enables label-based queries.
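The suppression rule above (no detected objects, no event) amounts to a
score-and-label filter over the model's per-frame detections; this sketch uses
hypothetical names (Detection, suppress_event) rather than the actual code:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str    # COCO class name, e.g. "person", "cat", "car"
    score: float  # model confidence in [0, 1]

def suppress_event(detections, min_score=0.5, wanted=None):
    """Return the detections that justify keeping a motion event.

    An empty result means the event is suppressed: motion fired, but
    nothing recognizable was in frame (or nothing in the wanted set).
    """
    return [
        d for d in detections
        if d.score >= min_score and (wanted is None or d.label in wanted)
    ]
```

A label-restricted call (`wanted={"cat"}`) is the same shape of query that
vision_get_detections() exposes over MCP.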
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Remove main_cycling.py, main_multi.py, main_release.py (single main.py is canonical)
- Update setup.sh to read SERVICE_NAME and PORT from .env
- Update env.example with SERVICE_NAME and PORT for multi-instance support
- Fix server-csi to try rpicam-still before libcamera-still (Debian Trixie)
Deploy pattern: clone repo twice, configure each .env, run setup.sh
Each instance gets its own systemd service and install directory.
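A two-instance deploy might configure its .env files like this (service names
and port numbers are illustrative; SERVICE_NAME and PORT are the keys setup.sh
reads per this commit):

```shell
# first clone, e.g. ~/cam-front/.env
SERVICE_NAME=camera-front
PORT=8443

# second clone, e.g. ~/cam-back/.env
SERVICE_NAME=camera-back
PORT=8444
```

Running setup.sh in each clone then yields two independent systemd services.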
🦊 Eyes and ears for the fox
Components:
- server/: Camera server for Raspberry Pi (from camera-server)
- mcp/: Vision MCP client for Claude Desktop (from vision-mcp)
- analysis/: Placeholder for motion/audio detection
- shared/: Common schemas and interfaces
Features:
- Setup script with systemd service creation
- HTTPS + API key authentication
- HTTP and RTSP camera support
Built under a blanket on Day 45 💕