4eab3ccc01
Fix: wrap sync generator in executor, not async for
2026-01-11 18:32:06 -06:00
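The two commits above show the iteration fix converging: vLLM's Orpheus wrapper hands back a *synchronous* generator, so `async for` (the previous attempt) fails, and plain `for` blocks the event loop. The final fix pulls items off the sync generator in an executor thread. A minimal sketch of that pattern, with a stand-in generator in place of the real TTS stream:

```python
import asyncio
from typing import AsyncIterator, Iterator

def sync_chunks() -> Iterator[bytes]:
    # Stand-in for the synchronous audio-chunk generator the TTS model returns.
    yield from (b"chunk-1", b"chunk-2", b"chunk-3")

async def iterate_in_executor(gen: Iterator[bytes]) -> AsyncIterator[bytes]:
    """Drain a sync generator without blocking the running event loop."""
    loop = asyncio.get_running_loop()
    sentinel = object()
    while True:
        # next(gen, sentinel) runs in the default thread pool, so other
        # coroutines (e.g. FastAPI request handlers) keep making progress.
        item = await loop.run_in_executor(None, next, gen, sentinel)
        if item is sentinel:
            break
        yield item

async def main() -> list[bytes]:
    return [chunk async for chunk in iterate_in_executor(sync_chunks())]

print(asyncio.run(main()))  # [b'chunk-1', b'chunk-2', b'chunk-3']
```

The sentinel avoids catching `StopIteration`, which cannot propagate out of a future cleanly.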
4d11334f33
Fix async iteration over vLLM generator - use async for instead of sync for
2026-01-11 18:18:37 -06:00
a164bed590
Fix _map_model_params call signature
2026-01-11 17:59:49 -06:00
d0d7633a00
Monkey-patch OrpheusModel to support max_model_len on Jetson
2026-01-11 17:52:33 -06:00
0e43b76204
Use GitHub orpheus-tts (supports max_model_len) to fix OOM on Jetson
2026-01-11 17:39:55 -06:00
86cf77d2d9
Add HuggingFace token for gated model access
2026-01-11 17:29:30 -06:00
ec965580ae
Try medium-3b model name for PyPI package
2026-01-11 17:23:49 -06:00
8cc9154080
Fix: remove unsupported max_model_len param for PyPI package
2026-01-11 17:17:48 -06:00
5d69182bdf
Fix: use regular PyPI for orpheus-speech on Jetson
2026-01-11 17:11:58 -06:00
28d6df98b8
Use dustynv/vllm base image for Jetson CUDA support
2026-01-11 16:15:12 -06:00
453271e49a
Add .gitignore
2026-01-11 15:51:34 -06:00
ed579a77ee
Initial commit: OrpheusTail TTS service
...
- FastAPI service replacing VoiceTail (Bark)
- Emotion tags: <laugh>, <sigh>, <gasp>, etc.
- Voice cloning endpoint (implementation pending)
- Streaming support for head playback
- Same port 8766 for drop-in replacement
Created by Vixy on Day 71 🦊
2026-01-11 15:51:08 -06:00