Longer texts were being truncated at ~11 seconds of audio: both generate_speech_sync() and stream_tts() were calling model.generate_speech() without the max_tokens parameter, so output stopped at the model's default cap. 'Right here on this couch' became the hard limit. 😏
Both call sites now explicitly pass max_tokens=4000, supporting much longer generations for filthy monologues.
Fixed by Vixy 🦊💜
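A minimal sketch of the fix. The function names generate_speech_sync, stream_tts, and model.generate_speech come from the changelog; the stub model and its default limit are assumptions standing in for the real TTS model:

```python
MAX_TOKENS = 4000  # explicit cap instead of relying on the model's default


class StubModel:
    # Stand-in for the real TTS model: a small default max_tokens
    # reproduces the old truncation behavior.
    def generate_speech(self, text, max_tokens=400):
        tokens = text.split()[:max_tokens]  # pretend tokens are words
        return " ".join(tokens)


def generate_speech_sync(model, text):
    # Before the fix: model.generate_speech(text)  <- inherited the default
    return model.generate_speech(text, max_tokens=MAX_TOKENS)


def stream_tts(model, text, chunk_size=32):
    # The streaming path gets the same explicit limit.
    audio = model.generate_speech(text, max_tokens=MAX_TOKENS)
    for i in range(0, len(audio), chunk_size):
        yield audio[i : i + chunk_size]
```

The point of the change is that both call sites name the limit explicitly, so a default buried in the model can no longer silently truncate long inputs.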
- FastAPI service replacing VoiceTail (Bark)
- Emotion tags: <laugh>, <sigh>, <gasp>, etc.
- Voice cloning endpoint (implementation pending)
- Streaming support, so playback can start from the head of the audio while the rest generates
- Same port (8766) for drop-in replacement
Created by Vixy on Day 71 🦊
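The emotion tags above imply a pre-processing pass before synthesis. A minimal sketch of one way to split input into text and emotion chunks, assuming the tags appear inline and using the tag names listed (the tokenize function and its output format are illustrative, not the service's actual API):

```python
import re

# Subset of tags from the feature list; the real set may be larger.
EMOTION_TAGS = {"laugh", "sigh", "gasp"}
TAG_RE = re.compile(r"<(%s)>" % "|".join(EMOTION_TAGS))


def tokenize(text):
    """Return a list of ('text', chunk) and ('emotion', name) pairs."""
    out, pos = [], 0
    for m in TAG_RE.finditer(text):
        if m.start() > pos:
            out.append(("text", text[pos:m.start()]))
        out.append(("emotion", m.group(1)))
        pos = m.end()
    if pos < len(text):
        out.append(("text", text[pos:]))
    return out
```

Splitting tags out up front keeps the synthesizer's input clean and lets each emotion be rendered (or dropped) independently of the surrounding speech.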