New feature for LiveKit Cloud: you just need to upgrade your cloud-hosted agents to version 1.5.x or later.
Which STT, TTS, and LLM are being used in this demo?
Is it possible to have it on self-hosted deployments?
See Quota and limits:
Adaptive interruption handling is included at no extra cost for all agents deployed to LiveKit Cloud.
For self-hosted agents, we include 40,000 inference requests per month across all plans, enough to experiment and develop locally without restrictions.
For large-scale self-hosted use cases, please contact us to discuss options and pricing.
—
stt=inference.STT(model="deepgram/nova-3", language="en"),
llm=inference.LLM(model="openai/gpt-4.1-nano"),
tts=inference.TTS(
    model="cartesia/sonic-3", voice="9626c31c-bec5-4cca-baa8-f8ba9e84c8bc"
),
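So the demo uses Deepgram Nova-3 for STT, OpenAI GPT-4.1 nano for the LLM, and Cartesia Sonic-3 for TTS, all routed through LiveKit Inference. For context, here is a sketch of how that fragment plugs into a full agent, assuming livekit-agents with LiveKit Inference enabled; the agent name and instructions string below are illustrative, not from the original post:

```python
# Sketch: wiring the models above into a LiveKit agent (assumed setup,
# not the demo's exact code). Requires the livekit-agents package and
# LiveKit Cloud credentials in the environment.
from livekit import agents
from livekit.agents import Agent, AgentSession, inference


async def entrypoint(ctx: agents.JobContext):
    await ctx.connect()

    session = AgentSession(
        stt=inference.STT(model="deepgram/nova-3", language="en"),
        llm=inference.LLM(model="openai/gpt-4.1-nano"),
        tts=inference.TTS(
            model="cartesia/sonic-3", voice="9626c31c-bec5-4cca-baa8-f8ba9e84c8bc"
        ),
    )

    # Instructions are a placeholder; swap in your own prompt.
    await session.start(
        room=ctx.room,
        agent=Agent(instructions="You are a helpful voice assistant."),
    )


if __name__ == "__main__":
    agents.cli.run_app(agents.WorkerOptions(entrypoint_fnc=entrypoint))
```

Since the models are specified as provider-prefixed strings, swapping any of the three is a one-line change as long as the replacement is available through LiveKit Inference.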