| Topic | Replies | Views | Activity |
|---|---|---|---|
| Response.prompt_cache_retention Input should be ‘in-memory’ or ‘24h’ | 2 | 17 | April 21, 2026 |
| Best STT Alternative to OpenAI whisper-1 for Japanese in LiveKit | 2 | 44 | March 9, 2026 |
| LLM failed and Logs are not reflected in Insight | 1 | 17 | March 6, 2026 |
| Agent speaking audio_text tokens out loud | 4 | 47 | March 6, 2026 |
| EU data residency with Deepgram, OpenAI & ElevenLabs — how to configure regional endpoints? | 3 | 58 | March 5, 2026 |
| Using livekit.agents.llm.RealtimeModel with liteLLM | 2 | 39 | March 2, 2026 |
| Realtime model with Azure whisper STT | 17 | 106 | February 26, 2026 |
| How to retain system instructions in update_chat_ctx? | 11 | 108 | February 25, 2026 |
| Achieving multi-agent awareness and state synchronization with LiveKit Data Channels | 3 | 78 | February 24, 2026 |
| How to handle long-running async tool calls with OpenAI Realtime API | 1 | 57 | January 21, 2026 |
| How to set max tokens for OpenAI Realtime model | 1 | 5 | January 21, 2026 |
| Is UsageSummary reliable for client billing with OpenAI Realtime? | 1 | 9 | January 21, 2026 |