I see this:
Traceback (most recent call last):
File "/Users/marc/...../.venv/lib/python3.12/site-packages/livekit/agents/utils/log.py", line 17, in async_fn_logs
return await fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/marc/....../.venv/lib/python3.12/site-packages/livekit/agents/inference/stt.py", line 559, in recv_task
raise APIError(f"LiveKit Inference STT returned error: {msg.data}")
livekit.agents._exceptions.APIError: LiveKit Inference STT returned error: {"type":"error","session_id":"0fa56f82-54e7-4d1a-813a-a7e47e477b5f","message":"The client did not receive a websocket Ping message in the last 60 seconds","code":2006}
13:25:17.378 WARNI… livekit.agents failed to recognize speech: LiveKit Inference STT returned error:
{"type":"error","session_id":"0fa56f82-54e7-4d1a-813a-a7e47e477b5f","message":"The client did not receive a websocket
Ping message in the last 60 seconds","code":2006}, retrying in 0.1s | extras: stt=livekit.agents.inference.stt.STT,
attempt=0, streamed=True
I believe I didn’t have this before, at least not for livekit-agents < 1.4. I just updated to 1.4.5.
Not blocking, but any ideas? Thanks!
Do you see that all the time, or just once? It looks a little like a network issue, but I will try to reproduce it.
Can you share your agent session setup, i.e. which LLM, STT, and TTS you are using?
Also, can you share all the library versions you are running? Just give the output of this so we can see them all:
pip freeze | grep livekit
That’s when I run
╰─❯ uv run src/main.py console --text
I see what I posted above after 1 min. Then, after 1 min more:
08:11:29.258 WARNI… livekit.agents livekit.agents.inference.stt.STT failed, switching to next STT | extras:
streamed=True
{"streamed": true}
Traceback (most recent call last):
File "/Users/marc/.../.venv/lib/python3.12/site-packages/livekit/agents/stt/stt.py", line 322, in _main_task
return await self._run()
^^^^^^^^^^^^^^^^^
File "/Users/marc/.../.venv/lib/python3.12/site-packages/livekit/agents/inference/stt.py", line 574, in _run
await asyncio.gather(*tasks)
File "/Users/marc/.../.venv/lib/python3.12/site-packages/livekit/agents/utils/log.py", line 17, in async_fn_logs
return await fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/marc/.../.venv/lib/python3.12/site-packages/livekit/agents/inference/stt.py", line 559, in recv_task
raise APIError(f"LiveKit Inference STT returned error: {msg.data}")
livekit.agents._exceptions.APIError: LiveKit Inference STT returned error:
{"type":"error","session_id":"916ad661-e0a3-42fd-8143-69da11e9aa36","message":"The client did not receive a websocket Ping
message in the last 60 seconds","code":2006}
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/marc/.../.venv/lib/python3.12/site-packages/livekit/agents/stt/fallback_adapter.py", line 338, in _run
async for ev in main_stream:
File "/Users/marc/.../.venv/lib/python3.12/site-packages/livekit/agents/stt/stt.py", line 445, in __anext__
raise exc # noqa: B904
^^^^^^^^^
File "/Users/marc/.../.venv/lib/python3.12/site-packages/livekit/agents/stt/stt.py", line 329, in _main_task
raise APIConnectionError(
And the same every 1 min. My config (simplified):
session = AgentSession[UserData](
userdata=userdata,
stt="deepgram/flux-general-en",
    llm="openai/gpt-4.1",
tts="cartesia/sonic-3",
# **get_turn_detection_config(ctx),
max_tool_steps=5,
# user_away_timeout=USER_AWAY_TIMEOUT,
)
Versions:
─❯ uv pip freeze | grep livekit
livekit==1.1.2
livekit-agents==1.4.4
livekit-api==1.0.7
livekit-blingfire==1.1.0
livekit-plugins-assemblyai==1.4.4
livekit-plugins-cartesia==1.4.4
livekit-plugins-deepgram==1.4.4
livekit-plugins-elevenlabs==1.4.4
livekit-plugins-google==1.4.4
livekit-plugins-noise-cancellation==0.2.5
livekit-plugins-openai==1.4.4
livekit-plugins-silero==1.4.4
livekit-plugins-turn-detector==1.4.4
livekit-protocol==1.1.0
Hi Marc, I did some testing with this this morning, and it feels like a bug to me, so I’m checking with the team.
I can reproduce the issue with console --text using the flux model, but not with the nova-3 model, so it’s related to the model, and I believe also to the WebSocket ping timeout.
Other than not using flux during your console tests, there isn’t a workaround as far as I can see.
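If it helps cut the log noise in the meantime, one way to apply that workaround is to pick the STT model string based on whether the agent was started in console mode. This is just a sketch: `pick_stt_model` is a hypothetical helper (not part of livekit-agents), and the model strings are the ones mentioned in this thread.

```python
import sys

# Sketch of a workaround, based on the observation above that nova-3 does not
# trigger the ping-timeout error (code 2006) in console mode, while flux does.
# `pick_stt_model` is a hypothetical helper, not a livekit-agents API.
def pick_stt_model(argv: list[str]) -> str:
    if "console" in argv:
        # "console" is the subcommand from the report:
        #   uv run src/main.py console --text
        return "deepgram/nova-3"
    # Keep flux for real (non-console) sessions.
    return "deepgram/flux-general-en"

if __name__ == "__main__":
    print(pick_stt_model(sys.argv))
```

The result would then be passed as the `stt=` argument to `AgentSession`, as in the config above.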
OK, thanks for your answer. I already have a lot of noise in the logs, so a bit more is fine.
I assume this will fix itself in a new release, i.e. I won’t have to change anything, right?
Ideally, but I can’t guarantee that.
… still following up internally, but I’ll post here when there are updates