Description
When agent observability/recording is enabled, the SDK's end-of-session report upload fails with a 500 Internal Server Error from the LiveKit Cloud endpoint.
Error
aiohttp.client_exceptions.ClientResponseError: 500, message='Internal Server Error',
url='https://<project>.livekit.cloud/observability/recordings/v0'
Full traceback:
  File ".../livekit/agents/job.py", line 275, in _on_session_end
    await _upload_session_report(
  File ".../livekit/agents/telemetry/traces.py", line 528, in _upload_session_report
    resp.raise_for_status()
  File ".../aiohttp/client_reqrep.py", line 636, in raise_for_status
    raise ClientResponseError(
Environment
- livekit-agents: 1.5.1
- livekit-protocol: 1.1.3
- livekit: 1.1.3
- Python: 3.11
- Deployment: Self-hosted agent connecting to LiveKit Cloud
- Agent type: Voice agent (STT + LLM + TTS pipeline)
Steps to reproduce
- Enable agent recording (`enable_recording: true` in the job dispatch, or pass `record=True` to `session.start()`)
- Start a voice agent session
- End the session (participant disconnects)
- Observe the error in agent logs during session cleanup
What we’ve tried
- Upgraded all LiveKit packages from 1.3.x to 1.5.1 — same error
- Confirmed the JWT is generated with `ObservabilityGrants(write=True)`
- Confirmed the request is well-formed (protobuf header + chat history JSON + audio)
- The agent session itself works correctly — the error only occurs during the post-session observability upload
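As an interim mitigation while the server-side 500 persists, the upload could be retried a few times with backoff in case the error is transient. This is a minimal stdlib sketch; `upload` here is a stand-in async callable, not the SDK's internal `_upload_session_report`, and the retry counts/delays are illustrative:

```python
import asyncio
import random


async def retry_upload(upload, attempts: int = 3, base_delay: float = 0.5):
    """Retry an async upload callable on failure with jittered exponential
    backoff. Returns the upload's result, or re-raises the last exception
    once all attempts are exhausted."""
    for attempt in range(attempts):
        try:
            return await upload()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the original error
            # backoff: base_delay, 2*base_delay, 4*base_delay, ... plus jitter
            delay = base_delay * (2 ** attempt) * (1 + random.random() * 0.1)
            await asyncio.sleep(delay)
```

In practice this only helps if the 500 is intermittent; in our logs it appears on every session, which points at a persistent backend issue rather than a transient one.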
Notes
- The 500 is a server-side error from the `/observability/recordings/v0` endpoint; the client successfully constructs and sends the request
- Agent observability is enabled in the LiveKit Cloud project settings (`enable_recording: true` appears in the job dispatch)
- The error is non-fatal but produces noisy logs on every session
- Is there any additional project-level configuration required to enable the observability backend?
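Since the failure is non-fatal, the noisy per-session tracebacks can be suppressed with a stdlib logging filter until the backend issue is fixed. This is a sketch: the matched substrings come from the traceback above, and the `livekit.agents` logger name is an assumption about where the SDK logs the error:

```python
import logging


class ObservabilityUploadFilter(logging.Filter):
    """Drop log records for the failing post-session observability upload.

    Matches on the endpoint path and status code seen in the error above;
    all other records pass through unchanged.
    """

    def filter(self, record: logging.LogRecord) -> bool:
        msg = record.getMessage()
        if "observability/recordings" in msg and "500" in msg:
            return False  # suppress this record
        # also suppress when the endpoint only appears in the attached exception
        if record.exc_info and record.exc_info[1] is not None:
            if "observability/recordings" in str(record.exc_info[1]):
                return False
        return True


# Attach to the logger that emits the upload error (name is an assumption).
logging.getLogger("livekit.agents").addFilter(ObservabilityUploadFilter())
```

This only hides the symptom; the underlying question about required project-level configuration still stands.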
