Assistance Needed:

For context, I previously used Vapi but encountered multiple reliability issues, which is why I am exploring LiveKit as a more stable solution. Some of the problems we faced were:

  1. Sometimes Vapi failed to connect and showed errors like “meeting room closed” or “Assistant Did Not Receive Customer Audio”.

  2. In one case, Vapi ended the session with call.in-progress.error-vapifault-worker-died, and all transcription and audio were lost. No recordings or logs were available afterward, which was very problematic.

  3. If TTS/STT/LLM failed internally, Vapi simply switched to a fallback without sending any webhook or event notification. At minimum, a webhook indicating failure would be very helpful.

  4. Recently their VAD has not been working correctly, causing frequent AI interruptions while the user is still speaking.

Because of these issues, I want to ensure our implementation on LiveKit is robust.

Additionally, I have a couple of implementation questions:

  • When the interview starts, I want to capture the agent’s audio stream for recording.

  • If the user changes their microphone during the interview, I want to update the audio track with the newly selected microphone in LiveKit.

Could you please guide me on the best way to implement these behaviors and also help diagnose the invalid token signal connection error?

Thanks in advance for your help.

Welcome :lk-party:

When the interview starts, I want to capture the agent’s audio stream for recording.

You would export the audio through Egress: Egress overview | LiveKit Documentation
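As a starting point, here is a sketch of starting a room composite recording from your backend with the Node `livekit-server-sdk`. The project URL, credentials, bucket, and file path below are all placeholders, and the exact output options depend on your SDK version and storage provider, so treat this as an outline and check the Egress docs:

```typescript
import { EgressClient, EncodedFileOutput, S3Upload } from 'livekit-server-sdk';

// Placeholder server URL and credentials -- replace with your project's values.
const egressClient = new EgressClient(
  'https://my-project.livekit.cloud',
  'LIVEKIT_API_KEY',
  'LIVEKIT_API_SECRET',
);

// Record the whole room (agent + user audio) to a file when the interview starts.
async function startRecording(roomName: string): Promise<string> {
  const fileOutput = new EncodedFileOutput({
    filepath: `recordings/${roomName}.mp4`, // hypothetical path template
    output: {
      case: 's3',
      value: new S3Upload({
        bucket: 'my-bucket', // placeholder bucket
        region: 'us-east-1',
        accessKey: '...',
        secret: '...',
      }),
    },
  });
  const info = await egressClient.startRoomCompositeEgress(roomName, { file: fileOutput });
  return info.egressId; // keep this so you can stop the egress when the interview ends
}
```

You would call this when the interview session begins, and stop the egress with the returned `egressId` when it ends.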

If the user changes their microphone during the interview, I want to update the audio track with the newly selected microphone in LiveKit.

If you are using the built-in media controls from our Agents UI (which I would strongly recommend), Media controls | LiveKit Documentation, this is handled for you.
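If you are managing devices yourself instead, the relevant client API is `Room.switchActiveDevice`. A minimal sketch, assuming you already have a connected `room` and the new device ID from `navigator.mediaDevices.enumerateDevices()`:

```typescript
import { Room } from 'livekit-client';

// Called when the user picks a different microphone mid-interview.
// `room` is your connected Room instance; `newDeviceId` is the deviceId
// of the newly selected microphone.
async function onMicrophoneChanged(room: Room, newDeviceId: string): Promise<void> {
  // Swaps the underlying capture device on the existing audio publication,
  // so remote participants keep receiving audio without a re-publish.
  await room.switchActiveDevice('audioinput', newDeviceId);
}
```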

also help diagnose the invalid token signal connection error?

I’m not sure what the issue might be, but our docs for token generation are Authentication | LiveKit Documentation and Custom token generation | LiveKit Documentation

I recommend starting with the hosted token server for dev & testing: Sandbox token server | LiveKit Documentation
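On the "invalid token" error: the most common causes are an API key/secret that doesn't match the server you're connecting to, or an expired token. For reference, a minimal token-minting sketch with `livekit-server-sdk` (the key/secret strings are placeholders; note that `toJwt()` is async in v2 of the SDK):

```typescript
import { AccessToken } from 'livekit-server-sdk';

// The key/secret pair must belong to the same LiveKit project as the
// serverUrl you pass to the client -- a mismatch produces invalid-token errors.
async function createToken(identity: string, roomName: string): Promise<string> {
  const at = new AccessToken('LIVEKIT_API_KEY', 'LIVEKIT_API_SECRET', {
    identity,
    ttl: '1h', // tokens expire; mint a fresh one per session
  });
  at.addGrant({
    roomJoin: true,
    room: roomName,
    canPublish: true,
    canSubscribe: true,
  });
  return at.toJwt();
}
```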


Can you check this and tell me what the error is and how to resolve it?


I see this error when I end my session, but the regions API is still being called at intervals afterward.
What is causing this?
How can I stop it after the interview ends?

If you are seeing the calls for 30s, I assume it is this: client-sdk-js/src/room/RegionUrlProvider.ts at main · livekit/client-sdk-js · GitHub, but I’m surprised it’s calling the endpoint in a loop after failing.

Is there a setup that reproduces this consistently?

SDK Code

```tsx
import { LiveKitRoom, RoomAudioRenderer } from "@livekit/components-react";
// ....code

<LiveKitRoom
  serverUrl={liveKitUrl}
  token={token}
  connect={true}
  audio={{ deviceId: selectedDevice }}
  video={false}
  onConnected={handleConnected}
  onDisconnected={handleDisconnected}
  onError={handleError}
  onMediaDeviceFailure={handleMediaDeviceFailure}
>
  {/* This component safely calls LiveKit room hooks and syncs data to the Provider */}
  <LiveKitLogicBridge {...props} />
  <RoomAudioRenderer />
  {children}
</LiveKitRoom>
```

On interview completion, the component above is unmounted.

As a point of comparison, I ran the agent React starter (GitHub - livekit-examples/agent-starter-react: A complete voice AI frontend app for LiveKit Agents with Next.js), which uses its own built-in token server in dev.

That shows the regions endpoint being called a few times, then stopping with the log message ‘stopping region refetch after…’, which comes from the file I linked above. I still think this is somehow your token logic. Try running the starter with its own token server and see whether you observe the same behavior I do: you just have to set the three LIVEKIT_ variables in the .env file and, optionally, your agent name.
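For reference, the three variables the starter expects (typically in `.env.local`, since it is a Next.js app; the values below are placeholders):

```
LIVEKIT_URL=wss://my-project.livekit.cloud
LIVEKIT_API_KEY=APIxxxxxxxx
LIVEKIT_API_SECRET=secretxxxxxxxx
```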