Sure, we’re not doing anything special on the homepage agent.
If you look at the “Agent Configuration” panel to the right of each agent, it lists which models that agent uses. Each agent has different models, and we may swap these out from time to time as new models are introduced. We always use LiveKit inference to communicate with the models.
For example, Hayley’s model configuration currently looks like this:
```python
stt="deepgram/nova-3",
llm="openai/gpt-4.1-mini",
tts="rime/arcana:astra",
```
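Put together, a minimal session along those lines might look like this. This is a sketch, not our exact homepage code; the entrypoint wiring and the placeholder instructions string are assumptions:

```python
from livekit import agents
from livekit.agents import Agent, AgentSession


async def entrypoint(ctx: agents.JobContext):
    # Model strings are resolved through LiveKit inference,
    # so no per-provider API keys are needed in the agent code.
    session = AgentSession(
        stt="deepgram/nova-3",
        llm="openai/gpt-4.1-mini",
        tts="rime/arcana:astra",
    )
    await session.start(
        room=ctx.room,
        # Placeholder instructions; the real homepage prompt differs.
        agent=Agent(instructions="You are a helpful voice assistant."),
    )
```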
Each agent is also initialized with some instructions, but these contain nothing special: for our purposes they mostly provide hints about LiveKit, since many customers use the front-page agent to ask about our product.
The session configuration for Hayley, and most of the other agents, is as follows (these are not necessarily recommended settings, they are just what work for us on the homepage):
```python
min_endpointing_delay=0.2,
max_endpointing_delay=3,
preemptive_generation=True,
false_interruption_timeout=1,
resume_false_interruption=True,
min_interruption_words=0,
```
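These options are keyword arguments on `AgentSession`. A sketch of how they attach (same hedge as before, this is illustrative rather than our exact code):

```python
from livekit.agents import AgentSession

session = AgentSession(
    stt="deepgram/nova-3",
    llm="openai/gpt-4.1-mini",
    tts="rime/arcana:astra",
    min_endpointing_delay=0.2,      # seconds to wait after a likely end of turn
    max_endpointing_delay=3,        # hard cap before the agent replies anyway
    preemptive_generation=True,     # start generating before the turn is final
    false_interruption_timeout=1,   # seconds before treating a pause as a false interruption
    resume_false_interruption=True, # resume speech after a false interruption
    min_interruption_words=0,       # any detected speech can interrupt
)
```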
These settings are documented as part of turn detection and speech generation; in many cases we just use the default values.
The homepage agent tends to use the latest version of LiveKit Agents; at the time of writing it is on 1.4.0, the latest release of the Python framework. This also pulls in the latest version of the turn detector model, `turn_detection=MultilingualModel()`.
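For completeness, the turn detector comes from the turn-detector plugin package and is passed to the session like this (a sketch; the surrounding options are the ones listed earlier in this answer):

```python
from livekit.agents import AgentSession
from livekit.plugins.turn_detector.multilingual import MultilingualModel

session = AgentSession(
    # ...model and endpointing options as listed above...
    turn_detection=MultilingualModel(),
)
```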
The agent is hosted on LiveKit Cloud, in the US, with no special settings.
The web front end used to access the homepage agent is not doing anything special and is analogous to any of our front-end starters, such as livekit-examples/agent-starter-react on GitHub (a complete voice AI frontend for LiveKit Agents built with Next.js), or even the Agents Playground.