How do I load saved chat context for realtime models?
I tried passing the chat context object while creating the agent, but that does not seem to work.
You can load the chat context when initializing the assistant with a realtime model as follows:
```python
from livekit.agents import ChatContext

initial_chat_ctx = ChatContext()
initial_chat_ctx.add_message(role="user", content="My favourite colour is blue.")

# Start the session, which initializes the voice pipeline and warms up the models
await session.start(
    agent=Assistant(chat_ctx=initial_chat_ctx),
    ...
```
Then you can ask the agent what your favourite colour is.
I have done exactly that.
For context, I am using the gemini-2.5-flash-native-audio-preview-12-2025 model.
When I ask the model via text, it can answer questions based on the older context, but when I ask the same thing via audio, it cannot answer at all.
That’s interesting; with that model I also see that the context is not picked up. My code above works with the OpenAI realtime model.
I did look into workarounds, and after a bit of digging, I think the most reliable approach for Gemini is to include any required context in your instructions.
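A minimal sketch of that workaround: fold the saved turns into the instructions string before creating the agent. The `build_instructions` helper and the `history` list are hypothetical names, not part of the LiveKit API; the resulting string would then be passed as `Agent(instructions=...)`.

```python
def build_instructions(base: str, history: list[tuple[str, str]]) -> str:
    """Fold prior chat turns into the system instructions as plain text."""
    if not history:
        return base
    lines = [f"{role}: {text}" for role, text in history]
    return base + "\n\nPrior conversation:\n" + "\n".join(lines)

# Hypothetical saved history, e.g. restored from your own storage
history = [("user", "My favourite colour is blue.")]
instructions = build_instructions("You are a helpful voice assistant.", history)
```

Since the context rides along in the instructions, the model sees it on every turn regardless of whether the provider honours an initial chat context.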
I have done the same, with a fallback to a tool (just in case).
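The tool fallback could look like this sketch: a function that returns the saved history as text when the model asks for it. The `lookup_saved_context` function and the `SAVED_CONTEXT` store are hypothetical; in a real agent the function would be exposed to the model as a tool (for example via LiveKit's `function_tool` decorator), which is left as a comment here to keep the sketch self-contained.

```python
import asyncio

# Hypothetical in-memory store of saved conversation turns
SAVED_CONTEXT = [
    {"role": "user", "content": "My favourite colour is blue."},
]

# In a real agent this would be decorated with livekit.agents.function_tool
# and attached to the Agent so the model can call it on demand.
async def lookup_saved_context() -> str:
    """Return the saved conversation history as plain text for the model."""
    if not SAVED_CONTEXT:
        return "No saved context available."
    return "\n".join(f"{m['role']}: {m['content']}" for m in SAVED_CONTEXT)

print(asyncio.run(lookup_saved_context()))
```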
Were you able to find any GitHub issue related to it?
I don’t see anything related in the repository.