# LiveKit Gemini 3.1 Vertex history_config bug
Hi LiveKit team,
We are testing Gemini 3.1 Flash Live on Vertex AI through `livekit-plugins-google==1.5.4` and hit a connection failure caused by `history_config` being sent on the Vertex path.
## Environment
- `livekit-agents==1.5.4`
- `livekit-plugins-google==1.5.4`
- `google-genai==1.73.1`
- Vertex AI enabled
- Model: `gemini-3.1-flash-live-preview`
## Current behavior
For Gemini 3.1, the plugin marks the chat context as non-mutable and builds the connect config with:
```python
history_config=types.HistoryConfig(initial_history_in_client_content=True)
if not self._realtime_model.capabilities.mutable_chat_context
else None
```
On Vertex AI, `google-genai` rejects that field with:
```text
ValueError: history_config parameter is not supported in Vertex AI.
```
This prevents the session from connecting at all.
## Why this seems wrong
Gemini 3.1 on LiveKit already has documented mid-session limitations: `generate_reply()`, `update_instructions()`, and `update_chat_ctx()` are not supported. Those limitations are expected.
The problem reported here occurs earlier than any of those: the plugin currently sends a config field that Vertex explicitly rejects, so basic voice sessions fail before the model can even start.
## Suggested fix
In `livekit/plugins/google/realtime/realtime_api.py`, avoid sending `history_config` on the Vertex path.
Something along these lines should fix it:
```python
# Vertex AI rejects history_config, so only send it on the Gemini API path.
history_config = None
if not self._opts.vertexai and not self._realtime_model.capabilities.mutable_chat_context:
    history_config = types.HistoryConfig(initial_history_in_client_content=True)
```
Then use that value in `types.LiveConnectConfig(…)`.
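To make the intended decision logic concrete, here is a small self-contained sketch. The dataclasses below are illustrative stand-ins for the plugin's resolved options and for `types.HistoryConfig`, not the real classes:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class HistoryConfig:
    # Stand-in for google-genai's types.HistoryConfig.
    initial_history_in_client_content: bool = True


@dataclass
class Opts:
    # Stand-in for the plugin's resolved connection options.
    vertexai: bool
    mutable_chat_context: bool


def resolve_history_config(opts: Opts) -> Optional[HistoryConfig]:
    """Return the history_config value to pass into LiveConnectConfig, or None."""
    if opts.vertexai:
        # Vertex AI rejects history_config entirely; never send it there.
        return None
    if not opts.mutable_chat_context:
        # Gemini API path with an immutable chat context: send the field.
        return HistoryConfig(initial_history_in_client_content=True)
    return None
```

The key property is that the Vertex check takes precedence: on Vertex the field is always omitted, while the Gemini API path keeps the existing behavior.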
## Expected behavior
- Gemini 3.1 on Vertex should connect successfully.
- The known 3.1 mid-session limitations can still remain enforced.
- The plugin should not send provider-incompatible config fields when `vertexai=True`.
## Notes
As a temporary workaround, we applied a local wrapper that clears `history_config` only for the Vertex + Gemini 3.1 combination, and that is what we are running for now.
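For reference, our workaround is roughly shaped like this. This is a hedged sketch, not the real plugin code: `ConnectConfig` is a minimal stand-in for `types.LiveConnectConfig`, and the model-name prefix check is our own heuristic for scoping the fix to Gemini 3.1:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ConnectConfig:
    # Minimal stand-in for types.LiveConnectConfig; only the field we touch.
    history_config: Optional[object] = None


def strip_history_config_for_vertex(
    config: ConnectConfig, *, vertexai: bool, model: str
) -> ConnectConfig:
    # Clear history_config only for the Vertex + Gemini 3.1 combination,
    # leaving every other provider/model path untouched.
    if vertexai and model.startswith("gemini-3.1"):
        config.history_config = None
    return config
```

We apply this just before the connect config is handed to the session, so the rest of the plugin's behavior is unchanged.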
If helpful, I can also send a minimal reproduction snippet.