How to configure LiveKit on GCP without public IPs for VMs

This question originally came up in our Slack community and the thread has been consolidated here for long-term reference.

I am self-hosting LiveKit on GCP with strict requirements:

  • The VM/GKE nodes cannot have public IPs and must access the internet via a corporate hub network
  • The deployment sits behind an Application Load Balancer (signaling, with TLS terminated at the LB) and a Network Load Balancer (passthrough for the media ports and TURN, with TLS terminated on the LiveKit server itself)
  • I have a voice agent in the same VPC/subnet

The issue is that if I advertise only the NLB’s IP address as node_ip, the voice agent and egress services don’t work: the passthrough NLB does not hairpin traffic from clients inside the VPC back to its backends, so those connections are dropped.

I’ve tried:

  1. Assigning a public IP with a NAT route on the voice-agent VM, but this feels hacky and won’t scale well to GKE and the egress services
  2. Advertising both IPs by setting enable_loopback_candidate: true and manually binding the NLB’s IP on the LiveKit server’s VM

Is it okay to send both a public IP and an internal IP to users? Would the inaccessible internal IP cause issues for external clients?
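For reference, option 2 above would look roughly like the following sketch of the rtc section of the LiveKit config (the 203.0.113.10 address is a made-up example standing in for the NLB’s IP; node_ip and enable_loopback_candidate are the keys mentioned in this thread):

```yaml
rtc:
  node_ip: 203.0.113.10           # NLB IP advertised to external clients (example address)
  enable_loopback_candidate: true # also advertise the NLB IP bound locally on the VM
```

With this, external clients connect via the NLB IP while in-VPC services (voice agent, egress) can reach the same address bound on the server’s loopback, avoiding the hairpin through the NLB.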

For Kubernetes-native scenarios like this, one option is STUNner, which was designed specifically for these use cases.

Here’s a guide for AWS EKS that applies similarly to GCP: Deploying WebRTC Applications in AWS EKS with LiveKit and STUNner

Note for GCP: If you need both UDP and TLS ports for the TURN service, you’ll need two separate gateways since GCP cannot provision a multi-protocol load balancer.
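A minimal sketch of the two-gateway setup with the Kubernetes Gateway API, assuming STUNner’s default gateway class name and its TURN-UDP/TURN-TLS listener protocols (names, ports, and the certificate Secret are illustrative, not prescriptive):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: udp-gateway
spec:
  gatewayClassName: stunner-gatewayclass   # assumed STUNner gateway class name
  listeners:
    - name: turn-udp
      port: 3478
      protocol: TURN-UDP                   # plain UDP TURN listener
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: tls-gateway
spec:
  gatewayClassName: stunner-gatewayclass
  listeners:
    - name: turn-tls
      port: 443
      protocol: TURN-TLS                   # TLS TURN listener on its own gateway
      tls:
        mode: Terminate
        certificateRefs:
          - name: turn-tls-cert            # hypothetical Secret holding the TURN cert
```

On GCP each Gateway is reconciled into its own load balancer, which is what works around the single-protocol limitation noted above.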

You can also add these lines to your LiveKit config:

rtc:
  use_external_ip: false  # disable STUN-based public IP discovery
  use_ice_lite: true      # ICE Lite mode: clients reach the server via STUNner's TURN relay

This setup works well for environments where nodes don’t have direct public IP access.