How to advertise both private and public IPs as ICE candidates in Kubernetes

This question originally came up in our Slack community and the thread has been consolidated here for long-term reference.

How do I make the LiveKit server advertise both private IP and public IP as ICE candidates?

Current setup:

  • LiveKit server runs in Kubernetes cluster on Azure
  • Public load balancer exposes the server with UDP mux ports for media and HTTP for signaling (see the sketch after this list)
  • TURN is disabled

The configuration works, but internal clients also use the public IP of the load balancer, adding unnecessary network costs and potential latency.

How can I make the server advertise both internal and public IPs so internal clients choose the internal IP?

One approach that partially works:

Set rtc.use_external_ip to false and leave node_ip unset. With this configuration, the server advertises:

  • Private IP as host candidate
  • Public IP (via load balancer) as srflx candidate

Internal clients will connect using the host candidate, and external clients will use srflx.
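
In config.yaml terms, the approach is roughly the fragment below. This is a sketch: rtc.udp_port: 7882 is LiveKit's default UDP mux port, and the placement of node_ip shown here is an assumption about the config layout.

```yaml
# config.yaml fragment — only the keys relevant to this approach
rtc:
  udp_port: 7882            # single UDP mux port for media
  use_external_ip: false    # keep the private pod IP as the host candidate
  # node_ip: (leave unset)  # when unset, the public IP behind the load
  #                         # balancer is still offered as a srflx candidate
```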

Caveats:

  • This may not hold up reliably under production load due to potential NAT pinhole / SNAT port exhaustion issues
  • Once use_external_ip is set to true, LiveKit does not advertise the private host IP

Alternative approaches:

  • Use split DNS with an Azure Private DNS Zone
  • Set up two load balancers (internal and external) routing to the same cluster:
      • Internal A record uses private DNS → internal LB
      • External A record uses public DNS → public LB

This ensures internal clients always route to the internal LB, and during ICE negotiation they’ll choose the reachable internal IP.
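
On AKS, the internal half of the two-load-balancer variant could look roughly like this. Again a sketch: the Service name, labels, and ports are assumptions; the one load-bearing detail is the azure-load-balancer-internal annotation, which is the documented way to request a private Azure load balancer.

```yaml
# Sketch of the internal LoadBalancer Service on AKS.
# The annotation requests a private Azure load balancer; the Azure Private
# DNS Zone A record would point at the private IP this Service receives.
apiVersion: v1
kind: Service
metadata:
  name: livekit-internal       # assumed name
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: livekit-server        # assumed pod label
  ports:
    - name: signaling
      protocol: TCP
      port: 7880
      targetPort: 7880
    - name: rtc-udp-mux
      protocol: UDP
      port: 7882
      targetPort: 7882
# The external Service is identical minus the annotation (and with a different
# name); its public IP backs the public DNS A record.
```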

A follow-up question from the same thread: I'm having a similar issue, but on AWS. LiveKit runs as a pod in an EKS cluster, and the cluster itself is in a private subnet. The call's SIP signaling connects through an NLB and a Kamailio proxy, but two-way media communication is not working.