LiveKit K8s deployment + HAProxy L4 passthrough for all TCP and UDP traffic

I am working on a POC to migrate a VM-based LiveKit deployment to a K8s cluster.

The architecture I am using:

LiveKit behind an L4 load balancer.

HAProxy passes:

Client/Device --> TCP 443 (WSS/signaling) --SNI routing via NodePort--> ingress controller --> K8s services

Client/Device --> HAProxy public IP (UDP 3478) --> LiveKit TURN (HAProxy forwards directly to the LiveKit node on 3478 via hostNetwork)

Client/Device --> HAProxy public IP (UDP media ports 40k-50k) --> LiveKit node on 40k-50k (ICE candidates based on info provided by LiveKit)

The same flow applies in reverse when reconnecting from the backend to the client device.

Current Setup


In this design, an HAProxy load balancer sits in front of the LiveKit nodes, so WSS signaling and TURN traffic go through HAProxy instead of directly to the nodes. I have completed all of this setup.

I deployed the new LiveKit setup in Kubernetes, where LiveKit works with my existing chat application (an MMS service) running in the same cluster in a different namespace, NS1-Chat. MMS is an abstraction layer for managing tokens and creating rooms on the LiveKit node.

From an architecture point of view:

Client (external) --> HAProxy load balancer (WSS/signaling SNI media.example.com + WebRTC SNI media-ntunr.external.com) on 443 TCP --> Istio ingress gateway + VirtualService. I created two services in istio-system; one of them, X4-ingressgateway, receives all the 443 traffic.

TURN connections/requests on 3478 UDP --> HAProxy public IP --> reverse proxy (preserving the original client IP) --> port 3478 directly on the LiveKit node, which is configured with hostNetwork.

From the backend, the LiveKit node sends ICE candidate info (HAProxy public IP : dynamic port) back to the client to establish the connection and start the media flow.

Media connections in the 40000-50000 UDP range --> HAProxy public IP --> reverse proxy (preserving the original client IP) --> the same port directly on the LiveKit node (hostNetwork).

There is one route for livekit-poc.external.com (WSS requests) and another for livekit-nturn-poc.external.com (TURN requests on 3478/UDP plus the 40k-50k media range), with externalTrafficPolicy: Local or externalTrafficPolicy: Cluster. The LiveKit cluster is currently running and integrated with the chat service.

I am not NATing a public IP directly to the LiveKit nodes; instead I reuse the HAProxy public IP and configure it in the rtc section of the LiveKit config.yaml (LiveKit v1.9.4): use_external_ip: true and external_ip: "Public_IP_Of_Haproxy".
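For reference, a minimal sketch of what the rtc section of config.yaml ends up looking like in this setup (the IP below is the placeholder HAProxy public IP used elsewhere in this post; note that later in the post the config switches to node_ip, since external_ip was not accepted under rtc in v1.9.4):

```yaml
# Sketch: advertising the HAProxy public IP to clients (placeholder IP).
# The config later in this post uses node_ip rather than external_ip for v1.9.4.
rtc:
  tcp_port: 7881
  port_range_start: 40000
  port_range_end: 50000
  use_external_ip: false
  node_ip: 94.201.30.208  # HAProxy public IP (placeholder)
```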

NS1-Chat:

MMS pods run as a Deployment.

The MMS service connects to LiveKit via http://<service>.<namespace>.svc.<cluster-domain>:7880 (livekit.livekit.svc.cluster.local:7880).
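A minimal sketch of how MMS could point at LiveKit via that in-cluster DNS name (the env var names and secret key names here are hypothetical, not taken from the actual MMS deployment):

```yaml
# Hypothetical MMS Deployment env fragment: the LiveKit URL uses the
# in-cluster service DNS name, so token/room API traffic stays inside the cluster.
env:
- name: LIVEKIT_URL
  value: "http://livekit.livekit.svc.cluster.local:7880"
- name: LIVEKIT_API_KEY
  valueFrom:
    secretKeyRef:
      name: livekit-secrets  # assumed secret name
      key: api-key           # assumed key name
```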

NS2-livekit:

LiveKit pods run as a DaemonSet; Redis master as a StatefulSet; Redis replicas as a Deployment; Redis Sentinel as a StatefulSet (currently deployed and running).

K8s deployment manifests + HAProxy Enterprise (HAPEE 3.0r1):

livekit-daemonset.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: {{ include "livekit.fullname" . }}-daemonset
  namespace: {{ .Values.global.namespace }}
  labels:
    {{- include "livekit.labels" . | nindent 4 }}
    app: livekit
    component: server
spec:
  selector:
    matchLabels:
      app: livekit
      component: server
  updateStrategy:
    type: {{ .Values.daemonset.updateStrategy.type }}
    {{- if eq .Values.daemonset.updateStrategy.type "RollingUpdate" }}
    rollingUpdate:
      maxUnavailable: {{ .Values.daemonset.updateStrategy.rollingUpdate.maxUnavailable }}
    {{- end }}
  template:
    metadata:
      labels:
        app: livekit
        component: server
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "{{ .Values.livekit.prometheusPort }}"
        prometheus.io/path: "/metrics"
        inject.istio.io/templates: "sidecar"
        sidecar.istio.io/inject: "false"
    spec:
      nodeSelector:
        {{- toYaml .Values.daemonset.nodeSelector | nindent 8 }}
        kubernetes.io/arch: "amd64"

  tolerations:
    {{- toYaml .Values.daemonset.tolerations | nindent 8 }}

  {{- include "livekit.dnsConfig" . | nindent 6 }}
  hostNetwork: {{ .Values.daemonset.hostNetwork }}
  hostPID: {{ .Values.daemonset.hostPID }}
  hostIPC: {{ .Values.daemonset.hostIPC }}

  {{- with .Values.hostAliases }}
  hostAliases:
  {{- toYaml . | nindent 8 }}
  {{- end }}
  
  initContainers:
  {{- include "livekit.waitForRedis" . | nindent 6 }}
  
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - livekit
          topologyKey: "kubernetes.io/hostname"

  containers:
  - name: livekit
    image: "{{ .Values.daemonset.image.repository }}:{{ .Values.daemonset.image.tag }}"
    imagePullPolicy: {{ .Values.daemonset.image.pullPolicy }}
    command:
      - /bin/sh
      - -c
    args:
      - |
        exec /livekit-server \
           --config /etc/livekit/config.yaml \
           {{- if .Values.storeKeysInSecret.enabled }}
           --key-file {{ .Values.livekit.key_file | default "/etc/livekit/keys.yaml" }} \
           {{- end }}
    ports:
      - name: http
        containerPort: 7880
        protocol: TCP
      - name: turn-tls
        containerPort: 5349
        protocol: TCP
      - name: turn-udp
        containerPort: 3478
        protocol: UDP
      - name: turn-tcp
        containerPort: 3478
        protocol: TCP
      # Add RTP port range - hostPort is required for DaemonSet with hostNetwork
      {{- if .Values.daemonset.hostNetwork }}
      - name: rtp-start
        containerPort: 40000
        hostPort: 40000
        protocol: UDP
      - name: rtp-end
        containerPort: 50000
        hostPort: 50000
        protocol: UDP
      {{- else }}
      - name: rtp-start
        containerPort: 40000
        protocol: UDP
      - name: rtp-end
        containerPort: 50000
        protocol: UDP
      {{- end }}

    env:
    # Kubernetes metadata
    - name: K8S_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: LIVEKIT_NODE_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP

    # CRITICAL: Disable external IP detection when behind load balancer
    - name: LIVEKIT_RTC_USE_EXTERNAL_IP
      value: "false"
    # CRITICAL: Disable auto IP detection
    - name: LIVEKIT_AUTO_DETECT_IP
      value: "false"
    # HTTP/API port
    - name: LIVEKIT_PORT
      value: "7880"
    # TURN configuration
    - name: LIVEKIT_TURN_ENABLED
      value: "true"
    - name: LIVEKIT_TURN_DOMAIN
      value: "livekit-nturn-quartz.service421.com"
    - name: LIVEKIT_TURN_EXTERNAL_TLS
      value: "false"
    - name: LIVEKIT_TURN_PORT
      value: "3478"
    - name: LIVEKIT_TURN_RELAY_PORT_RANGE_START
      value: "40000"
    - name: LIVEKIT_TURN_RELAY_PORT_RANGE_END
      value: "50000"

    # RTC configuration
    - name: LIVEKIT_RTC_TCP_PORT
      value: "7881"  # Internal WSS port
    - name: LIVEKIT_RTC_UDP_PORT
      value: "7882"  # Internal UDP port
    - name: LIVEKIT_RTC_PORT_RANGE_START
      value: "40000"
    - name: LIVEKIT_RTC_PORT_RANGE_END
      value: "50000"

    # Bind addresses
    - name: LIVEKIT_BIND_ADDRESSES
      value: "0.0.0.0"

    # Redis config
    - name: LIVEKIT_REDIS_ADDRESS
      value: "redis-master.livekit.svc.cluster.local:6379"

    # TURN certificate paths
    - name: LIVEKIT_PROMETHEUS_PORT
      value: "6789"
    - name: LIVEKIT_PROMETHEUS_ENABLED
      value: "true"
    # Region
    - name: LIVEKIT_REGION
      value: "onprem"
    - name: LIVEKIT_NODE_STATS_UPDATE_INTERVAL
      value: "10"  # Default is 5, try increasing to 10
    - name: LIVEKIT_NODE_STATS_HISTORY_INTERVAL
      value: "2"   # Keep recent history

    volumeMounts:
    - name: config
      mountPath: /etc/livekit/config.yaml
      subPath: config.yaml
      readOnly: true
    - name: dev-net-tun
      mountPath: "/dev/net/tun"
    - name: livekit-tls
      mountPath: /etc/lkcert
      readOnly: true
    {{- if .Values.storeKeysInSecret.enabled }}
    - name: keys-volume
      mountPath: {{ .Values.livekit.key_file | default "/etc/livekit/keys.yaml" }}
      subPath: keys.yaml
      readOnly: true
    {{- end }}
    resources:
      {{- toYaml .Values.daemonset.resources | nindent 10 }}
    securityContext:
      {{- toYaml .Values.daemonset.securityContext | nindent 10 }}
    
  volumes:
  - name: config
    configMap:
      name: {{ include "livekit.fullname" . }}-config
  - name: dev-net-tun
    hostPath:
      path: /dev/net/tun
  - name: livekit-tls
    secret:
      secretName: service421-livekit-tls
      items:
        - key: tls.crt
          path: tls.crt
        - key: tls.key
          path: tls.key
  {{- if .Values.storeKeysInSecret.enabled }}
  - name: keys-volume
    secret:
      secretName: {{ .Values.storeKeysInSecret.existingSecret | default ("livekit-secrets") }}
      items:
      - key: keys.yaml
        path: keys.yaml
      defaultMode: 0600
  {{- end }}

livekit-configmap.yaml


apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "livekit.fullname" . }}-config
  namespace: {{ .Values.global.namespace }}
  labels:
    {{- include "livekit.labels" . | nindent 4 }}
data:
  config.yaml: |
    port: {{ .Values.livekit.port }}
    bind_addresses:
      - "{{ .Values.livekit.bindAddress }}"

    # RTC configuration
    rtc:
      tcp_port: {{ .Values.livekit.rtc.tcpPort }}
      port_range_start: {{ .Values.livekit.rtc.portRangeStart }}
      port_range_end: {{ .Values.livekit.rtc.portRangeEnd }}
      use_external_ip: {{ .Values.livekit.rtc.useExternalIp }}
      enable_loopback_candidate: {{ .Values.livekit.rtc.enableLoopbackCandidate }}
      packet_buffer_size_video: {{ .Values.livekit.rtc.packetBufferSizeVideo }}
      packet_buffer_size_audio: {{ .Values.livekit.rtc.packetBufferSizeAudio }}
      #node_ip: ${LIVEKIT_NODE_IP}
      node_ip: 94.201.30.208
      # external_ip is not a supported parameter in v1.9.4
      #external_ip: "94.201.30.208"

    # Redis Sentinel configuration - updated DNS
    redis:
      use_tls: {{ .Values.livekit.redis.useTls }}
      sentinel_master_name: {{ .Values.livekit.redis.sentinelMasterName | quote }}
      sentinel_addresses:
        - redis-sentinel.livekit.svc.cluster.local:26379
      {{- if .Values.livekit.redis.password }}
      password: {{ .Values.livekit.redis.password | quote }}
      {{- end }}

    # TURN server configuration
    turn:
      enabled: {{ .Values.livekit.turn.enabled }}
      domain: {{ .Values.livekit.turn.domain | quote }}
      udp_port: {{ .Values.livekit.turn.udpPort }}
      external_tls: {{ .Values.livekit.turn.externalTls }}
      relay_range_start: {{ .Values.livekit.rtc.portRangeStart }}
      relay_range_end: {{ .Values.livekit.rtc.portRangeEnd }}

    # Logging
    logging:
      level: {{ .Values.livekit.logging.level | quote }}

    room:
      enabled_codecs:
        - mime: audio/opus
        - mime: video/vp8
      empty_timeout: 300
      departure_timeout: 60
      max_participants: 50
      playout_delay:
        enabled: true
        min: 100
        max: 500

    # Signal relay
    signal_relay:
      retry_timeout: {{ .Values.livekit.signalRelay.retryTimeout | quote }}
      min_retry_interval: {{ .Values.livekit.signalRelay.minRetryInterval | quote }}
      max_retry_interval: {{ .Values.livekit.signalRelay.maxRetryInterval | quote }}
      stream_buffer_size: {{ .Values.livekit.signalRelay.streamBufferSize }}

    # PSRPC configuration
    psrpc:
      max_attempts: {{ .Values.livekit.psrpc.maxAttempts }}
      timeout: {{ .Values.livekit.psrpc.timeout | quote }}
      backoff: {{ .Values.livekit.psrpc.backoff | quote }}
      buffer_size: {{ .Values.livekit.psrpc.bufferSize }}

    # Audio configuration
    audio:
      active_level: {{ .Values.livekit.audio.activeLevel }}
      min_percentile: {{ .Values.livekit.audio.minPercentile }}
      update_interval: {{ .Values.livekit.audio.updateInterval }}
      smooth_intervals: {{ .Values.livekit.audio.smoothIntervals }}

    # Monitoring
    #prometheus_port: {{ .Values.livekit.prometheusPort }}
    # Newly added config
    prometheus:
      port: 6789

    # Region configuration
    region: {{ .Values.livekit.region | quote }}

    # node_selector helps LiveKit decide where to create rooms
    node_selector:
      kind: system_load

livekit-redis-master.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ .Values.redisMaster.name }}
  namespace: {{ .Values.global.namespace }}
  labels:
    app: redis
    role: master
spec:
  serviceName: {{ .Values.redisMaster.name }}
  replicas: {{ .Values.redisMaster.replicas }}
  selector:
    matchLabels:
      app: redis
      role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      nodeSelector:
        {{- toYaml .Values.redisMaster.nodeSelector | nindent 10 }}
      # Allow scheduling on tainted nodes
      tolerations:
        {{- toYaml .Values.redisMaster.tolerations | nindent 8 }}
      {{- include "livekit.dnsConfig" . | nindent 6 }}
      initContainers:
      - name: fixing-permission
        image: "{{ .Values.shared_image.repository }}:{{ .Values.shared_image.tag }}"
        imagePullPolicy: "{{ .Values.shared_image.pullPolicy }}"
        command:
        - sh
        - -c
        - |
          chown -R 999:1000 /data
          chmod -R 755 /data
        volumeMounts:
        - name: redis-data
          mountPath: /data
        securityContext:
          runAsUser: 0
      containers:
      - name: {{ .Values.redisMaster.name }}
        image: "{{ .Values.redisMaster.image.repository }}:{{ .Values.redisMaster.image.tag }}"
        imagePullPolicy: {{ .Values.redisMaster.image.pullPolicy }}
        command:
        - redis-server
        - /etc/redis/redis.conf
        ports:
        - containerPort: {{ .Values.redisMaster.service.port }}
          name: redis
        env:
        - name: REDIS_PASSWORD
          valueFrom:
            secretKeyRef:
              name: redis-password
              key: redis-password
        volumeMounts:
        - name: redis-config
          mountPath: /etc/redis
        - name: redis-data
          mountPath: /data
        resources:
          {{- toYaml .Values.redisMaster.resources | nindent 10 }}
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - redis-cli -a $REDIS_PASSWORD ping | grep PONG
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - redis-cli -a $REDIS_PASSWORD ping | grep PONG
          initialDelaySeconds: 5
          periodSeconds: 5
      volumes:
      - name: redis-config
        configMap:
          name: {{ .Values.redisMaster.cm_name }}
  volumeClaimTemplates:
  - metadata:
      name: redis-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: {{ .Values.redisMaster.storage.size }}
      {{- if .Values.redisMaster.storage.storageClass }}
      storageClassName: {{ .Values.redisMaster.storage.storageClass }}
      {{- end }}


---
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.redisMaster.name }}
  namespace: {{ .Values.global.namespace }}
  labels:
    app: redis
    role: master
spec:
  type: ClusterIP
  clusterIP: None
  ports:
  - port: {{ .Values.redisMaster.service.port }}
    targetPort: {{ .Values.redisMaster.service.port }}
    name: redis
  selector:
    app: redis
    role: master

livekit-redis-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.redisMaster.cm_name }}
  namespace: {{ .Values.global.namespace }}
  labels:
    {{- include "livekit.labels" . | nindent 4 }}
data:
  redis.conf: |
    protected-mode no
    port {{ .Values.redisMaster.service.port }}
    tcp-backlog 511
    timeout 0
    tcp-keepalive 60
    daemonize no
    logfile ""
    loglevel notice

    databases 16

    dir "/data"

    # Persistence (recommended even for LiveKit)
    appendonly yes
    appendfsync everysec
    auto-aof-rewrite-percentage 100
    auto-aof-rewrite-min-size 128mb
    aof-use-rdb-preamble yes

    save 900 1
    save 300 10
    save 60 10000

    # Replication
    replica-serve-stale-data yes
    replica-read-only yes
    repl-diskless-sync yes
    repl-diskless-sync-delay 5
    repl-timeout 60
    repl-backlog-size 256mb
    replica-priority 100

    # Memory - critical for LiveKit
    maxmemory {{ .Values.redisMaster.maxmemory }}
    maxmemory-policy noeviction

    # Pub/Sub tuning
    client-output-buffer-limit normal 0 0 0
    client-output-buffer-limit replica 256mb 64mb 60
    client-output-buffer-limit pubsub 64mb 16mb 60

    hz 20
    dynamic-hz yes
    activerehashing yes

    latency-monitor-threshold 100

    slowlog-log-slower-than 10000
    slowlog-max-len 512

    # Security
    requirepass {{ .Values.livekit.redis.password }}
    masterauth {{ .Values.livekit.redis.password }}

    user default on >{{ .Values.livekit.redis.password }} ~* +@all &*

livekit-sentinel.yaml (templates/redis-sentinel-statefulset.yaml)

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ .Values.redisSentinel.name }}
  namespace: {{ .Values.global.namespace }}
  labels:
    {{- include "livekit.labels" . | nindent 4 }}
    app: redis
    role: sentinel
spec:
  serviceName: {{ .Values.redisSentinel.name }}
  replicas: {{ .Values.redisSentinel.replicas }}
  selector:
    matchLabels:
      app: redis
      role: sentinel
  template:
    metadata:
      labels:
        app: redis
        role: sentinel
    spec:
      nodeSelector:
        {{- toYaml .Values.redisMaster.nodeSelector | nindent 8 }}
      tolerations:
        {{- toYaml .Values.redisMaster.tolerations | nindent 8 }}
      {{- include "livekit.dnsConfig" . | nindent 6 }}
      initContainers:
      - name: verify-redis-master
        image: "{{ .Values.shared_image.repository }}:{{ .Values.shared_image.tag }}"
        imagePullPolicy: "{{ .Values.shared_image.pullPolicy }}"
        command:
        - sh
        - -c
        - |
          cp /etc/redis/sentinel.conf /tmp/redis-config/
          chmod 644 /tmp/redis-config/sentinel.conf
          addgroup -S redis && adduser -S redis -G redis
          chown -R redis:redis /tmp/redis-config
          chmod -R 0777 /tmp/redis-config
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: sentinel-config
          mountPath: /etc/redis
        - name: writable-config
          mountPath: /tmp/redis-config
        - name: data
          mountPath: /var/run/redis
        - name: shared-data
          mountPath: /data
      containers:
      - name: {{ .Values.redisSentinel.name }}
        image: "{{ .Values.redisSentinel.image.repository }}:{{ .Values.redisSentinel.image.tag }}"
        imagePullPolicy: {{ .Values.redisSentinel.image.pullPolicy }}
        command:
        - redis-sentinel
        - /tmp/redis-config/sentinel.conf
        ports:
        - containerPort: {{ .Values.redisSentinel.service.port }}
          name: sentinel
        env:
        - name: REDIS_PIDFILE
          value: "/var/run/redis/redis-sentinel.pid"
        - name: REDIS_PASSWORD
          valueFrom:
            secretKeyRef:
              name: redis-password
              key: redis-password
        volumeMounts:
        - name: writable-config
          mountPath: /tmp/redis-config
          readOnly: false
        - name: sentinel-data
          mountPath: /data
          readOnly: false
        - name: data
          mountPath: /var/run/redis
          readOnly: false
        resources:
          {{- toYaml .Values.redisSentinel.resources | nindent 10 }}
        # Simplified probes
        livenessProbe:
          exec:
            command:
            - redis-cli
            - -p
            - "{{ .Values.redisSentinel.service.port }}"
            - ping
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          exec:
            command:
            - redis-cli
            - -p
            - "{{ .Values.redisSentinel.service.port }}"
            - ping
          initialDelaySeconds: 5
          periodSeconds: 5
      volumes:
      - name: sentinel-config
        configMap:
          name: {{ .Values.redisSentinel.cm_name }}
      - name: writable-config
        emptyDir: {}
      - name: data
        emptyDir:
          medium: "Memory"
      - name: shared-data
        emptyDir: {}
  volumeClaimTemplates:
  - metadata:
      name: sentinel-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: {{ .Values.redisSentinel.storage.size }}
      {{- if .Values.redisSentinel.storage.storageClass }}
      storageClassName: {{ .Values.redisSentinel.storage.storageClass }}
      {{- end }}

---
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.redisSentinel.name }}
  namespace: {{ .Values.global.namespace }}
  labels:
    app: redis
    role: sentinel
spec:
  type: ClusterIP
  clusterIP: None
  ports:
  - port: {{ .Values.redisSentinel.service.port }}
    targetPort: {{ .Values.redisSentinel.service.port }}
    name: sentinel
  selector:
    app: redis
    role: sentinel

livekit-sentinel-configmap.yaml (templates/redis-sentinel-configmap.yaml)

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.redisSentinel.cm_name }}
  namespace: {{ .Values.global.namespace }}
  labels:
    {{- include "livekit.labels" . | nindent 4 }}
data:
  sentinel.conf: |
    protected-mode no
    port {{ .Values.redisSentinel.service.port }}
    daemonize no
    pidfile "/var/run/redis_{{ .Values.redisSentinel.service.port }}.pid"
    loglevel debug
    logfile ""
    dir "/data"

    # Master configuration - consistent master name usage
    sentinel monitor {{ .Values.livekit.redis.sentinelMasterName }} {{ include "redis.master.smartHost" . }} {{ .Values.redisMaster.service.port }} {{ .Values.redisSentinel.sentinelNode.quorum }}

    # Timing configuration - consistent master name
    sentinel down-after-milliseconds {{ .Values.livekit.redis.sentinelMasterName }} {{ .Values.redisSentinel.downAfterMilliseconds }}
    sentinel failover-timeout {{ .Values.livekit.redis.sentinelMasterName }} {{ .Values.redisSentinel.failoverTimeout }}
    sentinel parallel-syncs {{ .Values.livekit.redis.sentinelMasterName }} {{ .Values.redisSentinel.parallelSyncs }}

    # Hostname resolution
    sentinel resolve-hostnames yes
    sentinel announce-hostnames yes

    # Security
    bind 0.0.0.0

    # Authentication
    {{- if .Values.livekit.redis.password }}
    sentinel auth-pass {{ .Values.livekit.redis.sentinelMasterName }} {{ .Values.livekit.redis.password }}
    {{- end }}

livekit-values.yaml


# Global configuration
global:
  namespace: livekit
  clusterDomain: "cluster.local"

fullnameOverride: livekit

serviceMonitor:
  create: false
  name: "livekit-monitoring"
  annotations: {}
  automount: false

serviceAccount:
  create: false
  name: "livekit"
  annotations: {}
  automount: false

service:
  type: NodePort # For internal mms-service communication
  externalTrafficPolicy: Local
  ports:
    http:
      port: 7880
    https:
      port: 7881
    tcpTurnTls:
      enabled: true
      port: 5349
      nodePort: 30550 # For HAProxy TURN TLS traffic
    udpTurn:
      enabled: true
      port: 3478
      nodePort: 30478 # For HAProxy TURN UDP traffic
    metrics:
      enabled: true
      port: 6789
      # No nodePort for metrics - internal only

hostAliases:

# LiveKit configuration
livekit:
  # Prometheus port to expose metrics
  prometheusPort: 6789

  # Secret name
  secret_name: livekit-int-secret

  # Cluster domain name, instead of the default cluster.local
  cluster_domain: "api.test.local"

  # Server configuration
  port: 7880
  bindAddress: "0.0.0.0"

  # RTC configuration
  rtc:
    port: 7880
    tcpPort: 7881
    udpPort: 7882
    portRangeStart: 40000
    portRangeEnd: 50000
    external_port_range_start: 40000
    external_port_range_end: 50000
    useExternalIp: false
    enableLoopbackCandidate: false
    packetBufferSizeVideo: 200
    packetBufferSizeAudio: 200
    #external_ips: "84.204.34.245"
    certPath: "/etc/lkcert/tls.crt"
    keyPath: "/etc/lkcert/tls.key"

  # Redis configuration
  redis:
    useTls: false
    sentinelMasterName: "mymaster"
    password: "test123!"
    poolSize: 50

  # TURN configuration
  turn:
    enabled: true
    domain: "livekit-nturn-webapp.external.com"
    tlsPort: 5349
    udpPort: 3478
    # Important: with externalTls false, the client connects to the backend LiveKit
    # via HAProxy public-IP:turnTlsNodePort (30550); otherwise it connects to internal-ip:5349.
    externalTls: false
    # Important: with externalUdp true, the client connects to the backend LiveKit
    # via HAProxy public-IP:30478; otherwise it connects to internal-ip:3478.
    #externalUdp: true
    externalIps: ["84.204.34.245"]
    certPath: "/etc/lkcert/tls.crt"
    keyPath: "/etc/lkcert/tls.key"

  # Logging configuration
  logging:
    level: "debug"
    pionLevel: "error"

  nodeSelector:
    sortBy: "rooms"
    enabled: true

  # Room configuration
  room:
    emptyTimeout: 700
    departureTimeout: 70
    playoutDelay:
      enabled: true
      min: 100
      max: 500

  # Signal relay configuration
  signalRelay:
    retryTimeout: "60s"
    minRetryInterval: "500ms"
    maxRetryInterval: "15s"
    streamBufferSize: 400

  # PSRPC configuration
  psrpc:
    maxAttempts: 3
    timeout: "2s"
    backoff: "2s"
    bufferSize: 400

  # Audio configuration
  audio:
    activeLevel: 30
    minPercentile: 40
    updateInterval: 500
    smoothIntervals: 4

  # Monitoring
  prometheusPort: 6789

  # Region configuration
  region: "onprem"

  key_file: "/etc/livekit/keys.yaml"

# Use the following network and DNS policy
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet

# Store keys in Secret configuration
storeKeysInSecret:
  enabled: true
  existingSecret: "livekit-secrets"

# LiveKit DaemonSet configuration
daemonset:
  image:
    repository: "registry.test.local/livekit/livekit"
    tag: "v1.9.4"
    pullPolicy: "IfNotPresent"

  updateStrategy:
    type: "RollingUpdate"
    rollingUpdate:
      maxUnavailable: 1

  hostNetwork: true
  hostPID: false
  hostIPC: false

  nodeSelector:
    node-role.livekit.io/livekit: "true" # Updated to match the node label

  tolerations:
  - key: "livekit"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"

  resources:
    requests:
      memory: "2Gi"
      cpu: "1000m"
    limits:
      memory: "4Gi"
      cpu: "2000m"

  securityContext:
    allowPrivilegeEscalation: false
    privileged: false
    readOnlyRootFilesystem: false # Set to false for Redis
    runAsNonRoot: false # Allow root for Redis
    runAsUser: 0 # Run as root
    capabilities:
      drop:
      - ALL
      add: # Capabilities needed by Redis
      - CHOWN
      - SETGID
      - SETUID
      - DAC_OVERRIDE

  livenessProbe:
    tcpSocket:
      port: 7880
    initialDelaySeconds: 30
    periodSeconds: 10
    timeoutSeconds: 5

  readinessProbe:
    tcpSocket:
      port: 7880
    initialDelaySeconds: 10
    periodSeconds: 5
    timeoutSeconds: 3

  # LiveKit frontend & backend ports
  ports:
    http:
      port: 7880
      nodePort: 30080 # NodePort from the Nginx ingress controller for WSS signaling connections
    tcpTurn:
      port: 5349
      nodePort: 30549 # TURN TLS port; assigned automatically, not required in the LiveKit service
    udpTurn:
      port: 3478
      nodePort: 30478 # UDP TURN port for HAProxy
    metrics:
      enabled: true
      port: 6789
      nodePort: 30689 # Optional: metrics port

# Common image
shared_image:
  repository: "registry.test.local/busybox"
  tag: "latest"
  pullPolicy: "IfNotPresent"

# Redis master configuration
redisMaster:
  name: redis-master
  cm_name: redis-master-cm
  usePodIP: false
  replicas: 1
  image:
    repository: "registry.test.local/livekit/redis"
    tag: "7.4.0-custom"
    pullPolicy: "IfNotPresent"

  service:
    type: "ClusterIP"
    port: 6379
    name: redis-master
  maxmemory: 3gb # Referenced by the redis.conf template

  nodeSelector:
    node-role.livekit.io/livekit: "true" # Updated to match the node label

  tolerations:
  - key: "livekit"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"

  resources:
    requests:
      memory: "4Gi"
      cpu: "800m"
    limits:
      memory: "8Gi"
      cpu: "1600m"

  storage:
    size: "10Gi"
    storageClass: "longhorn" # Use default storage class

# Redis replicas configuration
redisReplicas:
  name: redis-replica
  cm_name: redis-replica-cm
  replicas: 2
  image:
    repository: "registry.test.local/livekit/redis"
    tag: "7.4.0-custom"
    pullPolicy: "IfNotPresent"
  nodeSelector:
    node-role.livekit.io/livekit: "true"
  tolerations:
  - key: "livekit"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  resources:
    requests:
      memory: "1Gi"
      cpu: "200m"
    limits:
      memory: "2Gi"
      cpu: "500m"

# Redis Sentinel configuration
redisSentinel:
  name: redis-sentinel
  cm_name: redis-sentinel-cm
  replicas: 3
  downAfterMilliseconds: 30000
  failoverTimeout: 180000
  parallelSyncs: 1
  image:
    repository: "registry.test.local/livekit/redis"
    tag: "7.4.0-custom"
    pullPolicy: "IfNotPresent"

  # Two Sentinel instances must agree that the Redis master is unavailable
  # before initiating failover procedures.
  sentinelNode:
    quorum: 2

  service:
    type: "ClusterIP"
    port: 26379

  resources:
    requests:
      memory: "128Mi"
      cpu: "50m"
    limits:
      memory: "256Mi"
      cpu: "100m"

  nodeSelector:
    node-role.livekit.io/livekit: "true" # Updated to match the node label

  tolerations:
  - key: "livekit"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"

  storage:
    size: "1Gi"
    storageClass: "longhorn" # Use default storage class

# Secret configuration; the values below are plain text
secrets:
  apiKey: IysdgdsgkjkjsfgkwfwgwrgtwgghjasflADGgjL
  apiSecret: cgXMxCM5Tzmpdfgdfhjfh9etpiohjweptijK
  redisPassword: "test123!"
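Putting the values and the ConfigMap template together, the TURN part of the rendered config.yaml should come out roughly like this (a sketch based on the templates above, not actual rendered output):

```yaml
# Approximate rendered turn section of config.yaml, given the values above
turn:
  enabled: true
  domain: "livekit-nturn-webapp.external.com"
  udp_port: 3478
  external_tls: false
  # The relay range is taken from rtc.portRangeStart/End in the template
  relay_range_start: 40000
  relay_range_end: 50000
```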

Current HAProxy config:

global
    maxconn 10000

    # Syslog daemon
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice

    # Systemd journal
    #log /dev/log local0
    #log /dev/log local1 notice
    user hapee-lb
    group hapee
    chroot /var/empty
    pidfile /var/run/hapee-3.0/hapee-lb.pid
    stats socket /var/run/hapee-3.0/hapee-lb.sock user hapee-lb group hapee mode 660 level admin expose-fd listeners
    stats timeout 10m
    module-path /opt/hapee-3.0/modules

    module-load hapee-lb-update.so
    module-load hapee-lb-sanitize.so
    module-load hapee-lb-udp.so  # Enables the UDP functionality

    daemon
    log-send-hostname

    # Multiple CPUs
    nbthread 2
    cpu-map 1 0
    cpu-map 2 1

defaults
    log global
    option tcplog
    option dontlognull
    option redispatch
    timeout connect 5000
    timeout client 50000
    timeout server 50000
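One thing worth double-checking for the WSS signaling path: in TCP mode, long-lived WebSocket connections are governed by the client/server timeouts, so an idle signaling connection could be cut after 50 seconds. HAProxy's `timeout tunnel` applies once a bidirectional connection is established and overrides those timeouts for that connection. A sketch of adding it (the 1h value is illustrative, not from the original config):

```
defaults
    log global
    option tcplog
    option dontlognull
    option redispatch
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    # Applies to established bidirectional tunnels such as WebSockets,
    # replacing the client/server inactivity timeouts for those connections
    timeout tunnel 1h
```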

# =======================
# TCP FRONTENDS
# =======================

# Main HTTPS/WSS frontend
frontend https
    bind :443
    mode tcp

    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }

    # SNI-based routing
    use_backend bk_istio_ingress_gw if { req_ssl_sni -i webapp.externaldomain.com }
    use_backend bk_istio_ingress_gw if { req_ssl_sni -i webapp-ntfy.externaldomain.com }
    use_backend bk_istio_ingress_gw if { req_ssl_sni -i webapp-sygnal.externaldomain.com }

    # LiveKit WSS (signaling)
    use_backend bk_livekit_wss if { req_ssl_sni -i livekit-poc.externaldomain.com }

    # LiveKit TURN-TLS (using port 443 with different SNI)
    use_backend bk_livekit_turn_tls if { req_ssl_sni -i livekit-nturn-poc.externaldomain.com }

# TURN TCP fallback (if clients use TCP mode)
frontend turn_tcp
    bind :3478
    mode tcp
    option tcplog
    default_backend bk_livekit_turn_tcp

# TURN TLS (standard port 5349)
frontend turn_tls
    bind :5349
    mode tcp
    option tcplog
    default_backend bk_livekit_turn_tls

# =======================
# UDP FRONTENDS (using the HAPEE UDP module)
# =======================

# TURN UDP - main STUN/TURN port
udp-lb turn_udp
    dgram-bind :3478
    log global
    balance source  # Source IP hashing for consistency

    # Option A: direct to node ports (if using hostNetwork)
    server lk01 lk01-poc:3478 check
    server lk02 lk02-poc:3478 check
    server lk03 lk03-poc:3478 check

    # Option B: NodePort services (uncomment if needed)
    # server lk01 lk01-poc:32347 check  # Example NodePort
    # server lk02 lk02-poc:32347 check
    # server lk03 lk03-poc:32347 check

# WebRTC media port range (CRITICAL - must preserve the destination port)
udp-lb rtc_media_range
    dgram-bind :40000-50000
    log global
    balance source

    # IMPORTANT: must forward to the SAME port on the backend.
    # With hostNetwork, the port must match.
    server lk01 lk01-poc check
    server lk02 lk02-poc check
    server lk03 lk03-poc check

# =======================
# TCP BACKENDS
# =======================

# Istio ingress gateway
backend bk_istio_ingress_gw
    mode tcp
    balance leastconn
    option tcp-check
    server aw01 aw01-poc:32555 check
    server aw02 aw02-poc:32555 check
    server aw03 aw03-poc:32555 check

# LiveKit WSS (signaling over TLS)
backend bk_livekit_wss
    mode tcp
    balance leastconn
    option tcp-check

    # NodePort for LiveKit signaling
    server lk01 lk01-poc:30880 check
    server lk02 lk02-poc:30880 check
    server lk03 lk03-poc:30880 check

# LiveKit TURN-TLS (over port 443 with SNI)
backend bk_livekit_turn_tls
    mode tcp
    balance source  # Source IP hashing for TURN consistency
    option tcp-check

    # TURN TLS port (typically the same as the TURN port)
    server lk01 lk01-poc:3478 check
    server lk02 lk02-poc:3478 check
    server lk03 lk03-poc:3478 check

# LiveKit TURN TCP (plain TCP fallback)
backend bk_livekit_turn_tcp
    mode tcp
    balance source
    option tcp-check
    server lk01 lk01-poc:3478 check
    server lk02 lk02-poc:3478 check
    server lk03 lk03-poc:3478 check

# LiveKit TURN TLS (port 5349)
backend bk_livekit_turn_tls_5349
    mode tcp
    balance source
    option tcp-check
    server lk01 lk01-poc:3478 check  # LiveKit uses the same port for TLS/plain
    server lk02 lk02-poc:3478 check
    server lk03 lk03-poc:3478 check