Deploy Traefik on GKE with Google L7 Load Balancer

How to Configure Traefik with Google Cloud HTTP(S) Load Balancer (L7) on GKE for Scalable Kubernetes Ingress

When deploying services in Google Kubernetes Engine (GKE), one of the first things we typically do is expose them externally. The simplest approach is to set the type: LoadBalancer on a Kubernetes Service, which provisions a regional External TCP/UDP Network Load Balancer (L4) for you automatically. This works well for basic needs—especially if you’re exposing raw TCP services like SSH or Jenkins agents.
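
For reference, here’s a minimal sketch of such a Service (the my-app name, selector, and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer   # GKE provisions a regional L4 Network Load Balancer
  selector:
    app: my-app
  ports:
    - port: 80         # port exposed by the load balancer
      targetPort: 8080 # port the pods listen on
      protocol: TCP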

However, there are two downsides:

  1. It’s regional, which can introduce latency for users located far from your cluster.

  2. It doesn’t support HTTP-level features like routing, TLS termination, or caching.

To improve global performance and gain access to more powerful features, you can instead use the Google Cloud HTTP(S) Load Balancer (L7). Backed by Google’s globally distributed edge infrastructure, this load balancer terminates traffic at the nearest point-of-presence and forwards it to your backend, reducing latency significantly—especially for HTTP-based services.

Why L7 Load Balancing?

Here’s a benchmark Google shared, comparing latency from Germany to a service in us-central1:

Option                   Ping                                              TTFB
No load balancing        110 ms (to the web server)                        230 ms
Network Load Balancing   110 ms (to the in-region network load balancer)   230 ms
HTTP(S) Load Balancing   1 ms (to the closest GFE, Google Front End)       123 ms

That’s a dramatic improvement in both responsiveness and user experience.

Beyond performance gains, the Google HTTP(S) Load Balancer (L7) also unlocks some powerful features:

  • Cloud CDN: This integrates seamlessly with the load balancer to cache static and dynamic HTTP content at the edge. It’s especially useful when serving large frontend bundles—like JavaScript libraries or assets—reducing load times for users across the globe.

  • Identity-Aware Proxy (IAP): With IAP, you can secure HTTP(S) endpoints by restricting access to specific users or groups with Google accounts. This is ideal for exposing internal tools or admin dashboards safely (see the sketch below).
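
On GKE, IAP is enabled per backend service through a BackendConfig resource that the Service references via the cloud.google.com/backend-config annotation. A minimal sketch, assuming an OAuth client secret named iap-oauth-secret has already been created:

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: iap-config
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: iap-oauth-secret  # assumed pre-created OAuth client credentials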

To take advantage of the L7 load balancer in GKE, you’ll need to expose your service through a Google-managed Ingress controller. Instead of using type: LoadBalancer on your service, you’ll configure it as type: NodePort, which allows the Ingress to route external HTTP traffic into your cluster.

Choosing the Right Ingress Controller

How we expose services via Kubernetes Ingress can significantly impact both architecture and cost—especially in GCP.

Take the Voyager Ingress Controller, for example. By default, it provisions an L4 Network Load Balancer per Ingress object. So if you have 10 services, you’ll end up with 10 separate L4 load balancers. That might work fine at a small scale, but GCP’s pricing model starts to bite:

  • The first 5 forwarding rules (whether regional or global) are covered by a flat rate.

  • Beyond that, additional forwarding rules incur extra cost.

This means it’s worth optimizing your setup to stay within the first 5 forwarding rules where possible.

One option might be to consolidate all services under a single Ingress resource—one load balancer, one entry point, all routes defined centrally. While this can reduce costs, it comes at the cost of tight coupling: every time you update a service, you have to touch the central ingress config. That’s not ideal for teams practicing independent deployments or microservice ownership.
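
To see why, here’s roughly what such a consolidated Ingress looks like; every team’s host rule lives in one shared resource (hosts and service names are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: all-services
spec:
  rules:
    - host: app1.example.com      # owned by team A
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1
                port:
                  number: 80
    - host: app2.example.com      # owned by team B, but edited in the same file
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2
                port:
                  number: 80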

A better solution? Traefik.

[Diagram: a user accesses services through Identity-Aware Proxy and an HTTPS load balancer, which routes traffic to a Traefik Ingress Controller in GKE managing three services]

Traefik can sit behind a single Google L7 Load Balancer and use its own CRDs (Custom Resource Definitions) to define individual routing rules per service. This gives you:

  • One global load balancer.

  • Fully decoupled service routing.

  • Independent deployments with zero Ingress coupling.

Next, we’ll walk through how to deploy Traefik in GKE and wire it up with Google’s HTTP(S) Load Balancer.

Deploying Traefik in GKE

We’ll use the official Traefik Helm chart to deploy the controller in our GKE cluster. However, some tweaks are necessary to make it work smoothly with Google’s HTTP(S) Load Balancer (L7).
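
Since we’ll be patching the chart’s templates, one approach is to vendor the chart locally first (the repository URL is the Traefik Helm repo current at the time of writing):

helm repo add traefik https://helm.traefik.io/traefik
helm repo update
helm pull traefik/traefik --untar   # vendors the chart into ./traefik for patching
# after applying the customizations described below:
helm install traefik ./traefik -f values.yaml --namespace traefik --create-namespace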

Health checks: Why we need custom probes

GCP automatically configures the load balancer’s health checks based on the readiness probe defined in your Kubernetes Deployment. By default, the Traefik chart uses its internal traefik entrypoint port for these probes, but to integrate properly with the Google Load Balancer we need to point them at a dedicated health check port instead.

(Hopefully this customization will eventually be included as an upstream Helm chart option.)

Here’s a diff that shows how to override the probe ports in the Traefik pod template:

diff --git a/traefik/templates/_podtemplate.tpl b/traefik/templates/_podtemplate.tpl
index f401d26..3d5ddc5 100644
--- a/traefik/templates/_podtemplate.tpl
+++ b/traefik/templates/_podtemplate.tpl
@@ -38,7 +38,7 @@
         readinessProbe:
           httpGet:
             path: /ping
-            port: {{ .Values.ports.traefik.port }}
+            port: {{ default .Values.ports.traefik.port .Values.ports.traefik.healthchecksPort }}
           failureThreshold: 1
           initialDelaySeconds: 10
           periodSeconds: 10
@@ -47,7 +47,7 @@
         livenessProbe:
           httpGet:
             path: /ping
-            port: {{ .Values.ports.traefik.port }}
+            port: {{ default .Values.ports.traefik.port .Values.ports.traefik.healthchecksPort }}
           failureThreshold: 3
           initialDelaySeconds: 10
           periodSeconds: 10

With the health check probes updated, the next step is to customize the values.yaml file used by the Traefik Helm chart. These changes are essential to ensure compatibility with the Google HTTP(S) Load Balancer and to optimize the overall setup.

Key Changes

  1. Change the service type to NodePort
    Google’s Ingress controller requires services to be of type NodePort in order to forward traffic from the external load balancer to the Traefik pods.

  2. Configure a dedicated health check port
    This helps the Google Load Balancer accurately detect pod health without interfering with traffic ports.

  3. Disable the websecure entrypoint
    Since TLS termination will be handled at the Google Load Balancer level, we don’t need Traefik to manage HTTPS internally. This simplifies the configuration and avoids port conflicts.

Here’s a summary of what your values.yaml changes should look like:

# Enable required CLI args
additionalArguments:
  - "--providers.kubernetesingress.ingressclass=traefik"
  - "--ping"
  - "--ping.entrypoint=web"

# Health check port override
ports:
  traefik:
    port: 9000
    healthchecksPort: 8000

  websecure:
    port: 8443
    expose: false  # SSL handled by Google LB
    exposedPort: 443

# Service exposed via NodePort, not LoadBalancer
service:
  enabled: true
  type: NodePort

Creating the GCE Ingress Resource

Finally, we need an Ingress resource that connects the dots between the Google Load Balancer and our Traefik service. This example Helm template does just that, conditionally generating a GCE-style Ingress only when the service type is NodePort:

{{- if and .Values.service.enabled (eq .Values.service.type "NodePort") }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: traefik
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.allow-http: "false"
    external-dns.alpha.kubernetes.io/hostname: {{ .Values.hostname }}
spec:
  defaultBackend:
    service:
      name: {{ template "traefik.fullname" . }}
      port:
        number: {{ .Values.ports.web.exposedPort }}
  tls:
    - secretName: {{ .Values.certificateSecret }}
{{- end }}

📌 Notes:

  • .Values.hostname might be something like traefik.example.com

  • .Values.certificateSecret should point to a secret containing a TLS certificate, e.g., generated by cert-manager (see the sketch below)
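
If you’re using cert-manager, that secret can be produced by a Certificate resource along these lines (the traefik-tls secret name and letsencrypt issuer are assumptions):

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: traefik
spec:
  secretName: traefik-tls     # referenced as .Values.certificateSecret
  dnsNames:
    - traefik.example.com
  issuerRef:
    name: letsencrypt         # assumed pre-configured ClusterIssuer
    kind: ClusterIssuer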

Using Traefik to Expose Other Services

Let’s see how this Traefik setup works in practice by exposing a simple service. We’ll use the well-known whoami container, which responds with basic request information—perfect for testing and debugging ingress routes.

Step 1: Deploy the whoami Service

Start with a basic Kubernetes Deployment and Service:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
  labels:
    app: whoami
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: containous/whoami:v1.5.0
          ports:
            - name: web
              containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  selector:
    app: whoami
  ports:
    - name: web
      port: 80
      protocol: TCP
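
Assuming the manifest above is saved as whoami.yaml, apply it with:

kubectl apply -f whoami.yaml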

Step 2: Create a DNS Record with ExternalDNS

If you’re using ExternalDNS, you can automatically manage DNS records based on Kubernetes resources. For this example, create a DNSEndpoint to associate whoami.example.com with your Traefik instance:

apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: whoami
spec:
  endpoints:
    - dnsName: whoami.example.com
      recordType: CNAME
      recordTTL: 300
      targets:
        - traefik.example.com

Step 3: Define a Traefik IngressRoute

Finally, create an IngressRoute resource to route traffic to the whoami service when requests come in for whoami.example.com:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: whoami
spec:
  routes:
    - kind: Rule
      match: Host(`whoami.example.com`)
      services:
        - name: whoami
          port: 80
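
To confirm the route was registered, you can query the CRD directly (this assumes the Traefik CRDs were installed along with the chart):

kubectl get ingressroute whoami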

Step 4: Test It Out

Once everything is deployed, test your setup using curl:

curl https://whoami.example.com

You should see a response like this:

Hostname: whoami-0
IP: 127.0.0.1
IP: 10.40.1.16                # whoami pod IP
RemoteAddr: 10.40.1.13:47784  # traefik pod IP
GET / HTTP/1.1
Host: whoami.example.com
User-Agent: curl/7.68.0
Accept: */*
Accept-Encoding: gzip
Via: 1.1 google
X-Client-Data: CgSL6ZsV
X-Cloud-Trace-Context: 1c5d7...
X-Forwarded-For: 10.40.1.1
X-Forwarded-Host: whoami.example.com
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: traefik-6d6fcbd876-58nvn
X-Real-Ip: 10.40.1.1

🎉 Conclusion

With this setup, you now have a globally distributed, cost-efficient, and decoupled way to expose services in your Kubernetes cluster using Traefik and Google’s L7 HTTP(S) Load Balancer. You can deploy new services independently, define routing rules with CRDs, and keep your infrastructure lean and scalable.