Exposing Kubernetes Applications with Otoroshi and the Gateway API on Clever Cloud

Published
28 min read

There's a certain feeling you get when you've just finished setting up a new piece of infrastructure and it actually works. The kind of satisfaction that makes you reach for your coffee, lean back in your chair, and just... breathe. That's exactly how I felt when, after a couple hours of tinkering on a rainy afternoon, I watched a curl command return a clean HTTP 200 from an application running deep inside a Kubernetes cluster — routed through Otoroshi's Gateway API integration, running on Clever Cloud's managed Kubernetes engine.

I want to take you through that journey in this article. We'll go from the fundamental problem of exposing Kubernetes workloads to the outside world, through the history of Ingress controllers, all the way to the new Kubernetes Gateway API standard — and we'll deploy the whole stack, step by step, on CKE (Clever Kubernetes Engine), Clever Cloud's managed Kubernetes offering.

Grab a coffee. This is a long one.


The Problem: Your App is Trapped Inside the Cluster

When you deploy an application to Kubernetes, you're placing it inside a highly capable but fundamentally isolated environment. Kubernetes pods live in their own private network space. Services like ClusterIP make those pods reachable within the cluster, but by design, they're not accessible from the outside world.

That's actually a feature, not a bug. Kubernetes gives you a private, controllable network fabric. But the moment your application needs to serve real users — or talk to external systems — you need to punch a controlled hole through that isolation.

Over the years, the Kubernetes community has developed several approaches to this problem, each with its own trade-offs.

The Early Days: NodePort and LoadBalancer

The simplest option is a NodePort service. It binds a port on every node of your cluster and forwards traffic to your pods. It works, but it's clunky: you're dealing with high port numbers (30000+), you're exposing every node's IP directly, and you have zero traffic management capabilities. NodePort is great for quick local testing, but it's not something you'd run in production.
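
For reference, a minimal NodePort service looks something like this (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80          # cluster-internal service port
    targetPort: 8080  # container port on the pods
    nodePort: 30080   # opened on every node (30000-32767 by default)
```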

The next step up is a LoadBalancer service. Cloud providers — including Clever Cloud — can provision an external load balancer and wire it up to your pods automatically. This is a real improvement: you get a stable external IP and clean port numbers. But if you have dozens of services to expose, you end up with dozens of load balancers, each costing money, each needing DNS configuration. It doesn't scale well.
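
Switching to a LoadBalancer is a one-word change in the manifest — the cloud provider does the rest (again, names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer  # the cloud provider provisions an external load balancer
  selector:
    app: my-app
  ports:
  - port: 443
    targetPort: 8443
```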

What you really need is a single entry point — a reverse proxy — that can intelligently route traffic to the right service based on the hostname, path, headers, or other request attributes. And that's exactly what the Ingress resource was created for.


A Brief History of Ingress

Kubernetes introduced the Ingress resource around 2015 as a standardized way to define HTTP routing rules. The idea was elegant: you write a YAML manifest that says "requests to api.example.com/v1 should go to service my-api on port 8080", and a controller picks that up and configures the actual reverse proxy.

A typical Ingress resource looks like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: default
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /v1
        pathType: Prefix
        backend:
          service:
            name: my-api
            port:
              number: 8080

The Ingress Controller Ecosystem

The Ingress spec itself doesn't do anything on its own — you also need an Ingress controller, a piece of software that watches for Ingress resources and translates them into actual proxy configuration. The NGINX-based ingress-nginx controller became the most popular choice. Others followed: Traefik, HAProxy Ingress, Contour, and more.

The problem is that the Ingress spec was intentionally kept minimal. Basic host and path-based routing was all you got from the standard. Everything else — TLS termination modes, rate limiting, authentication, retries, canary deployments — had to be configured through annotations. And since annotations are just arbitrary strings attached to resources, every controller had its own vocabulary. A rate limit annotation for Nginx looks completely different from one for Traefik. Moving from one controller to another meant rewriting all your Ingress resources.

# nginx-specific annotation
nginx.ingress.kubernetes.io/limit-rps: "10"

# Same thing in Traefik's world
traefik.ingress.kubernetes.io/rate-limit: |
  extractorfunc: client.ip
  rateset:
    default:
      period: 1s
      average: 10

This annotation sprawl was a recognized pain point in the community. The Ingress API was showing its age.

The Deprecation Story

The situation became clearer in late 2025, when the Kubernetes maintainers announced the retirement of the original kubernetes/ingress-nginx controller — not the separately maintained NGINX Ingress Controller project, but the one the Kubernetes community itself had carried for years. The project had accumulated significant technical debt, and the community decided it was time to move on.

More broadly, the Ingress API itself was being kept in maintenance mode. No new features were being added. The community had already been working on something better.


The Future is Here: Kubernetes Gateway API

The Kubernetes Gateway API is the modern, standardized successor to Ingress. It reached General Availability (GA) in late 2023 and has been steadily gaining adoption since.

The Gateway API was designed with several key goals in mind: to be expressive enough to cover advanced routing scenarios without annotations, to have clear separation of concerns between different user roles, and to be extensible in a controlled way.

A Different Model

Where Ingress has a single resource type, the Gateway API introduces a hierarchy of resources:

GatewayClass — Defines which controller implementation handles a particular class of gateways — the traffic equivalent of a StorageClass. It's typically created by the infrastructure team or the platform operator.

apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: my-gateway-class
spec:
  controllerName: vendor.io/my-controller

Gateway — A specific instance of a gateway, using a given class. It defines what protocols and ports to listen on, and what hostnames to handle. This is the resource created by the team that owns the cluster or the network infrastructure.

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-gateway
  namespace: infra
spec:
  gatewayClassName: my-gateway-class
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    hostname: "*.example.com"
    allowedRoutes:
      namespaces:
        from: All

HTTPRoute — The actual routing rules, defined by application teams in their own namespaces. This is the part that app developers care about: "my app lives at api.example.com/v2".

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-route
  namespace: my-app
spec:
  parentRefs:
  - name: my-gateway
    namespace: infra
    sectionName: http
  hostnames:
  - "api.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /v2
    backendRefs:
    - name: my-api-service
      port: 8080

This role-based model is a significant improvement. Cluster operators control the Gateway (which hostnames are allowed, which namespaces can attach routes), while application developers control their own HTTPRoute resources without needing cluster-level privileges. Cross-namespace references are supported with explicit grants via ReferenceGrant resources.
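
As a sketch of what such a grant looks like (namespace and resource names here are illustrative), a ReferenceGrant lives in the namespace being referenced and whitelists who may reference what:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-routes-to-backends
  namespace: backends          # the namespace that owns the referenced Services
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: my-app          # where the referencing HTTPRoutes live
  to:
  - group: ""                  # core API group
    kind: Service
```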

The Gateway API also brings first-class support for traffic splitting (weighted backends for canary deployments), header manipulation, redirects, URL rewrites, and a proper extension mechanism through custom filter types — all without resorting to controller-specific annotations.
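
Traffic splitting, for example, is just a matter of listing multiple backendRefs with weights — here is a hedged sketch of a 90/10 canary (service names are illustrative):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: canary-route
  namespace: my-app
spec:
  parentRefs:
  - name: my-gateway
    namespace: infra
  hostnames:
  - "api.example.com"
  rules:
  - backendRefs:
    - name: my-api-stable
      port: 8080
      weight: 90   # roughly 90% of requests
    - name: my-api-canary
      port: 8080
      weight: 10   # roughly 10% of requests
```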


Meet Otoroshi

Before we dive into the hands-on setup, let me introduce the main character of this story: Otoroshi.

Otoroshi is an open-source reverse proxy and API gateway that I originally created in 2017, while working as a contractor at MAIF, a large French insurance company. MAIF had just migrated to the Clever Cloud platform and needed a unified solution for securing and managing API traffic across their diverse application landscape — without imposing specific libraries or frameworks on their development teams. The project has since grown well beyond its original scope, and I still lead it today as core maintainer and project lead, under the Cloud APIM umbrella.

The philosophy behind Otoroshi is summarized in five principles: technology agnostic, HTTP-first, API-first (the web UI is just another API client), security-focused, and event-driven. It's designed to be the single entry point for all HTTP traffic to your applications, regardless of where those applications are hosted or what language they're written in.

What Can Otoroshi Do?

The feature list is substantial. On the traffic management side: load balancing with multiple strategies (round robin, sticky sessions, least connections, best response time), distributed rate limiting, traffic mirroring, canary deployments, and relay routing across network zones.

On the security side: a built-in Web Application Firewall powered by Coraza with OWASP Core Rule Sets, IP blocklists with CIDR support, geolocation-based access control, API key management with quotas, JWT validation (including JWE), OAuth 2.0/2.1 with PKCE, OIDC, LDAP, SAML V2, and even WebAuthn/FIDO2 support.

For observability: native Prometheus metrics, OpenTelemetry (OTLP) traces, Datadog, StatsD, Kafka event streaming, and more.

And it supports HTTP/1.1, HTTP/2, HTTP/3, WebSocket, TCP, and gRPC — including a next-generation Netty-based server backend for the most demanding workloads.

Otoroshi in Kubernetes

Otoroshi has had native Kubernetes integration since version 1.5.0. In Kubernetes mode, Otoroshi runs as an Ingress controller and can operate through multiple mechanisms:

  • Standard Ingress controller: Watches Ingress resources and creates corresponding routes

  • Custom Resource Definitions (CRDs): A rich set of 23+ custom resources that let you configure every aspect of Otoroshi natively through Kubernetes manifests — routes, backends, certificates, API keys, JWT verifiers, auth modules, and more

  • Secret synchronization: Kubernetes TLS secrets are automatically synced as Otoroshi certificates

The Otoroshi controller runs as a background job inside the Otoroshi process itself, polling/watching the Kubernetes API and reconciling the desired state with the current routing configuration. No separate controller deployment needed.

One important note from the docs: running Otoroshi behind an existing ingress controller is explicitly not recommended. It degrades TCP proxying capabilities, TLS, mTLS, and adds unnecessary latency. Otoroshi should be your outermost entry point.


Otoroshi Meets the Gateway API

Here's where things get exciting. Starting with version 17.13.0, Otoroshi implements the Kubernetes Gateway API specification. It was something I'd been wanting to tackle for a while, and the Gateway API spec having reached GA felt like the right moment for me to finally do it properly.

The feature is labeled experimental in the current docs, but it's fully functional for core use cases: Gateway, HTTPRoute, and GRPCRoute support. The implementation follows an interesting architecture choice: rather than dynamically provisioning new listeners on demand (which would require significant cluster privileges and complexity), Otoroshi validates that Gateway listeners match its already-running HTTP/HTTPS ports, then uses the routes to build its routing table.

The reconciliation flow works like this:

  1. Otoroshi fetches all Gateway API resources from the Kubernetes API server

  2. It validates GatewayClass resources that reference its controller name

  3. It resolves TLS certificates from referenced Kubernetes Secrets

  4. It validates Gateway listeners against its configured ports

  5. It converts HTTPRoute and GRPCRoute objects into native NgRoute entities

  6. It saves the routes and cleans up any orphaned entries from previous reconciliation cycles

All generated routes are tagged with otoroshi-provider: kubernetes-gateway-api, making them easy to identify and query.

The Gateway API controller is activated by running the KubernetesGatewayApiControllerJob as a background job in Otoroshi's configuration. Let's see exactly how to set all of this up.


Building It: End-to-End Setup on Clever Cloud

Let me walk you through exactly what I did that afternoon. We're going to:

  1. Provision a Kubernetes cluster on CKE

  2. Add worker nodes

  3. Deploy Otoroshi with Gateway API support enabled

  4. Install the Gateway API CRDs

  5. Deploy a demo application

  6. Route traffic to it through the Gateway API

  7. Test that everything works

Before diving into the steps, here's a bird's eye view of what we're building. Two planes coexist: the control plane (dashed arrows), where Otoroshi watches Kubernetes API resources and reconciles them into live routes, and the data plane (solid arrows), where actual HTTP traffic flows from the outside world to the application pods.

Prerequisites

You'll need:

  • A Clever Cloud account

  • The clever CLI installed and authenticated (npm install -g clever-tools or download from the releases page)

  • kubectl installed

Important — CKE is currently in private access. Clever Kubernetes Engine is not yet generally available to all Clever Cloud users. To get access, reach out to Clever Cloud support and ask them to enable CKE for your organization. The team is responsive and the process is straightforward — just mention what you want to build and they'll get you set up. Once it's enabled on your org, the rest of this guide applies as-is.

Step 1: Enable Kubernetes on Your Clever Cloud Organization

Once CKE is unlocked for your organization, you need to enable the Kubernetes feature via the CLI:

clever features enable k8s

You can list existing clusters with:

clever k8s list --org xxx

Step 2: Create Your Cluster

Creating a new cluster is a single command. The --watch flag keeps the CLI running and shows you progress as the control plane comes up — which is a nice touch when you're sitting there wondering if something went wrong.

clever k8s create gateway-api-demo --org xxx --watch

After a few minutes, you'll have a running cluster.

You can try the list again. The output shows you the cluster status and eventually confirms that the control plane is ready.

clever k8s list --org xxx

You'll also notice that a new item has appeared in your Clever Cloud Console.

Step 3: Grab Your kubeconfig

Once the cluster is ready, download the kubeconfig file:

clever k8s get-kubeconfig gateway-api-demo --org xxx > kubeconfig.yaml

Then point kubectl at it:

export KUBECONFIG="$(pwd)/kubeconfig.yaml"

Or if you prefer, drop that export into a config.sh file and source it:

source config.sh
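
For reference, that config.sh can be as simple as:

```shell
#!/usr/bin/env sh
# config.sh — point kubectl (and any other tooling) at the downloaded kubeconfig
export KUBECONFIG="$(pwd)/kubeconfig.yaml"
```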

Let's verify we can talk to the cluster:

kubectl get nodes

At this point, you'll likely see only the control plane node or nothing at all — we haven't added any worker nodes yet.

Step 4: Add Worker Nodes

By default, a CKE cluster starts with a control plane but no worker nodes. We need to create a NodeGroup — Clever Cloud's abstraction for a group of compute nodes with a given flavor (CPU/memory profile) and count.

Here's the manifest I used (manifests/nodegroup.yaml):

apiVersion: api.clever-cloud.com/v1
kind: NodeGroup
metadata:
  name: gateway-api-demo-nodegroup
spec:
  flavor: L
  nodeCount: 2

The L flavor gives you a solid amount of CPU and RAM for running Otoroshi comfortably. I went with 2 nodes because I wanted room to experiment without hitting resource constraints.

Apply it:

kubectl create -f ./manifests/nodegroup.yaml

Now watch the nodes come up:

kubectl get nodes
kubectl get nodegroups

You'll see the nodes join the cluster one by one. Give it a few minutes. Once both nodes are in Ready state, you're set to proceed.

Pro tip: If you later decide you need more capacity, you can scale the node group without recreating it:

kubectl scale nodegroup gateway-api-demo-nodegroup --replicas=4

Step 5: Install Otoroshi's RBAC Rules and CRDs for Gateway API

Otoroshi needs specific RBAC permissions to watch and update Gateway API resources — particularly gatewayclasses, gateways, httproutes, grpcroutes, referencegrants, backendtlspolicies, and the corresponding /status subresources.

The Otoroshi project provides a ready-made RBAC manifest specifically for the Gateway API integration:

kubectl apply -f 'https://raw.githubusercontent.com/MAIF/otoroshi/refs/heads/master/kubernetes/gateway-api/rbac-gateway.yaml'

And the Otoroshi Custom Resource Definitions:

kubectl apply -f 'https://raw.githubusercontent.com/MAIF/otoroshi/refs/heads/master/kubernetes/gateway-api/crds-gateway.yaml'

These CRDs let you configure Otoroshi-native resources (routes, backends, API keys, etc.) through Kubernetes manifests. Even though we're focusing on the Gateway API today, having these installed gives you the full Otoroshi Kubernetes integration.

Step 6: Deploy Otoroshi

This is the meaty part. Let me walk you through the deployment manifest in detail, because there are several important settings here.

Full manifest (manifests/otoroshi.yaml):

kind: Namespace
apiVersion: v1
metadata:
  name: otoroshi
---
kind: ServiceAccount
apiVersion: v1
metadata:
  namespace: otoroshi
  name: otoroshi-admin-user
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otoroshi-admin-user
  namespace: otoroshi
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: otoroshi-admin-user
subjects:
- kind: ServiceAccount
  name: otoroshi-admin-user
  namespace: otoroshi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otoroshi
  namespace: otoroshi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: otoroshi
  template:
    metadata:
      labels:
        app: otoroshi
    spec:
      serviceAccountName: otoroshi-admin-user
      terminationGracePeriodSeconds: 60
      containers:
      - name: otoroshi
        image: maif/otoroshi:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 10049
          name: http
        - containerPort: 10048
          name: https
        env:
        - name: APP_STORAGE_ROOT
          value: otoroshi
        - name: APP_STORAGE
          value: inmemory
        - name: ADMIN_API_EXPOSED_SUBDOMAIN
          value: otoroshi-api-5jsfzwhhx6unbywz
        - name: ADMIN_API_TARGET_SUBDOMAIN
          value: otoroshi-target-5jsfzwhhx6unbywz
        - name: APP_BACKOFFICE_SUBDOMAIN
          value: otoroshi-backoffice-5jsfzwhhx6unbywz
        - name: APP_PRIVATEAPPS_SUBDOMAIN
          value: otoroshi-privateapps-5jsfzwhhx6unbywz
        - name: APP_DOMAIN
          value: cleverk8s.io
        - name: OTOROSHI_INITIAL_ADMIN_PASSWORD
          value: password
        - name: ADMIN_API_CLIENT_ID
          value: admin-api-apikey-id
        - name: ADMIN_API_CLIENT_SECRET
          value: admin-api-apikey-secret
        - name: ADMIN_API_ADDITIONAL_EXPOSED_DOMAIN
          value: otoroshi-api.otoroshi.svc.cluster.local
        - name: OTOROSHI_SECRET
          value: veryveryveryveryverysecret
        - name: OTOROSHI_EXPOSED_PORTS_HTTP
          value: "80"
        - name: OTOROSHI_EXPOSED_PORTS_HTTPS
          value: "443"
        - name: HEALTH_LIMIT
          value: "5000"
        - name: SSL_OUTSIDE_CLIENT_AUTH
          value: Want
        - name: HTTPS_WANT_CLIENT_AUTH
          value: "true"
        - name: OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_ENABLED
          value: "true"
        - name: OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_ACCESSLOG
          value: "true"
        - name: OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_EXPOSED_HTTP_PORT
          value: "80"
        - name: OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_EXPOSED_HTTPS_PORT
          value: "443"
        - name: OTOROSHI_INITIAL_CUSTOMIZATION
          value: >
            {
              "config": {
                "scripts": {
                  "enabled": true,
                  "jobRefs": [
                    "cp:otoroshi.plugins.jobs.kubernetes.KubernetesGatewayApiControllerJob"
                  ],
                  "jobConfig": {
                    "KubernetesConfig": {
                      "trust": false,
                      "namespaces": ["*"],
                      "labels": {},
                      "namespacesLabels": {},
                      "defaultGroup": "default",
                      "ingresses": false,
                      "crds": true,
                      "kubeLeader": false,
                      "restartDependantDeployments": false,
                      "watch": false,
                      "syncIntervalSeconds": 60,
                      "otoroshiServiceName": "otoroshi-service",
                      "otoroshiNamespace": "otoroshi",
                      "clusterDomain": "cluster.local",
                      "gatewayApi": true,
                      "gatewayApiWatch": true,
                      "gatewayApiControllerName": "otoroshi.io/gateway-controller",
                      "gatewayApiHttpListenerPort": [8080, 80],
                      "gatewayApiHttpsListenerPort": [8443, 443],
                      "gatewayApiSyncIntervalSeconds": 30,
                      "gatewayApiAddresses": []
                    }
                  }
                }
              }
            }
        - name: JAVA_OPTS
          value: '-Xms1g -Xmx2g -XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0'
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
          failureThreshold: 5
        livenessProbe:
          httpGet:
            path: /live
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
          failureThreshold: 5
---
apiVersion: v1
kind: Service
metadata:
  name: otoroshi-service
  namespace: otoroshi
spec:
  selector:
    app: otoroshi
  ports:
  - port: 8080
    name: "http"
    targetPort: "http"
  - port: 8443
    name: "https"
    targetPort: "https"
---
apiVersion: v1
kind: Service
metadata:
  name: otoroshi-lb-external
  namespace: otoroshi
spec:
  type: LoadBalancer
  selector:
    app: otoroshi
  ports:
  - port: 80
    name: "http"
    targetPort: "http"
  - port: 443
    name: "https"
    targetPort: "https"

Let me highlight the key configuration decisions here.

Storage: We're using APP_STORAGE=inmemory here to keep things simple and avoid setting up an external data store for this demo. It's the quickest way to get Otoroshi running — no dependencies, no configuration overhead. But let me be clear about the trade-off: with in-memory storage, the entire Otoroshi configuration is lost the moment the pod restarts; only what you declare through Otoroshi CRDs (which the controller re-syncs) survives. For production, you absolutely want a persistent backend.

The natural choice is Redis (APP_STORAGE=redis), which is what most Otoroshi production deployments use. If you're already on Clever Cloud, the good news is that Redis add-ons are available directly on the platform — just provision one, wire up the connection string, and you're set. And beyond persistence, production deployments will also want to run multiple Otoroshi instances for high availability. Otoroshi supports a cluster mode where instances share state through Redis, which fits naturally with Clever Cloud's managed Redis offering. If you want to go further, the Clever Cloud Kubernetes Operator lets you provision and manage Clever Cloud add-ons (including Redis) directly from Kubernetes manifests — a clean fit if you want to keep everything in GitOps.

The Netty server: Those OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_* variables enable Otoroshi's next-generation HTTP server based on Netty. This server handles HTTP/1.1, HTTP/2, and HTTP/3 properly, and it's required for gRPC support. We enable it and bind it to ports 80 and 443.

The Gateway API configuration — this is the critical part — lives in OTOROSHI_INITIAL_CUSTOMIZATION. Let's break it down:

{
  "config": {
    "scripts": {
      "enabled": true,
      "jobRefs": [
        "cp:otoroshi.plugins.jobs.kubernetes.KubernetesGatewayApiControllerJob"
      ],
      "jobConfig": {
        "KubernetesConfig": {
          "gatewayApi": true,
          "gatewayApiWatch": true,
          "gatewayApiControllerName": "otoroshi.io/gateway-controller",
          "gatewayApiHttpListenerPort": [8080, 80],
          "gatewayApiHttpsListenerPort": [8443, 443],
          "gatewayApiSyncIntervalSeconds": 30
        }
      }
    }
  }
}
  • "jobRefs" activates the KubernetesGatewayApiControllerJob background job — this is the component that watches Gateway API resources and reconciles them

  • "gatewayApi": true enables the Gateway API controller

  • "gatewayApiWatch": true enables near-real-time watching of Kubernetes resources (instead of polling only on a fixed interval)

  • "gatewayApiControllerName": "otoroshi.io/gateway-controller" is the controller identifier — this must exactly match the controllerName in your GatewayClass resource

  • "gatewayApiHttpListenerPort": [8080, 80] — this tells Otoroshi which port values are valid for HTTP listeners in Gateway resources. Gateways that reference port 8080 or port 80 will be accepted

  • "gatewayApiSyncIntervalSeconds": 30 — the interval for periodic full reconciliation cycles, which still run as a safety net even with watch mode enabled

The two Services: We create two services for Otoroshi:

  1. otoroshi-service (ClusterIP on 8080/8443) — internal access for other pods in the cluster

  2. otoroshi-lb-external (LoadBalancer on 80/443) — this is what gets exposed to the outside world. Clever Cloud will provision an external load balancer and assign it a public IP

Apply the deployment:

kubectl apply -f manifests/otoroshi.yaml

Step 7: Watch the Logs

There's a certain ritual to deploying something new: you wait, and you watch logs. Let's follow Otoroshi's startup:

kubectl -n otoroshi logs -f deploy/otoroshi --tail=200

You'll see Otoroshi boot up, initialize its storage, start the Kubernetes synchronization jobs, and eventually log something like:

[INFO] [KubernetesGatewayApiControllerJob] Gateway API controller started, watching for GatewayClass resources with controllerName: otoroshi.io/gateway-controller

That's the signal you're waiting for. The Gateway API controller is running and ready to pick up resources.

While you wait for Otoroshi to become healthy, you can also check the load balancer service to find the external IP that was assigned:

kubectl -n otoroshi get service otoroshi-lb-external

The EXTERNAL-IP column will initially show <pending>. Once Clever Cloud has provisioned the load balancer, it'll fill in with a real IP address. Note this IP — you'll need it for DNS configuration and testing.

kubectl -n otoroshi get service otoroshi-lb-external
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP        PORT(S)
otoroshi-lb-external   LoadBalancer   10.x.x.x        <LOAD_BALANCER_IP>  80:..., 443:...
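
If you want to script against that IP instead of copy-pasting it, a jsonpath query does the trick (this assumes the load balancer exposes an IP rather than a hostname):

```shell
# Extract the external IP of the LoadBalancer service into a variable
LB_IP=$(kubectl -n otoroshi get service otoroshi-lb-external \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "$LB_IP"
```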

Step 8: Install the Gateway API CRDs

The Kubernetes Gateway API is not built into Kubernetes by default — it ships as separate CRDs. We need to install them before we can create GatewayClass, Gateway, or HTTPRoute resources.

The SIG Network team maintains official release artifacts. We're using v1.4.0:

kubectl apply -f 'https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.4.0/standard-install.yaml'

This installs the stable channel CRDs: GatewayClass, Gateway, HTTPRoute, GRPCRoute, ReferenceGrant, and a few others.

Verify they're in place:

kubectl get crds | grep gateway.networking.k8s.io

You should see entries for gatewayclasses.gateway.networking.k8s.io, gateways.gateway.networking.k8s.io, httproutes.gateway.networking.k8s.io, and so on.

Step 9: Deploy the Gateway Configuration

Now the fun begins. We need to create two resources: a GatewayClass that tells Kubernetes "Otoroshi handles this", and a Gateway that defines our listening configuration.

Here's the manifest (manifests/gateway.yaml):

apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: otoroshi
spec:
  controllerName: otoroshi.io/gateway-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: demo-otoroshi-gateway
  namespace: otoroshi
spec:
  gatewayClassName: otoroshi
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    hostname: "*.demo-cke.io"
    allowedRoutes:
      namespaces:
        from: All

Let's unpack this:

GatewayClass: The controllerName here — otoroshi.io/gateway-controller — must exactly match the gatewayApiControllerName we configured in Otoroshi's environment variables. This is how Otoroshi claims ownership of this GatewayClass.

Gateway:

  • gatewayClassName: otoroshi links this Gateway to our GatewayClass

  • The listener is on port 80 (HTTP), matching one of our gatewayApiHttpListenerPort values

  • hostname: "*.demo-cke.io" is a wildcard — any subdomain of demo-cke.io can be routed through this gateway

  • allowedRoutes.namespaces.from: All means HTTPRoute resources from any namespace can attach to this gateway. For production, you'd likely want Same or Selector with specific labels
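
For instance, a more restrictive listener could admit routes only from namespaces carrying a specific label — a fragment like this, with an illustrative label name, would replace the allowedRoutes block above:

```yaml
allowedRoutes:
  namespaces:
    from: Selector
    selector:
      matchLabels:
        gateway-access: allowed   # only namespaces with this label may attach routes
```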

Apply it:

kubectl apply -f manifests/gateway.yaml

A few seconds after applying (with gatewayApiWatch enabled, changes are picked up almost immediately, without waiting for the periodic sync interval), you can verify the GatewayClass was accepted:

kubectl get gatewayclass otoroshi -o yaml

Look for the status section:

status:
  conditions:
  - lastTransitionTime: "..."
    message: GatewayClass accepted
    reason: Accepted
    status: "True"
    type: Accepted

Accepted: True means Otoroshi recognized the GatewayClass and is managing it. If you see False here, double-check that the controllerName matches exactly — it's case-sensitive.

Step 10: Deploy a Demo Application and HTTPRoute

Now let's deploy something to actually route traffic to. I'm using traefik/whoami — a tiny Go web server that echoes back information about the HTTP request it received. Perfect for debugging and demos.

Here's the full manifest (manifests/routes.yaml):

apiVersion: v1
kind: Namespace
metadata:
  name: demo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
  namespace: demo
  labels:
    app: whoami
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami:latest
          ports:
            - name: http
              containerPort: 80
          readinessProbe:
            httpGet:
              path: /
              port: http
            initialDelaySeconds: 1
            periodSeconds: 5
          livenessProbe:
            httpGet:
              path: /
              port: http
            initialDelaySeconds: 5
            periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
  namespace: demo
  labels:
    app: whoami
spec:
  type: ClusterIP
  selector:
    app: whoami
  ports:
    - name: http
      port: 80
      targetPort: http
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-route
  namespace: demo
spec:
  parentRefs:
  - name: demo-otoroshi-gateway
    namespace: otoroshi
    sectionName: http
  hostnames:
  - "whoami.demo-cke.io"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /v1
    backendRefs:
    - name: whoami
      port: 80
      weight: 1

The HTTPRoute is particularly interesting. Let me walk through it:

  • parentRefs — this references our demo-otoroshi-gateway Gateway in the otoroshi namespace. Since the HTTPRoute lives in demo, this is a cross-namespace attachment; it works as long as the Gateway's listener allows routes from the demo namespace via its allowedRoutes field (no ReferenceGrant is needed for attaching a Route to a Gateway — that mechanism only covers backend references). sectionName: http ties this route specifically to the http listener of that gateway.

  • hostnames — this route only applies to requests with the Host: whoami.demo-cke.io header

  • rules — one rule: match requests where the path starts with /v1, forward to the whoami service on port 80

Apply it:

kubectl apply -f manifests/routes.yaml

After a moment, check the HTTPRoute status:

kubectl get httproute my-route -n demo -o yaml

The status should show:

status:
  parents:
  - conditions:
    - lastTransitionTime: "..."
      message: Route accepted
      reason: Accepted
      status: "True"
      type: Accepted
    - lastTransitionTime: "..."
      message: All references resolved
      reason: ResolvedRefs
      status: "True"
      type: ResolvedRefs
    parentRef:
      name: demo-otoroshi-gateway
      sectionName: http

Both Accepted: True and ResolvedRefs: True — all the backend references are valid and the route has been accepted by the gateway.

Step 11: DNS Configuration

We need whoami.demo-cke.io to resolve to our load balancer's external IP. In a real setup, you'd add a DNS A record (or a wildcard *.demo-cke.io record) pointing to that IP.

Get your load balancer IP:

kubectl -n otoroshi get service otoroshi-lb-external
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP        PORT(S)
otoroshi-lb-external   LoadBalancer   10.x.x.x        <LOAD_BALANCER_IP>  80:..., 443:...

For this demo, rather than setting up actual DNS, we can use curl's --resolve flag to manually map the hostname to the IP — simulating what DNS would do.
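If you want to script this, you can pull the external IP straight out of the Service status with a JSONPath query. A small sketch — some cloud providers populate a hostname field instead of ip, in which case adjust the path accordingly:

```shell
# Extract the load balancer's external IP from the Service status.
# Some providers expose .hostname instead of .ip.
LB_IP=$(kubectl -n otoroshi get service otoroshi-lb-external \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "Load balancer IP: $LB_IP"
```

You can then reuse $LB_IP in the curl --resolve command in the next step instead of pasting the IP by hand.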

Step 12: The Moment of Truth

curl -v --resolve whoami.demo-cke.io:80:<LOAD_BALANCER_IP> http://whoami.demo-cke.io/v1

If everything is wired up correctly, you'll see something like:

*   Trying <LOAD_BALANCER_IP>:80...
* Connected to whoami.demo-cke.io (<LOAD_BALANCER_IP>) port 80
> GET /v1 HTTP/1.1
> Host: whoami.demo-cke.io
> User-Agent: curl/8.x.x
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: text/plain; charset=utf-8
<
Hostname: whoami-xxxxxxxxx-xxxxx
IP: 127.0.0.1
IP: ::1
IP: 10.x.x.x
RemoteAddr: 10.x.x.x:xxxx
GET /v1 HTTP/1.1
Host: whoami.demo-cke.io
User-Agent: curl/8.x.x
Accept: */*
X-Forwarded-For: x.x.x.x
X-Forwarded-Host: whoami.demo-cke.io
X-Forwarded-Proto: http
X-Real-Ip: x.x.x.x

A clean HTTP 200. Traffic flowed from curl → Clever Cloud Load Balancer → Otoroshi (routing based on the HTTPRoute we defined) → the whoami pod in the demo namespace.

Notice the X-Forwarded-For, X-Forwarded-Host, and X-Real-Ip headers that Otoroshi injected. This header handling is transparent to the backend service but fully managed at the gateway layer.


Troubleshooting Tips

If something doesn't work, here's my debugging checklist:

Check Otoroshi's logs:

kubectl -n otoroshi logs -f deploy/otoroshi --tail=200

Look for lines mentioning KubernetesGatewayApiControllerJob, GatewayClass, or HTTPRoute. Any errors in reconciliation will show up here.

Check resource statuses:

kubectl get gatewayclass otoroshi -o yaml
kubectl get gateway demo-otoroshi-gateway -n demo -o yaml
kubectl get httproute my-route -n demo -o yaml

The status.conditions fields tell you exactly what's wrong. Common issues:

  • GatewayClass: Accepted: False → controllerName mismatch

  • Gateway Listener: Programmed: False → The port in the Gateway spec doesn't match gatewayApiHttpListenerPort in Otoroshi's config

  • HTTPRoute: Accepted: False → The parentRef is wrong or the namespace isn't allowed by the Gateway's allowedRoutes

Verify the generated routes in Otoroshi:

curl -s http://<otoroshi-service-ip>:8080/api/routes \
  -u admin-api-apikey-id:admin-api-apikey-secret | \
  jq '.[] | select(.metadata["otoroshi-provider"] == "kubernetes-gateway-api") | {id, name}'

If the route was generated, it'll show up here. If the list is empty, the reconciliation hasn't happened or encountered an error.

Check that the backend pod is running:

kubectl get pods -n demo
kubectl get service whoami -n demo

If the pod isn't ready or the service selector doesn't match the pods, you'll get 503s.
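A quick way to confirm the selector actually matches is to look at the Service's Endpoints — if the ENDPOINTS column is empty, no ready pod matches the selector:

```shell
# An empty ENDPOINTS column means the Service selects no ready pods,
# which is exactly the situation that produces 503s at the gateway.
kubectl get endpoints whoami -n demo
```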


Going Further: What Else Can You Do?

What I've shown here is the basics — routing HTTP traffic from a hostname/path to a backend service. But the Gateway API support in Otoroshi goes quite a bit further.

Traffic Splitting for Canary Deployments

The weight field in backendRefs lets you split traffic between multiple versions of a service:

rules:
- backendRefs:
  - name: my-service-v1
    port: 80
    weight: 90
  - name: my-service-v2
    port: 80
    weight: 10

This sends 90% of traffic to v1 and 10% to v2 — a classic canary deployment pattern, all configured declaratively.
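One way to sanity-check the split is to fire a batch of requests and count which pods answered. A rough sketch — it assumes the route's hostname is my-service.example.com, that the load balancer IP is in $LB_IP, and that the backends echo a Hostname: line the way whoami does:

```shell
# Send 100 requests and tally responses per backend pod.
# With weights 90/10 you should see roughly a 9:1 ratio of
# v1 pods to v2 pods (it's probabilistic, not exact).
for i in $(seq 1 100); do
  curl -s --resolve my-service.example.com:80:"$LB_IP" \
    http://my-service.example.com/ | awk '/^Hostname:/ { print $2 }'
done | sort | uniq -c
```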

Header Manipulation

Built-in filters let you add, modify, or remove request and response headers:

rules:
- filters:
  - type: RequestHeaderModifier
    requestHeaderModifier:
      add:
      - name: X-Request-Source
        value: gateway
      remove:
      - X-Internal-Secret
  backendRefs:
  - name: my-service
    port: 80

Redirects and URL Rewrites

rules:
- matches:
  - path:
      type: PathPrefix
      value: /old-api
  filters:
  - type: RequestRedirect
    requestRedirect:
      scheme: https
      statusCode: 301
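For rewrites rather than redirects, the URLRewrite filter changes the path before the request is forwarded to the backend. A minimal sketch reusing the /old-api prefix from above (my-service and /v2 are illustrative names):

```yaml
rules:
- matches:
  - path:
      type: PathPrefix
      value: /old-api
  filters:
  - type: URLRewrite
    urlRewrite:
      path:
        type: ReplacePrefixMatch
        replacePrefixMatch: /v2
  backendRefs:
  - name: my-service
    port: 80
```

With this in place, a request to /old-api/users reaches the backend as /v2/users, while the client-visible URL stays the same.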

Extending with Otoroshi Plugins

This is where things get really powerful. You can attach Otoroshi's plugin system to routes via annotations:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-secure-route
  namespace: demo
  annotations:
    proxy.otoroshi.io/route-plugins: |
      [
        {
          "plugin": "cp:otoroshi.next.plugins.ApikeyCalls",
          "enabled": true,
          "config": {}
        }
      ]
spec:
  # ... rest of the HTTPRoute

This annotation adds API key authentication enforcement to the route — using Otoroshi's full plugin ecosystem, accessed through a standard Kubernetes annotation. You can add rate limiting, JWT validation, WAF rules, and any of Otoroshi's 200+ plugins this way.

Cross-Namespace Routing

If your HTTPRoute is in a different namespace than the backend Service, you need a ReferenceGrant in the target namespace:

apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-from-frontend
  namespace: backend
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: frontend
  to:
  - group: ""
    kind: Service
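With that grant in place, an HTTPRoute in frontend can point at a Service in backend by adding a namespace to the backendRef. A sketch, using hypothetical route and Service names:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: frontend-route
  namespace: frontend
spec:
  parentRefs:
  - name: demo-otoroshi-gateway
    namespace: otoroshi
  rules:
  - backendRefs:
    - name: api            # a Service living in the backend namespace
      namespace: backend   # permitted by the ReferenceGrant
      port: 80
```

Without the grant, the route's ResolvedRefs condition goes False with reason RefNotPermitted — another case where the status.conditions fields tell you exactly what's wrong.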

HTTPS/TLS Termination

To terminate TLS at Otoroshi, reference a Kubernetes TLS secret in your Gateway listener:

listeners:
- name: https
  protocol: HTTPS
  port: 443
  hostname: "*.example.com"
  tls:
    mode: Terminate
    certificateRefs:
    - name: my-tls-secret
  allowedRoutes:
    namespaces:
      from: All

Otoroshi will import the certificate from the Secret and handle TLS termination for all routes attached to this listener.
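The referenced Secret is a standard kubernetes.io/tls Secret, and by default it must live in the same namespace as the Gateway (assumed here to be otoroshi). You can create it with kubectl create secret tls my-tls-secret --cert=tls.crt --key=tls.key -n otoroshi, which produces something like:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-tls-secret
  namespace: otoroshi   # the Gateway's namespace, unless a
                        # ReferenceGrant allows a cross-namespace ref
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate chain>
  tls.key: <base64-encoded private key>
```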


Wrapping Up

This is a setup that genuinely works, and works well.

CKE gets you a production-ready Kubernetes cluster in minutes. The clever k8s create command, the NodeGroup abstraction for worker pools, the automatic load balancer provisioning — it's fast, clean, and there's very little friction between "I want a cluster" and "I have a cluster". If you're already using Clever Cloud for your other workloads, it fits right into your existing workflow without any surprises.

Otoroshi's Gateway API implementation delivers exactly what you'd expect from it: fast reconciliation (changes propagate in seconds with gatewayApiWatch: true), accurate status updates on GatewayClass, Gateway, and HTTPRoute resources that make debugging straightforward, and full access to Otoroshi's plugin ecosystem through annotations. The entire routing, security, and traffic management surface of Otoroshi — 200+ plugins — is available from standard Kubernetes manifests. That's the implementation working as designed.

The whole stack comes together cleanly: a few kubectl commands, a handful of YAML manifests, and you have a fully functional API gateway integrated with the Kubernetes Gateway API standard. If you're already familiar with Otoroshi — which you might well be if you use Clever Cloud, since it's available as an add-on — the learning curve is essentially zero. You're just expressing what Otoroshi already does, in the standard Kubernetes way.

The era of annotation-heavy Ingress resources and controller-specific lock-in is ending. The Gateway API is the way forward, and with Otoroshi on CKE, the path there is straightforward.

If you have questions, run into issues, or want to share what you build with this stack, reach out to us at Cloud APIM. We're always happy to talk Kubernetes networking, API gateways, and all things Otoroshi.

Resources