February 21, 2026 • 12 min read
From Ingress NGINX to Gateway API: Migrate the Edge Before It Becomes a Liability
Why ingress-nginx retirement changes edge risk, and a practical migration pattern to Gateway API.
It started the way these things usually start: with a Slack message that looked harmless.
“Hey — do we still use ingress-nginx anywhere?”
That question is never just a question. It’s someone sensing risk before it has a ticket number.
By the time I answered, the real problem had already arrived: Ingress NGINX is in best‑effort maintenance until March 2026, and after that there are no releases, no bug fixes, and no security updates. Existing clusters won’t “break” on that date — which is exactly why it’s dangerous. You can keep running it, and it will keep routing traffic… right up until the day it routes an exploit. (kubernetes.io)
This post is my engineer’s log of getting off ingress-nginx and onto Gateway API: what surprised us, what we broke, what we learned, and what I’d do differently if I had to do it again with the clock ticking.
Why this matters (even if nothing “stops working”)
The Kubernetes project was unusually blunt about this retirement: after March 2026, there are no further releases and no updates to resolve security vulnerabilities. Repos go read‑only, artifacts remain available, and deployments keep functioning. (kubernetes.io)
The statement from the Kubernetes Steering Committee and Security Response Committee went further: staying on ingress-nginx after retirement leaves you and your users vulnerable to attack, and there is no drop‑in replacement. (kubernetes.io)
That combination is what makes this different from a typical “upgrade when you can” situation:
- It’s an edge component (internet-facing, high blast radius).
- It’s unmaintained after a specific date (risk increases every day after).
- The most common mitigation path isn’t “upgrade ingress-nginx” — it’s migrate away. (kubernetes.io)
And if you’re in a regulated environment (or you just have a security team that reads advisories), “we’re running an unpatched edge proxy on purpose” is a hard position to defend.
The reality we were migrating from
Most ingress-nginx estates look similar:
- A pile of `Ingress` objects that started neat and became… expressive.
- A handful of “blessed” annotations everyone uses.
- A long tail of “one service needed a weird thing once” annotations.
- A secret stash of snippets, because sometimes you need the escape hatch.
That last one is important.
One of the reasons ingress-nginx became so popular is the same reason it became hard to sustain: flexibility. Kubernetes called out that features like arbitrary config injection via “snippets” annotations were once “helpful options” and are now considered serious security flaws — “yesterday’s flexibility has become today’s insurmountable technical debt.” (kubernetes.io)
I’ve seen snippets used for:
- header rewriting that should have been in the app,
- one-off IP allowlists,
- CORS hacks,
- temporary redirects that lived for two years,
- and, once, a very creative attempt at rate limiting.
Snippets work… until they don’t. They make the edge programmable in a way that’s hard to govern.
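If you haven't seen one in the wild, a typical snippet annotation looks something like this (a hypothetical example; the service and directives are illustrative). Note that it injects raw NGINX configuration straight from an app-owned manifest into the shared edge proxy:

```yaml
# Hypothetical example: arbitrary NGINX directives injected via annotation.
# Anyone who can edit this Ingress can reprogram the shared edge proxy,
# which is exactly the governance problem snippets create.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: legacy-app
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      add_header X-Frame-Options DENY;
      if ($http_user_agent ~* "badbot") { return 403; }
spec:
  rules:
    - host: legacy.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: legacy-svc
                port:
                  number: 80
```

Nothing about that manifest tells a reviewer which directives are safe, which is why audits of snippet usage are so painful.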
When the retirement notice landed, our first instinct was predictable:
“Okay. What’s the new ingress-nginx?”
That instinct is the trap. The point isn’t to find the next controller that can impersonate your current mess. The point is to move to an API model that makes the edge governable.
That’s what Gateway API is trying to be.
The first dead end: treating this like a YAML translation problem
Our first pass was naive: “Ingress in, Gateway out.”
We opened a few Ingresses and started mapping:
- `spec.rules[].host` → `HTTPRoute.spec.hostnames`
- `path`/`pathType` → `HTTPRoute.spec.rules[].matches`
- `serviceName`/`servicePort` → `backendRefs`
And for the simple stuff, it worked.
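For those simple cases, the translation really is almost mechanical. A sketch with illustrative names (the `edge` Gateway and `platform-gateway` namespace are placeholders for whatever your platform team owns):

```yaml
# Before: a plain host/path Ingress.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shop
spec:
  rules:
    - host: shop.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: shop-svc
                port:
                  number: 80
---
# After: the equivalent HTTPRoute, attached to a platform-owned Gateway.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: shop
spec:
  parentRefs:
    - name: edge
      namespace: platform-gateway
  hostnames:
    - "shop.example.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: shop-svc
          port: 80
```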
Then we hit the real world:
- `nginx.ingress.kubernetes.io/rewrite-target`
- `nginx.ingress.kubernetes.io/proxy-body-size`
- `nginx.ingress.kubernetes.io/auth-*`
- `nginx.ingress.kubernetes.io/server-snippet`
- timeouts, buffering, websocket behavior, custom error pages…
At that point you realize: Ingress wasn’t your API. Annotations were.
And annotations are controller-specific. That’s the whole reason Gateway API exists: to stop encoding critical behavior in vendor extensions stapled to a too-small spec.
So we changed the question from:
“How do we convert Ingress manifests?”
to:
“What edge behaviors do we actually need — and which ones were accidental?”
That shift was the turning point.
The key insight: separate “platform edge” from “app routing”
Gateway API’s design forces a separation of concerns that Ingress never really achieved:
- GatewayClass: “what implementation runs this?”
- Gateway: “where is the edge and what listeners exist?”
- HTTPRoute: “how does traffic map to backends?”
- ReferenceGrant and policy attachments: “what cross-namespace wiring is allowed?”
In other words: platform owns the edge, teams own their routes, and security has something concrete to enforce.
This wasn’t just architecture purity. It made migration survivable.
Because we could build a new edge alongside the old one.
The official Gateway API migration guide explicitly recommends running a Gateway controller alongside ingress-nginx, each with a different external IP, so you can validate in isolation. (gateway-api.sigs.k8s.io)
That line saved us. It turned a scary cutover into an incremental migration.
The migration approach that actually worked
We ended up with three parallel tracks:
- Stand up a Gateway API controller (implementation choice depends on your environment).
- Create a platform-owned Gateway that represents “the front door”.
- Migrate apps one by one by adding HTTPRoutes and moving traffic gradually.
Step 0: inventory what you’re really using
Before you touch controllers, do an inventory pass:
- List all Ingresses and extract annotations.
- Categorize annotations into:
- “core routing” (hosts, paths),
- “security” (auth, mTLS, allowlists),
- “traffic policy” (timeouts, retries, rate limits),
- “escape hatches” (snippets, custom templates).
This is where you’ll find the migration tax. Also where you’ll find things you should probably delete.
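To make the inventory concrete, here's a sketch of the kind of script we used: feed it the JSON from `kubectl get ingress -A -o json` and bucket every annotation. The category prefixes below are assumptions drawn from our own estate, not an official taxonomy — adjust them to whatever your clusters actually use.

```python
# Sketch: categorize ingress-nginx annotations from an IngressList JSON
# dump. The prefix lists are illustrative, not exhaustive.
SECURITY = ("auth-", "whitelist-source-range", "enable-cors")
TRAFFIC = ("proxy-body-size", "proxy-read-timeout", "proxy-send-timeout",
           "limit-rps", "proxy-buffering")
ESCAPE = ("server-snippet", "configuration-snippet")

def categorize(annotation: str) -> str:
    """Bucket a single annotation key by its nginx.ingress.kubernetes.io suffix."""
    prefix = "nginx.ingress.kubernetes.io/"
    if not annotation.startswith(prefix):
        return "other"
    key = annotation[len(prefix):]
    if any(key.startswith(p) for p in ESCAPE):
        return "escape-hatch"
    if any(key.startswith(p) for p in SECURITY):
        return "security"
    if any(key.startswith(p) for p in TRAFFIC):
        return "traffic-policy"
    return "core-or-misc"

def inventory(ingress_list: dict) -> dict:
    """Count annotation categories across an IngressList document."""
    counts: dict = {}
    for item in ingress_list.get("items", []):
        for key in item.get("metadata", {}).get("annotations", {}):
            cat = categorize(key)
            counts[cat] = counts.get(cat, 0) + 1
    return counts

# Demo on a tiny inline sample instead of a live cluster:
sample = {"items": [{"metadata": {"annotations": {
    "nginx.ingress.kubernetes.io/rewrite-target": "/",
    "nginx.ingress.kubernetes.io/server-snippet": "return 403;",
    "nginx.ingress.kubernetes.io/auth-url": "https://sso.example.com",
}}}]}
print(inventory(sample))
```

Against a real cluster, replace `sample` with `json.load(sys.stdin)` and pipe the kubectl output in. Start reading from the `escape-hatch` bucket; that's where the migration tax lives.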
The Kubernetes blog post even gives a blunt way to confirm you’re using ingress-nginx:
kubectl get pods --all-namespaces \
--selector app.kubernetes.io/name=ingress-nginx
If that returns pods, you’re in this story. (kubernetes.io)
Implementation walkthrough: a concrete “hello, production” Gateway setup
Below is a clean baseline. It won’t cover every policy you’ve bolted onto Ingress over the years — that’s the point.
1) GatewayClass (controller-owned)
Your controller will usually install a GatewayClass or tell you what to create. A simplified example:
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
name: prod-gateway
spec:
controllerName: example.net/gateway-controller
The key line is `controllerName`: it binds the class to a specific implementation.
2) Gateway (platform-owned)
Think of this as the load balancer / edge listener definition.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: edge
namespace: platform-gateway
spec:
gatewayClassName: prod-gateway
listeners:
- name: https
protocol: HTTPS
port: 443
hostname: "*.example.com"
tls:
mode: Terminate
certificateRefs:
- kind: Secret
name: wildcard-example-com-tls
A few things worth calling out:
- `listeners` are explicit, typed, and structured (no annotation soup).
- TLS is first-class, not a controller convention.
- The Gateway lives in a platform namespace, which makes “who owns the front door?” a real answer.
3) HTTPRoute (app-owned)
Now teams can attach routing rules to the gateway.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: orders
namespace: orders
spec:
parentRefs:
- name: edge
namespace: platform-gateway
hostnames:
- "orders.example.com"
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- name: orders-svc
port: 80
This is the “Ingress replacement” most people imagine — but the ergonomics are different:
- `parentRefs` makes attachment explicit.
- Cross-namespace attachment can be controlled.
- You can have multiple routes attach to the same gateway without a single mega-manifest.
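There's also an ergonomic win here that ingress-nginx only offered through canary annotations: weighted backends are part of the core HTTPRoute spec. A sketch (the `-next` service name is illustrative) of shifting 10% of traffic to a new backend version:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: orders-canary
  namespace: orders
spec:
  parentRefs:
    - name: edge
      namespace: platform-gateway
  hostnames:
    - "orders.example.com"
  rules:
    - backendRefs:
        # Traffic splits proportionally to the weights below.
        - name: orders-svc
          port: 80
          weight: 90
        - name: orders-svc-next
          port: 80
          weight: 10
```

Because this is structured spec rather than an annotation, it's reviewable and portable across conforming implementations.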
4) Cross-namespace access control
In many environments, you don’t want any namespace to attach to the edge gateway by default. This is where Gateway API’s policy model shows its teeth.
Depending on your implementation and configuration, you can enforce attachment controls so only approved namespaces/routes bind to the gateway. This is the opposite of the “anyone who can create an Ingress can publish a host” era.
(Exactly how you enforce this varies by controller, but the API model supports it cleanly — and it’s one of the reasons the ecosystem is pushing people toward Gateway API as the successor to Ingress. (gateway-api.sigs.k8s.io))
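The portable piece of this is `allowedRoutes` on each Gateway listener: the Gateway owner decides which namespaces may attach routes, for example via a label selector (the label below is illustrative):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: edge
  namespace: platform-gateway
spec:
  gatewayClassName: prod-gateway
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      hostname: "*.example.com"
      tls:
        mode: Terminate
        certificateRefs:
          - kind: Secret
            name: wildcard-example-com-tls
      allowedRoutes:
        namespaces:
          # Only namespaces labeled edge-access=approved may attach routes.
          from: Selector
          selector:
            matchLabels:
              edge-access: approved
```

Teams now have to be explicitly onboarded to publish a hostname, rather than publishing one as a side effect of being able to create an Ingress.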
Validation: how we proved we weren’t lying to ourselves
If you migrate the edge like a config exercise, you’ll learn the truth in production. Don’t.
We validated in layers:
1) Shadow traffic (when possible)
For read-heavy endpoints, we used mirrored requests at the client or gateway layer when supported by our chosen implementation. When not possible, we relied on synthetic load.
2) Synthetic checks with “old vs new” parity
We stood up the Gateway on a separate external IP (recommended in the migration guide). (gateway-api.sigs.k8s.io)
Then we ran the same smoke and contract tests against:
- `ingress-old.example.com` → the ingress-nginx IP
- `ingress-new.example.com` → the gateway IP
Things that broke first were the things we expected least:
- redirect chains and trailing slashes,
- websocket upgrade headers,
- body size limits,
- oddities around path matching for legacy clients.
3) Observability signals that actually matter at the edge
We watched:
- 4xx/5xx rate by host and route,
- upstream latency percentiles,
- retry rates (if enabled),
- TLS handshake errors,
- sudden increases in request size rejections.
The important part: we compared shape, not single numbers. I don’t trust a migration that “looks fine” because a dashboard is green.
Tradeoffs and alternatives we seriously considered
When Kubernetes says “migrate to Gateway API or another Ingress controller,” that’s not a single path. (kubernetes.io)
We evaluated three realistic options:
Option A: Move to another Ingress controller (stay on Ingress API)
Pros:
- Fastest “keep the lights on” path for basic routing.
- Least app-team retraining.
Cons:
- You’re still betting on a frozen API surface.
- You’ll likely repeat the annotation trap (different annotations, same disease).
- You don’t get the governance model that Gateway API is built around.
This is viable as a temporary bridge, especially if you have a hard deadline and a small platform team. But don’t confuse “possible” with “strategic.”
Option B: Adopt Gateway API (recommended path)
Pros:
- Clear separation of platform vs app concerns.
- First-class routing/policy structures instead of annotations.
- Designed as the successor to Ingress; core APIs are GA (v1). (kubernetes.io)
Cons:
- Not a drop-in replacement. The Kubernetes steering statement is explicit about that. (kubernetes.io)
- Controller choice matters a lot (feature parity varies).
- You will need to redesign some behaviors you previously expressed via annotations/snippets.
This is what we chose, because it reduced long-term operational risk instead of just swapping components.
Option C: Vendor/load balancer specific integrations
Cloud provider gateways, service meshes, API gateway products — sometimes this is the right answer, especially if you already pay for it and it matches your compliance posture.
But I’ve also seen teams trade one kind of lock-in for another and call it “modernization.” Choose with eyes open.
Production hardening: edge cases that will bite you
A few things I’d put on the “do not discover this at 2am” list:
1) TLS and certificate sprawl
Ingress often hid TLS complexity behind conventions. Gateway makes it explicit — which is better, but means you need a sane strategy:
- wildcard vs per-host certs,
- where cert secrets live,
- how rotation works,
- how teams request new hostnames.
If you don’t have automation here, you’ll invent it during the migration. That’s fine — just admit it’s part of the work.
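If you use cert-manager, recent versions can watch Gateway resources directly and provision listener certificates, which removes one whole class of manual rotation work. A sketch, assuming cert-manager's Gateway API support is enabled in your version (verify this, and the issuer name is illustrative):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: edge
  namespace: platform-gateway
  annotations:
    # Tells cert-manager to issue certs for this Gateway's listeners.
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  gatewayClassName: prod-gateway
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      hostname: "*.example.com"
      tls:
        mode: Terminate
        certificateRefs:
          - kind: Secret
            # cert-manager creates and renews this Secret.
            name: wildcard-example-com-tls
```

Whatever tooling you pick, decide the cert strategy before the migration wave starts, not during it.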
2) Request/response limits
Many ingress-nginx setups rely on annotations for body size and buffering. Those limits don’t disappear when you migrate — your clients will find them.
Decide intentionally:
- what limits should exist at the edge,
- what belongs in the app,
- what should be standardized vs per-route.
3) The “we need a snippet” reflex
This is the cultural migration, not the technical one.
Ingress NGINX’s own retirement post calls out snippets as an example of flexibility turning into a security flaw. (kubernetes.io)
If your org’s muscle memory is “just add a snippet,” moving to Gateway is your opportunity to replace that with a governed extension model — or at least a review process that treats edge code like production code (because it is).
4) Multi-tenancy and “who can publish routes”
If you run shared clusters, Gateway API’s separation of duties is one of the biggest wins. But you need to actually use it:
- limit who can create Gateways,
- control which namespaces can attach routes,
- review cross-namespace references.
If you skip this, you’ve basically recreated Ingress with prettier YAML.
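The "review cross-namespace references" item has a concrete API behind it: a route in one namespace can only point at a Service in another namespace if the target namespace opts in with a ReferenceGrant. A sketch with illustrative names:

```yaml
# Lives in the namespace being referenced; it grants, it doesn't request.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-orders-routes
  namespace: shared-backends
spec:
  from:
    # HTTPRoutes in the orders namespace...
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      namespace: orders
  to:
    # ...may reference Services here.
    - group: ""
      kind: Service
```

Because the grant lives with the resource owner, a quick audit of ReferenceGrants tells you exactly what cross-namespace wiring exists.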
Lessons learned (the durable kind)
1) Migrations fail when you pretend your current state is intentional
The fastest way to get stuck is to treat every existing annotation as a “requirement.” Some were hacks. Some were temporary. Some were never used.
Gateway API didn’t just move traffic — it forced us to decide what we meant.
2) Don’t optimize for YAML parity; optimize for blast radius
We stopped aiming for “same behavior everywhere” and aimed for “safe behavior first.”
A smaller feature set with predictable operations beats a perfect emulation of the old world — especially at the edge.
3) Run both systems long enough to learn the weirdness
The Gateway API guide’s advice to run controllers side-by-side with separate IPs is gold. (gateway-api.sigs.k8s.io)
It gives you time to discover the unglamorous differences: redirects, path matching, websocket quirks, client timeouts.
4) The edge is a platform product, whether you admit it or not
Ingress NGINX often lived in an uncomfortable middle ground: “platform runs it, apps configure it, nobody owns the outcome.”
Gateway API makes ownership boundaries clearer — but only if you implement them that way.
5) “It still works” is not a security strategy
Kubernetes couldn’t have been clearer: after March 2026, there are no security updates for ingress-nginx. (kubernetes.io)
Continuing to run it is a conscious choice to accept accumulating risk at the boundary of your system.
Closing reflection: what I’d do differently next time
If I had to redo this migration, I’d spend less time on translation tooling and more time on:
- building a clean, opinionated “golden path” Gateway and HTTPRoute pattern,
- documenting supported behaviors (and explicitly rejecting the rest),
- automating TLS and DNS earlier,
- and, honestly, setting expectations with app teams that some “edge tricks” are going away.
Because the real work isn’t moving manifests. It’s moving a habit: from “the edge is where we improvise” to “the edge is where we standardize.”
If you’re in the middle of this right now, I’d love to hear the messy parts:
- Which ingress-nginx annotations turned out to be hardest to replace?
- Did you go Gateway API, or hop to another Ingress controller first?
- What was your “we thought it was simple until…” moment?
Final takeaways
- March 2026 isn’t when traffic stops. It’s when risk starts compounding. (kubernetes.io)
- Gateway API is not a drop-in replacement — plan for redesign, not translation. (kubernetes.io)
- Run Gateway alongside ingress-nginx on separate IPs to validate safely. (gateway-api.sigs.k8s.io)
- Use the migration to delete accidental complexity — especially snippet-driven behavior. (kubernetes.io)