You're sleeping on container networking
I drafted this article based on observations (and speculations), only to be pleasantly surprised when I saw Isovalent had just released their 2025 State of Kubernetes Networking report. Their findings map almost 1-to-1 onto my observations, so I’ve added some of their quantitative and qualitative takeaways to validate the points below.
I started looking at container networking because:
No other analyst did
I think you’re sitting on a performance, security, and observability goldmine, but either have little idea it’s there or are put off by the technicalities.
Container networking sits in the awkward place between network engineering and cloud engineering. Neither discipline is adequately trained to deal with it, so you need a special breed of engineer who understands networks, clouds, and Kubernetes alike.
This special breed turns out to be Platform/Infrastructure engineers, who account for 40% of those involved in Kubernetes networking, followed by SREs and DevOps folks at 25%.
There are also a lot of components that go into container networks, most (if not all) of them open source. This means a long learning curve and very little enterprise-grade support for deployment and implementation. Between CNIs, load balancers, ingress gateways, and service meshes, organizations run an average of 6.28 networking tools.
Also noteworthy: Ingress controllers are being superseded by the Gateway API, with 45% of survey respondents planning to add it to their environment.
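For those who haven’t made the jump yet, the Gateway API replaces annotation-heavy Ingress objects with typed, role-oriented resources. Here’s a minimal sketch of the shape of it; the gateway class and all names are illustrative and depend on your implementation:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: public-web          # illustrative name
  namespace: infra
spec:
  gatewayClassName: envoy   # supplied by your Gateway API implementation
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: All         # app teams attach routes from their own namespaces
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: checkout            # illustrative route owned by the app team
  namespace: shop
spec:
  parentRefs:
    - name: public-web
      namespace: infra
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /checkout
      backendRefs:
        - name: checkout-svc
          port: 8080
```

The split is the point: the platform team owns the Gateway, application teams own their HTTPRoutes, a division of labor that Ingress annotation sprawl never supported cleanly.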
All in all, I did not find any comprehensive picture of what this container networking business means, so I set out to draw my own.
Container networking is not really about networking
When done well, networking transcends its core function and becomes a vehicle for advanced capabilities such as the ones below. The best part is that all of them are already available with your existing container networking infrastructure.
Service performance - lowering virtualization overheads, minimizing kernel/user-space context switches, and optimizing latency through intelligent routing.
Security - microsegmentation, IDS/IPS, and L3 through L7 traffic filtering (a sketch follows this list).
Observability - distributed tracing across service calls, real-time metrics collection, traffic flow visualization, and centralized logging.
Authentication and authorization - mutual TLS for service-to-service authentication, SPIFFE/SPIRE implementation, zero trust enforcement.
Reliability - traffic distribution that keeps services available during outages and absorbs spikes in request volume.
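To make the security bullet concrete, here is a minimal sketch of namespace-level microsegmentation using the vanilla Kubernetes NetworkPolicy API; the namespace and names are illustrative:

```yaml
# Deny all ingress to every pod in the namespace by default...
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: payments        # illustrative namespace
spec:
  podSelector: {}            # empty selector = every pod in the namespace
  policyTypes:
    - Ingress
---
# ...then explicitly allow traffic between pods of the same namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: payments
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}    # any pod in payments; external traffic stays blocked
```

Note that the policy objects are only declarations; it’s the CNI underneath that actually enforces them.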
I found this great observation in the report, which captures the idea that once the networking pipes are in place, the natural next step is security and identity:
“Teams typically start with no [network] policies, then implement basic L3/L4 segmentation, often at the namespace level. More mature environments evolve toward pod- and identity-based policies, or introduce L7 rules to secure service-to-service communication.”
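At the mature end of that progression, rules become identity- and L7-aware. A hedged sketch, using Cilium’s CRD as one example (requires the Cilium CNI; labels, ports, and paths are illustrative):

```yaml
# Only pods labeled app=frontend may call the orders service,
# and only with GET requests against /orders paths.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: orders-l7
  namespace: shop            # illustrative namespace
spec:
  endpointSelector:
    matchLabels:
      app: orders            # identity of the protected workload
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend    # identity of the allowed caller
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:
              - method: GET
                path: "/orders.*"   # Cilium matches paths as regexes
```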
The two containerized horses
The most important components of container networking are the CNIs (Container Network Interface plugins), the mandatory Kubernetes plugins that configure network resources, provision IP addresses, and maintain connectivity with hosts. In many ways, CNIs have been synonymous with container networking.
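To ground what a CNI actually does: when the kubelet creates a pod, it invokes the plugin chain declared in a conflist under /etc/cni/net.d on the node. A trimmed, illustrative Calico-style example (CNI configs are JSON by spec, so no inline comments; this is a simplified subset of the real file):

```json
{
  "cniVersion": "0.3.1",
  "name": "k8s-pod-network",
  "plugins": [
    {
      "type": "calico",
      "ipam": { "type": "calico-ipam" },
      "policy": { "type": "k8s" },
      "kubernetes": { "kubeconfig": "/etc/cni/net.d/calico-kubeconfig" }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```

The first plugin wires up the pod’s interface and assigns it an IP via the calico-ipam module; the chained portmap plugin adds hostPort support on top.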
As such, container networking has mainly been a two-horse race between Calico and Cilium, the most widely deployed CNIs. The main difference between them is that Cilium is fully eBPF-based, while Calico supports an eBPF data plane alongside its standard Linux (iptables) and Windows ones. But looking at a feature comparison, the two have been so close that I wouldn’t be able to recommend one over the other.
But after following the space for 3-4 years now, I can finally see some differentiation between them.
Isovalent, now part of Cisco
The creators of Cilium, Tetragon, and Hubble, as well as a driving force behind the eBPF ecosystem. Some distinguishing aspects include:
Having been acquired by Cisco in late 2023, Isovalent is seeing its products become part of the wider Cisco portfolio. For example, Isovalent’s runtime security integrates directly with Nexus devices.
The Isovalent Load Balancer, designed to distribute application traffic across heterogeneous environments (data center/on-prem, cloud-native, or self-hosted/managed Kubernetes).
Tigera
The creator of Calico and its sub-projects Felix and Whisker, Tigera has long offered a non-eBPF option for those who want to use Linux iptables. Lately, some of Tigera’s developments include:
A binary deployment option that lets you use Calico to enforce the same network policies on non-containerized entities, such as bare-metal and virtual machines, giving you consistent security policies across environments (see the sketch after this list).
Advancements at L7, including native implementations of the Envoy proxy and the Gateway API.
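On the first point, Calico’s model for non-containerized entities is worth a sketch: you register the machine as a host endpoint and then write policy against labels, the same way you would for pods. A hedged example against Calico’s v3 API (names, labels, interface, and IPs are all illustrative):

```yaml
# Register a bare-metal host with Calico so the same policy
# engine covers it (requires the Calico agent on that host).
apiVersion: projectcalico.org/v3
kind: HostEndpoint
metadata:
  name: db-host-01.eth0      # illustrative
  labels:
    role: database
spec:
  node: db-host-01
  interfaceName: eth0
  expectedIPs:
    - 10.0.0.12
---
# One policy, matched by label, applies to pods and hosts alike.
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: db-ingress
spec:
  selector: role == 'database'
  types:
    - Ingress
  ingress:
    - action: Allow
      protocol: TCP
      source:
        selector: role == 'app'
      destination:
        ports:
          - 5432             # illustrative database port
```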
The up-and-coming container ponies
While researching the space to produce the GigaOm Radar for Container Networking, I looked at other solutions that support L3/4 networking for Kubernetes (a pure L7 play would qualify as a service mesh instead), and found really robust offerings from the folks below.
Greymatter
Greymatter came at this from the opposite direction, starting off as an L7 service mesh, gradually expanding its capabilities into L3/4, and recently introducing an agentic intelligence layer for the autonomous management of networking components.
F5
Besides the widely deployed NGINX, F5 developed its Distributed Cloud Services following the 2021 acquisition of Volterra, providing a full-stack networking portfolio from Layer 3 through Layer 7. F5 Distributed Cloud Services features a native CNI implementation to host containers or VMs as pods.
Tetrate
Tetrate Service Bridge is an Istio-based service mesh and control plane that sits at the application edge, at cluster ingress, and between workloads in Kubernetes and traditional compute clusters. Edge and ingress gateways route and load balance application traffic across clusters and clouds, while the service mesh controls connectivity between services.
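Because TSB builds on Istio, the underlying mesh controls are the stock Istio resources. As a generic illustration (plain Istio, not TSB-specific configuration), enforcing mutual TLS for all service-to-service traffic is a single resource:

```yaml
# Require mTLS for every workload in the mesh; placing the resource
# in the root namespace makes it mesh-wide.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```

Which circles back to the mutual TLS point from the capabilities list above.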


