"Start at the edge, typically with an API gateway, and work inwards"

Interview with Daniel Bryant

Jul 6, 2021

We spoke with Daniel Bryant about how API gateways have evolved, the impact microservices have had on APIs, service meshes, and more. What are the biggest challenges when deploying and working with API gateways, and what helpful tips does Daniel Bryant offer?

JAXenter: How have API gateways evolved over the past two decades, and what were the driving factors? Is there anything fundamentally different with “cloud native” API gateway technology?

Daniel Bryant: There have been three big changes to API gateways and related edge technologies over the past 25 years: the move from hardware to software; the shift of focus from layer 4 of the OSI networking stack to layer 7; and the need to support decentralized management.

I’m old enough to remember manually installing edge appliances, such as “racking and stacking” load balancers in data centers, but even younger readers will have seen the increased adoption of virtualization technology over the past 10 years. Load balancers, WAFs, and other edge components have moved from hardware to software for two main reasons: (1) to save costs, e.g. running software on commodity VMs in the cloud is considerably cheaper than using specialized edge hardware; and (2) to increase both configurability and flexibility, e.g. “cloud native” developers can now program the edge using “infrastructure as code”, as seen with Kubernetes YAML and custom resources.
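As a concrete illustration of programming the edge declaratively, here is a minimal sketch of a standard Kubernetes Ingress resource; the hostname and service name are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orders-ingress
spec:
  rules:
    - host: api.example.com        # hypothetical public hostname
      http:
        paths:
          - path: /orders          # route this path prefix...
            pathType: Prefix
            backend:
              service:
                name: orders       # ...to the (hypothetical) orders Service
                port:
                  number: 80
```

Because this is plain YAML, it can live in version control alongside application code and be applied or rolled back like any other change.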

With regard to the move to a layer 7-aware edge, we are now seeing a lot of API access and routing decisions being made based on a richer set of user and application data than before the “cloud native” era. For example, we can route a user’s request based on HTTP metadata, such as the User-Agent header or a cookie, or we can rate limit a MongoDB connection based on the metadata contained within the request. This has primarily been driven by the need to innovate and experiment more rapidly.
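To make the routing example concrete, here is a minimal sketch of header-based routing, assuming the Ambassador Mapping custom resource (the backend service and User-Agent pattern are illustrative):

```yaml
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: mobile-api
spec:
  prefix: /api/
  # layer 7 decision: match requests whose User-Agent header
  # looks like a mobile client...
  regex_headers:
    user-agent: ".*Mobile.*"
  # ...and send them to a dedicated (hypothetical) backend
  service: mobile-backend
```

Other gateways express the same idea with different resources, but the principle is identical: routing decisions are driven by request metadata rather than just IPs and ports.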

The third change is the move from centralized to decentralized management of the edge. When my Java developer career was in full swing in the early 2010s, we typically had centralized teams that managed the edge and API gateway ecosystem. If we wanted to make a change in these systems, e.g. opening up a new port or registering a new API, we typically had to raise a ticket to get this change actioned. With modern development teams wanting to move increasingly fast, raising tickets to make changes in an API gateway is a potential blocker. Instead, development teams now want access to self-service mechanisms to change edge config or release a new API.

JAXenter: What impact has adopting new architecture styles, such as microservices, had on API gateways?

Daniel Bryant: The biggest impact is that with microservices-based architectures there are typically more services and APIs exposed at the edge. No longer is it a single monolith offering all of the APIs; now potentially every service offers an API. Scaling the management of the edge and API gateways is therefore a big challenge.

For developers to release new services and functionality rapidly, and also be able to understand the state of the distributed system and get the feedback they need, everything needs to be self-service and independently configurable, with all configuration stored in a “single source of truth”.

JAXenter: How does an API gateway relate to a service mesh? Do you need both when adopting Kubernetes?

Daniel Bryant: I get this question a lot. And I understand why — it can be very confusing!

An API gateway handles user ingress traffic. This is often referred to as “north-south” traffic, as historically the network diagrams we drew showed user traffic flowing down the page, north to south. An API gateway is typically deployed at the edge of a system.

A service mesh handles service-to-service traffic, often called “east-west” traffic. This technology typically sits within a cluster or data center, and can also span “multi-cluster”, joining two disparate clusters together at the network level.
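For contrast with the edge examples above, here is a minimal sketch of east-west traffic management, assuming Istio’s VirtualService resource (the service names and traffic weights are hypothetical):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: orders
spec:
  hosts:
    - orders                # internal hostname: service-to-service traffic
  http:
    - route:
        - destination:
            host: orders-v1 # hypothetical stable deployment
          weight: 90
        - destination:
            host: orders-v2 # hypothetical canary deployment
          weight: 10
```

Note that nothing here mentions external hostnames or user ingress; the mesh is shaping traffic between services inside the cluster.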

Although a lot of the underlying technology involved is the same (e.g. you can find the Envoy Proxy in both API gateways and service meshes), the use cases are quite different. In particular, the way engineers interact with and configure the two technologies is quite different. We often say that the “control plane” has different requirements depending on whether proxies are being used as an API gateway or as a service mesh.

You don’t necessarily need both when moving to Kubernetes. My advice is to start at the edge (typically with an API gateway) and work inwards as necessary. Running a service mesh can offer a lot of benefits, but the operational cost can be quite high.

JAXenter: What are the biggest challenges when deploying and working with API gateways and Kubernetes, and what tips can you offer engineers to overcome these challenges?

Daniel Bryant: In my experience (and building on some of the ideas I mentioned above), the two biggest challenges when working with API gateways in Kubernetes are scaling edge management and supporting diverse edge requirements. Adopting a microservices-style architecture often means that the number of services proliferates (for better or worse!), which in turn means that the number of APIs exposed at the edge increases. The key to managing these APIs in a “cloud native” way is to adopt best practices such as using declarative configuration (custom resources in Kubernetes) and embracing a GitOps-style workflow, including storing everything in version control and using a continuous delivery K8s operator like Weaveworks’ Flux.
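As a sketch of what that GitOps workflow can look like, assuming Flux v2’s GitRepository and Kustomization resources (the repository URL and path are hypothetical):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: edge-config
  namespace: flux-system
spec:
  interval: 1m
  # hypothetical repo holding the declarative gateway/API config
  url: https://github.com/example-org/edge-config
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: edge-config
  namespace: flux-system
spec:
  interval: 5m
  path: ./gateways   # directory of edge custom resources
  prune: true        # remove cluster resources deleted from git
  sourceRef:
    kind: GitRepository
    name: edge-config
```

With this in place, git becomes the single source of truth: a merged pull request updating an API’s routing config is reconciled into the cluster automatically, with no ticket required.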

With regard to diverse edge requirements, this is all about supporting the multiple protocols popular within modern cloud-based apps (gRPC, gRPC-Web, REST, WebSockets) and allowing developers to specify how centralized cross-functional requirements, such as authentication, rate limiting, and caching, should be configured for each API.
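One common pattern is to attach these cross-functional policies to each API’s routing resource as declarative metadata. A hedged sketch using annotations understood by the NGINX ingress controller (the rate limit value and auth endpoint are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orders-api
  annotations:
    # per-API rate limit: requests per second per client
    nginx.ingress.kubernetes.io/limit-rps: "10"
    # delegate authentication of each request to an external
    # (hypothetical) auth service
    nginx.ingress.kubernetes.io/auth-url: "https://auth.example.com/validate"
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders
                port:
                  number: 80
```

The important point is that the policies are declared per API, by the team that owns it, rather than configured centrally for the whole edge.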

JAXenter: What is the best way to get started with experimenting with this technology?

Daniel Bryant: I’m a big fan of using a playground for experimenting with a technology or tool, and “learning by doing.” Whether it’s using something like the Java REPL, cloud sandboxes, or locally deployed Kubernetes clusters, in my experience these are great ways to get started.

The Ambassador Labs team and I have recently been working on a free web-based tool, the K8s Initializer, for helping developers bootstrap a newly deployed Kubernetes cluster to be a more realistic playground with Ingress, CD tooling, and observability.

All a developer needs to do is answer a few questions about their K8s cluster location (minikube, AWS, etc.), their networking config (ELB vs. NLB), and which tools they want installed (Argo, Prometheus, etc.), and the Initializer will generate a series of K8s YAML files that can be applied to a cluster to provide an integrated configuration of all the selected options and tools.

Thanks very much!
