The Evolution of Distributed Systems on Kubernetes – InfoQ.com



At QCon in March, I gave a talk on the evolution of distributed systems with Kubernetes. First, I want to start with a question: what comes after microservices? I'm sure you all have an answer to that, and I have mine too. You'll find out at the end what I think it will be. To get there, I suggest we look at the needs of distributed systems and how those needs have been evolving over the years, starting with monolithic applications, moving to Kubernetes, and on to more recent projects such as Dapr, Istio, and Knative, and how they are changing the way we do distributed systems. We will also try to make some predictions about the future.

To set a little more context for this talk: when I say distributed systems, what I have in mind is systems composed of multiple components, hundreds of them. These components can be stateful, stateless, or serverless. Moreover, these components can be created in different languages, run on hybrid environments, and be developed with open-source technologies, open standards, and interoperability in mind. I'm sure you can create such systems using closed-source software, or on AWS and other places. For this talk specifically, I'm looking at the Kubernetes ecosystem and how you can create such a system on the Kubernetes platform.

Let's start with the needs of distributed systems. What I have in mind is that we want to create an application or service and write some business logic. What else do we need from the platform and the runtime to build distributed systems? At the foundation, we want some lifecycle capabilities. When you write your application, in any language, you want the ability to package and deploy that application reliably, to do rollbacks and health checks, and to be able to place the application on different nodes and do resource isolation, scaling, configuration management, and all of these things. These are the very first things you need to create a distributed application.

The second pillar is around networking. Once we have an application, we want it to reliably connect to other services, whether within the cluster or in the outside world. We want abilities such as service discovery and load balancing. We want to do traffic shifting, whether for different release strategies or for other reasons. Then we want the ability to do resilient communication with other systems, whether that is through retries, timeouts, or circuit breakers. We want security in place, and adequate monitoring, tracing, and observability.

Once we have networking, the next thing is the ability to talk to different APIs and endpoints, i.e., resource binding: to speak other protocols and different data formats, and maybe even to transform from one data format to another. I would also include here things such as light filtering, that is, when we subscribe to a topic, we may be interested only in certain events.

What do you think the last category is? It is state. When I say state and stateful abstractions, I'm not talking about actual state management, such as what a database or a file system does. I'm talking more about developer abstractions that rely on state behind the scenes. You probably need the ability to do workflow management. Maybe you want to manage long-running processes, do temporal scheduling, or run some cron jobs to execute your service periodically. Perhaps you also want to do distributed caching, have idempotence, or be able to do rollbacks. All of these are developer-level primitives, but behind the scenes they rely on having some state. You want these abstractions at your disposal to create sound distributed systems.

We will use this framework of distributed system primitives to evaluate how they have been addressed on Kubernetes and by other projects.

Let's start with monolithic architectures and how we get those capabilities. When I say monolith in the context of distributed applications, what I have in mind is the ESB. ESBs are pretty powerful, and when we check our list of needs, we would say that ESBs had excellent support for all the stateful abstractions.

With an ESB, you could do orchestration of long-running processes, distributed transactions, rollbacks, and idempotence. ESBs also provide outstanding resource binding capabilities: they have hundreds of connectors and support transformation and orchestration. And lastly, an ESB even has networking capabilities: it can do service discovery and load balancing.

It has everything around the resiliency of networking connections, so it can do retries. Because an ESB is by nature not very distributed, it doesn't need very advanced networking and release capabilities. Where the ESB lacks is primarily lifecycle management. Because it's a single runtime, the first limitation is that you are restricted to a single language - typically the language the actual runtime is created in: Java, .NET, or something else. Then, because it's a single runtime, we cannot easily do declarative deployments or automatic placement. The deployments are pretty big and quite heavy, so they usually involve human interaction. Another difficulty with such a monolithic architecture is scaling: we cannot scale individual components.

Last but not least is isolation, whether that's resource isolation or fault isolation. None of these can be done with monolithic architectures. From our needs-framework point of view, ESB-style monolithic architectures don't qualify.

Next, I suggest we look at cloud-native architectures and how those needs have been addressed. At a very high level, cloud native probably started with the microservices movement. Microservices allow us to split a monolithic application by business domain. It turned out that containers and Kubernetes are actually a good platform for managing those microservices. Let's see some of the concrete features and capabilities that make Kubernetes particularly attractive for microservices.

In the very beginning, the ability to do health probes is what made Kubernetes popular. In practice, it means that when you deploy your container in a pod, Kubernetes will check the health of your process. Typically, that process model is not good enough: you may still have a process that's up and running but not healthy. That's why there is also the option of using readiness and liveness checks. Kubernetes will do a readiness check to decide when your application is ready to accept traffic during startup, and a liveness check to continuously verify the health of your service. Before Kubernetes, this wasn't very popular, but today almost all languages, frameworks, and runtimes have health-checking capabilities where you can quickly expose an endpoint.
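As a minimal sketch, here is how such probes might look in a pod spec, assuming a hypothetical application that exposes /healthz and /ready HTTP endpoints on port 8080:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-service                # hypothetical name
spec:
  containers:
    - name: app
      image: example/my-service:1.0   # hypothetical image
      ports:
        - containerPort: 8080
      # Liveness: Kubernetes restarts the container if this check fails
      livenessProbe:
        httpGet:
          path: /healthz          # assumed health endpoint
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 5
      # Readiness: the pod receives traffic only while this check passes
      readinessProbe:
        httpGet:
          path: /ready            # assumed readiness endpoint
          port: 8080
        periodSeconds: 5
```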

The next thing that Kubernetes introduced is the managed lifecycle of your application. What I mean is that you are no longer in control of when your service starts up and when it shuts down; you trust the platform to do that. Kubernetes can start up your application, shut it down, and move it around on different nodes. For that to work, you have to properly handle the events the platform sends you during startup and shutdown.
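For example, Kubernetes signals shutdown by running an optional preStop hook, then sending SIGTERM to your process, and killing it only after a grace period. A container spec honoring that contract might look like this sketch; the /drain endpoint is a hypothetical hook into the application:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-app              # hypothetical name
spec:
  # How long Kubernetes waits after SIGTERM before sending SIGKILL
  terminationGracePeriodSeconds: 30
  containers:
    - name: app
      image: example/graceful-app:1.0   # hypothetical image
      lifecycle:
        # Runs before SIGTERM is sent, e.g. to drain in-flight requests
        preStop:
          httpGet:
            path: /drain          # assumed drain endpoint in the application
            port: 8080
```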

Another thing that Kubernetes made popular is declarative deployments. That means you don't have to start the service yourself anymore and check the logs to see whether it has started. You don't have to upgrade instances manually; Kubernetes, with declarative deployments, can do that for you. Depending on the strategy you choose, it can stop old instances and start new ones. Moreover, if something goes wrong, it can roll back.
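As a sketch of what such a declarative deployment might look like, here is a hypothetical Deployment using the rolling-update strategy; applying a manifest with a new image tag triggers the upgrade, and `kubectl rollout undo` reverts it:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-service
  strategy:
    type: RollingUpdate       # replace old instances gradually
    rollingUpdate:
      maxUnavailable: 1       # at most one pod down during the rollout
      maxSurge: 1             # at most one extra pod during the rollout
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: app
          image: example/my-service:1.1   # hypothetical new version
```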

Another thing is declaring your resource demands. When you create a service, you containerize it. It is a good practice to tell the platform how much CPU and memory the service will require. Kubernetes uses that knowledge to find the best node for your workloads. Before Kubernetes, we had to place an instance on a node manually, based on our own criteria. Now we can guide Kubernetes with our preferences, and it will make the best decision for us.
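A sketch of what declaring those demands looks like, with illustrative values; the scheduler uses the requests for placement, while the limits are hard caps enforced at runtime:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sized-app               # hypothetical name
spec:
  containers:
    - name: app
      image: example/app:1.0    # hypothetical image
      resources:
        requests:               # the scheduler uses these to pick a node
          cpu: "250m"           # a quarter of a CPU core
          memory: "128Mi"
        limits:                 # hard caps enforced at runtime
          cpu: "500m"
          memory: "256Mi"
```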

Nowadays, on Kubernetes, you can do polyglot configuration management. You don't need anything in your application runtime to do configuration lookup. Kubernetes makes sure that the configurations end up on the same node as your workload. The configurations are mapped as a volume or as environment variables, ready for your application to use.
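As an illustration, here is a hypothetical ConfigMap mapped into a pod both ways, as an environment variable and as a file in a volume:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config                 # hypothetical name
data:
  log-level: "info"
  app.properties: |
    greeting=hello
---
apiVersion: v1
kind: Pod
metadata:
  name: configured-app
spec:
  containers:
    - name: app
      image: example/app:1.0       # hypothetical image
      env:
        - name: LOG_LEVEL          # one key exposed as an environment variable
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: log-level
      volumeMounts:
        - name: config
          mountPath: /etc/config   # app.properties shows up as a file here
  volumes:
    - name: config
      configMap:
        name: app-config
```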

It turns out these specific capabilities I just spoke about are also related. For example, if you want automatic placement, you have to tell Kubernetes the resource requirements of your service. Then you have to tell it which deployment strategy to use. For the strategy to work correctly, your application has to handle the events coming from the environment and implement health checks. Once you put all of these best practices in place and use all of these capabilities, your application becomes an excellent cloud-native citizen, ready for automation on Kubernetes - these are the foundational patterns for running workloads on Kubernetes. Beyond those, there are other patterns around structuring the containers in a pod, configuration management, and behavior.

The next topic I want to cover briefly is workloads. From the lifecycle point of view, we want to be able to run different workloads, and we can do that on Kubernetes, too. Running Twelve-Factor Apps and stateless microservices is pretty straightforward; Kubernetes can do that. But that's not the only workload you will have. Probably you will also have stateful workloads, and you can run those on Kubernetes using a stateful set.

Another workload you may have is a singleton. Maybe you want an instance of an app to be the only instance throughout the whole cluster - a reliable singleton: when it fails, it should be started again. You can choose between stateful sets and replica sets, depending on whether you want the singleton to have at-least-one or at-most-one semantics. Another workload you may have is jobs and cron jobs, and with Kubernetes you can run those as well.
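For instance, a service that runs periodically might be declared with a CronJob like this sketch (all names are hypothetical):

```yaml
apiVersion: batch/v1              # batch/v1beta1 on clusters older than 1.21
kind: CronJob
metadata:
  name: periodic-task             # hypothetical name
spec:
  schedule: "*/5 * * * *"         # every five minutes, standard cron syntax
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: task
              image: example/task:1.0   # hypothetical image
```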

If we map all of these Kubernetes features to our needs, Kubernetes satisfies the lifecycle needs. In fact, the list of lifecycle requirements I usually create is primarily driven by what Kubernetes provides us today. These are expected capabilities from any platform: Kubernetes can do deployment, configuration management, resource isolation, and failure isolation. Furthermore, it supports different workloads - except serverless - on its own.

If that's all Kubernetes gives developers, how do we extend it, and how can we make it give us more features? I want to describe the two common ways used today.

The first is the concept of a pod, the abstraction used to deploy containers on nodes. A pod gives us two sets of guarantees:

Depending on whether you're using init containers or application containers, you get different guarantees. Init containers run at the beginning, when a pod starts, sequentially, one after another, and each runs only if the previous container has completed successfully. They help implement workflow-like logic driven by containers.

Application containers, on the other hand, run in parallel throughout the lifecycle of the pod. This is the foundation of the sidecar pattern: a pod can run multiple containers that cooperate and jointly provide value to the user. That's one of the primary mechanisms we see nowadays for extending Kubernetes with additional capabilities.
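A sketch showing both guarantees in one hypothetical pod: the init container runs to completion first, and the two application containers then run in parallel for the pod's lifetime:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  initContainers:
    # Runs to completion before the application containers start
    - name: fetch-config
      image: example/config-fetcher:1.0   # hypothetical image
      command: ["sh", "-c", "cp /defaults/* /shared/"]
      volumeMounts:
        - name: shared
          mountPath: /shared
  containers:
    # Application containers run in parallel for the pod's lifetime
    - name: app
      image: example/app:1.0              # hypothetical image
      volumeMounts:
        - name: shared
          mountPath: /config
    - name: sidecar
      image: example/log-shipper:1.0      # hypothetical helper container
  volumes:
    - name: shared
      emptyDir: {}
```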

To explain the next capability, I have to briefly tell you how Kubernetes works internally. It is based on the reconciliation loop. The idea of the reconciliation loop is to drive the actual state toward the desired state. Within Kubernetes, many things rely on that. For example, when you say you want two instances of a pod, that is the desired state of your system. There is a control loop that constantly runs and checks whether there are two instances of your pod. If there is only one instance, or more than two, it calculates the difference and makes sure there are exactly two.
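That desired state is just a field on a resource. Here is a sketch of a ReplicaSet declaring two instances, which its controller then continuously enforces (in practice you would usually create a Deployment, which manages ReplicaSets for you):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-replicaset
spec:
  replicas: 2          # the desired state: exactly two pod instances
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: example/my-app:1.0   # hypothetical image
```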

There are many examples of this; replica sets and stateful sets are among them. Each resource definition maps to a controller, and there is a controller for each resource definition. That controller makes sure the real world matches the desired one, and you can even write your own custom controller.

For example, an application running in a pod cannot load configuration file changes at runtime. However, you can write a custom controller that detects config map changes and restarts your pod and application - which then picks up the configuration changes.

It turns out that even though Kubernetes has a good collection of resources, they are not enough for all the different needs you may have. That's why Kubernetes introduced the concept of custom resource definitions. You can model your requirements and define an API that lives within Kubernetes, next to the Kubernetes native resources. You can write your own controller, in any language, that understands your model. You could design a ConfigWatcher, implemented in Java, that does what we explained earlier. That is what the operator pattern is: a controller that works with custom resource definitions. Today we see lots of operators coming up, and that's the second way of extending Kubernetes with additional capabilities.
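To make that concrete, a custom resource definition and an instance for the hypothetical ConfigWatcher might look like the following sketch; the Java controller would watch for ConfigWatcher objects and restart the selected pods when the named config map changes:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: configwatchers.example.com      # hypothetical group and kind
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: ConfigWatcher
    plural: configwatchers
    singular: configwatcher
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                configMap:        # the config map to watch
                  type: string
                podSelector:      # which pods to restart on change
                  type: string
---
apiVersion: example.com/v1
kind: ConfigWatcher
metadata:
  name: watch-app-config
spec:
  configMap: app-config
  podSelector: "app=my-app"
```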

Next, I want to briefly go over a few platforms built on top of Kubernetes, which heavily use sidecars and operators to give developers additional capabilities.

Let's start with the service mesh. What is a service mesh?

We have two services: service A, which wants to call service B, and either can be in any language. Consider this our application workload. A service mesh uses sidecar controllers to inject a proxy next to our service, so you end up with two containers in the pod. The proxy is transparent: your application is completely unaware that there is a proxy intercepting all incoming and outgoing traffic. The proxy also acts as a data firewall.

The collection of these service proxies represents your data plane. The proxies are small and stateless; to get all their state and configuration, they rely on the control plane. The control plane is the stateful part that keeps all the configurations, gathers metrics, makes decisions, and interacts with the data plane. Different service mesh implementations make different choices for their control planes and data planes. And as it turns out, we need one more component: an API gateway to get traffic into our cluster. Some service meshes have their own API gateway, and some use a third-party one. All of these components, if you look into them, provide the capabilities we need.

An API gateway is primarily focused on abstracting the implementation of our services. It hides the details and provides capabilities at the border of the cluster. A service mesh does the opposite: in a way, it enhances visibility and reliability within the services. Jointly, we can say that the API gateway and the service mesh provide all the networking needs. To get these networking capabilities on top of Kubernetes, using just Kubernetes services is not enough; you need a service mesh.
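As an illustration of the traffic-shifting side of this, here is a sketch of an Istio VirtualService splitting traffic between two versions of service B; the subsets are assumed to be defined in a matching DestinationRule, and the names are hypothetical:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-b            # hypothetical service from the example
spec:
  hosts:
    - service-b
  http:
    - route:
        - destination:
            host: service-b
            subset: v1       # defined in a matching DestinationRule
          weight: 90         # 90% of traffic stays on the old version
        - destination:
            host: service-b
            subset: v2
          weight: 10         # 10% canary traffic to the new version
```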

The next topic I'd like to discuss is Knative, a project started by Google a few years ago. It is a layer on top of Kubernetes that gives you serverless capabilities, and it has two main modules: Knative Serving and Knative Eventing.

Just to give you a feel for what Knative Serving is: with Knative Serving, you define a service, but it is different from a Kubernetes service - this is a Knative service. Once you define a workload with a Knative service, you get a deployment, but with serverless characteristics. You don't need to have an instance up and running; it can be started from zero when a request arrives. You get serverless capabilities: it can scale up rapidly and scale down to zero.
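A minimal sketch of such a Knative service follows; the name and image are hypothetical. Scale-to-zero is the default behavior, and the annotation merely caps how far it can scale out:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-knative-service        # hypothetical name
spec:
  template:
    metadata:
      annotations:
        # Cap scale-out at ten instances; scaling to zero when idle
        # is the default behavior
        autoscaling.knative.dev/maxScale: "10"
    spec:
      containers:
        - image: example/my-service:1.0   # hypothetical image
```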

Knative Eventing gives us a fully declarative event-management system. Let's assume we have some external systems we want to integrate with - some external event producers. At the bottom, we have our application in a container with an HTTP endpoint. With Knative Eventing, we can start a broker, which can be a broker mapped to Kafka, an in-memory one, or some cloud service. Furthermore, we can start importers that connect to the external systems and import events into our broker. Those importers can be based, for example, on Apache Camel, which has hundreds of connectors.

Once we have our events going to the broker, we can declaratively, with a YAML file, subscribe our container to those events. In our container, we don't need any messaging client - a Kafka client, for example. Our container receives events through HTTP POST using CloudEvents. This is a fully platform-managed messaging infrastructure: as a developer, you only have to write your business code in a container and don't deal with any messaging logic.
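That YAML subscription is typically a Trigger. Here is a sketch, assuming a broker named default and the Knative service from above; the event type in the filter is a hypothetical CloudEvents type:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-trigger                      # hypothetical name
spec:
  broker: default                       # assumed to exist in the namespace
  filter:
    attributes:
      type: com.example.order.created  # hypothetical CloudEvents type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-knative-service          # events arrive here as HTTP POSTs
```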

From our needs' point of view, Knative satisfies a few of them. From a lifecycle point of view, it gives our workloads serverless capabilities: the ability to scale to zero and to activate from zero and scale up. From a networking point of view, there is some overlap with the service mesh, since Knative can also do traffic shifting. From a binding point of view, it has pretty good support for bindings through Knative importers: it can give us pub/sub, point-to-point interaction, or even some sequencing. It satisfies the needs in a few categories.

Another project using sidecars and operators is Dapr, which was started by Microsoft only a few months ago and is rapidly getting popular. Version 1.0 is considered production-ready. It is a distributed systems toolkit delivered as a sidecar: everything in Dapr is provided through a sidecar, and it has a set of what they call building blocks, or capabilities.

What are those capabilities? The first set is around networking. Dapr can do service discovery and point-to-point integration between services. Similarly to a service mesh, it can also do tracing, reliable communication, retries, and recovery. The second set of capabilities is around resource binding: it has connectors to external APIs and cloud services, and it can do message-based publish/subscribe integration.

Interestingly, Dapr also introduces the notion of state management. In addition to what Knative and a service mesh give you, Dapr has an abstraction on top of a state store: you can have key-value-based interactions with Dapr, backed by a pluggable storage mechanism.
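A sketch of how that state store might be configured, assuming a Redis instance reachable at the address below; the rest is standard Dapr component configuration:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore              # hypothetical name
spec:
  type: state.redis             # Redis-backed state store, one of many options
  version: v1
  metadata:
    - name: redisHost
      value: redis-master:6379  # hypothetical Redis address
    - name: redisPassword
      secretKeyRef:
        name: redis             # hypothetical secret holding the password
        key: redis-password
```

Your code would then save and read keys through the sidecar's local HTTP endpoint, for example http://localhost:3500/v1.0/state/statestore, where 3500 is Dapr's default HTTP port.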

At a high level, the architecture is that you have your application at the top, which can be in any language. You can use the client libraries provided by Dapr, but you don't have to: you can use plain HTTP and gRPC to call the sidecar. The difference from a service mesh is that the Dapr sidecar is not a transparent proxy; it is an explicit proxy that you call from your application over HTTP or gRPC. Depending on what capabilities you need, Dapr talks to other systems, such as cloud services, on your behalf.

On Kubernetes, Dapr is deployed as a sidecar, but it can also work outside of Kubernetes - it's not Kubernetes-only. It also has an operator, and sidecars and operators are its primary extension mechanisms. A few other components manage certificates, deal with actor-based modeling, and inject the sidecars. Your workload interacts with the sidecar, which does all the magic of talking to other services, giving you interoperability with different cloud providers and additional distributed system capabilities.

If I were to sum up what these projects give you: the ESB was the early incarnation of distributed systems, with a centralized control plane and a centralized data plane, and it didn't scale well. With cloud native, there is still a centralized control plane, but the data plane is decentralized, highly scalable, and has sound isolation.

We will always need Kubernetes to do good lifecycle management, and on top of that you will probably need one or more add-ons. You may need Istio to do advanced networking. You may use Knative for serverless workloads or Dapr for integration. These frameworks play nicely with Istio and Envoy, but between Dapr and Knative you probably have to pick one. Jointly, they provide, in a cloud-native way, what we used to have on an ESB.

For the next part, I have made an opinionated list of a few projects where I think exciting developments are happening in these areas.

I want to start with the lifecycle. Kubernetes gives you a useful application lifecycle, but that might not be enough for more complex lifecycle management. For example, the deployment primitive in Kubernetes may not be enough if you have a more complex stateful application.

In these scenarios, you can use the operator pattern. You can use an operator that does deployment and upgrade, and maybe also backs up your service's storage to S3. You may also find that the standard health-checking mechanism in Kubernetes is not good enough. If the liveness and readiness checks are insufficient, you can use an operator to do more intelligent liveness and readiness checks of your application and, based on that, perform recovery.

A third area would be auto-scaling and tuning: you can have an operator that understands your application better and auto-tunes it on the platform. Today, there are primarily two frameworks for writing operators: Kubebuilder, from a Kubernetes special interest group, and the Operator SDK, which is part of the Operator Framework created by Red Hat. The Operator Framework has a few parts:

The Operator SDK lets you write operators; the Operator Lifecycle Manager manages an operator's lifecycle; and OperatorHub is where you can publish your operator. If you go there today, you will see over 100 operators that manage databases, message queues, and monitoring tools. In the lifecycle space, operators are probably the area where the most active development is happening in the Kubernetes ecosystem.

Another project I picked is Envoy. The introduction of the Service Mesh Interface specification will make it easier for you to switch between different service mesh implementations. There has also been some consolidation in the Istio deployment architecture: you no longer have to deploy seven pods for the control plane; you can deploy just one. More interesting is what's happening at the data plane, in the Envoy project. We see more and more Layer 7 protocols being added to Envoy.

The service mesh is adding support for more protocols, such as MongoDB, ZooKeeper, MySQL, Redis, and most recently Kafka. I see that the Kafka community is now further improving its protocol to make it friendlier to service meshes. We can expect even tighter integration and more capabilities. Most likely, there will be some bridging capability: you can make a local HTTP call from your application, and the proxy will, behind the scenes, use Kafka. You can do transformation and encryption for the Kafka protocol outside of your application, in a sidecar.

Another exciting development has been the introduction of HTTP caching: Envoy can now do HTTP caching, so you don't have to use caching clients within your applications; all of that is done transparently in the sidecar. There are tap filters, so you can tap the traffic and get a copy of it. Most recently, the introduction of WebAssembly means that if you want to write a custom filter for Envoy, you don't have to write it in C++ and compile the whole Envoy runtime; you can write your filter in WebAssembly and deploy it at runtime. Most of these features are still in progress, and they indicate that the data plane and the service mesh have no intention of stopping at supporting just HTTP and gRPC; they are interested in supporting more application-layer protocols to offer you more and enable more use cases. With the introduction of WebAssembly, you can now write custom logic in the sidecar - which is fine, as long as you're not putting business logic there.

Apache Camel is a project for doing integrations, with lots of connectors to different systems, using enterprise integration patterns. Camel version 3 is deeply integrated into Kubernetes and uses the same primitives we have spoken about so far, such as operators.

You can write your integration logic in Camel in languages such as Java, JavaScript, or YAML. The latest version has introduced a Camel operator that runs in Kubernetes and understands your integration. When you write your Camel application and deploy it as a custom resource, the operator knows how to build the container and find dependencies. Depending on the platform's capabilities - whether that's Kubernetes only or Kubernetes with Knative - it can decide which services to use and how to materialize your integration. There is quite a lot of intelligence moving outside of your runtime and into the operator, and all of that happens pretty fast. Why would I say it's a binding trend? Mainly because of the capabilities of Apache Camel and all the connectors it provides; the interesting point here is how deeply it integrates with Kubernetes.
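As a sketch, assuming the Camel K operator is installed in the cluster, such an integration can be a single custom resource with the route inlined in Camel's YAML DSL (the names and the route itself are illustrative):

```yaml
apiVersion: camel.apache.org/v1
kind: Integration
metadata:
  name: timer-greeter             # hypothetical name
spec:
  sources:
    - name: route.yaml
      content: |-
        # A minimal Camel route: fire a timer and log a greeting
        - from:
            uri: "timer:tick?period=5000"
            steps:
              - set-body:
                  constant: "Hello from Camel K"
              - to: "log:info"
```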

Another project I'd like to discuss, around state-related trends, is Cloudstate. Cloudstate is a project by Lightbend, primarily focused on serverless and function-driven development. With their latest releases, they are integrating deeply with Kubernetes using sidecars and operators.

The idea is that when you write your function, all you have to do in the function is use gRPC to get state and interact with state. The whole state management happens in a sidecar that is clustered with other sidecars. It enables you to do event sourcing, CQRS, key-value lookups, and messaging.

From your application's point of view, you are not aware of all these complexities. All you do is call a local sidecar, and the sidecar handles the complexity. Behind the scenes, it can use two different data sources, and it has all the stateful abstractions you would need as a developer.

So far, we have seen the current state of the art in the cloud-native ecosystem and some of the recent developments that are still in progress. How do we make sense of all that?

If you look at how a microservice looks on Kubernetes, you will need to use some platform functionality. First, you will need Kubernetes features, primarily for lifecycle management. Then, most likely transparently, your service will use a service mesh, something like Envoy, to get enhanced networking capabilities, whether that's traffic routing, resilience, enhanced security, or even monitoring. On top of that, depending on your use case and workloads, you may need Dapr or Knative. All of these represent your out-of-process, additional capabilities. What's left to you is to write your business logic - not on top, but as a separate runtime. Most likely, future microservices will be this multi-runtime, composed of multiple containers, some of them transparent and some of them very explicit in how you use them.

If we look a little deeper at how that might work: you write your business logic in some high-level language. It doesn't matter which one; it doesn't have to be Java only, as you can use any other language and develop your custom logic in-house.

All the interactions of your business logic with the external world happen through the sidecar, which integrates with the platform and does the lifecycle management. It provides the networking abstractions for external systems and gives you advanced binding capabilities and state abstractions. The sidecar is something you don't develop; you get it off the shelf, configure it with a little bit of YAML or JSON, and use it. That means you can update sidecars easily, because they are no longer embedded in your runtime. It makes patching and updating easier, and it enables a polyglot runtime for your business logic.

That brings me to the original question, what comes after microservices?

If we look at how architectures have been evolving: at a very high level, application architectures started with monolithic applications. Microservices then gave us the guiding principles on how to split a monolithic application into separate business domains. After that came serverless and Function-as-a-Service (FaaS), where we said we could split applications even further, by operation, giving us extreme scaling, because we can scale each operation individually.

I would argue that maybe FaaS is not the best model, as functions are not ideal for implementing reasonably complex services where you want multiple operations to reside together because they interact with the same dataset. Probably multi-runtime - I call it the Mecha architecture - where you have your business logic in one container and all the infrastructure-related concerns in a separate container, jointly representing a multi-runtime microservice, is a more suitable model, because it has better properties.

You get all the benefits of microservices: you still have all your domain and all the bounded contexts in one place, you have all the infrastructure and distributed-application needs in a separate container, and you combine the two at runtime. Probably the closest thing to that right now is Dapr; they are following that model. If you're only interested in the networking aspect, using Envoy probably also gets close to this model.

Bilgin Ibryam is a product manager and former architect at Red Hat, and a committer and member of the Apache Software Foundation. He is an open-source evangelist, regular blogger, speaker, and author of the books Kubernetes Patterns and Camel Design Patterns. Bilgin's current work focuses on distributed systems, event-driven architecture, and repeatable cloud-native application development patterns and practices. Follow him @bibryam for future updates on similar topics.
