A new Amazon Web Services tool that allows users to manage AWS cloud services directly within Kubernetes should be a boon for developers, according to one AWS partner.
AWS Controllers for Kubernetes (ACK) is designed to make it easier to build scalable and highly available Kubernetes applications that use AWS services without the hassle of defining resources outside a cluster or running supporting services such as databases, message queues or object stores within a cluster.
The AWS-built, open source project is now in developer preview on GitHub, which means the end-user-facing installation mechanisms aren't yet in place. ACK currently supports Amazon S3, AWS API Gateway V2, Amazon SNS, Amazon SQS, Amazon DynamoDB and Amazon ECR.
"Our goal with ACK (is) to provide a consistent Kubernetes interface for AWS, regardless of the AWS service API," according to a blog post by AWS principal open source engineer Jay Pipes, Michael Hausenblas, a product developer advocate for the AWS container service team, and Amazon EKS senior project manager Nathan Taber.
ACK got its start in 2018 when Chris Hein, then an AWS partner solutions architect, debuted AWS Service Operator (ASO) as an experimental project. Feedback prompted AWS to relaunch it last August as a first-tier, open-source software project, and AWS renamed ASO as ACK last month.
"The tenets we put forward are ACK is a community-driven project based on a governance model defining roles and responsibilities; ACK is optimized for production usage with full test coverage, including performance and scalability test suites; (and) ACK strives to be the only code base exposing AWS services via a Kubernetes operator," the blog post states.
ACK continues the spirit of the original ASO, but with two updates in addition to now being an official project built and maintained by the AWS Kubernetes team. AWS cloud resources now are managed directly through AWS APIs instead of CloudFormation, allowing Kubernetes to be the single source of truth for a resource's desired state, according to the blog post. And code for the controllers and custom resource definitions is generated automatically from the AWS Go SDK, with human editing and approval.
"This allows us to support more services with less manual work and keep the project up-to-date with the latest innovations," the AWS blog post stated.
ACK is a collection of Kubernetes Custom Resource Definitions and Kubernetes custom controllers that work together to extend the Kubernetes API and create AWS resources on behalf of a user's cluster, according to AWS. Each controller manages custom resources representing API resources of a single AWS service.
Kubernetes users can install a controller for an AWS service and then create, update, read and delete AWS resources using the Kubernetes API in lieu of logging into the AWS console or using the AWS Command Line Interface to interact with the AWS service API.
"This means they can use the Kubernetes API to fully describe both their containerized applications, using Kubernetes resources like Deployment and Service, as well as any AWS managed services upon which those applications depend," AWS said.
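To make that concrete, here is a minimal sketch (not taken from AWS's announcement) of what declaring an AWS-backed resource through the Kubernetes API could look like once an ACK controller is installed in a cluster. It uses the Kubernetes Python client; the ACK S3 group, version, kind and spec fields shown are assumptions about the developer-preview CRDs rather than documented guarantees.

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in a pod

# Hypothetical ACK "Bucket" custom resource. The group, version, plural and
# spec fields are assumptions about the preview S3 controller's CRD.
bucket = {
    "apiVersion": "s3.services.k8s.aws/v1alpha1",
    "kind": "Bucket",
    "metadata": {"name": "demo-app-assets"},
    "spec": {"name": "demo-app-assets-example-bucket"},
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="s3.services.k8s.aws",
    version="v1alpha1",
    namespace="default",
    plural="buckets",
    body=bucket,
)

# The application itself is described with ordinary Kubernetes resources,
# so the workload and the AWS service it depends on live in the same API.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-app"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "demo-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-app"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.19")]
            ),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

In principle, once both objects are applied, the ACK controller reconciles the bucket against the AWS API while Kubernetes runs the workload, which is the "single source of truth" behavior the blog post describes.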
AWS plans to add ACK support for Amazon Relational Database Service and Amazon ElastiCache, and possibly Amazon Elastic Kubernetes Service (EKS) and Amazon Managed Streaming for Apache Kafka.
The cloud provider also is working on enabling cross-account resource management and native application secrets integration, and it is seeking developer input on the expected behavior of destructive operations in ACK and on whether ACK should be able to adopt existing AWS resources.
AWS Partner Reaction
ACK is a strategic move for AWS, especially as it competes with other Kubernetes offerings from competitors including Google Cloud, which already offers native integration from its Google Kubernetes Engine (GKE) to cloud services such as Spanner and BigQuery, according to Bruno Andrade, co-founder and CEO of AWS partner Shipa. The Santa Clara, Calif.-based startup, which launched this year, integrates directly into AWS' Kubernetes offering and its services.
"We believe ACK makes total sense, especially for users that are looking at building a true cloud-native application, where there is native integration to cloud services for their application directly from their clusters, which can reduce drastically the time to launch applications or roll out updates," said Andrade, whose company allows teams to easily deploy and operate applications without having to learn, write and maintain a single Kubernetes object or YAML file.
ACK and the GKE connector are focused on services running within their clusters and clouds, Andrade said, so "one thing that still (needs) to be fully addressed are cases when customers have clusters running across multiple clouds and on-premises, and how the workloads running across these clusters will properly connect across the cloud-native services offered by the different services."
"When using Kubernetes clusters in production, workloads typically need to integrate with other cloud services and resources to deliver their intended solutions," said Kevin McMahon, executive director of cloud enablement at digital technology consultancy SPR, an AWS Advanced Consulting Partner based in Chicago.
"Integrating with the cloud services provided by vendors like AWS requires custom controllers and resource definitions to be created," he said. "AWS Controllers for Kubernetes makes it easier to enhance Kubernetes workloads using AWS cloud services by providing vendor-managed, standardized integration points for companies relying on Kubernetes. Now companies looking to use Kubernetes can completely describe their applications and the AWS managed services that those applications rely on in one standard format."
"With ACK, AWS continues to simplify the deployment and configuration of its services by integrating natively with Kubernetes," said Alban Bramble, director of public cloud services at Ensono, an AWS Advanced Consulting Partner and managed services provider headquartered in Downers Grove, Ill.
"This change will be a boon for developers looking to speed up releases and manage all resources from a single deployment," Bramble said.
But one area of possible concern, according to Bramble, is that ACK could undercut policies that SecOps teams already have in place, resulting in resources being deployed without their knowledge and reducing their ability to effectively monitor and secure the services running in the environment.
"Careful consideration and planning needs to take place between those two groups in order to ensure that processes are in place that don't stifle the developers' ability to work within agile release cycles, while also accounting for the governance and security policies already in place," he said.