The release of Kubernetes version 1.18 comes at an interesting time, to say the least. The Kubernetes release team has done an amazing job of pushing out the new version despite all the turmoil and uncertainty caused by the spread of COVID-19, which affects members of the global Kubernetes developer community like everyone else.
The release features a number of new enhancements and changes. New and maturing features include enhanced security options, improved support for Windows, multiple extensions to the Container Storage Interface, and more. We will cover a few highlights from these changes and enhancements.
Version 1.18 includes several backwards-incompatible changes that users and developers need to know about before upgrading.
kubectl no longer defaults to using http://localhost:8080 for the Kubernetes API server endpoint, to encourage using secure, HTTPS connections. Users must explicitly set their cluster endpoint now.
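For illustration, here is a minimal kubeconfig sketch with the endpoint set explicitly; the server URL, file paths, and names are placeholders rather than values tied to this release.

```yaml
# Minimal kubeconfig sketch; server URL, names, and credential paths are placeholders.
apiVersion: v1
kind: Config
clusters:
- name: example-cluster
  cluster:
    server: https://api.example.com:6443      # explicit HTTPS endpoint, no localhost default
    certificate-authority: /path/to/ca.crt
users:
- name: example-user
  user:
    client-certificate: /path/to/client.crt
    client-key: /path/to/client.key
contexts:
- name: example-context
  context:
    cluster: example-cluster
    user: example-user
current-context: example-context
```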
Cluster administrators can choose to use a third-party Key Management Service (KMS) provider as one option for encrypting Kubernetes secrets at rest in the etcd data store backing the cluster. The KMS provider uses envelope encryption, which uses a data encryption key (DEK) to encrypt the secrets. Kubernetes stores a KMS-encrypted copy of the DEK locally. When the kube-apiserver needs to encrypt or decrypt a Secret object, it sends the DEK to the KMS provider for decryption. Kubernetes does not persist the decrypted DEK to storage.
Release 1.18 makes several changes to the KMS provider interface used for EncryptionConfiguration resources. The CacheSize field no longer accepts 0 as a valid value; the CacheSize type changes from int32 to *int32; and validation of the Unix domain socket for the KMS provider endpoint now happens when the EncryptionConfiguration is loaded.
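As a rough sketch, an EncryptionConfiguration using a KMS provider might look like the following; the provider name and socket path are placeholders for whatever KMS plugin a cluster actually runs.

```yaml
# Sketch of an EncryptionConfiguration that encrypts Secrets through a KMS provider.
# The provider name and Unix socket path are placeholders.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          name: example-kms-plugin
          endpoint: unix:///var/run/kms-plugin/socket.sock  # now validated when the config is loaded
          cachesize: 1000    # 0 is no longer accepted; omit the field or use a positive value
          timeout: 3s
      - identity: {}         # fallback so existing unencrypted data can still be read
```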
To simplify the configuration and improve the security of Kubernetes API calls that involve streaming connections to containers, this release deprecates two streaming configuration options.
Kubernetes persistent volumes default to giving containers in a pod access to the volume by mounting the filesystem, a suitable method for the majority of applications and use cases. However, some applications require direct access to the storage block device, notably certain databases that use their own storage format for increased performance.
This enhancement allows users to request a persistent volume as a block device where supported by the CSI driver and underlying storage provider. In the corresponding pod's container specification, users can set the device path at which the container's application can access the block device.
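A minimal sketch of the two pieces involved, assuming a CSI storage class that supports block mode; the class name, image, and device path are placeholders.

```yaml
# Sketch of requesting a raw block volume and exposing it to a container.
# Storage class, image, and device path are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  volumeMode: Block                 # request the volume as a raw block device
  storageClassName: example-csi-sc
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: raw-block-pod
spec:
  containers:
  - name: app
    image: example.com/db:latest    # placeholder image
    volumeDevices:
    - name: data
      devicePath: /dev/xvda         # device path visible inside the container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: raw-block-pvc
```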
The horizontal pod autoscaling (HPA) API allows users to configure the automatic addition and removal of pods in a replica set based on various metric values. This enhancement adds an optional behavior field to the HorizontalPodAutoscaler resource type. Users can set the scale-up and scale-down rates, enabling them to customize HPA behavior for different applications. For example, an application like a web server that sometimes receives sudden spikes in traffic may require adding new pods very quickly.
Because web servers are generally stateless, pods could also be removed quickly when the traffic subsides. On the other hand, users may want to slow the scale-down for deployments with a higher initialization overhead, e.g., containers running Java.
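A sketch of what that could look like for a web deployment, using the autoscaling/v2beta2 API; the target name and the specific policy numbers are illustrative, not recommendations.

```yaml
# Sketch of an HPA that scales up aggressively and scales down slowly.
# The deployment name and numeric policies are placeholders.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 50
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  behavior:
    scaleUp:
      policies:
      - type: Percent
        value: 100                       # allow doubling the replica count
        periodSeconds: 15
    scaleDown:
      stabilizationWindowSeconds: 300    # wait 5 minutes before scaling down
      policies:
      - type: Pods
        value: 1                         # remove at most one pod per minute
        periodSeconds: 60
```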
Cloud providers and many on-premises environments offer multiple zones or other topological divisions that provide redundancy in case of a localized failure. For applications to benefit from the independent availability of multiple failure zones, replicas need to be deployed to multiple zones. However, the default Kubernetes scheduler previously had no awareness of these zones and no option for spreading a replica set's pods across them.
This feature adds an optional topologySpreadConstraints field to the pod specification. Users can select node labels to use for identifying these domains and configure the tolerance and evenness for replica placement.
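A minimal sketch of a pod spread evenly across zones using the standard zone label; the app label and image are placeholders.

```yaml
# Sketch of spreading a workload's pods evenly across zones.
# The app label and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: spread-example
  labels:
    app: web
spec:
  topologySpreadConstraints:
  - maxSkew: 1                                  # allowed imbalance between zones
    topologyKey: topology.kubernetes.io/zone    # node label that defines the domains
    whenUnsatisfiable: DoNotSchedule            # or ScheduleAnyway for a soft constraint
    labelSelector:
      matchLabels:
        app: web
  containers:
  - name: app
    image: example.com/web:latest               # placeholder image
```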
Currently, Secret and ConfigMap objects mounted in a container periodically get updated with the new object value if the associated Kubernetes resource gets changed. In most cases, that behavior is desirable. Pods do not need to be restarted to see the new value, and if a workload only needs the startup value, it can read it once and ignore future changes.
Some use cases may benefit from preserving the secret or config map data as it was at the pod's start time. Making the data available in the mounted volume immutable protects applications from potential errors in updates to the underlying Kubernetes object. It also reduces the load on the kubelet and the kube-apiserver, because the kubelet no longer has to poll the Kubernetes API for changes to immutable objects.
This change adds the optional ability to make Secret and ConfigMap objects immutable through the new immutable field in their specifications. A resource created as immutable can no longer be updated, except for metadata fields. Users will need to delete an existing resource and recreate it with new data to make changes. If users do replace an object with new values, they will need to replace all running pods using those mounts, because existing pods will not get updates for the new data.
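A small sketch of an immutable ConfigMap; the name and data values are placeholders.

```yaml
# Sketch of an immutable ConfigMap; name and data values are placeholders.
# Once created, the data cannot be changed; delete and recreate the object instead.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
immutable: true
data:
  LOG_LEVEL: "info"
  FEATURE_FLAG: "enabled"
```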
The ability to create a persistent volume cloned with the data from an existing persistent volume claim as the source graduates to generally available. This feature is supported only via the Container Storage Interface, not by in-tree drivers. In addition, the back-end storage provider and the CSI plugin in use must support creating a volume from an existing volume's image. Specify a dataSource in a PersistentVolumeClaim to clone from an existing PVC.
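A sketch of such a clone request, assuming a CSI-backed storage class that supports cloning; the names and size are placeholders.

```yaml
# Sketch of cloning an existing PVC; names and size are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-pvc
spec:
  storageClassName: example-csi-sc
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi            # should be at least the size of the source volume
  dataSource:
    kind: PersistentVolumeClaim
    name: source-pvc           # existing PVC in the same namespace to clone from
```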
Note that the exact method of cloning depends on the storage provider. Some providers may not support cloning mounted volumes or volumes attached to a virtual machine. In addition, cloning active volumes creates the possibility of data corruption in the copy.
Currently, the kube-apiserver in most Kubernetes clusters uses one of two methods to connect to nodes, pods, and service endpoints in the cluster. In most cases, the server makes a direct connection to the target, but this approach requires a flat network with no overlap between the IP CIDR blocks of the control plane, the nodes, and the cluster's pod and service networks.
The other method, largely used only in Google Kubernetes Engine, creates SSH tunnels from the control plane network to the cluster. The reliability and security of the SSH tunnel method have not held up well. SSH tunnel support in Kubernetes has been deprecated and will be removed altogether in the future.
As a replacement, this feature creates an extensible TCP proxy system for connections from the control plane to endpoints in the cluster. It uses the new Konnectivity service, with a server component in the control plane network and clients deployed as a DaemonSet on the cluster nodes. This architecture simplifies the API server's code base, opens up the possibility of using a VPN to secure and monitor traffic between the control plane and the nodes, and offers other opportunities for customization.
We just covered a handful of the enhancements in the 1.18 release, focusing on new features that may be extremely useful to some users, as well as changes that highlight the ongoing work to improve the security posture of Kubernetes and to address the complexity of the code base, which raised issues and questions during last year's audit. Check out the (soon to be published) official release notes for a complete list of changes. Also, in case you missed it, you can find a great interactive tool for searching Kubernetes release notes at https://relnotes.k8s.io/.