Kubernetes, the popular container orchestration platform, continues to evolve rapidly with each new release. With the release of Kubernetes 1.27, several new features and enhancements have been introduced, providing improved capabilities for managing containerized workloads in clusters. In this blog post, we will take a comprehensive look at what's new in Kubernetes 1.27, exploring the latest features, improvements, and functionality that have been added to the platform.
This release consists of 60 enhancements: 18 entering Alpha, 29 graduating to Beta, and 13 graduating to Stable. Kubernetes v1.27 is available for download on GitHub. To get started with Kubernetes, you can run local clusters using tools such as minikube or kind, and you can easily install v1.27 using kubeadm.
Starting in v1.25, the default image registry has been registry.k8s.io. This value can be overridden in kubeadm and the kubelet, but setting it back to k8s.gcr.io will fail for new releases published after April 2023, as those images are not pushed to the old registry.
By moving away from the k8s.gcr.io image registry, Kubernetes reduces its dependency on Google Cloud and can serve images from the cloud provider closest to you. This makes the project more vendor-neutral and open to a wider range of cloud providers, promoting interoperability and avoiding vendor lock-in.
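If you bootstrap clusters with kubeadm, the registry can be overridden through the imageRepository field of its ClusterConfiguration. A minimal sketch (the version pin is illustrative):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.27.0
# Pull control-plane images from the community-owned registry
imageRepository: registry.k8s.io
```

Passing this file to kubeadm init --config ensures all control-plane images come from the new registry.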
One of the key features in Kubernetes 1.27 is the graduated-to-beta status of Container Resource-based Pod Autoscaling. This feature allows Horizontal Pod Autoscaler (HPA) to scale workloads based on the resource usage of individual containers within a Pod, instead of the aggregated usage of all containers in the Pod. This provides more fine-grained and efficient scaling of containerized workloads.
To use container resource-based autoscaling, you define a metric of type ContainerResource in the HPA configuration and name the container whose resource usage should drive scaling. Here's an example YAML configuration for an HPA that scales based on the CPU usage of a container named "web" in a Pod:
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: ContainerResource
    containerResource:
      name: cpu
      container: web
      target:
        type: Utilization
        averageUtilization: 80
```
In this example, the HPA is configured to scale the "my-deployment" Deployment based on the CPU usage of the "web" container, with a target average utilization of 80%.
Kubernetes 1.27 also continues to strengthen the security of containerized workloads. One of the notable mechanisms is the PodSecurity admission controller (PSA), the built-in replacement for the PodSecurityPolicy (PSP) API, which was removed in v1.25. PSA enforces the Pod Security Standards at the namespace level, ensuring that Pods are created with the desired security configurations.
One key distinction between PSP and PSA is that PSA functions solely as a validating admission controller and does not support mutating resources like PSP did.
Enabling Pod Security Admission
To try PSA while it was alpha, you needed a v1.22 Kubernetes cluster with the --feature-gates="...,PodSecurity=true" feature flag enabled; from v1.23 onward it is enabled by default. For testing alpha features, KinD is a handy tool to quickly set up a local Kubernetes cluster. Here's an example cluster configuration file (kind-config.yaml) to spin up a v1.22 Kubernetes cluster with PSA enabled:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
featureGates:
  PodSecurity: true
nodes:
- role: control-plane
- role: worker
```

You can create a new cluster using the following command:

```shell
$ kind create cluster --image=kindest/node:v1.22.0@sha256:b8bda84bb3a190e6e028b1760d277454a72267a5454b57db34437c34a588d047 --config kind-config.yaml
```
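With PSA available, policies are applied by labeling namespaces with the desired Pod Security Standards levels. A minimal sketch (the namespace name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
  labels:
    # Reject Pods that violate the "restricted" standard
    pod-security.kubernetes.io/enforce: restricted
    # Warn (but still admit) on violations of the "baseline" standard
    pod-security.kubernetes.io/warn: baseline
```

Because PSA only validates, existing Pods in the namespace are not modified; new Pods that violate the enforced level are rejected at admission time.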
The Container Runtime Interface (CRI) is a standardized interface between Kubernetes and container runtimes such as Docker, containerd, and others. In Kubernetes 1.27, several enhancements have been made to the CRI to improve container runtime management.
One significant enhancement is the alpha support for in-place resizing of Pod resources, which lets the kubelet use the CRI's UpdateContainerResources call to change a running container's CPU and memory allocation without restarting the Pod. Additionally, the Evented PLEG feature lets the kubelet learn about container state changes from CRI events rather than frequent polling, leading to more robust and efficient container runtime management.
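Kubernetes 1.27 adds alpha support for in-place resizing of Pod resources, gated behind the InPlacePodVerticalScaling feature gate. A minimal sketch of the new per-container resizePolicy field (the Pod name, image, and resource values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resize-demo
spec:
  containers:
  - name: web
    image: nginx
    # Alpha in v1.27: declare how each resource may be resized
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired      # resize CPU without restarting
    - resourceName: memory
      restartPolicy: RestartContainer # memory changes restart the container
    resources:
      requests:
        cpu: 250m
        memory: 64Mi
      limits:
        cpu: 500m
        memory: 128Mi
```

With the feature gate enabled, patching the Pod's resources triggers a resize according to these policies instead of requiring Pod re-creation.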
Kubernetes 1.27 also refines how the kubelet is configured to talk to its runtime over the CRI. Containerd is a popular container runtime that provides a simple and reliable way to run containers in production environments, and it is the most common CRI implementation today.
To point the kubelet at containerd, v1.27 adds the containerRuntimeEndpoint field to the KubeletConfiguration API, where previously this could only be set via the --container-runtime-endpoint command-line flag. Here's an example kubelet configuration that specifies the containerd socket:
```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: "unix:///run/containerd/containerd.sock"
# Add more configuration settings as needed
```
In this example, the containerRuntimeEndpoint field points at the containerd socket, indicating that containerd will be used as the CRI runtime by the kubelet.
Networking is a critical aspect of container orchestration, and Kubernetes 1.27 introduces several improvements in this area.
Kubernetes 1.27 continues to extend networking support, including external IPs on Services and the now-stable EndpointSlice-based proxying in kube-proxy.
One of the key features is the enhanced support for EndpointSlice, which is a more scalable and efficient way of managing endpoint objects for Services in large clusters. EndpointSlice allows for better performance and scalability in clusters with thousands of Services and Endpoints, providing improved overall cluster performance.
Here are some code examples for these networking features:
Kubernetes lets you specify external IP addresses for Services, allowing you to expose your Services externally on specific addresses. Here's an example YAML configuration for a Service that specifies an external IP:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  externalIPs:
  - 10.0.0.100
```
In this example, the Service "my-service" selects Pods with the label "app: my-app" and exposes port 80 for TCP traffic, with a target port of 8080 on the Pods. Additionally, it specifies an external IP address of 10.0.0.100, allowing the Service to be accessed externally using that IP address.
EndpointSlice-based proxying, stable since v1.22, allows kube-proxy to program Service endpoints more efficiently. Here's an example YAML configuration for a backend Pod whose traffic is routed via EndpointSlices:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-app
spec:
  containers:
  - name: web
    image: my-web-image
    ports:
    - containerPort: 8080
```
In this example, the Pod "my-pod" runs a container with the image "my-web-image" and exposes port 8080. The Pod can be used as an endpoint for a Service, and EndpointSliceProxying allows for more efficient and scalable proxying of traffic to this Pod from the corresponding Service.
Another important networking capability is dual-stack IPv4/IPv6 support, stable since v1.23. Kubernetes clusters can be configured to support both IPv4 and IPv6 networking, providing more flexibility and future-proofing for networking requirements. This allows for better interoperability with IPv6 networks and enables running containerized workloads in clusters with mixed IPv4 and IPv6 environments.
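On a dual-stack-configured cluster, a Service can request both address families via its ipFamilyPolicy and ipFamilies fields. A minimal sketch (the Service name and selector are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-dual-stack-service
spec:
  # Allocate both IPv4 and IPv6 cluster IPs when the cluster supports it
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv4
  - IPv6
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
```

With PreferDualStack, the Service falls back to a single address family on clusters that are not dual-stack capable, which makes the manifest portable.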
Observability and debugging are critical for managing containerized applications, and Kubernetes 1.27 continues to improve in this area. A key building block is the Metrics API, which exposes resource usage of Pods and containers at the cluster level, providing insights into resource utilization, performance, and health.
Kubernetes 1.27 also makes it easier to monitor the health and performance of workloads in clusters. The metrics pipeline reports resource usage for individual containers within a Pod, providing more granular visibility into the performance of containers.
To get the most out of per-container monitoring, declare requests and limits with the resources field in the Pod definition; monitoring and observability tools can then compare actual usage against these values. Here's an example YAML configuration for a Pod that sets CPU and memory requests and limits:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: web
    image: my-web-image
    resources:
      requests:
        cpu: 250m
        memory: 64Mi
      limits:
        cpu: 500m
        memory: 128Mi
```
In this example, the Pod "my-pod" specifies CPU and memory requests and limits for the "web" container via the resources field. Monitoring and observability tools can then track the container's actual usage against these bounds.
In addition, Kubernetes offers container lifecycle hooks, which let users run custom actions at specific points in a container's lifecycle: postStart, immediately after a container starts, and preStop, immediately before it is terminated. This enables better debugging and troubleshooting of containerized applications, allowing users to perform custom actions and gather diagnostic information around container startup and shutdown.
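Lifecycle hooks are declared per container. A minimal sketch using the postStart and preStop hooks (the image and commands are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: web
    image: nginx
    lifecycle:
      postStart:
        exec:
          # Runs right after the container starts
          command: ["/bin/sh", "-c", "echo started > /tmp/started"]
      preStop:
        exec:
          # Runs before termination; drain gracefully before SIGTERM
          command: ["/bin/sh", "-c", "nginx -s quit; sleep 5"]
```

Note that postStart runs asynchronously with the container's entrypoint, so it should not be relied on for strict ordering guarantees.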
Storage is a critical aspect of containerized applications, and Kubernetes 1.27 continues to refine it. One of the notable capabilities is CSI (Container Storage Interface) volume snapshotting and cloning, which lets users create snapshots of Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) and use them to create new PVs and PVCs, enabling efficient and fast cloning of storage volumes for data migration, backup, and disaster recovery scenarios.
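A snapshot is requested with a VolumeSnapshot object. A minimal sketch, assuming a CSI driver with snapshot support is installed (the snapshot class and PVC names are illustrative):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-snapshot
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source:
    # Snapshot the volume bound to this PVC
    persistentVolumeClaimName: my-pvc
```

A new PVC can then restore from this snapshot by referencing it in its dataSource field, which is the basis for clone and restore workflows.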
Another storage capability is CSI storage capacity tracking, which uses the CSI API to report the actual remaining capacity a driver can provision, so the scheduler can account for storage when placing Pods. This provides more accurate storage capacity management, allowing users to optimize storage resources and avoid scheduling Pods onto nodes whose storage cannot satisfy their volumes.
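Capacity tracking pairs naturally with late volume binding: with volumeBindingMode: WaitForFirstConsumer, provisioning is delayed until the Pod is scheduled, letting the scheduler consult the reported capacity first. A minimal sketch (the provisioner name is illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-csi-storage
provisioner: example.csi.driver.io
# Delay binding so the scheduler can consider reported storage capacity
volumeBindingMode: WaitForFirstConsumer
```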
Kubernetes 1.27 also continues to improve the developer experience and streamline the containerized application development process. Tools such as minikube and kind make it straightforward to spin up a local cluster running the new release, with the necessary dependencies and configuration in place, enabling a smooth and efficient development workflow.
Another developer-focused area is the API conventions for custom resources. Custom resources allow users to define their own object types in Kubernetes, and with OpenAPI v3 schemas and, more recently, CEL-based validation rules, custom resources can define their own schema and validation constraints, making it easier for developers to define, manage, and validate custom resources and leading to improved consistency and reliability of custom resource definitions.
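As a sketch, here is a CustomResourceDefinition with an OpenAPI v3 schema that validates a custom Widget resource (the group and field names are illustrative):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
                minimum: 1   # reject Widgets with replicas < 1 at admission
```

With this schema in place, the API server rejects invalid Widget objects before they are stored, without any custom controller code.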
Kubernetes has a rich ecosystem of tools, libraries, and extensions, and Kubernetes 1.27 brings continued updates here as well. One notable area is the ongoing rollout of Container Storage Interface (CSI) migration, which transparently routes in-tree storage plugin operations to CSI drivers in a phased manner, providing a smoother migration path and better compatibility with newer Kubernetes versions.
Another ecosystem component is the set of CSI sidecar containers (such as external-provisioner and external-attacher), which provide a standard way of running the Kubernetes-facing half of a CSI driver alongside it in a Pod. Among other things, they implement dynamic provisioning of volumes, allowing for more dynamic and efficient volume management in containerized applications.
Kubernetes 1.27 also continues to improve scalability and performance, making it more efficient to manage large clusters with many containerized workloads. One of the notable foundations is EndpointSlices, stable since v1.21, a more scalable and efficient way to represent the endpoints of Services, improving the performance and scalability of Service discovery.
To use Endpoint Slices, you can simply create Services in your cluster as you would normally do, and Kubernetes will automatically create and manage Endpoint Slices for the Services. Here's an example YAML configuration for a Service that uses Endpoint Slices:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
```
In this example, the Service "my-service" selects Pods with the label "app: my-app" and exposes port 80 for TCP traffic, with a target port of 8080 on the Pods. Kubernetes will automatically create and manage Endpoint Slices for the Service, improving the scalability and performance of Service discovery in your cluster.
Kubernetes 1.27 brings a host of new features, enhancements, and functionality to the container orchestration platform, providing improved capabilities for managing containerized workloads in clusters. With container resource-based Pod autoscaling, improved security features, enhancements to the Container Runtime Interface (CRI), extended networking support, improved observability and debugging, enhanced storage features, a better developer experience, and updates across the ecosystem, Kubernetes continues to evolve and mature as a leading platform for containerized application management. As organizations continue to adopt containerization for their applications, Kubernetes 1.27 provides a solid foundation for building and managing scalable, resilient, and efficient containerized applications.
Kubernetes 1.27: A Closer Look at the Top 10 Features You Need to Know About