Kubernetes 1.27: What You Need to Know

Greetings, tech enthusiasts! A new version of Kubernetes was recently released: Kubernetes v1.27, the first release of 2023!

This release consists of 60 enhancements: 18 entering Alpha, 29 graduating to Beta, and 13 graduating to Stable.

In this blog post, we’ll explore the advancements introduced in Kubernetes 1.27, the latest release, and shed light on key features that solidify its position as an indispensable tool for modern application deployment.

5 New Features in Kubernetes 1.27

Legacy k8s.gcr.io container image registry redirected to registry.k8s.io

Starting with Kubernetes v1.25, the default image registry has been registry.k8s.io, replacing the old k8s.gcr.io registry. This change was made to help the community adopt a more sustainable infrastructure model, and the old registry is frozen as of April 3, 2023. The new registry endpoint spreads traffic across a number of regions and cloud providers, allowing images to be served from the cloud provider closest to the user: clients pulling images from registry.k8s.io are securely redirected to fetch them from a storage service in the closest region of the relevant cloud provider.

To avoid issues with the changeover, it is important to take the following actions:

  • If you currently have strict domain name or IP address access policies that limit image pulls to k8s.gcr.io, revise them to match the new registry.
  • If you host your own image registry, copy the relevant images to your self-hosted repositories.
  • If you currently mirror images to a private registry from k8s.gcr.io, update the mirror to pull from the new public registry, registry.k8s.io.
  • If you are a maintainer of a subproject, update your manifests and Helm charts to use the new registry.
  • Consider hosting local image registry mirrors to increase the reliability of your cluster and remove the dependency on the community-owned registry, especially if you are running Kubernetes in networks where external traffic is restricted.

To check which images are using the legacy registry and fix them, you can use the following options:

  • Use the kubectl command: 
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" | tr -s '[[:space:]]' '\n' | sort | uniq -c
  • Use the community-images krew plugin: install it with kubectl krew install community-images, then run kubectl community-images.
  • Run a search over your manifests and charts for “k8s.gcr.io”.
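
As a quick illustration of that last option, here is one way to locate legacy registry references and, after reviewing the matches, rewrite them. This is only a sketch: the ./manifests and ./charts paths are placeholders for wherever your YAML actually lives, and you should back up files before editing them in place.
# Find references to the legacy registry in your manifests and charts
grep -rn "k8s.gcr.io" ./manifests ./charts
# After reviewing the matches, rewrite them in place (GNU sed shown)
find ./manifests ./charts -name '*.yaml' -exec sed -i 's|k8s.gcr.io|registry.k8s.io|g' {} +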

It is important to note that new Kubernetes images from version 1.27 onward, and patches for older versions, won’t be published to the k8s.gcr.io registry; setting the default image registry to k8s.gcr.io will fail for new releases after April 2023, as they won’t be present in the old registry. The frozen registry will remain available for image pulls to assist end users in their migration away from it, but the community cannot make long-term guarantees about it.

The Kubernetes project runs registry.k8s.io as a community-owned registry for its container images. The changeover from k8s.gcr.io to registry.k8s.io makes the project more vendor-neutral and open to a wider range of cloud providers, promoting interoperability and avoiding vendor lock-in.

Enhanced Container Resource-based Pod Autoscaling (beta ContainerResource)

In Kubernetes 1.27, Container Resource-based Pod Autoscaling graduates to beta. It allows the Horizontal Pod Autoscaler (HPA) to scale workloads based on the resource usage of individual containers within a Pod, which gives finer-grained control than scaling on the aggregated usage of all containers in the Pod; for example, you can scale on an application container’s CPU usage while ignoring a sidecar’s.

Here’s an example of how to do this:

  1. Define the resource limits and requests for your container in your Kubernetes manifest file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
  labels:
    app: example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: example-container
        image: nginx
        resources:
          limits:
            cpu: 500m
            memory: 512Mi
          requests:
            cpu: 200m
            memory: 256Mi
  2. Create a HorizontalPodAutoscaler object for the Deployment you want to scale:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: ContainerResource
    containerResource:
      name: cpu
      container: example-container
      target:
        type: Utilization
        averageUtilization: 50

In this example, we create a HorizontalPodAutoscaler for the “example-deployment” Deployment. The HPA scales the number of replicas based on the CPU utilization of the “example-container” container (the ContainerResource metric type), targeting an average utilization of 50%, while maintaining a minimum of 1 replica and a maximum of 10 replicas.

This example assumes that you have a Kubernetes cluster up and running and the Metrics Server installed, so the HPA can collect resource utilization metrics from the pods in your Deployment.
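
Once the HPA exists, a quick sanity check is to watch it and inspect its per-metric status; the object name below matches the example above:
# Watch the observed utilization and the current replica count
kubectl get hpa example-hpa --watch
# Inspect events and detailed per-metric status
kubectl describe hpa example-hpa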

Documentation: https://kubernetes.io/blog/2023/05/02/hpa-container-resource-metric/

In-place Resource Resize for Kubernetes Pods (alpha)

Kubernetes 1.27 introduces an alpha feature called In-place Resource Resize, which lets users resize the CPU and memory allocated to a pod without restarting its containers. This is useful because, previously, changing a pod’s resource values meant disrupting running workloads by recreating the pod.

To use this feature, you need a Kubernetes 1.27 cluster with the InPlacePodVerticalScaling feature gate enabled, and pods scheduled with resource requests and limits. The feature gate is disabled by default; enabling it allows pods to be resized in place without restarting their containers.
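
The gate has to be enabled on the cluster components (for example via --feature-gates=InPlacePodVerticalScaling=true on the API server and kubelets). For local experimentation, one way to do this, assuming you use kind, is a cluster config like the following sketch:
# kind-config.yaml: enable the alpha gate cluster-wide in a local kind cluster
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
featureGates:
  InPlacePodVerticalScaling: true
nodes:
- role: control-plane
Create the cluster with kind create cluster --config kind-config.yaml.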

Here’s an example:

  1. Create a YAML file for a pod with resource limits and requests:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: myimage
    resources:
      requests:
        cpu: 200m
        memory: 256Mi
      limits:
        cpu: 500m
        memory: 512Mi
  2. Create the pod using the kubectl apply command:
  kubectl apply -f pod.yaml
  3. Check the current resource allocation of the pod using the kubectl describe pod command:
kubectl describe pod my-pod
  4. Update the resources in the pod’s containers by patching the pod spec with the kubectl patch command. For example, to increase the CPU limit of the container to 1 core:
kubectl patch pod my-pod -p '{"spec":{"containers":[{"name":"my-container","resources":{"limits":{"cpu":"1"}}}]}}'
  5. Verify that the resource allocation of the pod has been updated, either with kubectl describe pod or with kubectl get and the --output=jsonpath option:
kubectl get pod my-pod --output=jsonpath='{.spec.containers[0].resources.limits}'
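
Two optional extras are worth knowing about. First, a container can declare a resizePolicy to control whether a change to a given resource restarts the container; the sketch below uses the field names from the 1.27 alpha, so verify them against your cluster’s API before relying on them. Second, you can confirm that an in-place resize did not restart the container by checking its restart count.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: myimage
    resizePolicy:                        # optional: per-resource restart behavior on resize
    - resourceName: cpu
      restartPolicy: NotRequired         # apply CPU changes in place (the default)
    - resourceName: memory
      restartPolicy: RestartContainer    # restart this container when memory is resized
    resources:
      requests:
        cpu: 200m
        memory: 256Mi
      limits:
        cpu: 500m
        memory: 512Mi
To check the restart count after a resize:
kubectl get pod my-pod --output=jsonpath='{.status.containerStatuses[0].restartCount}'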


Introducing an API For Volume Group Snapshots

The Volume Group Snapshot feature was introduced in Kubernetes v1.27 as an alpha feature. It lets users take crash-consistent snapshots of multiple volumes together, using a label selector to group multiple PersistentVolumeClaims for snapshotting. This feature is only supported for CSI volume drivers. The feature is useful because:

  • Some storage systems provide the ability to create a crash-consistent snapshot of multiple volumes. A group snapshot represents “copies” from multiple volumes that are taken at the same point-in-time. A group snapshot can be used either to rehydrate new volumes (pre-populated with the snapshot data) or to restore existing volumes to a previous state (represented by the snapshots). 
  • Group snapshots can simplify the process of taking snapshots of multiple volumes. Instead of taking snapshots of each volume individually, users can group the volumes together and take a single snapshot. This can save time and reduce the complexity of snapshot management. 
  • The feature allows a new PersistentVolumeClaim to be created from a VolumeSnapshot object that is part of a VolumeGroupSnapshot. This triggers provisioning of a new volume pre-populated with data from the specified snapshot. The user repeats this until all volumes are created from all the snapshots in the group snapshot.

Overall, the Volume Group Snapshot feature simplifies taking snapshots of multiple volumes: instead of snapshotting each volume individually, you group the volumes with a label selector and take a single, crash-consistent group snapshot.
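
To give a feel for the API, here is a minimal sketch of a VolumeGroupSnapshot that selects PersistentVolumeClaims by label. It assumes the external-snapshotter CRDs and a CSI driver with group snapshot support are installed; the class, namespace, and label values are placeholders:
apiVersion: groupsnapshot.storage.k8s.io/v1alpha1
kind: VolumeGroupSnapshot
metadata:
  name: my-group-snapshot          # placeholder name
  namespace: my-namespace          # placeholder namespace
spec:
  volumeGroupSnapshotClassName: my-group-snapshot-class   # provided by your CSI driver
  source:
    selector:
      matchLabels:
        app: my-app                # every PVC carrying this label is snapshotted together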

Kubernetes 1.27: Safer, More Performant Pruning in kubectl apply

Kubernetes version 1.5 introduced the --prune flag for kubectl apply, which automatically cleans up previously applied resources that have been removed from the current configuration. However, the existing implementation of --prune has design flaws that diminish its performance and can result in unexpected behavior. Version 1.27 of kubectl introduces an alpha version of a revamped pruning implementation that addresses these issues with a concept called ApplySet [0].

An ApplySet is a group of resources associated with a parent object on the cluster, identified and configured through standardized labels and annotations. Additional standardized metadata allows accurate identification of ApplySet member objects within the cluster, simplifying operations like pruning.

To leverage ApplySet-based pruning, set the KUBECTL_APPLYSET=true environment variable and include the --prune and --applyset flags in your kubectl apply invocation: KUBECTL_APPLYSET=true kubectl apply -f <directory/> --prune --applyset=<name>.

Here’s a step-by-step example of how to use KUBECTL_APPLYSET=true kubectl apply -f <directory/> --prune --applyset=<name>:

  1. Create a directory containing the YAML files for the Kubernetes objects you want to apply. For example, you could create a directory called my-app with the following files:
    • deployment.yaml for your deployment object
    • service.yaml for your service object
  2. Set the KUBECTL_APPLYSET environment variable to true to enable ApplySet-based pruning:
   export KUBECTL_APPLYSET=true
  3. Run the kubectl apply command with the --prune and --applyset flags to apply the objects in your directory and enable pruning based on the specified ApplySet:
   kubectl apply -f ./my-app/ --prune --applyset=my-app

This command tells Kubernetes to apply the objects in the my-app directory, enable pruning based on the my-app ApplySet, and delete any previously applied objects that are not in the current configuration.
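
To see pruning in action, drop one manifest from the directory and re-apply; the corresponding object is deleted because it is no longer part of the applied configuration. This sketch reuses the example file names from step 1:
# Remove the Service manifest from the configuration
rm ./my-app/service.yaml
# Re-apply: the Service is pruned because it is no longer in the ApplySet's configuration
KUBECTL_APPLYSET=true kubectl apply -f ./my-app/ --prune --applyset=my-app
kubectl get services    # the previously applied Service should now be gone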

  4. Verify that your objects were created and are running correctly:
   kubectl get deployments,services

This command should show your deployment and service objects with the desired number of replicas and ports, respectively.
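
If you are curious about the bookkeeping, kubectl records the ApplySet in a parent object, by default a Secret named after the ApplySet, and labels every member object. The inspection commands below assume that default; the label keys come from the ApplySet specification:
# The parent object: a Secret carrying the applyset.kubernetes.io/id label
kubectl get secret my-app --show-labels
# Member objects carry the applyset.kubernetes.io/part-of label
kubectl get deployments,services -l applyset.kubernetes.io/part-of --show-labels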

  5. To delete your objects and clean up, remove the member objects and then the ApplySet’s parent object (by default, a Secret named after the ApplySet) using kubectl delete:
   kubectl delete -f ./my-app/
   kubectl delete secret my-app

The first command deletes the Deployment and Service you applied; the second removes the my-app Secret, which kubectl created as the default parent object for the my-app ApplySet. Deleting the parent Secret removes the ApplySet’s record, but it does not by itself delete the member objects, which is why they are removed explicitly first.

And that’s it! With ApplySet-based pruning in Kubernetes 1.27, you can manage your Kubernetes resources more safely and efficiently and avoid the performance issues and unexpected behaviors of the previous --prune implementation.

🔥 [40% Off] The Linux Foundation, Oct 25, 2023 (5PM EST) – Oct 31, 2023! [RUNNING NOW!]

Shop Certifications (Kubernetes CKAD, CKA and CKS), Bootcamps, SkillCreds, or Courses.

Coupon: use code SCARY40 at checkout

Hurry up: offer valid for Oct 25, 2023 (5PM EST) – Oct 31, 2023. ⏳

Conclusion

In this post, we have covered five standout features of Kubernetes 1.27: the registry migration to registry.k8s.io, container resource-based autoscaling for the HPA, in-place resource resize for pods, the new Volume Group Snapshot API, and safer ApplySet-based pruning for kubectl apply. Together, they improve how workloads are scaled, resized, backed up, and managed in clusters.

If you are interested in the Kubernetes ecosystem and certification exams, I encourage you to check out my related posts in this blog.
