In this article, we will discuss the key steps and recommendations for upgrading the Blackbox Exporter component in the cluster and then upgrading Kubernetes itself. MetalLB, ArgoCD, Portworx and prometheus-stack were covered in the first and second parts of the series.

Blackbox Exporter

For this cluster, upgrading the Blackbox Exporter version was essential to maintain stable and efficient monitoring services. The Blackbox Exporter update process does not differ significantly from the ones described for the previous applications, except that here there was no need to create a separate application before implementing the changes.

In the monitoring architecture, Blackbox Exporter plays a key role, enabling the collection and analysis of data on the status and availability of services. For this reason, it is worth using the existing monitoring application, which was designed and adapted for updating the entire range of monitoring tools, including prometheus-stack.

With a monitoring application already configured and optimized for these requirements, the Blackbox Exporter update could be carried out efficiently, minimizing the risk of complications or incompatibilities in the monitoring environment. This way, we ensured smooth monitoring operations and high-quality services in the cluster.

Setting up the manifest

To set up the manifest for the Blackbox Exporter update, it is advised to rely on the values-blackbox.yml file as well as the Helm tool.

To generate the manifest, use the following command:
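A minimal sketch of such a command, assuming the chart comes from the prometheus-community Helm repository and that the release and output file names used here are illustrative:

```shell
# Render the Blackbox Exporter manifest from the Helm chart
# (chart, release and output names are assumptions; adjust to your repository)
helm template blackbox-exporter prometheus-community/prometheus-blackbox-exporter \
  --version {{target-version}} \
  -f values-blackbox.yml \
  > blackbox-exporter.yml
```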

In the command above, {{target-version}} corresponds to the Blackbox Exporter version used on the cluster, as defined in the values-blackbox.yml file.

Thanks to this operation, we obtained a manifest ready for deployment that takes into account the specifics of the monitoring environment and ensures compliance with the cluster's requirements.

Comparing the manifest with the cluster using ArgoCD

As with the previous versions, it is essential to precisely adapt the file to the requirements and configuration of the cluster. To identify the necessary changes, it is worth using the app diff feature available in ArgoCD, which lets you compare the repository with the current state of the cluster.

The goal is to compare the ArgoCD manifest with the actual state of the cluster, to verify that the configurations are compatible. This operation does not differ much from the previous ones.
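A hedged example of such a comparison, assuming the ArgoCD application is named prometheus-stack (the name and path are illustrative):

```shell
# Compare the manifests in the repository with the live state of the cluster
argocd app diff prometheus-stack --local prometheus-stack/base
```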

After identifying the differences and implementing the essential changes, the new manifest should be added to the /prometheus-stack/base/kustomization.yml file.

The final structure of the file in the main folder should look as follows:
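As a sketch, assuming the resource file names from this series (the exact entries depend on your repository), the kustomization file could look like this:

```yaml
# /prometheus-stack/base/kustomization.yml
# (resource file names are illustrative; keep your existing entries)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - prometheus-stack.yml
  - blackbox-exporter.yml
```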

After comparing the repository with the current state of the environment, we can synchronize the application with the help of ArgoCD. This way, we have a restore point and we ensure that the application's configuration stays consistent with the cluster's requirements and expectations.

Updating Kubernetes – update of the Control-Plane nodes

After updating all the components, it is time to update Kubernetes itself. The first step is updating the control-plane nodes.

Updating the kubeadm tool

To perform the update, you must first determine the version you want to install. Then, after selecting it, you can install it as follows:
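A minimal sketch of these steps, assuming a Debian/Ubuntu node with the Kubernetes apt repository configured (the package version below is a placeholder):

```shell
# List the kubeadm versions available in the repository
apt-get update
apt-cache madison kubeadm

# Install the selected version and pin it again afterwards
# (1.26.x-00 is a placeholder; pick the exact patch release)
apt-mark unhold kubeadm
apt-get install -y kubeadm=1.26.x-00
apt-mark hold kubeadm
```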

After installing the appropriate version of the kubeadm tool, it is time to prepare the upgrade plan by executing this command:
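The upgrade plan is prepared with the standard kubeadm subcommand:

```shell
# Check that the cluster can be upgraded and list the available target versions
kubeadm upgrade plan
```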

Updating the Kubernetes cluster

Next, the Kubernetes cluster has to be upgraded by running:
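On the first control-plane node this is done with kubeadm; the target version below is a placeholder for the exact 1.26 patch release chosen from the upgrade plan:

```shell
# Apply the upgrade on the first control-plane node
kubeadm upgrade apply v1.26.x
```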

After the upgrade of the kubeadm tool, you also need to update kubelet and kubectl. Before the update, it is best to remove the package hold on them, and to reapply it after the update. To update kubelet and kubectl, follow these steps:

After the node was designated for updating, it was drained of its workloads and the correctness of the cluster state was verified.
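Draining a node is typically done with kubectl; <node-to-drain> stands for the name of the node being updated:

```shell
# Cordon the node and evict its workloads before updating it
kubectl drain <node-to-drain> --ignore-daemonsets
```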

Then we updated kubelet and kubectl.
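As with kubeadm, this is a sketch assuming a Debian/Ubuntu node (the package version is a placeholder):

```shell
# Unpin, update and re-pin kubelet and kubectl
apt-mark unhold kubelet kubectl
apt-get install -y kubelet=1.26.x-00 kubectl=1.26.x-00
apt-mark hold kubelet kubectl
```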

After installing the new versions, restart the kubelet service:
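On a systemd-based node:

```shell
# Reload unit files and restart kubelet so the new version takes effect
systemctl daemon-reload
systemctl restart kubelet
```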

Finally, it is worth verifying that the node was put back into service with the help of the following command, replacing <node-to-drain> with the name of the node that was updated earlier.
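Putting the node back into scheduling and checking its status uses the standard kubectl commands, with <node-to-drain> as the placeholder used throughout this section:

```shell
# Allow the node to receive workloads again and verify its status
kubectl uncordon <node-to-drain>
kubectl get nodes
```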

Worker nodes

Updating the worker nodes is handled by an Ansible playbook, which is split into two files:

  1. update-node-k8s.yml – responsible for the update of Kubernetes,
  2. update-node-os.yml – responsible for the update of the operating system and for restarting the server.

Please note that these operations should be executed sequentially, node by node, to ensure continuous service availability.
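The node-by-node run can be sketched as follows; the inventory file and node name are illustrative:

```shell
# Update Kubernetes on a single worker node, then its OS, before moving on
ansible-playbook -i inventory update-node-k8s.yml --limit worker-1
ansible-playbook -i inventory update-node-os.yml --limit worker-1
```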

Conclusion

This article concludes the series on updating the Kubernetes cluster from version 1.25 to 1.26. We encourage you to read the first and second parts.

In the newest version of the Kubernetes cluster, the PodSecurityPolicy (PSP) mechanism was replaced by a more flexible and modern one, called Pod Security Admission (PSA). This is a significant change that requires careful preparation and strict adherence to procedures when implementing and updating tools in the cluster. The article presented the key steps to complete before updating the cluster, focusing on the services that used to rely on PSP.

In summary, the series provides a comprehensive overview of the key aspects of updating a Kubernetes cluster, emphasizing the importance of accounting for changes in the security mechanism and following the update procedures for each cluster component. Please refer to the previous parts of the series to fully understand the migration process and to avoid potential issues during the upgrade.