Kubernetes Cluster - Master Node Upgrade: Upgrading kubeadm, kubectl, and kubelet

2023-10-26

Kubernetes - single-Master node upgrade

kubeadm upgrade

kubelet upgrade

kubectl upgrade

Notes for production environments:

  • kubeadm upgrade does not upgrade etcd, so make sure it has been backed up first, for example by taking a snapshot with etcdctl (see the sketch after this list).
  • Note that kubeadm upgrade only upgrades Kubernetes-internal components and does not touch any workloads. As a best practice, back up all important data before you start: any application-level state, such as databases the applications depend on (e.g. MySQL or MongoDB), must be backed up before the upgrade begins.
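
A minimal sketch of an etcd snapshot on a kubeadm-managed (stacked etcd) control plane. The certificate paths are the kubeadm defaults and the snapshot path is arbitrary, so adjust both to your environment:

ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot-$(date +%F).db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key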

Part 1. Upgrade kubeadm to the latest version

1. Configure the Aliyun (Alibaba Cloud) Kubernetes yum repository

vim /etc/yum.repos.d/ali-k8s.repo

[aliyun]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
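
After saving the repo file, it can help to refresh the yum metadata cache so the new repository is picked up (optional; standard yum commands on CentOS 7):

yum clean all
yum makecache fast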

2. Install the latest kubeadm and kubectl

yum -y install kubeadm kubectl
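
With no version specified, yum installs the newest packages available in the repository. To upgrade to a specific release instead, the version can be pinned; the -0 release suffix below is what the Aliyun repo typically uses, so treat it as an assumption and check the listing first:

yum list --showduplicates kubeadm kubectl
yum -y install kubeadm-1.18.5-0 kubectl-1.18.5-0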

kubeadm version

kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"clean", BuildDate:"2020-06-26T03:45:16Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

3. Check whether the current master node can be upgraded

kubeadm upgrade plan

[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.18.3
[upgrade/versions] kubeadm version: v1.18.5
[upgrade/versions] Latest stable version: v1.18.5
[upgrade/versions] Latest stable version: v1.18.5
[upgrade/versions] Latest version in the v1.18 series: v1.18.5
[upgrade/versions] Latest version in the v1.18 series: v1.18.5

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     3 x v1.18.3   v1.18.5

Upgrade to the latest version in the v1.18 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.18.3   v1.18.5
Controller Manager   v1.18.3   v1.18.5
Scheduler            v1.18.3   v1.18.5
Kube Proxy           v1.18.3   v1.18.5
CoreDNS              1.6.7     1.6.7
Etcd                 3.4.3     3.4.3-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.18.5

4. Pull all v1.18.5 component images

Omitted here… I used component image packages that I had prepared in advance.
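
For reference, kubeadm itself can list and pre-pull the control-plane images. The --image-repository value below is the Aliyun mirror and is only an assumption; point it at whatever registry your cluster actually pulls from:

kubeadm config images list --kubernetes-version v1.18.5
kubeadm config images pull --kubernetes-version v1.18.5 \
  --image-repository registry.aliyuncs.com/google_containers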

5. Upgrade

kubeadm config view > kubeadm-config.yaml

kubeadm upgrade apply v1.18.5 --config kubeadm-config.yaml
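
Passing the exported configuration back in with --config is optional, and kubeadm itself warns against reconfiguring the cluster during an upgrade (see the log below); a plain "kubeadm upgrade apply v1.18.5" also works. If kubeadm config view is not available in your kubeadm version, the same configuration can be inspected straight from the kubeadm-config ConfigMap, as the upgrade plan output above suggests:

kubectl -n kube-system get cm kubeadm-config -o yaml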

The command blocks while the upgrade runs; console output:

W0703 15:56:44.656029   43781 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[upgrade/config] Making sure the configuration is correct:
W0703 15:56:44.663263   43781 common.go:94] WARNING: Usage of the --config flag for reconfiguring the cluster during upgrade is not recommended!
W0703 15:56:44.664271   43781 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.18.5"
[upgrade/versions] Cluster version: v1.18.3
[upgrade/versions] kubeadm version: v1.18.5
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.18.5"...
Static pod: kube-apiserver-k8s-master hash: 3bc606c369be503f38f7c28b4dcbc568
Static pod: kube-controller-manager-k8s-master hash: d6f97a3d7a863017ff6b125231213d1e
Static pod: kube-scheduler-k8s-master hash: a8caea92c80c24c844216eb1d68fe417
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/etcd] Non fatal issue encountered during upgrade: the desired etcd version for this Kubernetes version "v1.18.5" is "3.4.3-0", but the current etcd version is "3.4.3". Won't downgrade etcd, instead just continue
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests661056484"
W0703 15:56:58.088381   43781 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-03-15-56-57/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-master hash: 3bc606c369be503f38f7c28b4dcbc568
Static pod: kube-apiserver-k8s-master hash: ac83eb96b7b8816da72bf7ae5151edae
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-03-15-56-57/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s-master hash: d6f97a3d7a863017ff6b125231213d1e
Static pod: kube-controller-manager-k8s-master hash: 57a64a0372a1ea1dde51660c8c2f17c7
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-03-15-56-57/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s-master hash: a8caea92c80c24c844216eb1d68fe417
Static pod: kube-scheduler-k8s-master hash: 3415bde3e2a04810cc416f7719a3f6aa
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.18.5". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.


6. Verify

Check whether a further upgrade is still available: kubeadm upgrade plan

[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.18.5
[upgrade/versions] kubeadm version: v1.18.5
[upgrade/versions] Latest stable version: v1.18.5
[upgrade/versions] Latest stable version: v1.18.5
[upgrade/versions] Latest version in the v1.18 series: v1.18.5
[upgrade/versions] Latest version in the v1.18 series: v1.18.5

Awesome, you're up-to-date! Enjoy!

Check the kubectl version: kubectl version

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"clean", BuildDate:"2020-06-26T03:47:41Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"clean", BuildDate:"2020-06-26T03:39:24Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
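
You can also spot-check that the static control-plane pods are now running the v1.18.5 images; a quick sketch using kubectl custom columns (the pod names embed the node name, k8s-master in this cluster):

kubectl -n kube-system get pods \
  -o custom-columns='NAME:.metadata.name,IMAGE:.spec.containers[0].image' \
  | grep -E 'kube-(apiserver|controller-manager|scheduler)'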


Part 2. Upgrade kubelet to the latest version

yum -y install kubelet      # install the latest kubelet

kubelet --version           # check the installed kubelet version
kubectl get nodes           # check the node versions reported by the cluster

NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   30d   v1.18.3

The node still reports v1.18.3 because the kubelet has not been restarted yet:

[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl restart kubelet
[root@k8s-master ~]# kubectl get nodes

NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   30d   v1.18.5
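
This walkthrough covers a single-master cluster, so the kubelet is simply restarted in place. On a cluster that is running real workloads you would typically cordon and drain the node first and uncordon it afterwards; a minimal sketch, replacing k8s-master with the node being upgraded:

kubectl drain k8s-master --ignore-daemonsets
yum -y install kubelet
systemctl daemon-reload && systemctl restart kubelet
kubectl uncordon k8s-master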