A Hands-On Guide to a Complete and Efficient Offline Kubernetes Deployment

2023-05-16

Author: 郝建伟

Background

As more projects are delivered on customer sites, we occasionally encounter environments with no public internet access at all, i.e. fully air-gapped deployments. This calls for a complete and efficient offline deployment plan.

System resources

No.  Hostname           IP              OS           CPU  Memory  Disk
01   k8s-master1        10.132.10.91    CentOS-7     4c   8g      40g
02   k8s-master2        10.132.10.92    CentOS-7     4c   8g      40g
03   k8s-master3        10.132.10.93    CentOS-7     4c   8g      40g
04   k8s-worker1        10.132.10.94    CentOS-7     8c   16g     200g
05   k8s-worker2        10.132.10.95    CentOS-7     8c   16g     200g
06   k8s-worker3        10.132.10.96    CentOS-7     8c   16g     200g
07   k8s-worker4        10.132.10.97    CentOS-7     8c   16g     200g
08   k8s-worker5        10.132.10.98    CentOS-7     8c   16g     200g
09   k8s-worker6        10.132.10.99    CentOS-7     8c   16g     200g
10   k8s-harbor&deploy  10.132.10.100   CentOS-7     4c   8g      500g
11   k8s-nfs            10.132.10.101   CentOS-7     2c   4g      2000g
12   k8s-lb             10.132.10.120   internal LB  2c   4g      40g

Parameter configuration

Note: run the following on all nodes.

Basic system settings

Create the working, log, and data directories

$ mkdir -p /export/servers
$ mkdir -p /export/logs
$ mkdir -p /export/data
$ mkdir -p /export/upload
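
These directories are needed on every node. Once ansible is set up (see the ansible section below), a single ad-hoc command can create them everywhere at once; a minimal sketch assuming the k8s host group defined later in the inventory:

$ ansible k8s -m shell -a "mkdir -p /export/servers /export/logs /export/data /export/upload"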

Kernel and network parameter tuning

$ vim /etc/sysctl.conf

# Add the following settings
fs.file-max = 1048576
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 5
net.ipv4.neigh.default.gc_stale_time = 120
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2 
vm.max_map_count = 262144

# Apply the settings immediately
$ sysctl -p

ulimit tuning

$ vim /etc/security/limits.conf 

# Add the following settings
* soft memlock unlimited
* hard memlock unlimited
* soft nproc 102400
* hard nproc 102400
* soft nofile 1048576
* hard nofile 1048576

Base environment preparation

Installing ansible

1. Environment

OS: CentOS Linux release 7.8.2003
ansible: 2.9.27
Node: deploy

2. Deployment notes

The IoT management platform spans a large number of machines, so ansible is used for batch operations to save time. This requires passwordless root SSH from the deploy node to all other nodes.

Note: passwordless login can also be set up by hand with the following steps:
# Generate a key pair on the deploy node
$ ssh-keygen -t rsa
# Copy the contents of ~/.ssh/id_rsa.pub and append them to ~/.ssh/authorized_keys on each of the other nodes
# If the authorized_keys file does not exist yet, create it first and then paste the key
$ touch ~/.ssh/authorized_keys
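
If the root password is known, ssh-copy-id avoids the manual copy-and-paste; this is only a sketch, with the node list taken from the resource table above:

# Run on the deploy node; each node prompts for its root password once
$ for ip in 10.132.10.{91..99}; do ssh-copy-id -i ~/.ssh/id_rsa.pub root@$ip; done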

3. Deployment steps

1) Online installation

$ yum -y install https://releases.ansible.com/ansible/rpm/release/epel-7-x86_64/ansible-2.9.27-1.el7.ans.noarch.rpm

2) Offline installation

# Upload ansible and all of its dependency rpm packages in advance, then change into the rpm directory
$ yum -y install ./*.rpm

3) Check the version

$ ansible --version
ansible 2.9.27
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Apr  2 2020, 13:16:51) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]

4) Configure the managed host inventory

$ vim /etc/ansible/hosts

[master]
10.132.10.91    node_name=k8s-master1
10.132.10.92    node_name=k8s-master2
10.132.10.93    node_name=k8s-master3

[worker]
10.132.10.94    node_name=k8s-worker1
10.132.10.95    node_name=k8s-worker2
10.132.10.96    node_name=k8s-worker3
10.132.10.97    node_name=k8s-worker4
10.132.10.98    node_name=k8s-worker5
10.132.10.99    node_name=k8s-worker6

[etcd]
10.132.10.91    etcd_name=etcd1
10.132.10.92    etcd_name=etcd2
10.132.10.93    etcd_name=etcd3

[k8s:children]
master
worker
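
After saving the inventory, connectivity can be verified with an ad-hoc ping (a quick check, not part of the original steps):

# Every host should report SUCCESS with "ping": "pong"
$ ansible k8s -m ping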

5) Disable SSH host key checking

$ vi /etc/ansible/ansible.cfg
# Change the following setting
# uncomment this to disable SSH key host checking
host_key_checking = False

6) Disable SELinux, open up the firewall, and turn off swap

$ ansible k8s -m command -a "setenforce 0"
$ ansible k8s -m command -a "sed --follow-symlinks -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config"
$ ansible k8s -m command -a "firewall-cmd --set-default-zone=trusted"
$ ansible k8s -m command -a "firewall-cmd --complete-reload"
$ ansible k8s -m command -a "swapoff -a"

7) Configure /etc/hosts

$ cd /export/upload && vim hosts_set.sh
# Script contents:
#!/bin/bash
cat > /etc/hosts << EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.132.10.100 deploy harbor
10.132.10.91 master01
10.132.10.92 master02
10.132.10.93 master03
10.132.10.94 worker01
10.132.10.95 worker02
10.132.10.96 worker03
10.132.10.97 worker04
10.132.10.98 worker05
10.132.10.99 worker06
EOF

$ ansible k8s -m copy -a 'src=/export/upload/hosts_set.sh dest=/export/upload'
$ ansible k8s -m command -a 'sh /export/upload/hosts_set.sh'

Installing docker (deploy node)

1. Environment

OS: CentOS Linux release 7.8.2003
docker: docker-ce-20.10.17
Node: deploy

2. Deployment notes

Deploy docker as the container runtime for k8s.

3. Deployment steps

1) Online installation

$ yum -y install docker-ce-20.10.17

2) Offline installation

# Upload docker and all of its dependency rpm packages in advance, then change into the rpm directory
$ yum -y install ./*.rpm
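
The rpm packages used above have to be collected beforehand on a machine with internet access. A minimal sketch, assuming a matching CentOS 7 release with the official docker-ce repository added (the same bundle can later be packed as docker-rpm.tgz for the cluster nodes):

# On an internet-connected CentOS 7 machine
$ yum install -y yum-utils
$ yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
$ mkdir -p /export/download/docker-rpm
$ yum install -y docker-ce-20.10.17 --downloadonly --downloaddir=/export/download/docker-rpm
$ cd /export/download && tar czvf docker-rpm.tgz docker-rpm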

3) Start docker and check its status

$ systemctl start docker
$ systemctl status docker

4) Enable docker on boot

$ systemctl enable docker 

5) Check the version

$ docker version
Client: Docker Engine - Community
 Version:           20.10.17 
 API version:       1.41 
 Go version:        go1.17.11 
 Git commit:        100c701 
 Built:             Mon Jun  6 23:05:12 2022 
 OS/Arch:           linux/amd64 
 Context:           default 
 Experimental:      true
 
Server: Docker Engine - Community
 Engine:  
  Version:          20.10.17  
  API version:      1.41 (minimum version 1.12)  
  Go version:       go1.17.11  
  Git commit:       a89b842  
  Built:            Mon Jun  6 23:03:33 2022  
  OS/Arch:          linux/amd64  
  Experimental:     false 
 containerd:  
  Version:          1.6.8  
  GitCommit:        9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 
 runc: 
  Version:          1.1.4  
  GitCommit:        v1.1.4-0-g5fd4c4d 
 docker-init:  
 Version:          0.19.0  
 GitCommit:        de40ad0

Installing docker-compose

1. Environment

OS: CentOS Linux release 7.8.2003
docker-compose: docker-compose-linux-x86_64
Node: deploy

2. Deployment notes

Required by the Harbor private registry.

3. Deployment steps

1) Download docker-compose and upload it to the server

$ curl -L https://github.com/docker/compose/releases/download/v2.9.0/docker-compose-linux-x86_64 -o docker-compose

2) Move docker-compose into place and make it executable

$ mv docker-compose /usr/local/bin/
$ chmod +x /usr/local/bin/docker-compose
$ docker-compose version

3) Check the version

$ docker-compose version
Docker Compose version v2.9.0

Installing harbor

1. Environment

OS: CentOS Linux release 7.8.2003
harbor: harbor-offline-installer-v2.4.3
Node: harbor

2. Deployment notes

Private image registry.

3. Download the harbor offline installer and upload it to the server

$ wget https://github.com/goharbor/harbor/releases/download/v2.4.3/harbor-offline-installer-v2.4.3.tgz

4. Extract the package

$ tar -xzvf harbor-offline-installer-v2.4.3.tgz -C /export/servers/
$ cd /export/servers/harbor

5. Edit the configuration file

$ mv harbor.yml.tmpl harbor.yml
$ vim harbor.yml

6. Set the following values

hostname: 10.132.10.100
http.port: 8090
data_volume: /export/data/harbor
log.location: /export/logs/harbor
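
The settings above are written in dotted shorthand. In the actual harbor.yml shipped with v2.4.3 they are nested keys, roughly as below (a sketch showing only the relevant parts; if no TLS certificate is available, the whole https block must also be commented out):

hostname: 10.132.10.100

http:
  port: 8090

# https:                      # comment out this block if no certificate is configured
#   port: 443
#   certificate: /your/certificate/path
#   private_key: /your/private/key/path

data_volume: /export/data/harbor

log:
  level: info
  local:
    location: /export/logs/harbor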

7. Load the harbor images

$ docker load -i harbor.v2.4.3.tar.gz 
# Wait for the harbor images to finish loading
$ docker images
REPOSITORY                      TAG       IMAGE ID       CREATED       SIZE
goharbor/harbor-exporter        v2.4.3    776ac6ee91f4   4 weeks ago   81.5MB
goharbor/chartmuseum-photon     v2.4.3    f39a9694988d   4 weeks ago   172MB
goharbor/redis-photon           v2.4.3    b168e9750dc8   4 weeks ago   154MB
goharbor/trivy-adapter-photon   v2.4.3    a406a715461c   4 weeks ago   251MB
goharbor/notary-server-photon   v2.4.3    da89404c7cf9   4 weeks ago   109MB
goharbor/notary-signer-photon   v2.4.3    38468ac13836   4 weeks ago   107MB
goharbor/harbor-registryctl     v2.4.3    61243a84642b   4 weeks ago   135MB
goharbor/registry-photon        v2.4.3    9855479dd6fa   4 weeks ago   77.9MB
goharbor/nginx-photon           v2.4.3    0165c71ef734   4 weeks ago   44.4MB
goharbor/harbor-log             v2.4.3    57ceb170dac4   4 weeks ago   161MB
goharbor/harbor-jobservice      v2.4.3    7fea87c4b884   4 weeks ago   219MB
goharbor/harbor-core            v2.4.3    d864774a3b8f   4 weeks ago   197MB
goharbor/harbor-portal          v2.4.3    85f00db66862   4 weeks ago   53.4MB
goharbor/harbor-db              v2.4.3    7693d44a2ad6   4 weeks ago   225MB
goharbor/prepare                v2.4.3    c882d74725ee   4 weeks ago   268MB

8. Start harbor

./prepare                  # run this again if harbor.yml is modified later, so the changes take effect
./install.sh --help        # list the installer options
./install.sh --with-chartmuseum
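
Before images can be pushed to it in the later kubernetes steps, this registry (plain HTTP on port 8090) has to be trusted by the deploy node's docker daemon, logged in to, and the target project has to exist. A sketch only, assuming harbor's default admin account (password Harbor12345 unless harbor_admin_password was changed in harbor.yml) and a project named community:

# Allow docker on the deploy node to talk to the HTTP registry
$ cat > /etc/docker/daemon.json << EOF
{
  "insecure-registries": ["10.132.10.100:8090"]
}
EOF
$ systemctl restart docker   # harbor's containers come back up automatically (restart: always)

# Log in and create the "community" project (it can also be created in the harbor web UI)
$ docker login 10.132.10.100:8090 -u admin -p Harbor12345
$ curl -u admin:Harbor12345 -X POST -H "Content-Type: application/json" \
  "http://10.132.10.100:8090/api/v2.0/projects" \
  -d '{"project_name": "community", "metadata": {"public": "true"}}'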

Runtime environment setup

Installing docker (cluster nodes)

1. Environment

OS: CentOS Linux release 7.8.2003
docker: docker-ce-20.10.17
Nodes: all k8s cluster nodes

2. Deployment notes

Deploy docker as the container runtime for k8s.

3. Deployment steps

1) Upload the docker rpm bundle

$ ls /export/upload/docker-rpm.tgz 

2) Distribute the package

$ ansible k8s -m copy -a "src=/export/upload/docker-rpm.tgz dest=/export/upload/"
# Every node returns output similar to the following
CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    }, 
    "changed": true, 
    "checksum": "acd3897edb624cd18a197bcd026e6769797f4f05", 
    "dest": "/export/upload/docker-rpm.tgz", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "3ba6d9fe6b2ac70860b6638b88d3c89d", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "system_u:object_r:usr_t:s0", 
    "size": 103234394, 
    "src": "/root/.ansible/tmp/ansible-tmp-1661836788.82-13591-17885284311930/source", 
    "state": "file", 
    "uid": 0
}

3) Extract and install

$ ansible k8s -m shell -a "tar xzvf /export/upload/docker-rpm.tgz -C /export/upload/ && yum -y install /export/upload/docker-rpm/*"

4) Enable docker on boot and start it

$ ansible k8s -m shell -a "systemctl enable docker && systemctl start docker"
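
The original steps do not show it, but since the harbor registry is plain HTTP on port 8090, every cluster node's docker daemon normally has to list it under insecure-registries before kubeadm can pull images from it. A sketch using the ansible groups defined earlier:

$ cat > /export/upload/daemon.json << EOF
{
  "insecure-registries": ["10.132.10.100:8090"]
}
EOF
$ ansible k8s -m copy -a "src=/export/upload/daemon.json dest=/etc/docker/daemon.json"
$ ansible k8s -m shell -a "systemctl restart docker"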

5) Check the version

$ ansible k8s -m shell -a "docker version"
# Every node returns output similar to the following
CHANGED | rc=0 >>
Client: Docker Engine - Community
 Version:           20.10.17
 API version:       1.41
 Go version:        go1.17.11
 Git commit:        100c701
 Built:             Mon Jun  6 23:05:12 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.17
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.17.11
  Git commit:       a89b842
  Built:            Mon Jun  6 23:03:33 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.8
  GitCommit:        9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6
 runc:
  Version:          1.1.4
  GitCommit:        v1.1.4-0-g5fd4c4d
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Installing kubernetes

Installation with internet access

# Add the Aliyun YUM repository:
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Download the offline packages

# Create a directory to hold the rpm packages:
mkdir -p /export/download/kubeadm-rpm

# Download the packages without installing them:
yum install -y kubelet-1.22.4 kubeadm-1.22.4 kubectl-1.22.4 --downloadonly --downloaddir /export/download/kubeadm-rpm
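
To move these rpms into the offline environment they can be bundled into the kubeadm-rpm.tgz archive that the following steps distribute; a small sketch:

$ cd /export/download && tar czvf kubeadm-rpm.tgz kubeadm-rpm
# Copy kubeadm-rpm.tgz to /export/upload/ on the deploy node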

Installation without internet access

1) Upload the kubeadm rpm bundle

$ ls /export/upload/
kubeadm-rpm.tgz 

2) Distribute the package

$ ansible k8s -m copy -a "src=/export/upload/kubeadm-rpm.tgz dest=/export/upload/"
# Every node returns output similar to the following
CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    }, 
    "changed": true, 
    "checksum": "3fe96fe1aa7f4a09d86722f79f36fb8fde69facb", 
    "dest": "/export/upload/kubeadm-rpm.tgz", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "80d5bda420db6ea23ad75dcf0f76e858", 
    "mode": "0644", 
    "owner": "root", 
    "secontext": "system_u:object_r:usr_t:s0", 
    "size": 67423355, 
    "src": "/root/.ansible/tmp/ansible-tmp-1661840257.4-33361-139823848282879/source", 
    "state": "file", 
    "uid": 0
}

3) Extract and install

$ ansible k8s -m shell -a "tar xzvf /export/upload/kubeadm-rpm.tgz -C /export/upload/ && yum -y install /export/upload/kubeadm-rpm/*"

4) Enable kubelet on boot and start it

$ ansible k8s -m shell -a "systemctl enable kubelet && systemctl start kubelet"
Note: kubelet fails to start at this point and keeps restarting. This is expected and resolves itself after kubeadm init/join; the official documentation describes this behavior, so kubelet.service can be ignored for now. Its status can be watched with the following command.
$ journalctl -xefu kubelet

5) Push the required images to the private registry

# The images can be pulled in advance in an environment with internet access
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.4
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.4
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.4
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.4
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5
$ docker pull rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
$ docker pull rancher/mirrored-flannelcni-flannel:v0.19.1
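
The export itself is not shown in the original; a sketch of saving the pulled images to tar files, with filenames assumed to match the docker load commands below:

$ cd /export/upload
$ docker save registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4 -o google_containers-coredns-v1.8.4.tar
$ docker save registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0 -o google_containers-etcd-3.5.0-0.tar
$ docker save registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.4 -o google_containers-kube-apiserver-v1.22.4.tar
# ...repeat for the remaining images listed above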
# Upload the exported tar files to the deploy node and load them into the local docker
$ ls /export/upload

$ docker load -i google_containers-coredns-v1.8.4.tar
$ docker load -i google_containers-etcd-3.5.0-0.tar
$ docker load -i google_containers-kube-apiserver-v1.22.4.tar
$ docker load -i google_containers-kube-controller-manager-v1.22.4.tar
$ docker load -i google_containers-kube-proxy-v1.22.4.tar
$ docker load -i google_containers-kube-scheduler-v1.22.4.tar
$ docker load -i google_containers-pause-3.5.tar
$ docker load -i rancher-mirrored-flannelcni-flannel-cni-plugin-v1.1.0.tar
$ docker load -i rancher-mirrored-flannelcni-flannel-v0.19.1.tar

# Tag the images for the harbor registry
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4 10.132.10.100:8090/community/coredns:v1.8.4
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0 10.132.10.100:8090/community/etcd:3.5.0-0
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.4 10.132.10.100:8090/community/kube-apiserver:v1.22.4
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.4 10.132.10.100:8090/community/kube-controller-manager:v1.22.4
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.4 10.132.10.100:8090/community/kube-proxy:v1.22.4
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.4 10.132.10.100:8090/community/kube-scheduler:v1.22.4
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5 10.132.10.100:8090/community/pause:3.5
$ docker tag rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0 10.132.10.100:8090/community/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
$ docker tag rancher/mirrored-flannelcni-flannel:v0.19.1 10.132.10.100:8090/community/mirrored-flannelcni-flannel:v0.19.1

# Push the images to the harbor registry
$ docker push 10.132.10.100:8090/community/coredns:v1.8.4
$ docker push 10.132.10.100:8090/community/etcd:3.5.0-0
$ docker push 10.132.10.100:8090/community/kube-apiserver:v1.22.4
$ docker push 10.132.10.100:8090/community/kube-controller-manager:v1.22.4
$ docker push 10.132.10.100:8090/community/kube-proxy:v1.22.4
$ docker push 10.132.10.100:8090/community/kube-scheduler:v1.22.4
$ docker push 10.132.10.100:8090/community/pause:3.5
$ docker push 10.132.10.100:8090/community/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
$ docker push 10.132.10.100:8090/community/mirrored-flannelcni-flannel:v0.19.1

6) Bootstrap the first master

$ kubeadm init \
--control-plane-endpoint "10.132.10.91:6443" \
--image-repository 10.132.10.100:8090/community \
--kubernetes-version v1.22.4 \
--service-cidr=172.16.0.0/16 \
--pod-network-cidr=10.244.0.0/16 \
--token "abcdef.0123456789abcdef" \
--token-ttl "0" \
--upload-certs

# The following output is displayed
[init] Using Kubernetes version: v1.22.4
[preflight] Running pre-flight checks
	[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master01] and IPs [172.16.0.1 10.132.10.91]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master01] and IPs [10.132.10.91 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master01] and IPs [10.132.10.91 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 11.008638 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
9151ea7bb260a42297f2edc486d5792f67d9868169310b82ef1eb18f6e4c0f13
[mark-control-plane] Marking the node master01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 10.132.10.91:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:4884a98b0773bc89c36dc5fa51569293103ff093e9124431c4c8c2d5801a96a2 \
	--control-plane --certificate-key 9151ea7bb260a42297f2edc486d5792f67d9868169310b82ef1eb18f6e4c0f13

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.132.10.91:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:4884a98b0773bc89c36dc5fa51569293103ff093e9124431c4c8c2d5801a96a2

7) Set up the kubeconfig

# Run the following commands
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

8) Configure the flannel network plugin

# Create the flannel.yml file
$ touch /export/servers/kubernetes/flannel.yml
$ vim /export/servers/kubernetes/flannel.yml
# Use the following content; note the image addresses that must be switched between online and offline environments
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        # With internet access the image below can be used instead
        # image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        # In an offline environment use the private harbor address
        image: 10.132.10.100:8090/community/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        # With internet access the image below can be used instead
        # image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.1
        # In an offline environment use the private harbor address
        image: 10.132.10.100:8090/community/mirrored-flannelcni-flannel:v0.19.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        # With internet access the image below can be used instead
        # image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.1
        # In an offline environment use the private harbor address
        image: 10.132.10.100:8090/community/mirrored-flannelcni-flannel:v0.19.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

9) Install the flannel network plugin

# Apply the manifest
$ kubectl apply -f /export/servers/kubernetes/flannel.yml

# Check pod status
$ kubectl get pods -A
NAMESPACE      NAME                               READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-kjmt4              1/1     Running   0          148m
kube-system    coredns-7f84d7b4b5-7qr8g           1/1     Running   0          4h18m
kube-system    coredns-7f84d7b4b5-fljws           1/1     Running   0          4h18m
kube-system    etcd-master01                      1/1     Running   0          4h19m
kube-system    kube-apiserver-master01            1/1     Running   0          4h19m
kube-system    kube-controller-manager-master01   1/1     Running   0          4h19m
kube-system    kube-proxy-wzq2t                   1/1     Running   0          4h18m
kube-system    kube-scheduler-master01            1/1     Running   0          4h19m

10) Join the remaining master nodes

# Run the following on master01
# List the existing tokens
$ kubeadm token list

# The join command generated by kubeadm init on master01:
$ kubeadm join 10.132.10.91:6443 \
--token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:4884a98b0773bc89c36dc5fa51569293103ff093e9124431c4c8c2d5801a96a2 \
--control-plane --certificate-key 9151ea7bb260a42297f2edc486d5792f67d9868169310b82ef1eb18f6e4c0f13

# Run the following on each of the other master nodes
# Execute the join command from the previous step to add the node to the cluster as a control-plane member
$ kubeadm join 10.132.10.91:6443 \
--token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:4884a98b0773bc89c36dc5fa51569293103ff093e9124431c4c8c2d5801a96a2 \
--control-plane --certificate-key 9151ea7bb260a42297f2edc486d5792f67d9868169310b82ef1eb18f6e4c0f13

# If this fails, the certificate-key has usually expired; regenerate it on master01 with:
$ kubeadm init phase upload-certs --upload-certs
3b647155b06311d39faf70cb094d9a5e102afd1398323e820cfb3cfd868ae58f

# Replace the certificate-key value with the one generated above and run the join command again on the other master nodes
$ kubeadm join 10.132.10.91:6443 \
--token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:4884a98b0773bc89c36dc5fa51569293103ff093e9124431c4c8c2d5801a96a2 \
--control-plane \
--certificate-key 3b647155b06311d39faf70cb094d9a5e102afd1398323e820cfb3cfd868ae58f

# Set up the kubeconfig
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Check node status from any master node
$ kubectl get nodes
NAME       STATUS   ROLES                  AGE     VERSION
master01   Ready    control-plane,master   5h58m   v1.22.4
master02   Ready    control-plane,master   45m     v1.22.4
master03   Ready    control-plane,master   44m     v1.22.4

11) Join the worker nodes

# On each worker node, run the join command that kubeadm init generated on master01
$ kubeadm join 10.132.10.91:6443 \
--token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:4884a98b0773bc89c36dc5fa51569293103ff093e9124431c4c8c2d5801a96a2

# If this fails, the token has usually expired; regenerate the join command on master01 with:
$ kubeadm token create --print-join-command
kubeadm join 10.132.10.91:6443 \
--token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:cf30ddd3df1c6215b886df1ea378a68ad5a9faad7933d53ca9891ebbdf9a1c3f

# Run the newly generated join command on the remaining worker nodes
# Check cluster status
$ kubectl get nodes
NAME       STATUS   ROLES                  AGE     VERSION
master01   Ready    control-plane,master   6h12m   v1.22.4
master02   Ready    control-plane,master   58m     v1.22.4
master03   Ready    control-plane,master   57m     v1.22.4
worker01   Ready    <none>                 5m12s   v1.22.4
worker02   Ready    <none>                 4m10s   v1.22.4
worker03   Ready    <none>                 3m42s   v1.22.4

12) Configure the kubernetes dashboard

Save the following manifest as /export/servers/kubernetes/dashboard.yml (it is applied in a later step):

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 31001
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.5.0
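          # Note: in a fully offline cluster this image (and kubernetesui/metrics-scraper below) must first be loaded into the private harbor registry and the image address updated accordingly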
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.7
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

13) Generate a self-signed certificate for the dashboard

$ mkdir -p /export/servers/kubernetes/certs && cd /export/servers/kubernetes/certs/
$ openssl genrsa -out dashboard.key 2048
$ openssl req -days 3650 -new -key dashboard.key -out dashboard.csr -subj /C=CN/ST=BEIJING/L=BEIJING/O=JD/OU=JD/CN=172.16.16.42
$ openssl x509 -req -days 3650 -in dashboard.csr -signkey dashboard.key -out dashboard.crt

14) Run the following commands

# Remove the taint from the master nodes
$ kubectl taint nodes --all node-role.kubernetes.io/master-
# Create the namespace
$ kubectl create namespace kubernetes-dashboard
# Create the TLS secret
$ kubectl create secret tls kubernetes-dashboard-certs -n kubernetes-dashboard --key dashboard.key \
--cert dashboard.crt

15) Apply the dashboard manifest

$ kubectl apply -f /export/servers/kubernetes/dashboard.yml
# Check pod status
$ kubectl get pods -A | grep kubernetes-dashboard
kubernetes-dashboard   dashboard-metrics-scraper-c45b7869d-rbdt4   1/1     Running   0               15m
kubernetes-dashboard   kubernetes-dashboard-764b4dd7-rt66t         1/1     Running   0               15m

16) Access the dashboard

# Open https://<node-IP>:31001/#/login in a web browser; the IP can be any cluster node (or the LB address), e.g. https://10.132.10.91:31001/#/login

17) Create a login token

# Create the configuration file dashboard-adminuser.yaml
$ touch /export/servers/kubernetes/dashboard-adminuser.yaml && vim /export/servers/kubernetes/dashboard-adminuser.yaml
# Use the following content
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
  
# Apply the yaml file
$ kubectl create -f /export/servers/kubernetes/dashboard-adminuser.yaml
# Expected output
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
# Explanation: this creates a service account named admin-user in the kubernetes-dashboard namespace and binds the cluster-admin role to it, giving admin-user administrator privileges. Clusters created with kubeadm already ship the cluster-admin role, so it only needs to be bound.

# Retrieve the admin-user token
$ kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
# Expected output
Name:         admin-user-token-9fpps
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 72c1aa28-6385-4d1a-b22c-42427b74b4c7

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1099 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IjFEckU0NXB5Yno5UV9MUFkxSUpPenJhcTFuektHazM1c2QzTGFmRzNES0EifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTlmcHBzIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI3MmMxYWEyOC02Mzg1LTRkMWEtYjIyYy00MjQyN2I3NGI0YzciLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.oA3NLhhTaXd2qvWrPDXat2w9ywdWi_77SINk4vWkfIIzMmxBEHnqvDIBvhRC3frIokNSvT71y6mXN0KHu32hBa1YWi0MuzF165ZNFtM_rSQiq9OnPxeFvLaKS-0Vzr2nWuBx_-fTt7gESReSMLEJStbPb1wOnR6kqtY66ajKK5ILeIQ77I0KXYIi7GlPEyc6q4bIjweZ0HSXDPR4JSnEAhrP8Qslrv3Oft4QZVNj47x7xKC4dyyZOMHUIj9QhkpI2gMbiZ8XDUmNok070yDc0TCxeTZKDuvdsigxCMQx6AesD-8dca5Hb8Sm4mEPkGJekvMzkLkM97y_pOBPkfTAIA

# Copy the token obtained above into the Token field on the login page to log in to the dashboard

18) Log in to the dashboard





Installing kubectl

1. Environment

OS: CentOS Linux release 7.8.2003
kubectl: kubectl-1.22.4-0.x86_64
Node: deploy

2. Deployment notes

The kubectl command-line client for Kubernetes.

3. Extract the previously uploaded kubeadm-rpm bundle

$ tar xzvf kubeadm-rpm.tgz 

4. Install

$ rpm -ivh bc7a9f8e7c6844cfeab2066a84b8fecf8cf608581e56f6f96f80211250f9a5e7-kubectl-1.22.4-0.x86_64.rpm

5. Configure the kubeconfig

# Create the kubectl config file
$ mkdir -p $HOME/.kube
$ sudo touch $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Copy the contents of /etc/kubernetes/admin.conf from any master node into the file above
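
A minimal way to pull that file over, assuming root SSH access to master01:

$ scp root@10.132.10.91:/etc/kubernetes/admin.conf $HOME/.kube/config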

6. Check the version

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:48:33Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:42:41Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}

Installing helm

1. Environment

OS: CentOS Linux release 7.8.2003
helm: helm-v3.9.3-linux-amd64.tar.gz
Node: deploy

2. Deployment notes

A package and configuration management tool for Kubernetes resources.

3. Download the helm offline package and upload it to the server

$ wget https://get.helm.sh/helm-v3.9.3-linux-amd64.tar.gz

4. Extract the package

$ tar -zxvf helm-v3.9.3-linux-amd64.tar.gz -C /export/servers/
$ cd /export/servers/linux-amd64

5. Install the binary and make it executable

$ cp /export/servers/linux-amd64/helm /usr/local/bin/
$ chmod +x /usr/local/bin/helm

6. Check the version

$ helm version
version.BuildInfo{Version:"v3.9.3", GitCommit:"414ff28d4029ae8c8b05d62aa06c7fe3dee2bc58", GitTreeState:"clean", GoVersion:"go1.17.13"}

Configuring local-path storage backed by NAS

$ mkdir -p /export/servers/helm_chart/local-path-storage && cd /export/servers/helm_chart/local-path-storage
$ vim local-path-storage.yaml
# Use the following content; point "paths":[...] in the nodePathMap below at the NAS directory, creating the directory first if it does not exist
apiVersion: v1
kind: Namespace
metadata:
  name: local-path-storage
 
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-path-provisioner-service-account
  namespace: local-path-storage
 
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-path-provisioner-role
rules:
  - apiGroups: [ "" ]
    resources: [ "nodes", "persistentvolumeclaims", "configmaps" ]
    verbs: [ "get", "list", "watch" ]
  - apiGroups: [ "" ]
    resources: [ "endpoints", "persistentvolumes", "pods" ]
    verbs: [ "*" ]
  - apiGroups: [ "" ]
    resources: [ "events" ]
    verbs: [ "create", "patch" ]
  - apiGroups: [ "storage.k8s.io" ]
    resources: [ "storageclasses" ]
    verbs: [ "get", "list", "watch" ]
 
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-path-provisioner-bind
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: local-path-provisioner-role
subjects:
  - kind: ServiceAccount
    name: local-path-provisioner-service-account
    namespace: local-path-storage
 
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-path-provisioner
  namespace: local-path-storage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: local-path-provisioner
  template:
    metadata:
      labels:
        app: local-path-provisioner
    spec:
      serviceAccountName: local-path-provisioner-service-account
      containers:
        - name: local-path-provisioner
          image: rancher/local-path-provisioner:v0.0.21
          imagePullPolicy: IfNotPresent
          command:
            - local-path-provisioner
            - --debug
            - start
            - --config
            - /etc/config/config.json
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config/
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      volumes:
        - name: config-volume
          configMap:
            name: local-path-config
 
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
 
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
    {
            "nodePathMap":[
            {
                    "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
                    "paths":["/nas_data/jdiot/local-path-provisioner"] 
            }
            ]
    }
  setup: |-
    #!/bin/sh
    while getopts "m:s:p:" opt
    do
        case $opt in
            p)
            absolutePath=$OPTARG
            ;;
            s)
            sizeInBytes=$OPTARG
            ;;
            m)
            volMode=$OPTARG
            ;;
        esac
    done
 
    mkdir -m 0777 -p ${absolutePath}
  teardown: |-
    #!/bin/sh
    while getopts "m:s:p:" opt
    do
        case $opt in
            p)
            absolutePath=$OPTARG
            ;;
            s)
            sizeInBytes=$OPTARG
            ;;
            m)
            volMode=$OPTARG
            ;;
        esac
    done
 
    rm -rf ${absolutePath}
  helperPod.yaml: |-
    apiVersion: v1
    kind: Pod
    metadata:
      name: helper-pod
    spec:
      containers:
      - name: helper-pod
        image: busybox

Note: the images referenced above must be downloaded in an internet-connected environment and loaded into the private registry, and the image addresses above must be changed to pull from that private registry.

Apply the local storage manifest

$ kubectl apply -f local-path-storage.yaml -n local-path-storage

Set the default storage class for k8s

$ kubectl patch storageclass local-path  -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Note: the middleware and services deployed later must point their storage at this local storage class: "storageClass": "local-path".
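
To confirm the default storage class works, a throwaway PVC can be created and consumed by a pod; a sketch only (names and sizes are arbitrary):

# test-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

# Because volumeBindingMode is WaitForFirstConsumer, the PVC stays Pending until a pod mounts it
$ kubectl apply -f test-pvc.yaml
$ kubectl get pvc local-path-test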

 
