Kubernetes 1.26 + containerd Installation (Binary Install)

2023-05-16


1. Machines

IP               hostname
192.168.137.133  k8smaster
192.168.137.132  k8snode1
192.168.137.134  k8snode2

2. Download the required binary packages

# 1. Download the Kubernetes 1.26.x binary package
# GitHub binary download page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md
 
curl -L -o kubernetes-server-linux-amd64.tar.gz https://dl.k8s.io/v1.26.0/kubernetes-server-linux-amd64.tar.gz
 
# 2. Download the etcd/etcdctl binary package
# GitHub release page: https://github.com/etcd-io/etcd/releases
 
curl -L -o etcd-v3.5.6-linux-amd64.tar.gz https://storage.googleapis.com/etcd/v3.5.6/etcd-v3.5.6-linux-amd64.tar.gz
 
# 3. docker-ce binary package (only needed if you run Docker + cri-dockerd instead of containerd)
# Binary download page: https://download.docker.com/linux/static/stable/x86_64/

# Download a 20.10.x release here
 
curl -L -o docker-20.10.22.tgz https://download.docker.com/linux/static/stable/x86_64/docker-20.10.22.tgz

# 4. Download cri-dockerd
# Binary download page: https://github.com/Mirantis/cri-dockerd/releases/
 
curl -L -o cri-dockerd-0.2.6.amd64.tgz  https://ghproxy.com/https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.6/cri-dockerd-0.2.6.amd64.tgz


# 5. Download the containerd binary package
# GitHub release page: https://github.com/containerd/containerd/releases

# Download the bundle that includes the CNI plugins.
 
curl -L -o cri-containerd-cni-1.6.6-linux-amd64.tar.gz https://github.com/containerd/containerd/releases/download/v1.6.6/cri-containerd-cni-1.6.6-linux-amd64.tar.gz

# 6. Download the cfssl binaries
# GitHub release page: https://github.com/cloudflare/cfssl/releases
 
curl -L -o cfssl_1.6.1_linux_amd64 https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
curl -L -o cfssljson_1.6.1_linux_amd64 https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
curl -L -o cfssl-certinfo_1.6.1_linux_amd64 https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl-certinfo_1.6.1_linux_amd64
 
# 7. Download the CNI plugins
# GitHub release page: https://github.com/containernetworking/plugins/releases
 
curl -L -o cni-plugins-linux-amd64-v1.1.1.tgz https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
 
# 8. Download the crictl client binary
# GitHub release page: https://github.com/kubernetes-sigs/cri-tools/releases
 
curl -L -o crictl-v1.24.2-linux-amd64.tar.gz https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz
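
Before distributing anything, it is worth confirming the downloads are intact. A minimal sketch: print a checksum per file and compare by hand against the values published in the Kubernetes CHANGELOG and each project's release page (assumes the file names used above):

for f in kubernetes-server-linux-amd64.tar.gz etcd-v3.5.6-linux-amd64.tar.gz \
         docker-20.10.22.tgz cri-dockerd-0.2.6.amd64.tgz \
         cri-containerd-cni-1.6.6-linux-amd64.tar.gz \
         cni-plugins-linux-amd64-v1.1.1.tgz crictl-v1.24.2-linux-amd64.tar.gz; do
  # One checksum per file; compare manually with the upstream release notes
  sha256sum "$f"
done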

2.1. Machine initialization

Set the appropriate hostname on each machine and verify it:

 hostnamectl set-hostname k8smaster
 hostname

Configure the hosts file on the master machine:

cat >> /etc/hosts << EOF
192.168.137.133 k8smaster
192.168.137.132 k8snode1
192.168.137.134 k8snode2
EOF
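
A quick check that the names resolve after editing /etc/hosts:

for h in k8smaster k8snode1 k8snode2; do
  # One packet per host; a failure points at a bad /etc/hosts entry
  ping -c 1 "$h"
done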

On every machine, enable IPv4 forwarding and let iptables see bridged traffic:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Set the required sysctl parameters; they persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply the sysctl parameters without rebooting
sudo sysctl --system
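
Confirm the modules are loaded and the sysctls took effect (all three values should print 1):

lsmod | grep -e overlay -e br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward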

Configure IPVS if you want better network performance; this step is optional.

yum install ipvsadm ipset sysstat conntrack libseccomp -y

cat >> /etc/modules-load.d/ipvs.conf <<EOF 
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
# Load the modules
systemctl restart systemd-modules-load.service
# Verify the modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack

On every machine, set up time synchronization:

yum install chrony -y
systemctl start chronyd
systemctl enable chronyd
chronyc sources

On every machine, stop and disable the firewall if one is running:

systemctl stop firewalld
systemctl disable firewalld

On every machine, disable swap:

# Disable temporarily; swap is turned off mainly for performance reasons
swapoff -a
# Check whether swap is off
free
# Disable permanently
sed -ri 's/.*swap.*/#&/' /etc/fstab

On every machine, disable SELinux:

# Disable temporarily
setenforce 0
# Disable permanently
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

3. Install containerd on every machine

3.1. On the master: distribute the downloaded packages to the other machines

scp * root@192.168.137.132:/root
scp * root@192.168.137.134:/root

3.2. Install

# Unpack cri-containerd-cni-1.6.6-linux-amd64.tar.gz
tar zxf cri-containerd-cni-1.6.6-linux-amd64.tar.gz
# Copy the files into place
cp -r etc opt usr /
# Distribute the files to the other machines
scp -r etc opt usr root@192.168.137.132:/
scp -r etc opt usr root@192.168.137.134:/
# Remove the extracted directories
rm -rf ./etc ./opt ./usr

# Create the default configuration file
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml

# Point the sandbox (pause) image at the Aliyun mirror; without it the image cannot be pulled.
# If image pulls fail, check that the replacement took effect: cat /etc/containerd/config.toml | grep sandbox_image
sed -i "s#registry.k8s.io/pause#registry.aliyuncs.com/google_containers/pause#g" /etc/containerd/config.toml
sed -i "s#k8s.gcr.io/pause#registry.aliyuncs.com/google_containers/pause#g" /etc/containerd/config.toml
# Set the cgroup driver to systemd
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
# Set the docker.io registry mirror to the Aliyun mirror
sed -i '/\[plugins\."io\.containerd\.grpc\.v1\.cri"\.registry\.mirrors\]/a\      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]\n        endpoint = ["https://8aj710su.mirror.aliyuncs.com" ,"https://registry-1.docker.io"]' /etc/containerd/config.toml
# Verify the installation; crictl works much like the docker CLI
crictl info
crictl images

3.3. On the master: distribute the config file to the other machines

scp -r /etc/containerd root@192.168.137.132:/etc/containerd
scp -r /etc/containerd root@192.168.137.134:/etc/containerd

3.4. Restart the service

systemctl daemon-reload
systemctl enable --now containerd
systemctl restart containerd

# Verify the installation
crictl info

4. Deploy the etcd service

4.1. Set up the signing certificates

# Move the binaries into place and make them executable
mv cfssl_1.6.1_linux_amd64  /usr/bin/cfssl
mv cfssljson_1.6.1_linux_amd64 /usr/bin/cfssljson
mv cfssl-certinfo_1.6.1_linux_amd64 /usr/bin/cfssl-certinfo
chmod +x /usr/bin/cfssl*
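
Confirm the tools are on the PATH before generating any certificates:

cfssl version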

mkdir -p ~/TLS/{etcd,k8s}
# Self-signed CA:
cat > ~/TLS/etcd/ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
cat > ~/TLS/etcd/ca-csr.json << EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

# Generate the CA; this produces ca.pem and ca-key.pem
cd ~/TLS/etcd/
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

# Use the self-signed CA to issue the etcd HTTPS certificate
# Create the certificate signing request file:
cat > server-csr.json << EOF
{
    "CN": "etcd",
    "hosts": [
    "192.168.137.132",
    "192.168.137.133",
    "192.168.137.134"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF

# Note: the IPs in the hosts field above must include the internal cluster IPs of all etcd nodes; none may be omitted.
# To make later expansion easier, you can add a few reserved IPs.
# Generate the certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

# This produces server.pem and server-key.pem.
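
The SANs of the issued certificate can be double-checked with the cfssl-certinfo tool installed above (its JSON output includes a sans field):

cfssl-certinfo -cert server.pem | grep -A 4 '"sans"'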

4.2. Install etcd

etcd is a distributed key-value store that Kubernetes uses for all of its state, so an etcd database must be prepared first. To avoid a single point of failure, deploy etcd as a cluster: three machines tolerate one failure, and a five-machine cluster would tolerate two.

mkdir /opt/etcd/{bin,cfg,ssl} -p
tar zxvf etcd-v3.5.6-linux-amd64.tar.gz
mv etcd-v3.5.6-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
# Copy the certificates generated above to the paths referenced in the config file:
cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl/

# etcd config file on the master machine
cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.137.133:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.137.133:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.137.133:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.137.133:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.137.133:2380,etcd-2=https://192.168.137.132:2380,etcd-3=https://192.168.137.134:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
#---
# ETCD_NAME: node name, unique within the cluster
# ETCD_DATA_DIR: data directory
# ETCD_LISTEN_PEER_URLS: listen address for peer (cluster) communication
# ETCD_LISTEN_CLIENT_URLS: listen address for client access
# ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster
# ETCD_ADVERTISE_CLIENT_URLS: client address advertised to the cluster
# ETCD_INITIAL_CLUSTER: addresses of all cluster nodes
# ETCD_INITIAL_CLUSTER_TOKEN: cluster token
# ETCD_INITIAL_CLUSTER_STATE: join state; "new" for a new cluster, "existing" to join an existing one

Manage etcd with systemd:

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Sync the configuration files to the other machines:

scp -r /opt/etcd/ root@192.168.137.132:/opt/
scp -r /opt/etcd/ root@192.168.137.134:/opt/
scp /usr/lib/systemd/system/etcd.service root@192.168.137.132:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service root@192.168.137.134:/usr/lib/systemd/system/

Modify the configuration on node1:

# Replace the node name
sed -i 's/ETCD_NAME="etcd-1"/ETCD_NAME="etcd-2"/g' /opt/etcd/cfg/etcd.conf
# Replace the IP addresses in lines 1-8; ETCD_INITIAL_CLUSTER on the last line stays unchanged
sed -i '1,8s/192\.168\.137\.133/192\.168\.137\.132/g' /opt/etcd/cfg/etcd.conf

Modify the configuration on node2:

# Replace the node name
sed -i 's/ETCD_NAME="etcd-1"/ETCD_NAME="etcd-3"/g' /opt/etcd/cfg/etcd.conf
# Replace the IP addresses in lines 1-8; ETCD_INITIAL_CLUSTER on the last line stays unchanged
sed -i '1,8s/192\.168\.137\.133/192\.168\.137\.134/g' /opt/etcd/cfg/etcd.conf

Start etcd; start the nodes first, then the master (the first member started will wait until enough members are up to form a quorum):

systemctl daemon-reload
systemctl start etcd
systemctl enable etcd

Verify from the master:

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.137.132:2379,https://192.168.137.133:2379,https://192.168.137.134:2379" endpoint health --write-out=table
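
Two more checks with the same certificates: member list should show all three members, and endpoint status additionally reports which member is the leader:

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.137.133:2379" member list --write-out=table
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.137.132:2379,https://192.168.137.133:2379,https://192.168.137.134:2379" endpoint status --write-out=table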

5. Deploy Kubernetes

5.1. Generate the Kubernetes 1.26.x certificates

cd ~/TLS/k8s

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
cat > ca-csr.json << EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

# Generate the CA; this produces ca.pem and ca-key.pem.
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

# Use the self-signed CA to issue the kube-apiserver HTTPS certificate
# Create the certificate signing request file:
cat > server-csr.json << EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.137.132",
      "192.168.137.133",
      "192.168.137.134",
      "192.168.137.1",
      "192.168.135.1",
      "192.168.137.200",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

# Note: the IPs in the hosts field above must include every Master/LB/VIP IP; none may be omitted. Add a few reserved IPs to make later expansion easier.

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

# This produces server.pem and server-key.pem.

5.2. Install Kubernetes

tar -zxvf kubernetes-server-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
cp kubectl /usr/bin/
cp kubectl /usr/local/bin/

# Copy the certificates generated above to the paths referenced in the config files:
cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/

5.2.1. Deploy kube-apiserver

# Deploy kube-apiserver
# Create the configuration file

cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--v=2 \
--etcd-servers=https://192.168.137.132:2379,https://192.168.137.133:2379,https://192.168.137.134:2379 \
--bind-address=192.168.137.133 \
--secure-port=6443 \
--advertise-address=192.168.137.133 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-32767 \
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--service-account-issuer=api \
--service-account-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \
--proxy-client-cert-file=/opt/kubernetes/ssl/server.pem \
--proxy-client-key-file=/opt/kubernetes/ssl/server-key.pem \
--requestheader-allowed-names=kubernetes \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--enable-aggregator-routing=true \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF

• --v: log verbosity
• --etcd-servers: etcd cluster endpoints
• --bind-address: listen address
• --secure-port: HTTPS secure port
• --advertise-address: address advertised to the cluster
• --allow-privileged: allow privileged containers
• --service-cluster-ip-range: Service virtual IP range
• --enable-admission-plugins: admission control plugins
• --authorization-mode: authorization modes; enables RBAC and Node self-management
• --enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
• --token-auth-file: bootstrap token file
• --service-node-port-range: default port range for NodePort Services
• --kubelet-client-xxx: client certificate the apiserver uses to reach kubelets
• --tls-xxx-file: apiserver HTTPS certificates
• Required since 1.20: --service-account-issuer and --service-account-signing-key-file
• --etcd-xxxfile: certificates for connecting to the etcd cluster
• --audit-log-xxx: audit log settings
• Aggregation layer flags: --requestheader-client-ca-file, --proxy-client-cert-file, --proxy-client-key-file, --requestheader-allowed-names, --requestheader-extra-headers-prefix, --requestheader-group-headers, --requestheader-username-headers, --enable-aggregator-routing

Enable the TLS bootstrapping mechanism

TLS bootstrapping: once the apiserver enables TLS authentication, the kubelet and
kube-proxy on every node must present valid CA-signed certificates to talk to
kube-apiserver. With many nodes, issuing these client certificates by hand is a
lot of work and complicates scaling the cluster. To simplify this, Kubernetes
introduced TLS bootstrapping to issue client certificates automatically: the
kubelet requests a certificate from the apiserver as a low-privileged user, and
the apiserver signs the kubelet's certificate dynamically. This approach is
strongly recommended on nodes; it is currently used mainly for the kubelet,
while kube-proxy still uses a certificate that we issue centrally.

# Generate a token
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
# Create the token file referenced in the config; format: token,username,UID,group
cat > /opt/kubernetes/cfg/token.csv << EOF
9bfe0c44e7b4ba6f7d0c9e6f43aa88b8,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
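
If you prefer a freshly generated token over the example value, the two steps combine as below; the same token must then be used in bootstrap.kubeconfig in section 5.3.1:

TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > /opt/kubernetes/cfg/token.csv
cat /opt/kubernetes/cfg/token.csv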

Manage the apiserver with systemd:

# systemd unit for kube-apiserver
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# Start and enable at boot
systemctl daemon-reload
systemctl start kube-apiserver 
systemctl enable kube-apiserver

5.2.2. Deploy kube-controller-manager

# 1. Create the configuration file
cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS=" \\
--v=2 \\
--leader-elect=true \\
--kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--cluster-signing-duration=87600h0m0s"
EOF

• --kubeconfig: kubeconfig for connecting to the apiserver
• --leader-elect: automatic leader election when several instances run (HA)
• --cluster-signing-cert-file / --cluster-signing-key-file: the CA used to sign kubelet certificates automatically; must match the apiserver's CA

Generate the kubeconfig file

# Generate the kube-controller-manager certificate:
# Switch to the working directory
cd ~/TLS/k8s

# Create the certificate signing request file
cat > ~/TLS/k8s/kube-controller-manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "hosts": [
      "127.0.0.1",
      "192.168.137.132",
      "192.168.137.133",
      "192.168.137.134",
      "192.168.137.1"
 ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing", 
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
# Copy the certificates
cp ~/TLS/k8s/kube-controller-manager*pem /opt/kubernetes/ssl/

Generate the kubeconfig file (these are shell commands; run them directly in a terminal):

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://192.168.137.133:6443 \
  --kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig
kubectl config set-credentials kube-controller-manager \
  --client-certificate=/opt/kubernetes/ssl/kube-controller-manager.pem \
  --client-key=/opt/kubernetes/ssl/kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-controller-manager \
  --kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig
kubectl config use-context default --kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig

Manage controller-manager with systemd:

# systemd unit for kube-controller-manager
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# Start and enable at boot
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager

5.2.3. Deploy kube-scheduler

# Create the configuration file
cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS=" \\
--v=2 \\
--leader-elect \\
--kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig \\
--bind-address=127.0.0.1"
EOF

• --kubeconfig: kubeconfig for connecting to the apiserver
• --leader-elect: automatic leader election when several instances run (HA)

Generate the kube-scheduler certificate

# Switch to the working directory
cd ~/TLS/k8s

# Create the certificate signing request file
cat > ~/TLS/k8s/kube-scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [
      "127.0.0.1",
      "192.168.137.132",
      "192.168.137.133",
      "192.168.137.134",
      "192.168.137.1"
 ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

# Copy the certificates
cp ~/TLS/k8s/kube-scheduler*pem /opt/kubernetes/ssl/

# Generate the kubeconfig file:

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://192.168.137.133:6443 \
  --kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig
kubectl config set-credentials kube-scheduler \
  --client-certificate=/opt/kubernetes/ssl/kube-scheduler.pem \
  --client-key=/opt/kubernetes/ssl/kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-scheduler \
  --kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig
kubectl config use-context default --kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig

Manage the scheduler with systemd:

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# Start and enable at boot
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler

5.2.4. Check cluster status

# Generate the certificate kubectl uses to connect to the cluster:
cd ~/TLS/k8s/
cat > ~/TLS/k8s/admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [
      "127.0.0.1",
      "192.168.137.132",
      "192.168.137.133",
      "192.168.137.134",
      "192.168.137.1"
 ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

# Copy the certificates
mkdir -p ~/kubernetes/ssl/ && cp ~/TLS/k8s/admin*pem ~/kubernetes/ssl/

# Generate the kubeconfig file:
mkdir /root/.kube

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://192.168.137.133:6443 \
  --kubeconfig=/root/.kube/config
kubectl config set-credentials cluster-admin \
  --client-certificate=/root/kubernetes/ssl/admin.pem \
  --client-key=/root/kubernetes/ssl/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=/root/.kube/config
kubectl config set-context default \
  --cluster=kubernetes \
  --user=cluster-admin \
  --kubeconfig=/root/.kube/config
kubectl config use-context default --kubeconfig=/root/.kube/config

# Check the status of the cluster components with kubectl:
kubectl get cs
#NAME                 STATUS    MESSAGE             ERROR
#scheduler            Healthy   ok
#controller-manager   Healthy   ok
#etcd-2               Healthy   {"health":"true"}
#etcd-1               Healthy   {"health":"true"}
#etcd-0               Healthy   {"health":"true"}
# Output like the above means the master components are running normally.
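
kubectl get cs has been deprecated since v1.19; querying the apiserver's readiness endpoint is an alternative health check:

kubectl get --raw='/readyz?verbose'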

# Authorize the kubelet-bootstrap user to request certificates
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

5.3. Deploy the worker components (master machine)

# Create the working directories on every worker node:
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
# Copy from the master node:
# Go back to the kubernetes files extracted earlier
cd kubernetes/server/bin
cp kubelet kube-proxy /opt/kubernetes/bin

5.3.1. Deploy kubelet

# Create the configuration file
# (the closing quote must come after the last option; the original had it after the pause image)
cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS=" \\
--v=2 \\
--hostname-override=k8smaster \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--runtime-request-timeout=15m \\
--container-runtime=remote \\
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \\
--pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 \\
--cgroup-driver=systemd \\
--node-labels=node.kubernetes.io/node=''"
EOF

# Configuration parameters file
cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: systemd  # must match containerd's SystemdCgroup = true set earlier
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local 
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem 
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF

# Generate the bootstrap kubeconfig the kubelet uses to join the cluster for the first time
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://192.168.137.133:6443 \
  --kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig
# The token must match the one in token.csv
kubectl config set-credentials "kubelet-bootstrap" \
  --token=9bfe0c44e7b4ba6f7d0c9e6f43aa88b8 \
  --kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig

# systemd unit for kubelet
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=containerd.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

# Start and enable at boot
systemctl daemon-reload
systemctl restart kubelet
systemctl enable kubelet

Approve the kubelet certificate request and join the cluster

# List the kubelet certificate requests
kubectl get csr
#NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           REQUESTEDDURATION   CONDITION
#node-csr-1uybUt5GGtn5FVvI0CgJAsgKFXI44MotYi-oik3V1eI   82s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Pending


# Approve the request
kubectl certificate approve node-csr-1uybUt5GGtn5FVvI0CgJAsgKFXI44MotYi-oik3V1eI
# List the nodes; if the node does not show up, reboot the machine
kubectl get node
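
When several nodes join at once, approving requests one by one gets tedious. A sketch that approves every CSR still lacking a status (i.e. Pending):

kubectl get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs -r kubectl certificate approve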

5.3.2. Deploy kube-proxy

# Create the configuration file
cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS=" \\
--v=2 \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF

# Configuration parameters file
cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8smaster
clusterCIDR: 10.244.0.0/16
mode: ipvs
ipvs:
  scheduler: "rr"
iptables:
  masqueradeAll: true
EOF

Create the kube-proxy certificate and generate kube-proxy.kubeconfig

# Switch to the working directory
cd ~/TLS/k8s

# Create the certificate signing request file
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [
      "127.0.0.1",
      "192.168.137.132",
      "192.168.137.133",
      "192.168.137.134",
      "192.168.137.1"
 ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
# Copy the certificates
cp ~/TLS/k8s/kube-proxy*pem /opt/kubernetes/ssl/

# Generate the kubeconfig file:
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://192.168.137.133:6443 \
  --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
  --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig

Manage kube-proxy with systemd:

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

# Start and enable at boot
systemctl daemon-reload
systemctl restart kube-proxy
systemctl enable kube-proxy
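
Because mode: ipvs is configured above, the proxy's virtual servers can be inspected once Services exist; the kube-proxy journal also records which proxy mode was actually chosen:

ipvsadm -Ln
journalctl -u kube-proxy --no-pager | tail -n 20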

5.3.3. Deploy the Calico or kube-flannel network (CNI)

There are many network components; deploy only one of them. Calico is recommended.
Calico is a pure layer-3 data center networking solution that supports a wide range of platforms, including Kubernetes and OpenStack.
On each compute node, Calico uses the Linux kernel to implement an efficient virtual router (vRouter) that handles data forwarding, and each vRouter propagates the routes of the workloads running on it across the Calico network via BGP.
Calico also implements Kubernetes network policy, providing ACL functionality.
1. Download Calico
curl -L -o calico.yaml https://docs.projectcalico.org/manifests/calico.yaml

# Uncomment CALICO_IPV4POOL_CIDR and set it to the pod CIDR configured above
# (it must match kube-controller-manager's --cluster-cidr=10.244.0.0/16)
sed -i 's/\# - name: CALICO_IPV4POOL_CIDR/- name: CALICO_IPV4POOL_CIDR/' calico.yaml
sed -i 's/\#   value: "192.168.0.0\/16"/  value: "10.244.0.0\/16"/' calico.yaml

# Pre-pull the images if needed; list them with: cat calico.yaml | grep image
#crictl pull docker.io/calico/cni:v3.25.0
#crictl pull docker.io/calico/node:v3.25.0
#crictl pull docker.io/calico/kube-controllers:v3.25.0
# Apply the network configuration
kubectl apply -f calico.yaml
# To remove the network configuration:
# kubectl delete -f calico.yaml

# Check the running status
kubectl get pod -n kube-system
# If the pods are not Running and crictl ps shows no containers, and
# systemctl status kubelet reports that pods cannot be created, delete the
# master's kubelet registration and reconfigure it:
kubectl get nodes
#NAME        STATUS                     ROLES    AGE    VERSION
#k8smaster   Ready,SchedulingDisabled   <none>   131m   v1.26.0
# Mark the node unschedulable
kubectl cordon k8smaster
# Drain (evict) the pods, ignoring DaemonSets
kubectl drain k8smaster --ignore-daemonsets
# Delete the node
kubectl delete node k8smaster
# Delete the certificate files
rm /opt/kubernetes/ssl/kubelet*
# Then redeploy the kubelet, paying attention to the server address and image registry

Authorize the apiserver to access the kubelet.
Use case: commands such as kubectl logs.

cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF

kubectl apply -f apiserver-to-kubelet-rbac.yaml

Installing the kube-flannel network

cat > kube-flannel.yaml << EOF
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
       #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
       #image: flannelcni/flannel:v0.19.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
       #image: flannelcni/flannel:v0.19.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
EOF
kubectl apply -f kube-flannel.yaml
# Check the status
kubectl get pods -n kube-flannel

5.4. Add a node (node1 machine)

# On the master, copy the already-deployed worker node files to the new node

scp -r /opt/kubernetes root@192.168.137.132:/opt/
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.137.132:/usr/lib/systemd/system

On the node machine, delete the kubelet certificate and kubeconfig files.
Note: these files are generated automatically when the certificate request is approved and differ per node, so they must be deleted.

rm -rf /opt/kubernetes/cfg/kubelet.kubeconfig 
rm -rf /opt/kubernetes/ssl/kubelet*
rm -rf /opt/kubernetes/logs/*

# Replace the node name
sed -i 's/hostname-override=k8smaster/hostname-override=k8snode1/' /opt/kubernetes/cfg/kubelet.conf
sed -i 's/hostnameOverride: k8smaster/hostnameOverride: k8snode1/' /opt/kubernetes/cfg/kube-proxy-config.yml

# Start and enable at boot
systemctl daemon-reload
systemctl restart kubelet kube-proxy
systemctl enable kubelet kube-proxy

On the master, approve the new node's kubelet certificate request

# Run on the master machine
kubectl get csr 
#NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           REQUESTEDDURATION   CONDITION
#node-csr-C1Iha4lbvdEnEy_hWlGesf0fpIRWZ1RdbyFT7nxNR20   16m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Approved,Issued
#node-csr-h0rvtw61xkQg7uf1bc0inIEhmMFy7khq6g__Kuhnrm8   65s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Pending
# Approve the request whose CONDITION is Pending
kubectl certificate approve node-csr-h0rvtw61xkQg7uf1bc0inIEhmMFy7khq6g__Kuhnrm8
# Check
kubectl get pod -n kube-system 
kubectl get node

# Test
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
# Check
kubectl get pod,svc
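
To test the Service end to end, look up the NodePort that was assigned and request the nginx welcome page through any node IP:

NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl -I http://192.168.137.132:${NODE_PORT}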

Follow the same steps for node2.

5.5. How to remove a node

kubectl get nodes
#NAME        STATUS                     ROLES    AGE    VERSION
#k8smaster   Ready,SchedulingDisabled   <none>   131m   v1.26.0
# Mark the node unschedulable
kubectl cordon k8smaster
# Drain (evict) the pods, ignoring DaemonSets
kubectl drain k8smaster --ignore-daemonsets
# Delete the node
kubectl delete node k8smaster
# On the machine being removed, delete the certificate files
rm -f /opt/kubernetes/ssl/kubelet*

6. Deploy the Dashboard

6.1. Deploy the Dashboard

# If the URL is unreachable, add a hosts entry: 199.232.68.133 raw.githubusercontent.com
#github: https://github.com/kubernetes/dashboard/releases/
curl -L -o recommended.yaml https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
vim recommended.yaml

# Edit the kubernetes-dashboard Service as below, adding nodePort and type:
#spec:
#  ports:
#    - port: 443
#      targetPort: 8443
#      nodePort: 30001
#  type: NodePort
#  selector:
#    k8s-app: kubernetes-dashboard

# Apply the manifest
kubectl apply -f recommended.yaml

# Check the pods and the exposed port
kubectl get pods -n kubernetes-dashboard
kubectl get pods,svc -n kubernetes-dashboard

If a container stays in ContainerCreating or Init, inspect it:

kubectl get pods -n kube-flannel
#NAME                    READY   STATUS     RESTARTS        AGE
#kube-flannel-ds-dq5tl   0/1     Init:0/2   0               3h50m
#kube-flannel-ds-xrwt2   0/1     Init:0/2   0               3h50m
#kube-flannel-ds-xv8mg   1/1     Running    1 (3h25m ago)   3h50m
kubectl describe pod kube-flannel-ds-dq5tl -n kube-flannel

Create a service account and bind it to the default cluster-admin role:

cat >  dashadmin.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF

kubectl apply -f dashadmin.yaml 
# Create a login token for the user
kubectl -n kubernetes-dashboard create token admin-user
#eyJhbGciOiJSUzI1NiIsImtpZCI6IlZSMWxoUkJtdGRVS1dha3pDQWZLQVhaNl9BME1jM3hFSEJNWk9NdGR6clUifQ.eyJhdWQiOlsiYXBpIiwiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjc0MTE5MzYzLCJpYXQiOjE2NzQxMTU3NjMsImlzcyI6ImFwaSIsImt1YmVybmV0ZXMuaW8iOnsibmFtZXNwYWNlIjoia3ViZXJuZXRlcy1kYXNoYm9hcmQiLCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoiYWRtaW4tdXNlciIsInVpZCI6IjQxZWNmMGE5LThiZWYtNDYxNC05MGU5LWY4ODg4MjY0ZDAxMSJ9fSwibmJmIjoxNjc0MTE1NzYzLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.fhiRHLKoGiyFPg04Pn2jGaN2vtdtCyGyFP1ZSBUyjykbsXlP6LKiecGe2fYqyz22HH9TE2-ULGabuZ7Y5ln8Z0Z9UbSidzCWPt3X65mR4QiIqJ0hPqRLoi4VDeSSoEqu1Qhg-COIDKq7f1bDwVMonngW793lqBWAChASsFbPVilfXZxWoPUCnVvbJTLUDbIudXM76XayPZGT8lC2MY0MrZdOmhr8VP-7alXp3VDB43R5Cio82pCNKfWY2Zi4W_GvvcU6be4E-oRV84UgWgLRY0LQMX7Ho_-ZQaf9d6ss4EieyEt-UWpCZUYWcrLrGaP6wAARfyz8-2SLvyhTMqmXAQ
# Open https://192.168.137.132:30001 in a browser
# If it is unreachable, try the addresses of all three nodes
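
The token from kubectl create token is short-lived by default; a longer validity can be requested (the apiserver may cap the duration):

kubectl -n kubernetes-dashboard create token admin-user --duration=24h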

Reference: https://blog.51cto.com/flyfish225/5988774
