Building a Single-Node Kubernetes v1.21.14 Cluster from Binaries

2023-05-16

1. Cluster environment preparation

1.1. Host plan

IP             Hostname     Role            OS          Installed components
192.168.11.71  k8s-master1  master, worker  CentOS 7.9  api-server, controller-manager, scheduler, etcd, kubectl, kubelet, kube-proxy, containerd, runc

1.2. Software versions

Software    Version    Notes
kubernetes  v1.21.14
etcd        v3.5.4
calico      v3.19.4
coredns     v1.8.4
containerd  v1.6.10
runc        v1.1.0
crictl      v1.24.2
cni         v1.1.1
cfssl       v1.6.3

1.3. Network allocation

Network          CIDR             Notes
Node network     192.168.11.0/24
Service network  10.96.0.0/12
Pod network      172.16.0.0/12

2. Cluster deployment

2.1. Host preparation

Give every host a unique IP address and hostname. Each host's UUID and MAC address must also be unique; pay particular attention when the machines are cloned from a VM template, since clones are very likely to share the same UUID and MAC address. In my case every host was cloned with VMware, so they all started out with identical IPs.
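A minimal per-host fix-up sketch for VMware clones (k8s-master1 and ens33 are example values; adjust to your environment):

# Give each clone a unique hostname
hostnamectl set-hostname k8s-master1

# Clones inherit the template's machine-id; regenerate it
rm -f /etc/machine-id
systemd-machine-id-setup

# Set a unique static IP, and delete any inherited HWADDR/UUID lines so the
# NIC uses its own MAC address (ens33 is an example interface name)
vi /etc/sysconfig/network-scripts/ifcfg-ens33
systemctl restart network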

# Time synchronization
yum install -y chrony
systemctl start chronyd
systemctl enable chronyd
chronyc sources

# Disable the firewall
systemctl disable --now firewalld

# Disable SELinux
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sestatus

# Disable swap
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a && sysctl -w vm.swappiness=0
cat /etc/fstab

# Network configuration
# Option 1
# systemctl disable --now NetworkManager
# systemctl start network && systemctl enable network

# Option 2
cat > /etc/NetworkManager/conf.d/calico.conf << EOF
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*
EOF
systemctl restart NetworkManager

# Host system tuning
ulimit -SHn 65535
cat >> /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF

yum install ipvsadm ipset sysstat conntrack libseccomp -y

tee /etc/modules-load.d/ipvs.conf << 'EOF'
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

systemctl restart systemd-modules-load.service

# Check that the modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack

# Tune kernel parameters
tee /etc/sysctl.d/k8s.conf << 'EOF'
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384

net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
EOF

sysctl --system   # the net.bridge.* keys only apply once br_netfilter is loaded (done below; sysctl --system runs again in the containerd section)

# Load the kernel modules containerd needs
modprobe overlay
modprobe br_netfilter

tee /etc/modules-load.d/containerd.conf << 'EOF'
overlay
br_netfilter
EOF

systemctl enable --now systemd-modules-load.service

# Install required tools
yum install -y wget jq psmisc vim net-tools tree telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl

reboot

After rebooting, check that the ipvs and containerd modules are loaded:

lsmod | grep --color=auto -e ip_vs -e nf_conntrack

lsmod | egrep 'br_netfilter|overlay'

2.2. Download the etcd, k8s, and cfssl packages

  • Available K8S versions can be found in the release directory
    • GitHub reference link
# Create the working directory
mkdir -p /data/work
cd /data/work

# Download the cfssl tools
CFSSL_VERSION=1.6.3
#wget -O cfssl_linux-amd64 https://ghproxy.com/https://github.com/cloudflare/cfssl/releases/download/v${CFSSL_VERSION}/cfssl_${CFSSL_VERSION}_linux_amd64
#wget -O cfssljson_linux-amd64 https://ghproxy.com/https://github.com/cloudflare/cfssl/releases/download/v${CFSSL_VERSION}/cfssljson_${CFSSL_VERSION}_linux_amd64
#wget -O cfssl-certinfo_linux-amd64 https://ghproxy.com/https://github.com/cloudflare/cfssl/releases/download/v${CFSSL_VERSION}/cfssl-certinfo_${CFSSL_VERSION}_linux_amd64

chmod +x cfssl*
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo



ETCD_VERSION=v3.5.4
#wget https://ghproxy.com/https://github.com/etcd-io/etcd/releases/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz
tar -zxvf etcd-${ETCD_VERSION}-linux-amd64.tar.gz
cp -ar etcd-${ETCD_VERSION}-linux-amd64/etcd* /usr/local/bin
chmod +x /usr/local/bin/etcd*
ls -ll /usr/local/bin/etcd*
etcdctl version


K8S_VERSION=v1.21.14
#wget https://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/kubernetes-server-linux-amd64.tar.gz
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd  kubernetes/server/bin/

cp kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet kube-proxy /usr/local/bin/

# Return to the /data/work working directory; all subsequent steps are run from there
cd /data/work/

2.3. Generate the CA certificate

tee ca-csr.json << 'EOF'
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "Kubernetes-manual"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
EOF

cfssl gencert -initca ca-csr.json  | cfssljson -bare ca

tee ca-config.json << 'EOF'
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
EOF

mkdir -p /etc/kubernetes/pki
cp ca*.pem /etc/kubernetes/pki/
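Before moving on, it is worth sanity-checking the generated CA (a quick inspection, assuming openssl is available; cfssl-certinfo works as well):

# Show the CA subject and validity period (should reflect the 876000h expiry)
openssl x509 -in ca.pem -noout -subject -dates
cfssl-certinfo -cert ca.pem | head -n 20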

2.4. Deploy etcd

2.4.1. Start etcd

# Create the etcd CSR request file
tee etcd-csr.json << 'EOF'
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

# Generate the etcd certificates
cfssl gencert \
   -ca=ca.pem \
   -ca-key=ca-key.pem \
   -config=ca-config.json \
   -hostname=127.0.0.1,k8s-master1,192.168.11.71 \
   -profile=kubernetes \
   etcd-csr.json | cfssljson -bare etcd
ls -ll etcd*.pem
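To confirm the SANs were embedded correctly (etcd is only reachable via the names and IPs listed in -hostname), a quick check assuming openssl:

# The output should list 127.0.0.1, k8s-master1, and 192.168.11.71
openssl x509 -in etcd.pem -noout -text | grep -A1 'Subject Alternative Name'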

# Create the etcd configuration file
tee etcd-config.yml << 'EOF'
name: 'etcd1'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.11.71:2380'
listen-client-urls: 'https://192.168.11.71:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.11.71:2380'
advertise-client-urls: 'https://192.168.11.71:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'etcd1=https://192.168.11.71:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

# Create the etcd systemd unit
tee etcd.service << 'EOF'
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd-config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service
EOF

mkdir -p /etc/etcd/ssl
cp etcd*.pem /etc/etcd/ssl/
cp etcd-config.yml /etc/etcd/
cp etcd.service /usr/lib/systemd/system/

mkdir -p /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
mkdir -p /var/lib/etcd

# Start etcd
systemctl daemon-reload && systemctl enable etcd.service && systemctl start etcd.service

2.4.2. Verify etcd status

# Check the health of each etcd endpoint
ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table --cacert=/etc/kubernetes/pki/ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem --endpoints=https://192.168.11.71:2379 endpoint health

# List the cluster members
ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table --cacert=/etc/kubernetes/pki/ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem --endpoints=https://192.168.11.71:2379 member list

# Show database size and which node is the leader
ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table --cacert=/etc/kubernetes/pki/ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem --endpoints=https://192.168.11.71:2379 endpoint status

# Benchmark etcd
ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table --cacert=/etc/kubernetes/pki/ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem --endpoints=https://192.168.11.71:2379 check perf

Output:

[root@k8s-master1 work]# # Check the health of each etcd endpoint
[root@k8s-master1 work]# ETCDCTL_API=3 && /usr/local/bin/etcdctl --write-out=table --cacert=/etc/kubernetes/pki/ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem --endpoints=https://192.168.11.71:2379 endpoint health
+----------------------------+--------+-------------+-------+
|          ENDPOINT          | HEALTH |    TOOK     | ERROR |
+----------------------------+--------+-------------+-------+
| https://192.168.11.71:2379 |   true | 14.286573ms |       |
+----------------------------+--------+-------------+-------+
[root@k8s-master1 work]#
[root@k8s-master1 work]# # List the cluster members
[root@k8s-master1 work]# ETCDCTL_API=3 && /usr/local/bin/etcdctl --write-out=table --cacert=/etc/kubernetes/pki/ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem --endpoints=https://192.168.11.71:2379 member list
+------------------+---------+-------+----------------------------+----------------------------+------------+
|        ID        | STATUS  | NAME  |         PEER ADDRS         |        CLIENT ADDRS        | IS LEARNER |
+------------------+---------+-------+----------------------------+----------------------------+------------+
| 5ce02800cf5f635d | started | etcd1 | https://192.168.11.71:2380 | https://192.168.11.71:2379 |      false |
+------------------+---------+-------+----------------------------+----------------------------+------------+
[root@k8s-master1 work]#
[root@k8s-master1 work]# # Show database size and which node is the leader
[root@k8s-master1 work]# ETCDCTL_API=3 && /usr/local/bin/etcdctl --write-out=table --cacert=/etc/kubernetes/pki/ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem --endpoints=https://192.168.11.71:2379 endpoint status
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.11.71:2379 | 5ce02800cf5f635d |   3.5.4 |   20 kB |      true |      false |         2 |          5 |                  5 |        |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
[root@k8s-master1 work]#
[root@k8s-master1 work]# # Benchmark etcd
[root@k8s-master1 work]# ETCDCTL_API=3 && /usr/local/bin/etcdctl --write-out=table --cacert=/etc/kubernetes/pki/ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem --endpoints=https://192.168.11.71:2379 check perf
 59 / 60 Booooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooom  !  98.33%PASS: Throughput is 150 writes/s
PASS: Slowest request took 0.073586s
PASS: Stddev is 0.002288s
PASS
[root@k8s-master1 work]#

2.5. Deploy kube-apiserver

2.5.1. Start kube-apiserver

tee kube-apiserver-csr.json << 'EOF'
{
  "CN": "kube-apiserver",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

# Generate the apiserver certificate
cfssl gencert   \
  -ca=ca.pem   \
  -ca-key=ca-key.pem   \
  -config=ca-config.json   \
  -hostname=10.96.0.1,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,192.168.11.71  \
  -profile=kubernetes   kube-apiserver-csr.json | cfssljson -bare kube-apiserver

# Use cat with an unquoted heredoc delimiter here (the other blocks quote the delimiter with << 'EOF'); otherwise $(head -c 16 /dev/urandom | od -An -t x | tr -d ' ') would not be expanded
cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
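The resulting file is a single CSV line of the form token,user,uid,group (the token below is a placeholder; yours will differ):

cat token.csv
# e.g. 0fb1xxxxxxxxxxxxxxxxxxxxxxxxxx71,kubelet-bootstrap,10001,"system:kubelet-bootstrap"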

# Create the apiserver configuration file
tee kube-apiserver.conf << 'EOF'
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds \
  --anonymous-auth=false \
  --bind-address=192.168.11.71 \
  --secure-port=6443 \
  --advertise-address=192.168.11.71 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth=true \
  --service-cluster-ip-range=10.96.0.0/12 \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-node-port-range=0-50000 \
  --tls-cert-file=/etc/kubernetes/pki/kube-apiserver.pem  \
  --tls-private-key-file=/etc/kubernetes/pki/kube-apiserver-key.pem \
  --client-ca-file=/etc/kubernetes/pki/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/pki/kube-apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/pki/kube-apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/pki/ca-key.pem \
  --service-account-signing-key-file=/etc/kubernetes/pki/ca-key.pem  \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --etcd-cafile=/etc/kubernetes/pki/ca.pem \
  --etcd-certfile=/etc/kubernetes/pki/etcd/etcd.pem \
  --etcd-keyfile=/etc/kubernetes/pki/etcd/etcd-key.pem \
  --etcd-servers=https://192.168.11.71:2379 \
  --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4"
EOF

# Create the apiserver systemd unit
tee kube-apiserver.service << 'EOF'
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

# Copy the apiserver files into place
cp kube-apiserver*.pem /etc/kubernetes/pki/
cp token.csv /etc/kubernetes/
cp kube-apiserver.conf /etc/kubernetes/
cp kube-apiserver.service /usr/lib/systemd/system/
mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

# Start the service
systemctl daemon-reload && systemctl enable kube-apiserver && systemctl start kube-apiserver

2.5.2. Verify that kube-apiserver started

systemctl status kube-apiserver

curl --insecure https://192.168.11.71:6443/

Output (the 401 from the anonymous curl is expected, since --anonymous-auth=false is set):

[root@k8s-master1 ~]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled;                                                                                vendor preset: disabled)
   Active: active (running) since Thu 2022-11-17 22:16:20 CST; 1min 25s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 52956 (kube-apiserver)
   CGroup: /system.slice/kube-apiserver.service
           └─52956 /usr/local/bin/kube-apiserver --enable-admission-plugi...

Nov 17 22:17:22 k8s-master1 kube-apiserver[52956]: I1117 22:17:22.478545 ...
Nov 17 22:17:22 k8s-master1 kube-apiserver[52956]: I1117 22:17:22.484930 ...
Nov 17 22:17:32 k8s-master1 kube-apiserver[52956]: I1117 22:17:32.464028 ...
Nov 17 22:17:32 k8s-master1 kube-apiserver[52956]: I1117 22:17:32.469795 ...
Nov 17 22:17:32 k8s-master1 kube-apiserver[52956]: I1117 22:17:32.481656 ...
Nov 17 22:17:32 k8s-master1 kube-apiserver[52956]: I1117 22:17:32.485503 ...
Nov 17 22:17:42 k8s-master1 kube-apiserver[52956]: I1117 22:17:42.463590 ...
Nov 17 22:17:42 k8s-master1 kube-apiserver[52956]: I1117 22:17:42.467854 ...
Nov 17 22:17:42 k8s-master1 kube-apiserver[52956]: I1117 22:17:42.480078 ...
Nov 17 22:17:42 k8s-master1 kube-apiserver[52956]: I1117 22:17:42.486241 ...
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s-master1 ~]#
[root@k8s-master1 ~]# curl --insecure https://192.168.11.71:6443/
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
[root@k8s-master1 ~]#

2.6. Deploy kubectl

2.6.1. Create the CSR file

tee admin-csr.json << 'EOF'
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:masters",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

# Generate the admin certificate
cfssl gencert \
    -ca=ca.pem \
    -ca-key=ca-key.pem \
    -config=ca-config.json \
    -profile=kubernetes \
    admin-csr.json | cfssljson -bare admin


# Set the cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.11.71:6443 --kubeconfig=kube.config

# Set the client credentials
kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config

# Set the context
kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config

# Switch to the context
kubectl config use-context kubernetes --kubeconfig=kube.config


mkdir -p ~/.kube
cp kube.config ~/.kube/config
cp kube.config /etc/kubernetes/admin.conf

# Grant the apiserver's kubelet-client certificate (CN=kube-apiserver) access to the kubelet API
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kube-apiserver --kubeconfig=/root/.kube/config

yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
kubectl completion bash > ~/.kube/completion.bash.inc
source '/root/.kube/completion.bash.inc'
source $HOME/.bash_profile

2.6.2. Check cluster status

kubectl cluster-info
kubectl get componentstatuses
kubectl get all --all-namespaces

2.7. Deploy kube-controller-manager

2.7.1. Start kube-controller-manager

tee kube-controller-manager-csr.json << 'EOF'
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

# Generate the controller-manager certificate
cfssl gencert \
   -ca=ca.pem \
   -ca-key=ca-key.pem \
   -hostname=127.0.0.1,192.168.11.71 \
   -config=ca-config.json \
   -profile=kubernetes \
   kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager


# Create the kube-controller-manager kubeconfig
kubectl config set-cluster kubernetes \
     --certificate-authority=ca.pem \
     --embed-certs=true \
     --server=https://192.168.11.71:6443 \
     --kubeconfig=kube-controller-manager.kubeconfig

# Set the client credentials
kubectl config set-credentials system:kube-controller-manager \
     --client-certificate=kube-controller-manager.pem \
     --client-key=kube-controller-manager-key.pem \
     --embed-certs=true \
     --kubeconfig=kube-controller-manager.kubeconfig

# Set the context
kubectl config set-context system:kube-controller-manager@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-controller-manager \
    --kubeconfig=kube-controller-manager.kubeconfig

# Switch to the context
kubectl config use-context system:kube-controller-manager@kubernetes \
     --kubeconfig=kube-controller-manager.kubeconfig

# Create the controller-manager configuration file
tee kube-controller-manager.conf << 'EOF'
KUBE_CONTROLLER_MANAGER_OPTS="--secure-port=10257 \
  --bind-address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.96.0.0/12 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
  --allocate-node-cidrs=true \
  --cluster-cidr=172.16.0.0/12 \
  --root-ca-file=/etc/kubernetes/pki/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/pki/ca-key.pem \
  --leader-elect=true \
  --feature-gates=RotateKubeletServerCertificate=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --horizontal-pod-autoscaler-sync-period=10s \
  --tls-cert-file=/etc/kubernetes/pki/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/pki/kube-controller-manager-key.pem \
  --use-service-account-credentials=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2"
EOF

# Create the systemd unit
tee kube-controller-manager.service << 'EOF'
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF


# Copy the files into place
cp kube-controller-manager*.pem /etc/kubernetes/pki/
cp kube-controller-manager.kubeconfig /etc/kubernetes/
cp kube-controller-manager.conf /etc/kubernetes/
cp kube-controller-manager.service /usr/lib/systemd/system/


systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl start kube-controller-manager && systemctl status kube-controller-manager

2.7.2. Verify that controller-manager started

kubectl get componentstatuses
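Beyond componentstatuses, the controller-manager's secure port can be probed directly; /healthz should be reachable without credentials because kube-controller-manager's --authorization-always-allow-paths permits the health endpoint by default (a quick check, not part of the original steps):

curl -sk https://127.0.0.1:10257/healthz
# expected output: ok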

2.8. Deploy kube-scheduler

2.8.1. Start kube-scheduler

tee kube-scheduler-csr.json << 'EOF'
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

# Generate the certificate
cfssl gencert \
   -ca=ca.pem \
   -ca-key=ca-key.pem \
   -hostname=127.0.0.1,192.168.11.71,192.168.11.72,192.168.11.73 \
   -config=ca-config.json \
   -profile=kubernetes \
   kube-scheduler-csr.json | cfssljson -bare kube-scheduler


# Create the kube-scheduler kubeconfig
kubectl config set-cluster kubernetes \
     --certificate-authority=ca.pem \
     --embed-certs=true \
     --server=https://192.168.11.71:6443 \
     --kubeconfig=kube-scheduler.kubeconfig

# Set the client credentials
kubectl config set-credentials system:kube-scheduler \
     --client-certificate=kube-scheduler.pem \
     --client-key=kube-scheduler-key.pem \
     --embed-certs=true \
     --kubeconfig=kube-scheduler.kubeconfig

# Set the context
kubectl config set-context system:kube-scheduler@kubernetes \
     --cluster=kubernetes \
     --user=system:kube-scheduler \
     --kubeconfig=kube-scheduler.kubeconfig

# Switch to the context
kubectl config use-context system:kube-scheduler@kubernetes \
     --kubeconfig=kube-scheduler.kubeconfig


# Create the kube-scheduler configuration file
tee kube-scheduler.conf << 'EOF'
KUBE_SCHEDULER_OPTS="--bind-address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  --leader-elect=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2"
EOF


# Create the kube-scheduler systemd unit
tee kube-scheduler.service << 'EOF'
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF


# Copy the kube-scheduler files into place
cp kube-scheduler*.pem /etc/kubernetes/pki/
cp kube-scheduler.kubeconfig /etc/kubernetes/
cp kube-scheduler.conf /etc/kubernetes/
cp kube-scheduler.service /usr/lib/systemd/system/


systemctl daemon-reload &&  systemctl enable kube-scheduler && systemctl start kube-scheduler && systemctl status kube-scheduler

2.8.2. Verify that scheduler started

kubectl get componentstatuses
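The same kind of direct probe works for the scheduler, which serves its health check on secure port 10259 by default (again assuming the default --authorization-always-allow-paths, which permits /healthz):

curl -sk https://127.0.0.1:10259/healthz
# expected output: ok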

2.9. Install containerd

Run the following directly on every node:

# Download containerd
#wget https://ghproxy.com/https://github.com/containerd/containerd/releases/download/v1.6.10/cri-containerd-cni-1.6.10-linux-amd64.tar.gz
# Extract all of the containerd binaries directly into the root filesystem
tar -xvzf cri-containerd-cni-*-linux-amd64.tar.gz -C /

# Replace the runc bundled with containerd
#wget https://ghproxy.com/https://github.com/opencontainers/runc/releases/download/v1.1.2/runc.amd64
mv runc.amd64 runc
chmod +x runc
mv -f runc /usr/local/sbin/

# Create the containerd systemd unit
tee /etc/systemd/system/containerd.service << 'EOF'
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF

# Configure the kernel modules containerd needs
cat <<EOF | tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

# Load the overlay and br_netfilter modules
systemctl restart systemd-modules-load.service

# Configure the kernel parameters containerd needs
cat <<EOF | tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply the kernel parameters
sysctl --system

# Create the configuration directory
mkdir -p /etc/containerd
# Generating and patching the default config would look like this:
#containerd config default | tee /etc/containerd/config.toml
#sed -i "s#SystemdCgroup = false#SystemdCgroup = true#" /etc/containerd/config.toml
#sed -i "s#registry.k8s.io#registry.aliyuncs.com/google_containers#" /etc/containerd/config.toml

# Instead, write the config directly. Note the version: this is a containerd 1.6.10 config; for any other version, generate the default config and modify it.
tee /etc/containerd/config.toml << 'EOF'
disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
temp = ""
version = 2

[cgroup]
  path = ""

[debug]
  address = ""
  format = ""
  gid = 0
  level = ""
  uid = 0

[grpc]
  address = "/run/containerd/containerd.sock"
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216
  tcp_address = ""
  tcp_tls_ca = ""
  tcp_tls_cert = ""
  tcp_tls_key = ""
  uid = 0

[metrics]
  address = ""
  grpc_histogram = false

[plugins]

  [plugins."io.containerd.gc.v1.scheduler"]
    deletion_threshold = 0
    mutation_threshold = 100
    pause_threshold = 0.02
    schedule_delay = "0s"
    startup_delay = "100ms"

  [plugins."io.containerd.grpc.v1.cri"]
    device_ownership_from_security_context = false
    disable_apparmor = false
    disable_cgroup = false
    disable_hugetlb_controller = true
    disable_proc_mount = false
    disable_tcp_service = true
    enable_selinux = false
    enable_tls_streaming = false
    enable_unprivileged_icmp = false
    enable_unprivileged_ports = false
    ignore_image_defined_volumes = false
    max_concurrent_downloads = 3
    max_container_log_line_size = 16384
    netns_mounts_under_state_dir = false
    restrict_oom_score_adj = false
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
    selinux_category_range = 1024
    stats_collect_period = 10
    stream_idle_timeout = "4h0m0s"
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    systemd_cgroup = false
    tolerate_missing_hugetlb_controller = true
    unset_seccomp_profile = ""

    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      conf_template = ""
      ip_pref = ""
      max_conf_num = 1

    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "runc"
      disable_snapshot_annotations = true
      discard_unpacked_layers = false
      ignore_rdt_not_enabled_errors = false
      no_pivot = false
      snapshotter = "overlayfs"

      [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
        base_runtime_spec = ""
        cni_conf_dir = ""
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_path = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          base_runtime_spec = ""
          cni_conf_dir = ""
          cni_max_conf_num = 0
          container_annotations = []
          pod_annotations = []
          privileged_without_host_devices = false
          runtime_engine = ""
          runtime_path = ""
          runtime_root = ""
          runtime_type = "io.containerd.runc.v2"

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            BinaryName = ""
            CriuImagePath = ""
            CriuPath = ""
            CriuWorkPath = ""
            IoGid = 0
            IoUid = 0
            NoNewKeyring = false
            NoPivotRoot = false
            Root = ""
            ShimCgroup = ""
            SystemdCgroup = true

      [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
        base_runtime_spec = ""
        cni_conf_dir = ""
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_path = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]

    [plugins."io.containerd.grpc.v1.cri".image_decryption]
      key_model = "node"

    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = ""

      [plugins."io.containerd.grpc.v1.cri".registry.auths]

      [plugins."io.containerd.grpc.v1.cri".registry.configs]

      [plugins."io.containerd.grpc.v1.cri".registry.headers]

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://docker.mirrors.ustc.edu.cn", "http://hub-mirror.c.163.com"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
          endpoint = ["https://gcr.mirrors.ustc.edu.cn"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
          endpoint = ["https://gcr.mirrors.ustc.edu.cn/google-containers/"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
          endpoint = ["https://quay.mirrors.ustc.edu.cn"]

    [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""

  [plugins."io.containerd.internal.v1.opt"]
    path = "/opt/containerd"

  [plugins."io.containerd.internal.v1.restart"]
    interval = "10s"

  [plugins."io.containerd.internal.v1.tracing"]
    sampling_ratio = 1.0
    service_name = "containerd"

  [plugins."io.containerd.metadata.v1.bolt"]
    content_sharing_policy = "shared"

  [plugins."io.containerd.monitor.v1.cgroups"]
    no_prometheus = false

  [plugins."io.containerd.runtime.v1.linux"]
    no_shim = false
    runtime = "runc"
    runtime_root = ""
    shim = "containerd-shim"
    shim_debug = false

  [plugins."io.containerd.runtime.v2.task"]
    platforms = ["linux/amd64"]
    sched_core = false

  [plugins."io.containerd.service.v1.diff-service"]
    default = ["walking"]

  [plugins."io.containerd.service.v1.tasks-service"]
    rdt_config_file = ""

  [plugins."io.containerd.snapshotter.v1.aufs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.btrfs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.devmapper"]
    async_remove = false
    base_image_size = ""
    discard_blocks = false
    fs_options = ""
    fs_type = ""
    pool_name = ""
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.native"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.overlayfs"]
    root_path = ""
    upperdir_label = false

  [plugins."io.containerd.snapshotter.v1.zfs"]
    root_path = ""

  [plugins."io.containerd.tracing.processor.v1.otlp"]
    endpoint = ""
    insecure = false
    protocol = ""

[proxy_plugins]

[stream_processors]

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar"

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar+gzip"

[timeouts]
  "io.containerd.timeout.bolt.open" = "0s"
  "io.containerd.timeout.shim.cleanup" = "5s"
  "io.containerd.timeout.shim.load" = "5s"
  "io.containerd.timeout.shim.shutdown" = "3s"
  "io.containerd.timeout.task.state" = "2s"

[ttrpc]
  address = ""
  gid = 0
  uid = 0
EOF



systemctl daemon-reload
systemctl enable --now containerd
systemctl restart  containerd

# Verify that containerd is working
crictl info

# Verify that images can be pulled
ctr images pull docker.io/library/redis:alpine
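If crictl complains about a missing or wrong endpoint, point it at the containerd socket explicitly (a small sketch; /etc/crictl.yaml is the standard crictl config path, and the cri-containerd tarball ships crictl itself):

tee /etc/crictl.yaml << 'EOF'
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
EOF
crictl info | head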

2.10. Deploy kubelet

2.10.1. Start kubelet

BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)

kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://192.168.11.71:6443 \
  --kubeconfig=kubelet-bootstrap.kubeconfig

kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=kubelet-bootstrap.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=kubelet-bootstrap.kubeconfig

kubectl config use-context default \
  --kubeconfig=kubelet-bootstrap.kubeconfig

kubectl create clusterrolebinding cluster-system-anonymous \
  --clusterrole=cluster-admin \
  --user=kubelet-bootstrap

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

kubectl describe clusterrolebinding cluster-system-anonymous
kubectl describe clusterrolebinding kubelet-bootstrap

# Create the kubelet configuration file
tee kubelet.json << 'EOF'
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/pki/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.11.71",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.96.0.10"]
}
EOF

# Create the kubelet systemd unit
tee kubelet.service << 'EOF'
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
  --cert-dir=/etc/kubernetes/pki \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --config=/etc/kubernetes/kubelet.json \
  --cni-bin-dir=/opt/cni/bin \
  --cni-conf-dir=/etc/cni/net.d \
  --container-runtime=remote \
  --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
  --network-plugin=cni \
  --rotate-certificates \
  --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.2 \
  --root-dir=/var/lib/kubelet \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF


mkdir -p /etc/kubernetes/pki
mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes
cp kubelet-bootstrap.kubeconfig /etc/kubernetes/
cp kubelet.json /etc/kubernetes/
cp kubelet.service /usr/lib/systemd/system/


systemctl daemon-reload && systemctl enable kubelet &&  systemctl start kubelet && systemctl status kubelet

2.10.2. Verify that kubelet started

kubectl get nodes -owide
kubectl get csr

Output. Note that kubelet takes a little while to start; if you still don't see the result below after a minute or two, something in the configuration has most likely gone wrong.

[root@k8s-master1 work]#
[root@k8s-master1 work]# kubectl get nodes -owide
NAME          STATUS   ROLES    AGE   VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
k8s-master1   Ready    <none>   53s   v1.21.14   192.168.11.71   <none>        CentOS Linux 7 (Core)   6.0.7-1.el7.elrepo.x86_64   containerd://1.6.10
[root@k8s-master1 work]# kubectl get csr
NAME        AGE     SIGNERNAME                                    REQUESTOR           CONDITION
csr-z56vc   2m12s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
[root@k8s-master1 work]#
[root@k8s-master1 work]#
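The CSR above was approved automatically through the bootstrap bindings; if yours sits in Pending instead, approve it by name (csr-z56vc is the example name from the output above):

kubectl certificate approve csr-z56vc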

2.11. Deploy kube-proxy

tee kube-proxy-csr.json << 'EOF'
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-proxy",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF


cfssl gencert \
   -ca=ca.pem \
   -ca-key=ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   kube-proxy-csr.json | cfssljson -bare kube-proxy


kubectl config set-cluster kubernetes     \
  --certificate-authority=ca.pem     \
  --embed-certs=true     \
  --server=https://192.168.11.71:6443     \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy  \
  --client-certificate=kube-proxy.pem     \
  --client-key=kube-proxy-key.pem     \
  --embed-certs=true     \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default  \
  --cluster=kubernetes     \
  --user=kube-proxy     \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default \
  --kubeconfig=kube-proxy.kubeconfig


tee kube-proxy.yaml << 'EOF'
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.11.71
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 172.16.0.0/12
healthzBindAddress: 192.168.11.71:10256
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.11.71:10249
mode: "ipvs"
EOF


tee kube-proxy.service << 'EOF'
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

mkdir -p /var/lib/kube-proxy
cp kube-proxy*.pem /etc/kubernetes/pki/
cp kube-proxy.kubeconfig kube-proxy.yaml /etc/kubernetes/
cp kube-proxy.service /usr/lib/systemd/system/


systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy && systemctl status kube-proxy
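To confirm kube-proxy really came up in IPVS mode, list the IPVS virtual servers and query the proxyMode endpoint on the metrics port configured above (a quick check; ipvsadm was installed during host preparation):

ipvsadm -Ln
curl -s http://192.168.11.71:10249/proxyMode
# expected output: ipvs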

2.12. Deploy the Calico network plugin

Note: users have reported that on CentOS 7, libseccomp must be upgraded first, otherwise Calico fails to install.

wget https://docs.projectcalico.org/v3.19/manifests/calico.yaml

sed -i 's/# - name: CALICO_IPV4POOL_CIDR/- name: CALICO_IPV4POOL_CIDR/g' calico.yaml
sed -i 's/#   value: "192.168.0.0\/16"/  value: "172.16.0.0\/12"/g' calico.yaml
sed -i 's/"type": "calico-ipam"/"type": "calico-ipam",\n              "assign_ipv4": "true"/g' calico.yaml

# Create the calico components
kubectl apply -f calico.yaml

# Check that the calico pods came up successfully
kubectl get pods -n kube-system -owide

Output:

[root@k8s-master1 work]#
[root@k8s-master1 work]#
[root@k8s-master1 work]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
[root@k8s-master1 work]#
[root@k8s-master1 work]#
[root@k8s-master1 work]#
[root@k8s-master1 work]#
[root@k8s-master1 work]#
[root@k8s-master1 work]#
[root@k8s-master1 work]# kubectl get pods -n kube-system -owide
NAME                                       READY   STATUS    RESTARTS   AGE     IP              NODE        NOMINATED NODE   READINESS GATES
calico-kube-controllers-7cc8dd57d9-dpcqw   1/1     Running   0          3m16s   10.88.0.2       k8s-node1   <none>           <none>
calico-node-hr9f5                          1/1     Running   0          3m15s   192.168.11.74   k8s-node1   <none>           <none>
[root@k8s-master1 work]#

2.13. Deploy CoreDNS

tee coredns.yaml  << 'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
  - apiGroups:
    - ""
    resources:
    - endpoints
    - services
    - pods
    - namespaces
    verbs:
    - list
    - watch
  - apiGroups:
    - discovery.k8s.io
    resources:
    - endpointslices
    verbs:
    - list
    - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
         podAntiAffinity:
           preferredDuringSchedulingIgnoredDuringExecution:
           - weight: 100
             podAffinityTerm:
               labelSelector:
                 matchExpressions:
                   - key: k8s-app
                     operator: In
                     values: ["kube-dns"]
               topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: registry.cn-beijing.aliyuncs.com/dotbalo/coredns:1.8.6 
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.96.0.10 
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
EOF

kubectl apply -f coredns.yaml
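After applying, verify that the CoreDNS pod comes up and that the kube-dns service got the cluster IP expected by kubelet's clusterDNS setting (10.96.0.10):

kubectl -n kube-system get pods -l k8s-app=kube-dns
kubectl -n kube-system get svc kube-dns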

2.14. Install Metrics Server

tee metrics-server.yaml << 'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # change to front-proxy-ca.crt for kubeadm
        - --requestheader-username-headers=X-Remote-User
        - --requestheader-group-headers=X-Remote-Group
        - --requestheader-extra-headers-prefix=X-Remote-Extra-
        image: registry.cn-beijing.aliyuncs.com/dotbalo/metrics-server:0.5.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
        - name: ca-ssl
          mountPath: /etc/kubernetes/pki
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
      - name: ca-ssl
        hostPath:
          path: /etc/kubernetes/pki

---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
EOF

kubectl  apply -f metrics-server.yaml
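Once the metrics-server pod is ready (the first scrape can take a minute or so), resource metrics should be queryable:

kubectl top nodes
kubectl top pods -A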

2.15. Cluster verification

tee pod.yaml << 'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.28
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

kubectl apply -f pod.yaml

# Check
kubectl  get pod
#NAME      READY   STATUS    RESTARTS   AGE
#busybox   1/1     Running   0          17s

2.16. Resolve the kubernetes service in the default namespace from a pod

kubectl get svc
#NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
#kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   17h

kubectl exec  busybox -n default -- nslookup kubernetes
#Server:    10.96.0.10
#Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

#Name:      kubernetes
#Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

2.16.1. Test cross-namespace resolution

kubectl exec  busybox -n default -- nslookup kube-dns.kube-system
#Server:    10.96.0.10
#Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

#Name:      kube-dns.kube-system
#Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

2.16.2. Every node must be able to reach the kubernetes svc on port 443 and the kube-dns service on port 53

telnet 10.96.0.1 443
Trying 10.96.0.1...
Connected to 10.96.0.1.
Escape character is '^]'.

telnet 10.96.0.10 53
Trying 10.96.0.10...
Connected to 10.96.0.10.
Escape character is '^]'.

curl 10.96.0.10:53
curl: (52) Empty reply from server

2.16.3. Pod-to-pod connectivity

kubectl get po -owide
NAME      READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
busybox   1/1     Running   0          17m   172.27.14.193   k8s-node02   <none>           <none>

kubectl get po -n kube-system -owide
NAME                                       READY   STATUS    RESTARTS      AGE   IP               NODE           NOMINATED NODE   READINESS GATES
calico-kube-controllers-5dffd5886b-4blh6   1/1     Running   0             77m   172.25.244.193   k8s-master01   <none>           <none>
calico-node-fvbdq                          1/1     Running   1 (75m ago)   77m   192.168.1.61     k8s-master01   <none>           <none>
calico-node-g8nqd                          1/1     Running   0             77m   192.168.1.64     k8s-node01     <none>           <none>
calico-node-mdps8                          1/1     Running   0             77m   192.168.1.65     k8s-node02     <none>           <none>
calico-node-nf4nt                          1/1     Running   0             77m   192.168.1.63     k8s-master03   <none>           <none>
calico-node-sq2ml                          1/1     Running   0             77m   192.168.1.62     k8s-master02   <none>           <none>
calico-typha-8445487f56-mg6p8              1/1     Running   0             77m   192.168.1.65     k8s-node02     <none>           <none>
calico-typha-8445487f56-pxbpj              1/1     Running   0             77m   192.168.1.61     k8s-master01   <none>           <none>
calico-typha-8445487f56-tnssl              1/1     Running   0             77m   192.168.1.64     k8s-node01     <none>           <none>
coredns-5db5696c7-67h79                    1/1     Running   0             63m   172.25.92.65     k8s-master02   <none>           <none>
metrics-server-6bf7dcd649-5fhrw            1/1     Running   0             61m   172.18.195.1     k8s-master03   <none>           <none>

# Exec into busybox and ping a pod on another node

kubectl exec -ti busybox -- sh
/ # ping 192.168.1.64
PING 192.168.1.64 (192.168.1.64): 56 data bytes
64 bytes from 192.168.1.64: seq=0 ttl=63 time=0.358 ms
64 bytes from 192.168.1.64: seq=1 ttl=63 time=0.668 ms
64 bytes from 192.168.1.64: seq=2 ttl=63 time=0.637 ms
64 bytes from 192.168.1.64: seq=3 ttl=63 time=0.624 ms
64 bytes from 192.168.1.64: seq=4 ttl=63 time=0.907 ms

# Successful pings show the pod can communicate across namespaces and across hosts

2.16.4. Create three replicas and observe them spread across different nodes

cat > deployments.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14.2
        ports:
        - containerPort: 80

EOF

kubectl  apply -f deployments.yaml 
deployment.apps/nginx-deployment created

kubectl  get pod 
NAME                               READY   STATUS    RESTARTS   AGE
busybox                            1/1     Running   0          6m25s
nginx-deployment-9456bbbf9-4bmvk   1/1     Running   0          8s
nginx-deployment-9456bbbf9-9rcdk   1/1     Running   0          8s
nginx-deployment-9456bbbf9-dqv8s   1/1     Running   0          8s

# Delete nginx

kubectl delete -f deployments.yaml

Kubernetes v1.21.14二进制搭建单节点集群 的相关文章

随机推荐

  • VNC连接失败:The connection was refused by the host computer

    解决方法 xff1a 1 用Xshell登陆自己的服务器 2 在命令行中输入vncserver 命令行中出现 xff1a Warning optimal6 2 is taken because of tmp X2 lock Remove t
  • 业务层 、服务层、数据层、表现层

    一般说来 xff0c 业务逻辑层中的模块包含了系统所需要的所有功能上的算法和计算过程 xff0c 并与数据访问层和表现层交互 抽象的说 xff0c 业务逻辑层就是处理与业务相关的部分 xff0c 一般来说 xff0c 业务层包含一系列的执行
  • 计算机视觉:传统图像处理方法

    1 图像分割 经典的数字图像分割算法一般是基于灰度值的两个基本特征之一 xff1a 不连续性和相似性 xff08 1 xff09 基于阈值 xff1a 基于图像的灰度特征来计算一个或多个灰度阈值 xff0c 并将图像中每个像素的灰度值与阈值
  • 基于UGUI实现类似Excel表格功能

    曾经有一个类似这种需求 xff0c 想在Unity中实现类似Excel表中的一个功能 xff0c 能在Scene窗口中 新增行 可视化配置 所见所得 单元格合并 等功能 经过我对UGUI的一些深层次了解以及结合Editor编辑器窗口开发 x
  • 电子货架标签应用浅析(ESL)

    关注与分享 xff0c 是对原创最大的鼓励 年底了 xff0c 会断续介绍几个主要的BLE的应用及市场情况 xff0c 这篇文章介绍的是电子货架标签 xff0c ESL Electronic Shelf Label 文章将从应用简介 xff
  • 最详细UWB技术及特点介绍

    关注与分享 xff0c 是对原创最大的鼓励 这篇文章偏技术 xff0c 信息偏深 xff0c 建议大家可以先跳到感兴趣的章节阅读 xff1b 01 UWB简介 UWB是Ultra Wide Band缩写 xff0c 来源于很久之前的脉冲通信
  • 浅聊Matter协议 (原CHIP协议)

    聚焦 xff1a 芯产品 xff0c 芯市场 xff0c 芯资讯 因为Matter协议目前还没有发布 xff0c 标准只针对部分协会成员开放 xff1b 很多朋友可能听过这个名字 xff0c 然后知道是一个 上层 协议 xff0c 更多内容
  • 2021 MCU WiFi竞争新格局,国产MCU WiFi芯片盘点,附录2020/2021 MCU WiFi排行

    关注智联网事 iotthings xff1a 芯产品 xff0c 芯市场 xff0c 芯资讯 缺货 xff0c 是半导体产业2021年最主要的基调 xff0c 有公司拿不到产能 xff0c 有公司新芯片流片周期大幅拉长 xff1b 新冠病情
  • 芯科(Silabs) Matter 全栈解决方案,附录高质量Matter培训资

    关注智联网事 xff1a 芯产品 xff0c 芯市场 xff0c 芯资讯 对芯科的最初印象 xff0c 最早应该是2014 5年左右 xff0c 当时SI44xx系列渗透了很多市场的客户 xff0c 记得一个是低功耗的特性 xff0c 一个
  • WAF技术选型介绍

    WAF目前是企业必不可少的安全设备 xff0c 目前常见的开源技术选型包括 xff1a jxwafopenstarngx lua wafApache APISIXmodsecurity 介绍参考 xff1a https zhuanlan z
  • int 占多少字节

    char 1 int 4 long 8 float 4 double 8 xff08 1 xff09 使用VC xff0c int类型占4个字节 xff08 2 xff09 使用Turbo C xff0c int类型占2个字节 16位编译器
  • 海外LPWAN的王者是我,一文看懂Wi-Sun协议

    聚焦 xff1a 芯产品 xff0c 芯市场 xff0c 芯技术 注 xff1a 欢迎加入文章底部的 lt 物联坊间 gt 微信 刚刚毕业的我 xff0c 有参与城市照明系统的建设 xff0c 包括城市公交系统 xff0c 那个时候困扰我的
  • 22家国产汽车MCU公司及型号盘点

    专注芯片 xff0c 应用系统 xff0c 行销技能的公众号 如果有一家芯片MCU或模拟公司和你说 xff0c 他不做汽车方向芯片 xff0c 你可以内心欣喜的 xff0c 严肃的问一句 xff0c 为什么 xff1b 做汽车芯片 xff0
  • 2022 MCU公司交卷,总营收84.8亿人民币,排名第一和最后的分别是

    2022财报季结束 xff0c 我们看下上市MCU公司的最新排名 xff0c 毛利 xff0c 库存及库存周转率情况 xff1b 根据 Omdia 的数据 xff0c 2022 年中国 MCU 市场规模约为 82 亿美元 xff0c 小二统
  • 深度:旋转变压器原理,芯片,算法,选型

    之前介绍的新能源汽车电机控制器 MCU 和电动助力转向 EPS 文章中 xff0c 有提到电机的角度反馈可选择转旋转变压器方案 xff0c 今天做个分享 xff0c 欢迎留言交流 本文目录 xff1a 旋转变压器应用及参数概览 旋转变压器原
  • 实时微控制器的关键技术及国产玩家,国产DSP盘点

    小二用芯在写 xff0c 如果您觉得有帮助 xff0c 帮忙朋友圈推荐下 34 xff0c 感谢 xff01 在介绍OBC xff0c DCDC时候 xff0c 觉得有必要对主控芯片做个介绍 xff0c 比如为什么说数字电源的控制一般集成H
  • 天猫精灵的开发者生态

    文章转自 智联网事 欢迎关注 xff0c 每周一篇原创 xff0c 直至 No End https mp weixin qq com s biz 61 MzI3NDE2NDMwNQ 61 61 amp mid 61 2649905740 a
  • 蓝牙Mesh网络性能及网络特点总结(一)

    原文链接 xff1a 欢迎关注公众号 智联网事 xff0c 一周一篇原创文章 xff0c 一起探讨智联网 https mp weixin qq com s biz 61 MzI3NDE2NDMwNQ 61 61 amp mid 61 264
  • 华为物联网(IOT)开发者平台

    智联网事 关注与分享 xff0c 是对原创最大的鼓励 原文链接 https mp weixin qq com s biz 61 MzI3NDE2NDMwNQ 61 61 amp mid 61 2649905835 amp idx 61 1
  • Kubernetes v1.21.14二进制搭建单节点集群

    1 集群环境准备 1 1 主机规划 IP主机名主机角色操作系统安装组件192 168 11 71k8s master1master workerCentos7 9api server controller manager scheduler