Deploying K8s 1.18.20 with dual master nodes + keepalived

2023-05-16

There are plenty of ways to deploy Kubernetes. I went with a dual-master + single-node topology and used keepalived to deploy version 1.18.20. A few small issues came up along the way, but each was resolved, so I'm recording the process for anyone who needs it. Happy to learn and exchange ideas together!

[Deployment environment] ---- based on the environment already in use

master01:192.168.66.200

master02:192.168.66.201

node01:192.168.66.250

vip:192.168.66.199

[Procedure]

Step 1. Initial VM configuration (details omitted here)

See: K8s 1.15.0 版本部署安装_好好学习之乘风破浪的博客-CSDN博客
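The referenced post covers the details; for completeness, here is a minimal sketch of the standard kubeadm node prep, run on every node (the hostnames and IPs are this cluster's; the rest is the usual prerequisite set):

# systemctl stop firewalld && systemctl disable firewalld
# setenforce 0 && sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# swapoff -a && sed -i '/ swap / s/^/#/' /etc/fstab
# cat >> /etc/hosts <<EOF
192.168.66.200 k8s-master01
192.168.66.201 k8s-master02
192.168.66.250 k8s-node01
EOF
# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# sysctl --system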

Step 2. Configure keepalived on both master nodes

# yum install keepalived -y

# cd /etc/keepalived

Edit the keepalived.conf configuration file:

[root@k8s-master01 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
    interval 2
    weight -2
}

vrrp_instance VI_1 {
    state MASTER         #### MASTER on master01, BACKUP on master02
    interface ens33      #### set to the VM's actual NIC name
    virtual_router_id 51
    priority 100           #### 100 on master01, 90 on master02
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }

    track_script {
     check_nginx
    }

    virtual_ipaddress {
        192.168.66.199       #### the VIP
    }
}
[root@k8s-master02 keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
    interval 2
    weight -2
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
  
    track_script {
     check_nginx
    }
   
    virtual_ipaddress {
        192.168.66.199
    }
}

Configure the keepalived health-check script check_nginx.sh on both nodes. If nginx is not running, the script tries to start it; if nginx still will not come up, it stops keepalived so the VIP fails over to the other master. The script is identical on both masters:

[root@k8s-master01 keepalived]# cat check_nginx.sh
#!/bin/bash
# Count running nginx processes
count=$(ps aux | grep nginx | grep -v grep | wc -l)
if [ "$count" -eq 0 ]; then
    # nginx is down: try to start it, give it a moment, then re-check
    nginx
    sleep 3
    count=$(ps aux | grep nginx | grep -v grep | wc -l)
    if [ "$count" -eq 0 ]; then
        # nginx could not be started: stop keepalived so the VIP moves away
        systemctl stop keepalived.service
    fi
fi
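One easy-to-miss step: the script must be executable on both nodes, otherwise the vrrp_script check will silently fail:

# chmod +x /etc/keepalived/check_nginx.sh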

After configuring, enable and restart the keepalived service:

# systemctl enable keepalived  && systemctl start keepalived  && systemctl status keepalived
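To sanity-check the failover setup, verify that the VIP is bound on the active master (ens33 is the NIC name used above):

# ip addr show ens33 | grep 192.168.66.199

The VIP should appear on master01 first; stop keepalived there and it should move over to master02.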

If restarting keepalived fails with "Configuration file '/etc/keepalived/keepalived.conf' is not a regular non-executable file", the fix is described here:

K8s 配置高可用提示Configuration file ‘/etc/keepalived/keepalived.conf‘ is not a regular non-executable file_好好学习之乘风破浪的博客-CSDN博客

Step 3. Install Docker

See: k8s 1.18.20版本部署_好好学习之乘风破浪的博客-CSDN博客
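The referenced post has the full walkthrough; as a rough sketch, the usual CentOS 7 install looks like this (the Aliyun mirror below is an assumption, substitute whichever docker-ce repo you use):

# yum install -y yum-utils
# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# yum install -y docker-ce
# systemctl enable docker && systemctl start docker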

Step 4. Install the software packages

See: k8s 1.18.20版本部署_好好学习之乘风破浪的博客-CSDN博客
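Again the referenced post has the details; a minimal sketch of installing the matching 1.18.20 packages (the Aliyun Kubernetes yum repo below is an assumption):

# cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
# yum install -y kubelet-1.18.20 kubeadm-1.18.20 kubectl-1.18.20
# systemctl enable kubelet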

Step 5. Deploy the cluster

The following operations are all performed on the k8s-master01 node.

5.1 Generate the default initialization configuration
kubeadm config print init-defaults > kubeadm.yaml

[root@k8s-master01 ~]# cat  kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.66.199    #### change to the VIP address
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
  certSANs:                    #### added: issue the apiserver cert for both masters and the VIP
  - 192.168.66.200
  - 192.168.66.201
  - 192.168.66.199
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io  
kind: ClusterConfiguration
kubernetesVersion: v1.18.20   #### change to v1.18.20
controlPlaneEndpoint: 192.168.66.199:6443   #### added: the VIP address and port
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12  #### keep the default
  podSubnet: 10.2.0.0/16     #### added (should match the pod CIDR in calico.yaml)
scheduler: {}
 
---                                             #### added
apiVersion: kubeproxy.config.k8s.io/v1alpha1    #### added
kind: KubeProxyConfiguration                    #### added
mode: ipvs                                      #### added: run kube-proxy in IPVS mode

5.2 Run the initialization
kubeadm init --config kubeadm.yaml

On success it prints output like the following. (The 'key "apiServer" already set in map' warning in this captured run appeared because the certSANs had originally been added as a second apiServer block rather than merged into the existing one as shown above; kubeadm warned but proceeded.)

[root@k8s-master01 ~]# kubeadm init --config kubeadm.yaml
W0401 23:48:19.240599   47054 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta2", Kind:"ClusterConfiguration"}: error converting YAML to JSON: yaml: unmarshal errors:
  line 17: key "apiServer" already set in map
W0401 23:48:19.241866   47054 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.199 192.168.66.199 192.168.66.200 192.168.66.201 192.168.66.199]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.66.199 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.66.199 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0401 23:48:24.472613   47054 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0401 23:48:24.474030   47054 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.004197 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.66.199:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:bd8a870bc548c50dd036fcf84da9a7ff506903e785d767de5a022482011f8c58 \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.66.199:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:bd8a870bc548c50dd036fcf84da9a7ff506903e785d767de5a022482011f8c58 

5.3 Run the post-init setup as instructed

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

5.4 Check the cluster status

# kubectl get cs

kubectl get cs may show the scheduler and controller-manager as Unhealthy on 127.0.0.1:10251/10252; how to handle this is covered in:

k8s 1.18.20版本部署_好好学习之乘风破浪的博客-CSDN博客
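In short, the common fix (a sketch; these are the default kubeadm manifest paths) is to comment out the --port=0 flag in the scheduler and controller-manager static pod manifests on each master, after which kubelet recreates the pods automatically:

# sed -i 's/- --port=0/#- --port=0/' /etc/kubernetes/manifests/kube-scheduler.yaml
# sed -i 's/- --port=0/#- --port=0/' /etc/kubernetes/manifests/kube-controller-manager.yaml
# systemctl restart kubelet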

5.5 Deploy the calico component

The calico.yaml file:

Link: https://pan.baidu.com/s/1-zvq1Ug4Gvny4ek5u0HWqw
Extraction code: 2sbq

Apply the calico manifest, then watch the cluster pods until everything is Running (commands sketched below).
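A quick sketch, assuming calico.yaml sits in the current directory:

# kubectl apply -f calico.yaml
# kubectl get pods -n kube-system -o wide

Once the calico pods are up, the coredns pods should also move from Pending to Running.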

Step 6. Add the remaining nodes

(1) Add master02 to the cluster by running the control-plane join command directly:

 kubeadm join 192.168.66.199:6443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:bd8a870bc548c50dd036fcf84da9a7ff506903e785d767de5a022482011f8c58 \
>     --control-plane 
[preflight] Running pre-flight checks

Preflight fails, complaining that certificate files are missing.

Copy the certificate files from master01 over to master02:

# cd /etc/kubernetes/pki

# scp ca.* 192.168.66.201:/etc/kubernetes/pki/

#  scp sa.* 192.168.66.201:/etc/kubernetes/pki/

# scp front* 192.168.66.201:/etc/kubernetes/pki/

The etcd CA files are needed as well:

# cd /etc/kubernetes/pki/etcd

# scp ca.* 192.168.66.201:/etc/kubernetes/pki/etcd/

Once the files are in place, run the join again:

[root@k8s-master02 kubernetes]#   kubeadm join 192.168.66.199:6443 --token abcdef.0123456789abcdef     --discovery-token-ca-cert-hash sha256:bd8a870bc548c50dd036fcf84da9a7ff506903e785d767de5a022482011f8c58     --control-plane 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using the existing "front-proxy-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master02 localhost] and IPs [192.168.66.201 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master02 localhost] and IPs [192.168.66.201 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master02 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.66.201 192.168.66.199 192.168.66.200 192.168.66.201 192.168.66.199]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W0402 00:27:01.015532    5719 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0402 00:27:01.032698    5719 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0402 00:27:01.034211    5719 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
{"level":"warn","ts":"2023-04-02T00:27:17.644+0800","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://192.168.66.201:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s-master02 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master02 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@k8s-master02 kubernetes]# 

Follow the printed instructions on master02:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Checking the node list shows that master02 has joined the cluster and is healthy.

(2) Add node01 to the cluster by running the worker join command directly:

[root@k8s-node01 ~]# kubeadm join 192.168.66.199:6443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:bd8a870bc548c50dd036fcf84da9a7ff506903e785d767de5a022482011f8c58

Check node01's status and the pod status across the cluster (commands sketched below).
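A sketch of the verification commands:

# kubectl get nodes -o wide
# kubectl get pods -A -o wide

All three nodes should report Ready, and the calico, coredns, and kube-proxy pods should be Running on each node.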

With that, both the master and worker nodes have been added to the cluster successfully!

Prometheus monitoring will be added in a follow-up post. Stay tuned!

