This article covers installing a Kubernetes cluster in an intranet (offline) environment; the Linux environment is CentOS 7.6.
The main steps are:
- Install Docker
- Create a private Docker image registry (registry)
- Install Kubernetes
- Install flannel
1. Install Docker offline
Download the offline packages from https://download.docker.com/linux/centos/7/x86_64/stable/Packages/ :
docker-ce-cli-18.09.7-3.el7.x86_64.rpm
docker-ce-18.09.7-3.el7.x86_64.rpm
container-selinux-2.107-1.el7_6.noarch.rpm
containerd.io-1.2.2-3.el7.x86_64.rpm
- Upload them to the target server and install with: rpm -ivh *.rpm
- Run docker info to check the Docker installation
- Change the cgroup driver to systemd to match k8s: vim /etc/docker/daemon.json
{
"registry-mirrors": ["https://registry.docker-cn.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
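A syntax error in daemon.json prevents dockerd from starting, so it can be worth validating the file before the restart. A small sketch, assuming a Python interpreter is available (shown with python3; on a stock CentOS 7 host the command is python):

```shell
# Validate daemon.json before restarting docker; invalid JSON here
# would keep dockerd from starting at all.
python3 -m json.tool /etc/docker/daemon.json > /dev/null \
  && echo "daemon.json OK" \
  || echo "daemon.json is missing or invalid"
```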
- systemctl daemon-reload
- systemctl restart docker
- Enable start on boot: systemctl enable docker
2. Create a private Docker image registry (registry)
1. On a machine with internet access, pull the image:
docker pull registry:2
2. Export the image to a tar archive:
docker save -o registry.tar registry:2
3. Upload registry.tar to the offline server and import it:
docker load -i registry.tar
4. Start the registry:
docker run -d -v /registry:/var/lib/registry -p 5000:5000 --restart=always --name registry registry:2
5. On each k8s cluster node, edit Docker's daemon.json so the daemon accepts this registry over plain HTTP (insecure-registries):
{
"registry-mirrors": ["https://registry.docker-cn.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"insecure-registries": ["10.209.68.12:5000"]
}
systemctl daemon-reload
systemctl restart docker
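Once the daemon trusts the registry, local images can be retagged and pushed into it. A dry-run sketch (it only prints the docker commands; remove the echo to execute them — the image list is illustrative, and the registry address is the one from daemon.json above):

```shell
#!/bin/bash
# Retag local images for the private registry and push them
# (dry run: the docker commands are printed, not executed).
REG=10.209.68.12:5000           # registry address from daemon.json above
for img in k8s.gcr.io/pause:3.2 k8s.gcr.io/coredns:1.6.7; do
  # keep only the final name:tag, e.g. k8s.gcr.io/pause:3.2 -> pause:3.2
  target="$REG/${img##*/}"
  echo docker tag "$img" "$target"
  echo docker push "$target"
done
```

Afterwards, `curl http://10.209.68.12:5000/v2/_catalog` should list the pushed repositories.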
3. Install k8s
1. Environment preparation
- Disable the firewall, SELinux, and swap
# Run on both the master and the worker nodes
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
# Disable swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab
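The grep -v step above removes the swap line from /etc/fstab outright; an alternative sketch that comments the line out instead (shown here against a demo copy — point FSTAB at /etc/fstab to apply it for real):

```shell
# Demonstrated on a copy; set FSTAB=/etc/fstab to apply for real.
FSTAB=./fstab.demo
printf '%s\n' '/dev/mapper/centos-root / xfs defaults 0 0' \
              '/dev/mapper/centos-swap swap swap defaults 0 0' > "$FSTAB"
# Comment out any line that mounts swap, keeping a .bak backup copy.
sed -i.bak '/\sswap\s/s/^/#/' "$FSTAB"
cat "$FSTAB"   # the swap line comes back prefixed with '#'
```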
- hostnamectl set-hostname master
When naming the other hosts, replace master with the proper hostname (node1, node2, and so on).
- Configure kernel parameters so that bridged IPv4 traffic is passed to the iptables chains
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
modprobe br_netfilter   # these bridge sysctls only exist once this module is loaded
sysctl --system
2. Install kubeadm/kubectl/kubelet
On a server with internet access, download the required rpm packages.
- Configure the kubeadm yum repo
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
- yum install --downloadonly --downloaddir=/home/centos/k8s kubeadm kubectl kubelet
- Upload the rpm packages to the offline server and install them:
rpm -ivh *.rpm
Enable start on boot: systemctl enable kubelet.service
- Get the list of required images
# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.18.0
k8s.gcr.io/kube-controller-manager:v1.18.0
k8s.gcr.io/kube-scheduler:v1.18.0
k8s.gcr.io/kube-proxy:v1.18.0
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7
- Prepare the images offline
Write a script that pulls the images from the Aliyun mirror and retags them as k8s.gcr.io:
# cat pull-images.sh
#!/bin/bash
images=(
kube-apiserver:v1.18.0
kube-controller-manager:v1.18.0
kube-scheduler:v1.18.0
kube-proxy:v1.18.0
pause:3.2
etcd:3.4.3-0
coredns:1.6.7
)
for imageName in ${images[@]};
do
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName}
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName} k8s.gcr.io/${imageName}
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName}
done
- Run the script, then check the images
# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.18.0 43940c34f24f 7 days ago 117MB
k8s.gcr.io/kube-apiserver v1.18.0 74060cea7f70 7 days ago 173MB
k8s.gcr.io/kube-controller-manager v1.18.0 d3e55153f52f 7 days ago 162MB
k8s.gcr.io/kube-scheduler v1.18.0 a31f78c7c8ce 7 days ago 95.3MB
k8s.gcr.io/pause 3.2 80d28bedfe5d 6 weeks ago 683kB
k8s.gcr.io/coredns 1.6.7 67da37a9a360 2 months ago 43.8MB
k8s.gcr.io/etcd 3.4.3-0 303ce5db0e90 5 months ago
- Pack the images
Write a script that saves each image to its own tar file:
# cat save-images.sh
#!/bin/bash
images=(
kube-apiserver:v1.18.0
kube-controller-manager:v1.18.0
kube-scheduler:v1.18.0
kube-proxy:v1.18.0
pause:3.2
etcd:3.4.3-0
coredns:1.6.7
)
for imageName in ${images[@]};
do
docker save -o `echo ${imageName}|awk -F ':' '{print $1}'`.tar k8s.gcr.io/${imageName}
done
Compress the tar files and upload the archive to the offline server:
tar czvf kubeadm-images-1.18.0.tar.gz *.tar
- Import the images
On each node to be installed, load the offline images (or push them into the private registry for shared use):
# cat load-image.sh
#!/bin/bash
cd /root/kubeadm-images-1.18.0
for i in *.tar
do
docker load -i "$i"
done
Import the images:
# ./load-image.sh
3. Initialize the master node
kubeadm init --apiserver-advertise-address 10.209.69.12 --apiserver-bind-port 6443 --kubernetes-version 1.18.0 --pod-network-cidr 10.244.0.0/16 --service-cidr 10.1.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
4. Join worker nodes
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.209.69.12:6443 --token voj8z6.ytej05mfnul5gci7 \
--discovery-token-ca-cert-hash sha256:d12c6150f5752238e8eabe81403ff4defaf2aeb1a1c159ed7310e027b367b57b
4. Install flannel
Download the file https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
On a machine with internet access, pull every image the file references,
then import them into the private registry:
docker tag xxx:vxxx 10.209.69.12:5000/xxx:vxxx
docker push 10.209.69.12:5000/xxx:vxxx
Edit the yml so the images point at the private registry
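One way to do that rewrite in bulk — assuming the flannel images live under quay.io/coreos/ (check the yml you actually downloaded) and using the registry address from this document. Shown against a sample line; apply the same sed with -i to kube-flannel.yml:

```shell
# Rewrite a quay.io/coreos/ image reference to the private registry.
echo 'image: quay.io/coreos/flannel:v0.12.0-amd64' \
  | sed 's#quay.io/coreos/#10.209.69.12:5000/#'
# prints: image: 10.209.69.12:5000/flannel:v0.12.0-amd64
```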
Deploy:
kubectl apply -f flannel.yml
Check that the nodes are Ready:
kubectl get nodes