Setup Kubernetes on a Raspberry Pi Cluster easily the official way!

2023-05-16

Reposted from: http://blog.hypriot.com/post/setup-kubernetes-raspberry-pi-cluster/

Kubernetes shares the pole position with Docker in the category “orchestration solutions for Raspberry Pi cluster”. However, its setup process was elaborate – until v1.4 and the kubeadm announcement. With that effort, Kubernetes changed this game completely and can now officially be up and running in no time.

I am very happy to announce that this blog post has been written in collaboration with Lucas Käldström, an independent maintainer of Kubernetes (his story is very interesting; you can read it in a CNCF blog post).

[Image: Raspberry Pi cluster]

Why Kubernetes?

As shown in my recent talk, there are many software suites available to manage a cluster of computers. There is Kubernetes, Docker Swarm, Mesos, OpenStack, Hadoop YARN, Nomad… just to name a few.

However, at Hypriot we have always been in love with tiny devices. So when working with an orchestrator, the maximum power we wanna use is what a Raspberry Pi provides. Why? We have IoT networks in mind, which will make up a large share of tomorrow’s IT infrastructure. At their edges, the power required by large orchestrators simply is not available.

This boundary of resources leads to several requirements that need to be checked before we start getting our hands dirty with an orchestrator:

  • Lightweight: The software should fit on a Raspberry Pi or smaller. As proven in the talk mentioned above, Kubernetes runs painlessly on a Raspberry Pi.
  • ARM compatible: The ARM CPU architecture is designed for low energy consumption while still delivering decent performance, which is why the Raspberry Pi runs an ARM CPU. Thanks to Lucas, Kubernetes is ARM compatible.
  • General purpose: Hadoop or Apache Spark are great for data analysis. But what if your use case changes? We prefer general-purpose software that lets you run anything. Kubernetes uses a container runtime (with Docker as the 100% supported runtime for the time being) that lets you run whatever you want.
  • Production ready: Since we compare Kubernetes against a production-ready Docker suite, let’s be fair and only choose equivalents. Kubernetes itself is production ready, and while the ARM port has some small issues, it works exactly as expected when going the official kubeadm route, which will also mature over time.

So, Kubernetes seems to be a compelling competitor to Docker Swarm. Let’s get our hands on it!


Wait – what about Kubernetes-on-ARM?

If you have followed the discussion of Kubernetes on ARM for some time, you probably know about Lucas’ project kubernetes-on-ARM. Since the beginning of the movement to bring Kubernetes to ARM in 2015, this project has always been the most stable and up-to-date one.

However, during 2016, Lucas’ contributions were successfully merged into the official Kubernetes repositories, so there is no longer any point in using the kubernetes-on-ARM project. In fact, the features of that project are far behind what’s now implemented in the official repos, and that has been Lucas’ goal from the beginning.

So if you’re going to use Kubernetes, please stick to the official repos now. According to the kubeadm documentation, the following setup is considered the official one for Kubernetes on ARM.


First: Flash HypriotOS onto your SD cards

As hardware, take at least two Raspberry Pis and make sure they are connected to each other and to the Internet.

First, we need an operating system. The fastest way to download and flash HypriotOS onto your SD cards is with our flash tool, like so:

flash --hostname node01 https://github.com/hypriot/image-builder-rpi/releases/download/v1.1.3/hypriotos-rpi-v1.1.3.img.zip

Provision all Raspberry Pis you have like this and boot them up.
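If you have several SD cards to flash, a small shell loop saves some typing. This is just a sketch, assuming three nodes named node01 through node03 (adjust the count to your cluster, and swap in a fresh SD card before each iteration):

for i in 01 02 03; do
  flash --hostname "node${i}" https://github.com/hypriot/image-builder-rpi/releases/download/v1.1.3/hypriotos-rpi-v1.1.3.img.zip
done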

Afterwards, SSH into the Raspberry Pis with

ssh pirate@node01.local

The password hypriot will grant you access.


Install Kubernetes

The installation requires root privileges. Obtain them with

sudo su -

To install Kubernetes and its dependencies, only a few commands are required. First, trust the Kubernetes APT key and add the official Kubernetes APT repository on every node:

$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
$ echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list

… and then just install kubeadm on every node:

$ apt-get update && apt-get install -y kubeadm

After the previous command has finished, initialize Kubernetes on the master node with

$ kubeadm init --pod-network-cidr 10.244.0.0/16

It is important that you add the --pod-network-cidr flag exactly as given here, because we will use flannel. Read the next notes about flannel if you wanna know why.

Some notes about flannel: We picked flannel here because that’s the only available solution for ARM at the moment (this is subject to change in the future though).

flannel can use the Kubernetes API to store metadata about the Pod CIDR allocations, and that is what it does in this example; therefore, we need to tell Kubernetes up front which subnet we want to use.

If you are connected via WiFi instead of Ethernet, add --api-advertise-addresses=<wifi-ip-address> as a parameter to kubeadm init in order to publish Kubernetes’ API via WiFi. Feel free to explore the other options that exist for kubeadm init.
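For example, if your WiFi interface had the (made-up) address 192.168.0.34, the call would look like this:

$ kubeadm init --pod-network-cidr 10.244.0.0/16 --api-advertise-addresses=192.168.0.34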

After Kubernetes has been initialized, the last lines of your terminal should look like this:

[Screenshot: the last lines of the kubeadm init output, including the kubeadm join command]

Next, as instructed by that output, let all other nodes join the cluster via the given kubeadm join command. It will look something like:

$ kubeadm join --token=bb14ca.e8bbbedf40c58788 192.168.0.34

After a few seconds, you should see all nodes in your cluster when executing the following on the master node:

$ kubectl get nodes

Your terminal should look like this:

[Screenshot: kubectl get nodes listing all cluster nodes]

Finally, we need to set up flannel as the Pod network driver. Run this on the master node:

$ curl -sSL https://rawgit.com/coreos/flannel/master/Documentation/kube-flannel.yml | sed "s/amd64/arm/g" | kubectl create -f -

Your terminal should look like this:

[Screenshot: kubectl create output for the flannel manifest]

Then wait until all flannel Pods and all other cluster-internal Pods are Running before you continue:

$ kubectl get po --all-namespaces
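If you don’t want to re-run that command by hand, kubectl can watch for changes until you interrupt it; the --watch flag is standard kubectl:

$ kubectl get po --all-namespaces --watch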

Nice, it seems like they are all Running:

[Screenshot: kubectl get po --all-namespaces with all Pods in the Running state]

That’s all for the setup of Kubernetes! Next, let’s actually spin up a service on the cluster!


Test your setup with a tiny service

Let’s start a simple service to see if the cluster can actually publish a service:

$ kubectl run hypriot --image=hypriot/rpi-busybox-httpd --replicas=3 --port=80

This command starts a set of containers called hypriot from the image hypriot/rpi-busybox-httpd and sets the port the containers listen on to 80. The service will be replicated with 3 containers.
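If you want to double-check what was created, you can inspect the Deployment and its Pods; a quick sketch (kubectl run labels the Pods it creates with run=hypriot, so we can filter on that):

$ kubectl get deployment hypriot
$ kubectl get pods -l run=hypriot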

Next, expose the Pods of the Deployment created above in a Service with a stable name and IP:

$ kubectl expose deployment hypriot --port 80
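The Service now has a stable cluster-internal IP that you can look up at any time (the actual IP on your cluster will differ):

$ kubectl get service hypriot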

Great! Now, let’s check if all three desired containers are up and running:

$ kubectl get endpoints hypriot

You should see three endpoints (= containers) like this:

[Screenshot: kubectl get endpoints hypriot showing three endpoints]

Let’s curl one of them to see if the service is up.
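For example, with a hypothetical endpoint IP taken from your own kubectl get endpoints output (yours will differ):

$ curl 10.244.1.13

Your terminal should look like this: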

[Screenshot: curl against one endpoint returning the service’s HTML]

The HTML is the response of the service. Good, it’s up and running! Next, let’s see how we can access it from outside the cluster!


Finally access your service from outside the cluster

We will now deploy an example Ingress Controller to manage incoming requests from the outside world to our tiny service. Also, in this example we’ll use Traefik as the load balancer. Read the following notes if you wanna know more about Ingress and Traefik.

In contrast to Docker Swarm, Kubernetes itself does not provide an option to define a specific port that you can use to access a service. According to Lucas, this is an important design decision: routing of incoming requests should be handled by a third party, such as a load balancer or a webserver, but not by the core product. The Kubernetes core should be lean and extensible, and encourage others to build tools on top of it for their specific needs.

Regarding load balancers in front of a cluster, there is the Ingress API object and some sample Ingress Controllers. Ingress is a built-in way of exposing Services to the outside world via an Ingress Controller that anyone can build. An Ingress rule defines how traffic should flow from the node the Ingress controller runs on to services inside of the cluster.

First, let’s deploy Traefik as the load balancer:

$ kubectl apply -f https://raw.githubusercontent.com/hypriot/rpi-traefik/master/traefik-k8s-example.yaml

Label the node you want to be the load balancer; the Traefik Ingress Controller will then land on the node you specified. Run:

$ kubectl label node <load balancer-node> nginx-controller=traefik
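For example, if the node that should act as the load balancer is called node02 (a made-up name; check kubectl get nodes for your real node names), the command becomes:

$ kubectl label node node02 nginx-controller=traefik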

Lastly, create an Ingress object that makes Traefik load balance traffic on port 80 to the hypriot service:

$ cat > hypriot-ingress.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hypriot
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: hypriot
          servicePort: 80
EOF
$ kubectl apply -f hypriot-ingress.yaml
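To verify that the rule was created, you can list the Ingress object; this is standard kubectl, nothing specific to this setup:

$ kubectl get ingress hypriot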

Visit the load-balancing node’s IP address in your browser and you should see a nice web page:

[Screenshot: the Hypriot web page served through Traefik]

If you don’t see a website there yet, run:

$ kubectl get pods

… and make sure all hypriot Pods are in the Running state.

Wait until you see that all Pods are running, and a nice Hypriot website should appear!


Tear down the cluster

If you wanna reset the whole cluster to the state after a fresh install, just run this on each node:

$ kubeadm reset
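If you want to reset every node in one go from your workstation, a loop over SSH does the trick; a sketch assuming the node01 through node03 hostnames from above and HypriotOS’ default pirate user:

$ for node in node01 node02 node03; do ssh pirate@${node}.local "sudo kubeadm reset"; done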


Optional: Deploy the Kubernetes dashboard

The dashboard is a wonderful interface to visualize the state of the cluster. Start it with:

$ curl -sSL https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml | sed "s/amd64/arm/g" | kubectl create -f -

The following command shows the port the dashboard is exposed on at every node, using the NodePort feature of Services, which is another way to expose your Services to the outside of your cluster:

$ kubectl -n kube-system get service kubernetes-dashboard -o template --template="{{ (index .spec.ports 0).nodePort }}" | xargs echo

Then you can check out the dashboard on any node’s IP address at that port!
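For example, if the command above printed 31645 (a made-up port; yours will differ), the dashboard would be reachable on any node like this:

$ curl http://node01.local:31645

Or simply open that URL in your browser.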
