Requests to a Kubernetes ClusterIP service time out

2024-03-17

I'm looking for help troubleshooting this basic scenario, which isn't working:

Three nodes installed with kubeadm on VirtualBox VMs, running on a MacBook:

sudo kubectl get nodes
NAME                STATUS    ROLES     AGE       VERSION
kubernetes-master   Ready     master    4h        v1.10.2
kubernetes-node1    Ready     <none>    4h        v1.10.2
kubernetes-node2    Ready     <none>    34m       v1.10.2

The VirtualBox VMs have two adapters: 1) host-only and 2) NAT. The node IPs of the guest machines are:

kubernetes-master (192.168.56.3)
kubernetes-node1  (192.168.56.4)
kubernetes-node2  (192.168.56.5)

I'm using the flannel pod network (I also tried Calico previously, with the same result).

When installing the master node I used the following command:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.56.3
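
(For completeness, a typical way flannel gets deployed on top of this; the manifest URL below is the commonly used one at the time and is an assumption on my part:)

sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml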

I deployed an nginx application and its pods are up, one pod per node:

nginx-deployment-64ff85b579-sk5zs   1/1       Running   0          14m       10.244.2.2   kubernetes-node2
nginx-deployment-64ff85b579-sqjgb   1/1       Running   0          14m       10.244.1.2   kubernetes-node1

I exposed them as a ClusterIP service:

sudo kubectl get services 
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP   22m
nginx-deployment   ClusterIP   10.98.206.211   <none>        80/TCP    14m
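
(For reference, the deployment and service can be created roughly like this; the exact commands are an assumption, reconstructed from the output above:)

sudo kubectl run nginx-deployment --image=nginx --replicas=2 --port=80
sudo kubectl expose deployment nginx-deployment --port=80 --type=ClusterIP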

Now the problem:

I ssh into kubernetes-node1 and curl the service using its cluster IP:

ssh 192.168.56.4
---
curl 10.98.206.211

Some requests work fine and return the nginx welcome page. I can see in the logs that those requests are always answered by the pod on the same node (kubernetes-node1). Other requests hang until they time out; I assume those are the ones routed to the pod on the other node (kubernetes-node2).

The same happens the other way around: when ssh'd into kubernetes-node2, the pod on that node logs the successful requests while the rest time out.

There seems to be some kind of networking problem where a node cannot reach pods on other nodes. How can I fix it?
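
(A check that narrows this down, using the pod IPs from the listing above: curl the pods directly, bypassing the service. If the cross-node curl also times out, the problem is in the pod network itself rather than in kube-proxy:)

curl 10.244.1.2   # pod on the same node: works
curl 10.244.2.2   # pod on kubernetes-node2: times out if the overlay is broken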

UPDATE:

I reduced the number of replicas to 1, so now there is a single pod, on kubernetes-node2.

If I ssh into kubernetes-node2, every curl works fine. From kubernetes-node1, every request times out.
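
(A quick way to confirm the service is actually targeting the remote pod; the output shown is illustrative:)

sudo kubectl get endpoints nginx-deployment
NAME               ENDPOINTS       AGE
nginx-deployment   10.244.2.2:80   15m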

UPDATE 2:

kubernetes-master ifconfig

cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::20a0:c7ff:fe6f:8271  prefixlen 64  scopeid 0x20<link>
        ether 0a:58:0a:f4:00:01  txqueuelen 1000  (Ethernet)
        RX packets 10478  bytes 2415081 (2.4 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 11523  bytes 2630866 (2.6 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:cd:ce:84:a9  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.56.3  netmask 255.255.255.0  broadcast 192.168.56.255
        inet6 fe80::a00:27ff:fe2d:298f  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:2d:29:8f  txqueuelen 1000  (Ethernet)
        RX packets 20784  bytes 2149991 (2.1 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 26567  bytes 26397855 (26.3 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.3.15  netmask 255.255.255.0  broadcast 10.0.3.255
        inet6 fe80::a00:27ff:fe09:f08a  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:09:f0:8a  txqueuelen 1000  (Ethernet)
        RX packets 12662  bytes 12491693 (12.4 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4507  bytes 297572 (297.5 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::c078:65ff:feb9:e4ed  prefixlen 64  scopeid 0x20<link>
        ether c2:78:65:b9:e4:ed  txqueuelen 0  (Ethernet)
        RX packets 6  bytes 444 (444.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6  bytes 444 (444.0 B)
        TX errors 0  dropped 15 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 464615  bytes 130013389 (130.0 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 464615  bytes 130013389 (130.0 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tunl0: flags=193<UP,RUNNING,NOARP>  mtu 1440
        tunnel   txqueuelen 1000  (IPIP Tunnel)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethb1098eb3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet6 fe80::d8a3:a2ff:fedf:4d1d  prefixlen 64  scopeid 0x20<link>
        ether da:a3:a2:df:4d:1d  txqueuelen 0  (Ethernet)
        RX packets 10478  bytes 2561773 (2.5 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 11538  bytes 2631964 (2.6 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

kubernetes-node1 ifconfig

cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.1.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::5cab:32ff:fe04:5b89  prefixlen 64  scopeid 0x20<link>
        ether 0a:58:0a:f4:01:01  txqueuelen 1000  (Ethernet)
        RX packets 199  bytes 41004 (41.0 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 331  bytes 56438 (56.4 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:0f:02:bb:ff  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.56.4  netmask 255.255.255.0  broadcast 192.168.56.255
        inet6 fe80::a00:27ff:fe36:741a  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:36:74:1a  txqueuelen 1000  (Ethernet)
        RX packets 12834  bytes 9685221 (9.6 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 9114  bytes 1014758 (1.0 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.3.15  netmask 255.255.255.0  broadcast 10.0.3.255
        inet6 fe80::a00:27ff:feb2:23a3  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:b2:23:a3  txqueuelen 1000  (Ethernet)
        RX packets 13263  bytes 12557808 (12.5 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5065  bytes 341321 (341.3 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.1.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::7815:efff:fed6:1423  prefixlen 64  scopeid 0x20<link>
        ether 7a:15:ef:d6:14:23  txqueuelen 0  (Ethernet)
        RX packets 483  bytes 37506 (37.5 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 483  bytes 37506 (37.5 KB)
        TX errors 0  dropped 15 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 3072  bytes 269588 (269.5 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3072  bytes 269588 (269.5 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth153293ec: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet6 fe80::70b6:beff:fe94:9942  prefixlen 64  scopeid 0x20<link>
        ether 72:b6:be:94:99:42  txqueuelen 0  (Ethernet)
        RX packets 81  bytes 19066 (19.0 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 129  bytes 10066 (10.0 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

kubernetes-node2 ifconfig

cni0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.244.2.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::4428:f5ff:fe8b:a76b  prefixlen 64  scopeid 0x20<link>
        ether 0a:58:0a:f4:02:01  txqueuelen 1000  (Ethernet)
        RX packets 184  bytes 36782 (36.7 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 284  bytes 36940 (36.9 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:7f:e9:79:cd  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.56.5  netmask 255.255.255.0  broadcast 192.168.56.255
        inet6 fe80::a00:27ff:feb7:ff54  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:b7:ff:54  txqueuelen 1000  (Ethernet)
        RX packets 12634  bytes 9466460 (9.4 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8961  bytes 979807 (979.8 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.3.15  netmask 255.255.255.0  broadcast 10.0.3.255
        inet6 fe80::a00:27ff:fed8:9210  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:d8:92:10  txqueuelen 1000  (Ethernet)
        RX packets 12658  bytes 12491919 (12.4 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4544  bytes 297215 (297.2 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.2.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::c832:e4ff:fe3e:f616  prefixlen 64  scopeid 0x20<link>
        ether ca:32:e4:3e:f6:16  txqueuelen 0  (Ethernet)
        RX packets 111  bytes 8466 (8.4 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 111  bytes 8466 (8.4 KB)
        TX errors 0  dropped 15 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 2940  bytes 258968 (258.9 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2940  bytes 258968 (258.9 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

UPDATE 3:

Kubelet logs:

kubernetes-master kubelet log: https://raw.githubusercontent.com/codependent/kubernetes-cluster-logs/master/kubernetes-master-kubelet-log

kubernetes-node1 kubelet log: https://raw.githubusercontent.com/codependent/kubernetes-cluster-logs/master/kubernetes-node1-kubelet-log

kubernetes-node2 kubelet log: https://raw.githubusercontent.com/codependent/kubernetes-cluster-logs/master/kubernetes-node2-kubelet-log

ip route:

Master

kubernetes-master:~$ ip route
default via 10.0.3.2 dev enp0s8 proto dhcp src 10.0.3.15 metric 100 
10.0.3.0/24 dev enp0s8 proto kernel scope link src 10.0.3.15 
10.0.3.2 dev enp0s8 proto dhcp scope link src 10.0.3.15 metric 100 
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1 
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink 
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
192.168.56.0/24 dev enp0s3 proto kernel scope link src 192.168.56.3 

Node1

kubernetes-node1:~$ ip route
default via 10.0.3.2 dev enp0s8 proto dhcp src 10.0.3.15 metric 100 
10.0.3.0/24 dev enp0s8 proto kernel scope link src 10.0.3.15 
10.0.3.2 dev enp0s8 proto dhcp scope link src 10.0.3.15 metric 100 
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink 
10.244.1.0/24 dev cni0 proto kernel scope link src 10.244.1.1 
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
192.168.56.0/24 dev enp0s3 proto kernel scope link src 192.168.56.4 

Node2

kubernetes-node2:~$ ip route
default via 10.0.3.2 dev enp0s8 proto dhcp src 10.0.3.15 metric 100 
10.0.3.0/24 dev enp0s8 proto kernel scope link src 10.0.3.15 
10.0.3.2 dev enp0s8 proto dhcp scope link src 10.0.3.15 metric 100 
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink 
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
192.168.56.0/24 dev enp0s3 proto kernel scope link src 192.168.56.5
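
(The routes above show that inter-node pod traffic goes out via flannel.1, i.e. VXLAN encapsulated in UDP port 8472. A diagnostic sketch to see whether the encapsulated packets leave on the right NIC while curling from another node:)

sudo tcpdump -ni enp0s3 udp port 8472   # host-only NIC: traffic is expected here
sudo tcpdump -ni enp0s8 udp port 8472   # NAT NIC: traffic here means flannel bound to the wrong interface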

iptables-save:

kubernetes-master iptables-save: https://raw.githubusercontent.com/codependent/kubernetes-cluster-logs/master/kubernetes-master-iptables-save

kubernetes-node1 iptables-save: https://raw.githubusercontent.com/codependent/kubernetes-cluster-logs/master/kubernetes-node1-iptables-save

kubernetes-node2 iptables-save: https://raw.githubusercontent.com/codependent/kubernetes-cluster-logs/master/kubernetes-node2-iptables-save


ANSWER:

I ran into a similar problem on a K8s cluster with Flannel. I had set up the VMs with a NAT NIC for internet connectivity and a host-only NIC for node-to-node communication. By default, Flannel was picking the NAT NIC for node-to-node communication, which obviously doesn't work in this scenario.

Before deploying, I modified the flannel manifest to set the --iface argument to the host-only NIC that should be chosen (enp0s8 in my case). In your case it looks like enp0s3 would be the correct NIC. After that, node-to-node communication worked fine.
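
(For reference, this is roughly where the argument goes in kube-flannel.yml; the image tag and surrounding fields vary by flannel version, so treat it as a sketch:)

      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=enp0s3    # force flannel onto the host-only NIC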

I forgot to mention that I also modified the kube-proxy manifest to include --cluster-cidr=10.244.0.0/16 and --proxy-mode=iptables, which seemed to be required as well.
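
(With kubeadm these settings live in the kube-proxy ConfigMap; a sketch of how to apply them, assuming the pod CIDR used at init time:)

sudo kubectl -n kube-system edit configmap kube-proxy
# in config.conf set:
#   clusterCIDR: 10.244.0.0/16
#   mode: iptables
sudo kubectl -n kube-system delete pod -l k8s-app=kube-proxy   # recreate the kube-proxy pods to pick up the change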
