I am setting up a Kubernetes cluster with 1 master and 2 nodes, and I am trying to create SkyDNS based on the following:
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v11
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v11
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v11
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v11
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: etcd
        image: gcr.io/google_containers/etcd-amd64:2.2.1
        # resources:
        #   # TODO: Set memory limits when we've profiled the container for large
        #   # clusters, then set request = limit to keep this container in
        #   # guaranteed class. Currently, this container falls into the
        #   # "burstable" category so the kubelet doesn't backoff from restarting it.
        #   limits:
        #     cpu: 100m
        #     memory: 500Mi
        #   requests:
        #     cpu: 100m
        #     memory: 50Mi
        command:
        - /usr/local/bin/etcd
        - -data-dir
        - /var/etcd/data
        - -listen-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -advertise-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -initial-cluster-token
        - skydns-etcd
        volumeMounts:
        - name: etcd-storage
          mountPath: /var/etcd/data
      - name: kube2sky
        image: gcr.io/google_containers/kube2sky:1.14
        # resources:
        #   # TODO: Set memory limits when we've profiled the container for large
        #   # clusters, then set request = limit to keep this container in
        #   # guaranteed class. Currently, this container falls into the
        #   # "burstable" category so the kubelet doesn't backoff from restarting it.
        #   limits:
        #     cpu: 100m
        #     # Kube2sky watches all pods.
        #     memory: 200Mi
        #   requests:
        #     cpu: 100m
        #     memory: 50Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          # successThreshold: 1
          # failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 30
          timeoutSeconds: 5
        args:
        # command = "/kube2sky"
        - --domain=cluster.local
      - name: skydns
        image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 50Mi
        args:
        # command = "/skydns"
        - -machines=http://127.0.0.1:4001
        - -addr=0.0.0.0:53
        - -ns-rotate=false
        - -domain=cluster.local
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      - name: healthz
        image: gcr.io/google_containers/exechealthz:1.0
        # resources:
        #   # keep request = limit to keep this container in guaranteed class
        #   limits:
        #     cpu: 10m
        #     memory: 20Mi
        #   requests:
        #     cpu: 10m
        #     memory: 20Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
        - -port=8080
        ports:
        - containerPort: 8080
          protocol: TCP
      volumes:
      - name: etcd-storage
        emptyDir: {}
      dnsPolicy: Default  # Don't use cluster DNS.
However, SkyDNS spits out the following:
> $ kubectl logs kube-dns-v11-k07j9 --namespace=kube-system skydns
> 2016/04/18 12:47:05 skydns: falling back to default configuration, could not read from etcd: 100: Key not found (/skydns) [1]
> 2016/04/18 12:47:05 skydns: ready for queries on cluster.local. for tcp://0.0.0.0:53 [rcache 0]
> 2016/04/18 12:47:05 skydns: ready for queries on cluster.local. for udp://0.0.0.0:53 [rcache 0]
> 2016/04/18 12:47:11 skydns: failure to forward request "read udp 192.168.122.1:53: i/o timeout"
> 2016/04/18 12:47:15 skydns: failure to forward request "read udp 192.168.122.1:53: i/o timeout"
> (the same "failure to forward request" message repeats every ~4 seconds through 12:48:03)
After looking into it further, I realized what 192.168.122.1 is: it is the virtual switch for KVM. Why is SkyDNS trying to reach my virtual switch, i.e. the VM host's DNS server?
SkyDNS defaults its forwarding nameservers to the ones listed in /etc/resolv.conf. Since SkyDNS runs inside the kube-dns pod as a cluster add-on, it inherits its /etc/resolv.conf from its host, as described in the kube-dns doc: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns#inheriting-dns-from-the-node
From your question it looks like your host's /etc/resolv.conf is configured to use 192.168.122.1 as its nameserver, so that address became the forwarding server in your SkyDNS configuration. I believe 192.168.122.1 is not routable from your Kubernetes cluster, which is why you are seeing "failure to forward request" errors in the kube-dns logs.
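As a minimal illustration of that inheritance (the sample file and its contents below are made up for demonstration): without a `-nameservers` flag, SkyDNS derives its forwarders from the `nameserver` lines of the pod's /etc/resolv.conf, which with `dnsPolicy: Default` is copied from the node.

```shell
# Fake resolv.conf standing in for what the pod inherits from a KVM host
cat > /tmp/resolv.conf.sample <<'EOF'
nameserver 192.168.122.1
search example.com
EOF

# Extract the nameserver addresses the way a resolver would pick them up
awk '/^nameserver/ {print $2}' /tmp/resolv.conf.sample
# → 192.168.122.1
```

On a real node, inspecting /etc/resolv.conf on the host (or via `kubectl exec` inside the pod) should show the same 192.168.122.1 entry.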
The simplest fix for this problem is to supply a reachable DNS server to SkyDNS as a flag in your RC config. Here is an example (it is just your RC config, but with the -nameservers flag added to the SkyDNS container spec):
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v11
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v11
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v11
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v11
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: etcd
        image: gcr.io/google_containers/etcd-amd64:2.2.1
        # resources:
        #   # TODO: Set memory limits when we've profiled the container for large
        #   # clusters, then set request = limit to keep this container in
        #   # guaranteed class. Currently, this container falls into the
        #   # "burstable" category so the kubelet doesn't backoff from restarting it.
        #   limits:
        #     cpu: 100m
        #     memory: 500Mi
        #   requests:
        #     cpu: 100m
        #     memory: 50Mi
        command:
        - /usr/local/bin/etcd
        - -data-dir
        - /var/etcd/data
        - -listen-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -advertise-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -initial-cluster-token
        - skydns-etcd
        volumeMounts:
        - name: etcd-storage
          mountPath: /var/etcd/data
      - name: kube2sky
        image: gcr.io/google_containers/kube2sky:1.14
        # resources:
        #   # TODO: Set memory limits when we've profiled the container for large
        #   # clusters, then set request = limit to keep this container in
        #   # guaranteed class. Currently, this container falls into the
        #   # "burstable" category so the kubelet doesn't backoff from restarting it.
        #   limits:
        #     cpu: 100m
        #     # Kube2sky watches all pods.
        #     memory: 200Mi
        #   requests:
        #     cpu: 100m
        #     memory: 50Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          # successThreshold: 1
          # failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 30
          timeoutSeconds: 5
        args:
        # command = "/kube2sky"
        - --domain=cluster.local
      - name: skydns
        image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 50Mi
        args:
        # command = "/skydns"
        - -machines=http://127.0.0.1:4001
        - -addr=0.0.0.0:53
        - -ns-rotate=false
        - -domain=cluster.local
        - -nameservers=8.8.8.8:53,8.8.4.4:53  # Adding this flag. Don't use double quotes.
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      - name: healthz
        image: gcr.io/google_containers/exechealthz:1.0
        # resources:
        #   # keep request = limit to keep this container in guaranteed class
        #   limits:
        #     cpu: 10m
        #     memory: 20Mi
        #   requests:
        #     cpu: 10m
        #     memory: 20Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
        - -port=8080
        ports:
        - containerPort: 8080
          protocol: TCP
      volumes:
      - name: etcd-storage
        emptyDir: {}
      dnsPolicy: Default  # Don't use cluster DNS.
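To roll out the change, delete and recreate the RC with the edited manifest, then confirm that forwarding works. The commands below are a sketch, not a definitive procedure: the manifest filename `skydns-rc.yaml` and the busybox test image are assumptions, and they require kubectl access to a running cluster.

```shell
# Replace the RC with the edited manifest (filename is an assumption)
kubectl --namespace=kube-system delete rc kube-dns-v11
kubectl --namespace=kube-system create -f skydns-rc.yaml

# Wait for the new kube-dns pod to come up
kubectl --namespace=kube-system get pods -l k8s-app=kube-dns

# External names should now be forwarded to 8.8.8.8/8.8.4.4 instead of
# the unreachable 192.168.122.1 (busybox image is an assumption)
kubectl run -it --rm dns-test --image=busybox --restart=Never -- \
  nslookup kubernetes.io
```

If the last lookup succeeds and the SkyDNS logs stop showing "failure to forward request", the new forwarders are in effect.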