Kubernetes 1.9 has been released, and it ships with CoreDNS. CoreDNS can now be installed as the default service-discovery DNS through kubeadm, a toolkit that installs Kubernetes in a single step.
Currently, CoreDNS is Alpha in Kubernetes 1.9. There is a roadmap for CoreDNS to graduate to Beta in release 1.10 and eventually replace kube-dns as the default DNS.
Note that when switching from kube-dns to CoreDNS today, the configuration shipped with kube-dns (stub zones, federations, ...) does not carry over; the cluster falls back to the default CoreDNS configuration. Translating the configuration from kube-dns will be supported starting with the upcoming Kubernetes release (v1.10), when CoreDNS becomes Beta.
Understanding the CoreDNS configuration
This is the default CoreDNS configuration installed by kubeadm. It is stored in a Kubernetes ConfigMap named coredns.
# kubectl -n kube-system get configmap coredns -oyaml
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local 10.96.0.0/12 {
           pods insecure
           upstream /etc/resolv.conf
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
    }
kind: ConfigMap
metadata:
  creationTimestamp: 2017-12-21T12:55:15Z
  name: coredns
  namespace: kube-system
  resourceVersion: "161"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
  uid: 30bf0882-e64e-11e7-baf6-0cc47a8055d6
The Corefile section is the configuration of CoreDNS. It is based on the following CoreDNS plugins:
- errors: Errors are logged to stdout.
- health: The health of CoreDNS is reported at http://localhost:8080/health.
- kubernetes: CoreDNS answers DNS queries based on the IPs of the Services and Pods in Kubernetes. You can find more details here.
  Options of the kubernetes plugin: the cluster domain and the service CIDR are defined through kubeadm, by default as cluster.local and 10.96.0.0/12 respectively. They can be changed to the desired values with kubeadm's --service-dns-domain and --service-cidr flags (see the sketch after this list).
  The pods insecure option is provided for backward compatibility with kube-dns.
  upstream is used to resolve services that point to external hosts (external services).
- prometheus: Metrics from CoreDNS are exposed in Prometheus format at http://localhost:9153/metrics.
- proxy: Any query that is not within the Kubernetes cluster domain is forwarded to a predefined resolver (/etc/resolv.conf).
- cache: Enables the frontend cache.
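For illustration, a cluster domain and service CIDR other than the defaults could be chosen at install time. The values example.local and 10.64.0.0/12 below are placeholders, not recommendations; combined with the CoreDNS feature gate described later, they end up on the kubernetes line of the generated Corefile:
# kubeadm init --feature-gates CoreDNS=true \
      --service-dns-domain example.local \
      --service-cidr 10.64.0.0/12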
We can modify the default behavior by editing this ConfigMap. A restart of the CoreDNS pod is required for the changes to take effect.
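As a minimal sketch of such a change, assume a made-up private zone consul.local served by a resolver at 10.150.0.1 (neither is part of the default configuration). An extra server block appended to the Corefile plays roughly the role of a kube-dns stub domain:
# kubectl -n kube-system edit configmap coredns
    ...
    # added after the existing ".:53 { ... }" block inside the Corefile
    consul.local:53 {
        errors
        cache 30
        proxy . 10.150.0.1
    }
The CoreDNS pod can then be deleted so its Deployment recreates it with the new Corefile; this assumes the pods carry the k8s-app=kube-dns label that kubeadm uses for compatibility with the existing kube-dns Service:
# kubectl -n kube-system delete pod -l k8s-app=kube-dns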
Installing CoreDNS in a new Kubernetes cluster
To install CoreDNS (rather than kube-dns) in a new Kubernetes cluster, we need to use the feature-gates flag and set it to CoreDNS=true. Use the following command when installing a new Kubernetes cluster to make CoreDNS the default DNS service.
# kubeadm init --feature-gates CoreDNS=true
# kubeadm init --feature-gates CoreDNS=true
[init] Using Kubernetes version: v1.9.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.09.1-ce. Max validated version: 17.03
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [sandeep2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 147.75.107.43]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 31.502217 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node sandeep2 as master by adding a label and a taint
[markmaster] Master sandeep2 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 4cd282.a826a13b3705a4ec
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
If you see the following line in the output while deploying Kubernetes, it confirms that CoreDNS has been installed.
[addons] Applied essential addon: CoreDNS
Updating an existing cluster to use CoreDNS
If you have an existing cluster, you can also use the kubeadm upgrade command to upgrade your DNS service to CoreDNS and replace kube-dns. Before applying the change, you can check which CoreDNS version will be installed by running kubeadm upgrade plan with the feature-gates flag set to CoreDNS=true.
Check the CoreDNS version that will be installed by the upgrade
# kubeadm upgrade plan --feature-gates CoreDNS=true
# kubeadm upgrade plan --feature-gates CoreDNS=true
...
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT      AVAILABLE
Kubelet     1 x v1.9.0   v1.10.0-alpha.1

Upgrade to the latest experimental version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.9.0    v1.10.0-alpha.1
Controller Manager   v1.9.0    v1.10.0-alpha.1
Scheduler            v1.9.0    v1.10.0-alpha.1
Kube Proxy           v1.9.0    v1.10.0-alpha.1
CoreDNS              1.0.1     1.0.1
Etcd                 3.1.10    3.1.10
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.10.0-alpha.1
Note: Before you can perform this upgrade, you have to update kubeadm to v1.10.0-alpha.1.
You can then run kubeadm upgrade apply with feature-gates CoreDNS=true to upgrade the cluster with CoreDNS as the default DNS.
# kubeadm upgrade apply <version> --feature-gates CoreDNS=true
# kubeadm upgrade apply v1.10.0-alpha.1 --feature-gates CoreDNS=true
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/version] You have chosen to change the cluster version to "v1.10.0-alpha.1"
[upgrade/versions] Cluster version: v1.10.0-alpha.1
[upgrade/versions] kubeadm version: v1.9.0
[upgrade/version] Found 1 potential version compatibility errors but skipping since the --force flag is set:
- Specified version to upgrade to "v1.10.0-alpha.1" is at least one minor release higher than the kubeadm minor release (10 > 9). Such an upgrade is not supported
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler]
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.10.0-alpha.1"...
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests781134294"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests781134294/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests781134294/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests781134294/kube-scheduler.yaml"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests038673725/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests038673725/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests038673725/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.10.0-alpha.1". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets in turn.
Verifying the CoreDNS service
To verify that CoreDNS is running, we can check the pod status and the deployment on the node. Note that the CoreDNS Service keeps the name "kube-dns", which ensures a smooth transition of service discovery when upgrading from kube-dns to CoreDNS.
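For example, listing the Service (a quick check, assuming the kube-system namespace and the default naming) should still show it under the name kube-dns, with its original cluster IP:
# kubectl -n kube-system get svc kube-dns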
Check the pod status
# kubectl -n kube-system get pods -o wide
NAME                               READY     STATUS    RESTARTS   AGE       IP              NODE
coredns-546545bc84-ttsh4           1/1       Running   0          5h        10.32.0.61      sandeep2
etcd-sandeep2                      1/1       Running   0          5h        147.75.107.43   sandeep2
kube-apiserver-sandeep2            1/1       Running   0          4h        147.75.107.43   sandeep2
kube-controller-manager-sandeep2   1/1       Running   0          4h        147.75.107.43   sandeep2
kube-proxy-fkxmg                   1/1       Running   0          4h        147.75.107.43   sandeep2
kube-scheduler-sandeep2            1/1       Running   4          5h        147.75.107.43   sandeep2
weave-net-jhjtc                    2/2       Running   0          5h        147.75.107.43   sandeep2
Check the deployment
# kubectl -n kube-system get deployments
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
coredns   1         1         1            1           4h
We can check that CoreDNS works correctly with some basic dig commands:
# dig @10.32.0.61 kubernetes.default.svc.cluster.local +noall +answer
; <<>> DiG 9.10.3-P4-Ubuntu <<>> @10.32.0.61 kubernetes.default.svc.cluster.local +noall +answer
; (1 server found)
;; global options: +cmd
kubernetes.default.svc.cluster.local. 23 IN A 10.96.0.1
# dig @10.32.0.61 ptr 1.0.96.10.in-addr.arpa. +noall +answer
; <<>> DiG 9.10.3-P4-Ubuntu <<>> @10.32.0.61 ptr 1.0.96.10.in-addr.arpa. +noall +answer
; (1 server found)
;; global options: +cmd
1.0.96.10.in-addr.arpa. 30 IN PTR kubernetes.default.svc.cluster.local.
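As an optional last check from inside the cluster (a sketch assuming the busybox:1.28 image is reachable and that pods resolve through the kube-dns Service IP written into their /etc/resolv.conf), a throwaway pod can look up a Service name end to end instead of querying the CoreDNS pod IP directly:
# kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default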