Installing a Kubernetes v1.11.1 Cluster with kubeadm

I previously tested installing and configuring a Kubernetes cluster from binaries in an offline environment. During that process I heard that kubeadm makes cluster setup much easier, so I decided to give it a try. There were still a few pitfalls along the way, but overall it is more convenient than the binary approach, since you do not have to create so many configuration files by hand. On the other hand, for understanding how Kubernetes actually works, it is probably not as instructive as the binary method. Also, because with kubeadm many of the components the cluster depends on run as containers on the Master node, it seems to consume noticeably more VM resources than the binary approach.

0. Introduction to kubeadm and Preparation

kubeadm is designed to be a simple way for new users to start trying Kubernetes out, possibly for the first time, a way for existing users to test their application on and stitch together a cluster easily, and also to be a building block in other ecosystem and/or installer tools with a larger scope.

kubeadm is a project written in Go (the source lives in the main Kubernetes repository on GitHub) that helps you deploy a Kubernetes cluster quickly. For now it is best treated as a tool for test environments; think twice before using it in production.

The environment used in this article:

  • Hypervisor: VirtualBox
  • OS: CentOS 7.3 minimal install
  • NICs: two network adapters, one in Host-Only mode and one in NAT mode
  • Network plan:
  • Master: 192.168.0.101
  • Nodes: 192.168.0.102-104

0.1 Disable SELinux

$ setenforce 0
$ sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
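As a quick sanity check (my own addition, not part of the original steps), getenforce should now report Permissive, or Disabled after a reboot:

$ getenforce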

0.2 Disable the firewall

$ systemctl stop firewalld
$ systemctl disable firewalld

0.3 Disable swap

$ swapoff -a
$ sed -i 's/.*swap.*/#&/' /etc/fstab
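To confirm swap is really off (a quick check I like to add), the Swap line of free -m should now show 0 total:

$ free -m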

0.4 Configure bridge forwarding parameters

$ cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

$ sysctl --system
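If sysctl --system complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded yet; a hedged suggestion based on common CentOS 7 behavior is to load it manually and re-apply the file:

$ modprobe br_netfilter
$ sysctl -p /etc/sysctl.d/k8s.conf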

0.5 Configure a China-local yum mirror

$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

0.6 Install some essential tools

$ yum install -y epel-release
$ yum install -y net-tools wget vim ntpdate

1. Install the Software Required by kubeadm (run on all nodes)

1.1 Install Docker

$ yum install -y docker
$ systemctl enable docker && systemctl start docker
$ # Enable the systemd service explicitly; without this, kubeadm init prints a warning later
$ systemctl enable docker.service
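One more thing worth checking here (my addition, not from the original steps): kubelet and Docker should use the same cgroup driver, or kubeadm init may fail later. The driver Docker is using can be inspected with:

$ docker info 2>/dev/null | grep -i 'cgroup driver'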

If you want to install the latest version of Docker from binaries instead, see my earlier article on installing Docker offline on Red Hat 7.3.

1.2 Install kubeadm, kubectl, and kubelet

$ yum install -y kubelet kubeadm kubectl kubernetes-cni
$ systemctl enable kubelet && systemctl start kubelet
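The command above installs whatever version the repository currently offers. To match this article exactly, yum can pin explicit package versions instead (assuming the Aliyun mirror still carries the 1.11.1 packages):

$ yum install -y kubelet-1.11.1 kubeadm-1.11.1 kubectl-1.11.1 kubernetes-cni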

After this step, kubelet is not yet able to run properly; it stays in the state described below.

The kubelet is now restarting every few seconds, as it waits in a crashloop for kubeadm to tell it what to do.
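This crashloop is expected at this point. If you want to see it for yourself (optional), the service status and the journal show kubelet restarting over and over:

$ systemctl status kubelet
$ journalctl -u kubelet --no-pager | tail -n 20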

2. Set Up the Master Node

Because Google's image registry cannot be reached from mainland China, the workaround is to pull the images from another registry and retag them. Running the shell script below takes care of this.

#!/bin/bash
images=(kube-proxy-amd64:v1.11.0 kube-scheduler-amd64:v1.11.0 kube-controller-manager-amd64:v1.11.0 kube-apiserver-amd64:v1.11.0
etcd-amd64:3.2.18 coredns:1.1.3 pause-amd64:3.1 kubernetes-dashboard-amd64:v1.8.3 k8s-dns-sidecar-amd64:1.14.9 k8s-dns-kube-dns-amd64:1.14.9
k8s-dns-dnsmasq-nanny-amd64:1.14.9)
# Pull each image from the Aliyun mirror and retag it under the k8s.gcr.io name kubeadm expects
for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName
  docker tag registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName k8s.gcr.io/$imageName
  #docker rmi registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName
done
# kubeadm also looks for pause:3.1 (without the -amd64 suffix); retag by name rather than
# by a hard-coded image ID so the script keeps working if the image ID changes
docker tag registry.cn-hangzhou.aliyuncs.com/k8sth/pause-amd64:3.1 k8s.gcr.io/pause:3.1
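After the script finishes, a quick way to verify that all the retagged images are in place (my own check) is:

$ docker images | grep 'k8s.gcr.io'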

Next, initialize the Master node. Since my VM has two NICs, I need to tell the apiserver which address to advertise.

[root@devops-101 ~]# kubeadm init --kubernetes-version=v1.11.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.0.101
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
I0724 08:36:35.636931 3409 kernel_validator.go:81] Validating kernel version
I0724 08:36:35.637052 3409 kernel_validator.go:96] Validating kernel config
	[WARNING Hostname]: hostname "devops-101" could not be reached
	[WARNING Hostname]: hostname "devops-101" lookup devops-101 on 172.20.10.1:53: no such host
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [devops-101 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [devops-101 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [devops-101 localhost] and IPs [192.168.0.101 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 46.002877 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node devops-101 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node devops-101 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "devops-101" as an annotation
[bootstraptoken] using token: wkj0bo.pzibll6rd9gyi5z8
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.0.101:6443 --token wkj0bo.pzibll6rd9gyi5z8 --discovery-token-ca-cert-hash sha256:51985223a369a1f8c226f3ccdcf97f4ad5ff201a7c8c708e1636eea0739c0f05

Output like the above means the Master node has been initialized successfully. To manage the cluster as a regular user, follow the printed instructions; to manage it as root, run the commands below instead.

[root@devops-101 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
[root@devops-101 ~]# kubectl get nodes
NAME         STATUS     ROLES     AGE       VERSION
devops-101   NotReady   master    7m        v1.11.1
[root@devops-101 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-8sd6g             0/1       Pending   0          7m
kube-system   coredns-78fcdf6894-lgvd9             0/1       Pending   0          7m
kube-system   etcd-devops-101                      1/1       Running   0          6m
kube-system   kube-apiserver-devops-101            1/1       Running   0          6m
kube-system   kube-controller-manager-devops-101   1/1       Running   0          6m
kube-system   kube-proxy-bhmj8                     1/1       Running   0          7m
kube-system   kube-scheduler-devops-101            1/1       Running   0          6m

As you can see, the node is not Ready yet and the two CoreDNS pods are not running either; the pod network still has to be installed.

3. Network Configuration on the Master Node

I chose Flannel for this.

kubeadm only supports Container Network Interface (CNI) based networks (and does not support kubenet).

Adjust the system settings first.

[root@devops-101 ~]# sysctl net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-iptables = 1
[root@devops-101 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds created

The Master does not become Ready immediately after this succeeds; wait a few minutes and every component should report a normal state.
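While waiting, the rollout can be watched live with the -w flag (just a convenience, not a required step; press Ctrl-C to stop):

$ kubectl get pods --all-namespaces -w

Once everything settles, the state looks like this: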

[root@devops-101 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-8sd6g             1/1       Running   0          14m
kube-system   coredns-78fcdf6894-lgvd9             1/1       Running   0          14m
kube-system   etcd-devops-101                      1/1       Running   0          13m
kube-system   kube-apiserver-devops-101            1/1       Running   0          13m
kube-system   kube-controller-manager-devops-101   1/1       Running   0          13m
kube-system   kube-flannel-ds-6zljr                1/1       Running   0          48s
kube-system   kube-proxy-bhmj8                     1/1       Running   0          14m
kube-system   kube-scheduler-devops-101            1/1       Running   0          13m
[root@devops-101 ~]# kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
devops-101   Ready     master    14m       v1.11.1

4. Join the Nodes

Before a Node joins the cluster, complete the preparation from sections 0 and 1 of this article on it, then download the images.

$ docker pull registry.cn-hangzhou.aliyuncs.com/k8sth/kube-proxy-amd64:v1.11.0
$ docker pull registry.cn-hangzhou.aliyuncs.com/k8sth/pause-amd64:3.1
$ docker tag registry.cn-hangzhou.aliyuncs.com/k8sth/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
$ docker tag registry.cn-hangzhou.aliyuncs.com/k8sth/kube-proxy-amd64:v1.11.0 k8s.gcr.io/kube-proxy-amd64:v1.11.0
$ docker tag registry.cn-hangzhou.aliyuncs.com/k8sth/pause-amd64:3.1 k8s.gcr.io/pause:3.1

Finally, join the cluster with the command printed by the Master node.

$ kubeadm join 192.168.0.101:6443 --token wkj0bo.pzibll6rd9gyi5z8 --discovery-token-ca-cert-hash sha256:51985223a369a1f8c226f3ccdcf97f4ad5ff201a7c8c708e1636eea0739c0f05
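Note that bootstrap tokens expire after 24 hours by default; if the token printed by kubeadm init no longer works when you add a node later, a fresh join command can be generated on the Master:

$ kubeadm token create --print-join-command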

The node also needs a little time to start; check the status on the Master shortly afterwards.

[root@devops-101 ~]# kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
devops-101   Ready     master    1h        v1.11.1
devops-102   Ready     <none>    11m       v1.11.1

I have bundled the commands used during the installation into a few scripts on my GitHub; feel free to download and use them.

X. Pitfalls

pause:3.1

During the installation I found that kubeadm looks for a pause:3.1 image, so the mirrored image needs to be retagged.

$ docker tag registry.cn-hangzhou.aliyuncs.com/k8sth/pause-amd64:3.1 k8s.gcr.io/pause:3.1

The clocks on the two servers are out of sync.

The error message:

[discovery] Failed to request cluster info, will try again: [Get https://192.168.0.101:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate has expired or is not yet valid]

The fix is to synchronize both servers' clocks against a time server.

$ ntpdate ntp1.aliyun.com
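A one-off ntpdate only fixes the clock once. To keep the clocks aligned, one option (my suggestion; adjust the schedule to taste) is an hourly cron entry on each machine:

$ (crontab -l 2>/dev/null; echo '0 * * * * /usr/sbin/ntpdate ntp1.aliyun.com') | crontab -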
