Kubernetes v1.15.1 High-Availability Cluster Setup

Node Plan

  • k8s-master01: 192.168.10.192
  • k8s-master02: 192.168.10.193
  • k8s-master03: 192.168.10.194
  • k8s-node01: 192.168.10.195
  • k8s-node02: 192.168.10.196
  • k8s-node03: 192.168.10.198
  • VIP: 192.168.10.230
  • Host OS: CentOS 7.6
  • Kernel version: 4.4.216-1.el7.elrepo.x86_64
    (the stock 3.10 kernel has bugs that destabilize Docker and Kubernetes, so upgrading the kernel is recommended for production)

Preparation

  • Set the hostname on each node and configure hosts resolution (a sample /etc/hosts sketch follows the code below)
<code>hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-master02
hostnamectl set-hostname k8s-master03
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02
hostnamectl set-hostname k8s-node03</code>
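The hosts entries themselves are left implicit in the original; a minimal sketch built from the node plan above, appended to /etc/hosts on every node:
<code>cat >> /etc/hosts << EOF
192.168.10.192 k8s-master01
192.168.10.193 k8s-master02
192.168.10.194 k8s-master03
192.168.10.195 k8s-node01
192.168.10.196 k8s-node02
192.168.10.198 k8s-node03
EOF</code>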
  • Install dependency packages
<code>yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git</code>
  • Disable firewalld; install iptables-services and flush iptables
<code>systemctl stop firewalld && systemctl disable firewalld
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save</code>
  • Disable swap
<code># kubeadm checks at install time that swap is off
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab</code>
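An optional check, not in the original, to confirm swap is fully off:
<code>free -m   # the Swap line should read 0 0 0</code>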
  • Disable SELinux
<code>setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config</code>
  • Tune kernel parameters
<code>cat > k8s.conf << EOF
# pass bridged traffic to iptables chains
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
EOF
cp k8s.conf /etc/sysctl.d/k8s.conf
sysctl -p /etc/sysctl.d/k8s.conf</code>
  • Configure rsyslogd and systemd journald
<code># directory for persistent journal storage
mkdir /var/log/journal
mkdir /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf << EOF
[Journal]
# persist logs to disk
Storage=persistent
# compress archived journals
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# cap total disk usage at 10G
SystemMaxUse=10G
# cap a single journal file at 200M
SystemMaxFileSize=200M
# keep logs for two weeks
MaxRetentionSec=2week
# do not forward to syslog
ForwardToSyslog=no
EOF
systemctl restart systemd-journald   # restart the journald service</code>
  • Upgrade the kernel to 4.4
<code>rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# install the long-term-support kernel from the elrepo-kernel repo
yum --enablerepo=elrepo-kernel install -y kernel-lt
# make the new kernel the default boot entry
grub2-set-default "CentOS Linux (4.4.216-1.el7.elrepo.x86_64) 7 (Core)"</code>
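After rebooting into the new kernel, a quick check (assumed, not in the original) confirms the upgrade took effect:
<code>reboot
# once the node is back up:
uname -r   # expect 4.4.216-1.el7.elrepo.x86_64</code>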
  • Prerequisites for enabling IPVS in kube-proxy
<code>modprobe br_netfilter
cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4</code>
  • Install Docker
<code>yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
    --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum update -y && yum install -y docker-ce
mkdir -p /etc/systemd/system/docker.service.d
systemctl daemon-reload && systemctl restart docker && systemctl enable docker</code>
  • Change Docker's cgroup driver
    Configure Docker to use the systemd driver, which is more stable than the default cgroupfs.
<code>cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF
systemctl daemon-reload
systemctl restart docker</code>
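An optional check, not in the original, to confirm the driver switch:
<code>docker info | grep -i 'cgroup driver'   # should print: Cgroup Driver: systemd</code>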
  • Configure the HAProxy and Keepalived containers on the master nodes
    Use the HAProxy and Keepalived Docker images from Wise2C's open-source Breeze project.
    Install haproxy and keepalived on all three master nodes, then create the script directory:
    mkdir -p /data/lb
<code>cat > /data/lb/start-haproxy.sh << "EOF"
MasterIP1=192.168.10.192
MasterIP2=192.168.10.193
MasterIP3=192.168.10.194
MasterPort=6443

docker run -d --restart=always --name HAProxy-K8S -p 6444:6444 \
        -e MasterIP1=$MasterIP1 \
        -e MasterIP2=$MasterIP2 \
        -e MasterIP3=$MasterIP3 \
        -e MasterPort=$MasterPort \
        wise2c/haproxy-k8s
EOF</code>
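The script then needs to be executed on each master; the run-and-verify commands below are assumed, not shown in the original:
<code>sh /data/lb/start-haproxy.sh
docker ps | grep HAProxy-K8S   # the load-balancer container should be running on port 6444</code>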
<code>cat > /data/lb/start-keepalived.sh << "EOF"
VIRTUAL_IP=192.168.10.230
INTERFACE=enp0s3
NETMASK_BIT=24
CHECK_PORT=6444
RID=10
VRID=160
MCAST_GROUP=224.0.0.18

docker run -itd --restart=always --name=Keepalived-K8S \
        --net=host --cap-add=NET_ADMIN \
        -e VIRTUAL_IP=$VIRTUAL_IP \
        -e INTERFACE=$INTERFACE \
        -e CHECK_PORT=$CHECK_PORT \
        -e RID=$RID \
        -e VRID=$VRID \
        -e NETMASK_BIT=$NETMASK_BIT \
        -e MCAST_GROUP=$MCAST_GROUP \
        wise2c/keepalived-k8s
EOF</code>
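Likewise, run the script on each master and check that the VIP has been claimed (commands assumed, not in the original):
<code>sh /data/lb/start-keepalived.sh
ip addr show enp0s3 | grep 192.168.10.230   # the VIP appears on whichever master currently holds it</code>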

Once keepalived starts successfully, the VIP appears on the enp0s3 NIC. The VIP floats among the three master nodes: if the node holding it goes down, it drifts to another one. It currently sits on master01.


After master01 is shut down, the VIP drifts to the master02 node.



  • Install kubeadm, kubelet, and kubectl
<code>cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum -y install kubeadm-1.15.1 kubectl-1.15.1 kubelet-1.15.1
systemctl enable kubelet.service</code>
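An optional version check, not in the original, confirms the pinned packages landed:
<code>kubeadm version -o short   # expect v1.15.1
kubelet --version          # expect Kubernetes v1.15.1</code>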
  • Initialize the master node
<code># Generate the kubeadm config file, then edit the image download source,
# whether the master may run workload Pods, and the network settings
kubeadm config print init-defaults > kubeadm-config.yaml
vi kubeadm-config.yaml
# The default kubeadm init parameters, modified as follows:
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.10.192
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.10.230:6444"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}

kubeadm config images pull --config kubeadm-config.yaml</code>
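Before pulling, the image list implied by the config can be previewed (an optional step, not in the original):
<code>kubeadm config images list --config kubeadm-config.yaml</code>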

Note: if no image source is configured, images are pulled from Google's servers by default, which are unreachable from mainland China; you can instead pull from a domestic mirror and re-tag the images.


Domestic Kubernetes image registry:
registry.cn-hangzhou.aliyuncs.com/google_containers

<code># For example:
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.15.1 k8s.gcr.io/kube-scheduler:v1.15.1</code>
  • The images obtained are as follows:
<code># every node needs the following images
k8s.gcr.io/kube-apiserver:v1.15.1
k8s.gcr.io/kube-controller-manager:v1.15.1
k8s.gcr.io/kube-scheduler:v1.15.1
k8s.gcr.io/kube-proxy:v1.15.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1</code>
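Pulling and re-tagging each of these by hand is tedious; here is a small helper loop, sketched on the assumption that the Aliyun google_containers namespace mirrors every listed image under the same name and tag:
<code>#!/bin/bash
# pull each required image from the Aliyun mirror, re-tag it as k8s.gcr.io, then drop the mirror tag
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.15.1 kube-controller-manager:v1.15.1 \
           kube-scheduler:v1.15.1 kube-proxy:v1.15.1 \
           pause:3.1 etcd:3.3.10 coredns:1.3.1; do
    docker pull $MIRROR/$img
    docker tag  $MIRROR/$img k8s.gcr.io/$img
    docker rmi  $MIRROR/$img
done</code>
Run it on every node, masters and workers alike, since each node needs the full image set.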
  • Deploy the master node
<code>kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
# Parameter notes:
# --experimental-upload-certs: automatically distributes the certificate files when nodes join later
# tee kubeadm-init.log: saves the init output to a log file</code>

After the deployment succeeds, output like the following appears. Note: initializing an HA cluster generates two join commands, one for adding master nodes and one for adding worker nodes.

<code>Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.10.230:6444 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <certificate-key>

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --experimental-upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

  kubeadm join 192.168.10.230:6444 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<hash></code>
  • After initialization completes, set up the kubectl configuration
    Note: without the following commands, kubectl get nodes will report an error.
<code>mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config</code>
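With the kubeconfig in place, kubectl should now respond (check command assumed, not in the original):
<code>kubectl get nodes   # all six nodes should be listed once they have joined</code>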
  • Join the other two master nodes and the three worker nodes
<code># run the corresponding join commands printed in the kubeadm-init.log above</code>
  • Deploy the flannel network plugin
<code>wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl create -f kube-flannel.yml
# image used: quay.io/coreos/flannel:v0.12.0-amd64</code>
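Optional checks, not in the original, to verify flannel rolled out and the nodes went Ready:
<code>kubectl get pods -n kube-system | grep flannel   # one kube-flannel-ds pod per node, all Running
kubectl get nodes                                # STATUS should be Ready on all six nodes</code>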

Final node status: all six nodes should now report Ready.



  • Check the etcd cluster status
<code>kubectl -n kube-system exec etcd-k8s-master01 -- etcdctl \
    --endpoints=https://192.168.10.192:2379 \
    --ca-file=/etc/kubernetes/pki/etcd/ca.crt \
    --cert-file=/etc/kubernetes/pki/etcd/server.crt \
    --key-file=/etc/kubernetes/pki/etcd/server.key \
    cluster-health</code>

