Node Planning
- k8s-master01: 192.168.10.192
- k8s-master02: 192.168.10.193
- k8s-master03: 192.168.10.194
- k8s-node01: 192.168.10.195
- k8s-node02: 192.168.10.196
- k8s-node03: 192.168.10.198
- VIP: 192.168.10.230
- Host OS: CentOS 7.6
- Kernel version: 4.4.216-1.el7.elrepo.x86_64
(The stock 3.10 kernel has known bugs that destabilize Docker and Kubernetes; upgrading the kernel is recommended for production.)
Preparation
- Set the hostnames and configure /etc/hosts resolution
<code>hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-master02
hostnamectl set-hostname k8s-master03
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02
hostnamectl set-hostname k8s-node03</code>
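The hosts entries themselves are not shown above; a minimal sketch built from the node plan (append on every node):
<code>cat >> /etc/hosts << EOF
192.168.10.192 k8s-master01
192.168.10.193 k8s-master02
192.168.10.194 k8s-master03
192.168.10.195 k8s-node01
192.168.10.196 k8s-node02
192.168.10.198 k8s-node03
EOF</code>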
- Install dependency packages
<code>yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git</code>
- Disable firewalld; install iptables-services and flush the rules
<code>systemctl stop firewalld && systemctl disable firewalld
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save</code>
- Disable swap
<code># kubeadm's preflight checks verify that swap is off during installation
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab</code>
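To confirm swap is fully disabled:
<code>free -m   # the Swap line should read 0 across the board</code>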
- Disable SELinux
<code>setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config</code>
- Tune kernel parameters
<code>cat > k8s.conf << EOF
net.bridge.bridge-nf-call-iptables=1   # pass bridged traffic to iptables
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
EOF
cp k8s.conf /etc/sysctl.d/k8s.conf
sysctl -p /etc/sysctl.d/k8s.conf</code>
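A quick sanity check (note that the net.bridge.* keys only exist once the br_netfilter module is loaded, which the IPVS prerequisites below take care of):
<code>modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward   # both should print 1</code>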
- Configure rsyslogd and systemd journald
<code># Create the directory for persistent log storage
mkdir /var/log/journal
mkdir /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf << EOF
[Journal]
# Persist logs to disk and compress them
Storage=persistent
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# Cap total usage at 10G and single files at 200M; keep logs for two weeks
SystemMaxUse=10G
SystemMaxFileSize=200M
MaxRetentionSec=2week
# Do not forward to syslog
ForwardToSyslog=no
EOF
systemctl restart systemd-journald   # restart the journald service</code>
- Upgrade the kernel to 4.4
<code>rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# Install the 4.4 long-term-support kernel from the elrepo-kernel repository
yum --enablerepo=elrepo-kernel install -y kernel-lt
# Make the new kernel the default boot entry
grub2-set-default "CentOS Linux (4.4.216-1.el7.elrepo.x86_64) 7 (Core)"</code>
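Reboot to load the new kernel, then confirm the running version:
<code>reboot
uname -r   # expect 4.4.216-1.el7.elrepo.x86_64</code>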
- Prerequisites for running kube-proxy in IPVS mode
<code>modprobe br_netfilter
cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4</code>
- Install Docker
<code>yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum update -y && yum install -y docker-ce
mkdir -p /etc/systemd/system/docker.service.d
systemctl daemon-reload && systemctl restart docker && systemctl enable docker</code>
- Change Docker's cgroup driver
Configure Docker to use the systemd cgroup driver, which is more stable than the default cgroupfs driver.
<code>cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF
systemctl daemon-reload
systemctl restart docker</code>
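Verify that Docker picked up the new driver:
<code>docker info | grep -i cgroup   # should report: Cgroup Driver: systemd</code>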
- Configure the HAProxy and Keepalived containers on the master nodes
Use the HAProxy and Keepalived Docker images from Wise2C's open-source Breeze project, and install both on all three master nodes. The start scripts for both containers follow.
<code>mkdir -p /data/lb</code>
<code>cat > /data/lb/start-haproxy.sh << "EOF"
#!/bin/bash
MasterIP1=192.168.10.192
MasterIP2=192.168.10.193
MasterIP3=192.168.10.194
MasterPort=6443
docker run -d --restart=always --name HAProxy-K8S -p 6444:6444 \
  -e MasterIP1=$MasterIP1 \
  -e MasterIP2=$MasterIP2 \
  -e MasterIP3=$MasterIP3 \
  -e MasterPort=$MasterPort \
  wise2c/haproxy-k8s
EOF</code>
<code>cat > /data/lb/start-keepalived.sh << "EOF"
#!/bin/bash
VIRTUAL_IP=192.168.10.230
INTERFACE=enp0s3
NETMASK_BIT=24
CHECK_PORT=6444
RID=10
VRID=160
MCAST_GROUP=224.0.0.18
docker run -itd --restart=always --name=Keepalived-K8S \
  --net=host --cap-add=NET_ADMIN \
  -e VIRTUAL_IP=$VIRTUAL_IP \
  -e INTERFACE=$INTERFACE \
  -e CHECK_PORT=$CHECK_PORT \
  -e RID=$RID \
  -e VRID=$VRID \
  -e NETMASK_BIT=$NETMASK_BIT \
  -e MCAST_GROUP=$MCAST_GROUP \
  wise2c/keepalived-k8s
EOF</code>
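Run both scripts on each of the three masters; the containers should then show up in docker ps:
<code>chmod +x /data/lb/start-haproxy.sh /data/lb/start-keepalived.sh
bash /data/lb/start-haproxy.sh
bash /data/lb/start-keepalived.sh
docker ps | grep -e HAProxy-K8S -e Keepalived-K8S</code>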
Once Keepalived starts successfully, the VIP appears on the enp0s3 interface. The VIP floats among the three master nodes: if the node holding it goes down, it fails over to another node. It currently sits on master01; shutting master01 down moves the VIP to master02.
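You can watch the failover by checking which node currently holds the VIP:
<code>ip addr show enp0s3 | grep 192.168.10.230</code>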
- Install kubeadm, kubelet, and kubectl
<code>cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum -y install kubeadm-1.15.1 kubectl-1.15.1 kubelet-1.15.1
systemctl enable kubelet.service</code>
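Confirm the installed versions before proceeding:
<code>kubeadm version -o short   # expect v1.15.1
kubelet --version</code>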
- Initialize the first master node
<code># Generate the default kubeadm config, then edit the image repository,
# whether the master may run business Pods (taints), and the network settings
kubeadm config print init-defaults > kubeadm-config.yaml
vi kubeadm-config.yaml
# The edited kubeadm-config.yaml:
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.10.192
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.10.230:6444"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}

# Pre-pull the control-plane images declared in the config
kubeadm config images pull --config kubeadm-config.yaml</code>
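To actually switch kube-proxy to the IPVS mode prepared earlier, a KubeProxyConfiguration section can be appended to the same file before running init; a minimal sketch (kubeproxy.config.k8s.io/v1alpha1 is the kube-proxy config API group for v1.15):
<code>cat >> kubeadm-config.yaml << EOF
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
EOF</code>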
Note: if the image repository is left at the default, kubeadm pulls from Google's registry, which is unreachable from mainland China; instead, you can pull the images from a domestic mirror and re-tag them.
Domestic Kubernetes image mirror:
registry.cn-hangzhou.aliyuncs.com/google_containers
<code># Example: pull from the Aliyun mirror, then re-tag to the name kubeadm expects
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.15.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.15.1 k8s.gcr.io/kube-scheduler:v1.15.1</code>
- The required images are as follows:
<code># Every node needs the following images
k8s.gcr.io/kube-apiserver:v1.15.1
k8s.gcr.io/kube-controller-manager:v1.15.1
k8s.gcr.io/kube-scheduler:v1.15.1
k8s.gcr.io/kube-proxy:v1.15.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1</code>
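A convenience sketch that fetches all of the above from the Aliyun mirror and re-tags them in one pass (it assumes every image is published under google_containers on that mirror):
<code>MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.15.1 kube-controller-manager:v1.15.1 \
           kube-scheduler:v1.15.1 kube-proxy:v1.15.1 \
           pause:3.1 etcd:3.3.10 coredns:1.3.1; do
  docker pull $MIRROR/$img                  # fetch from the domestic mirror
  docker tag $MIRROR/$img k8s.gcr.io/$img   # re-tag to the expected name
  docker rmi $MIRROR/$img                   # drop the mirror tag
done</code>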
- Deploy the master node
<code>kubeadm init --config=/root/kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
# --experimental-upload-certs: uploads the certificates so they are distributed
#   automatically when additional control-plane nodes join later
# tee kubeadm-init.log: saves the init output to a log file</code>
After a successful deployment you will see output like the following. Note: initializing a high-availability cluster produces two join commands, one for adding master (control-plane) nodes and one for adding worker nodes.
<code>Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.10.230:6444 ...

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --experimental-upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

  kubeadm join 192.168.10.230:6444 ...</code>
- After initialization, configure kubectl's environment
Note: without the following commands, kubectl get nodes will fail.
<code>mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config</code>
- Join the other two master nodes and the three worker nodes
Run the corresponding kubeadm join commands from the init log on each node: the control-plane variant on the masters, the worker variant on the nodes.
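If the original token has expired (the default TTL above is 24h), a fresh worker join command can be printed with a stock kubeadm subcommand:
<code>kubeadm token create --print-join-command</code>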
- Deploy the flannel network plugin
<code>wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl create -f kube-flannel.yml
# image used: quay.io/coreos/flannel:v0.12.0-amd64</code>
Final node status
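Check that all six nodes register and eventually report Ready:
<code>kubectl get nodes -o wide
kubectl get pods -n kube-system   # flannel and coredns pods should be Running</code>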
- Check the etcd cluster status
<code>kubectl -n kube-system exec etcd-k8s-master01 -- etcdctl \
  --endpoints=https://192.168.10.192:2379 \
  --ca-file=/etc/kubernetes/pki/etcd/ca.crt \
  --cert-file=/etc/kubernetes/pki/etcd/server.crt \
  --key-file=/etc/kubernetes/pki/etcd/server.key \
  cluster-health</code>