Rancher 2.1 HA Deployment of Kubernetes

Introduction

This article records the steps for building a Rancher HA cluster: system configuration, Docker installation, Kubernetes installation, and the Rancher HA setup itself.


Server environment information:

(A screenshot in the original post lists the servers: 192.168.100.31-33 host the Rancher/Kubernetes nodes and 192.168.100.22 hosts the nginx load balancer.)

Environment setup

Operating system file limits

vim /etc/security/limits.conf

Append the following at the end of the file:

root soft nofile 655350

root hard nofile 655350

* soft nofile 655350

* hard nofile 655350
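The new limits only apply to sessions started after logging in again. A quick sanity check from a fresh shell:

```shell
# Open-file limits for the current session; after re-login these
# should report the 655350 value configured above.
ulimit -Sn   # soft limit
ulimit -Hn   # hard limit
```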

Disable the firewall

systemctl stop firewalld

systemctl disable firewalld

Disable SELinux

Set the SELINUX value to disabled:

vim /etc/selinux/config

SELINUX=disabled
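The config file change only takes effect after a reboot. To stop enforcement in the running system right away (a common companion step, not shown in the original), you can also run:

```shell
setenforce 0   # Permissive until reboot; the config file makes it permanent
getenforce     # confirm the current mode
```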

Disable swap

Comment out or delete the swap partition entry: vim /etc/fstab

# /etc/fstab

# Created by anaconda on Fri Jun 2 14:11:50 2017

#

# Accessible filesystems, by reference, are maintained under '/dev/disk'

# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info

#

/dev/mapper/centos-root / xfs defaults 0 0

UUID=f5b4435a-77bc-48f4-8d22-6fa55e9e04a2 /boot xfs defaults 0 0

/dev/mapper/centos-grid0 /grid0 xfs defaults 0 0

#/dev/mapper/centos-swap swap swap defaults 0 0
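Editing fstab only prevents swap from mounting at the next boot. To turn it off in the running system as well, a short sketch:

```shell
swapoff -a               # disable all active swap immediately
free -h | grep -i swap   # the Swap line should now show 0 used and 0 total
```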

Kernel tuning

Add the following to /etc/sysctl.conf (vim /etc/sysctl.conf):

net.ipv4.ip_forward=1

net.bridge.bridge-nf-call-iptables=1

net.bridge.bridge-nf-call-ip6tables=1

vm.swappiness=0

vm.max_map_count=655360
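These settings only take effect once reloaded. Note that the net.bridge.* keys require the br_netfilter kernel module on CentOS 7:

```shell
modprobe br_netfilter        # needed for the net.bridge.bridge-nf-call-* keys
sysctl -p                    # reload /etc/sysctl.conf
sysctl net.ipv4.ip_forward   # spot-check one of the values
```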

Create a user

Create the user and add it to the docker group:

useradd rancher -G docker

Passwordless SSH login

Run the following on each of the .31-.33 servers:

su - rancher

ssh-keygen -t rsa

ssh-copy-id -i .ssh/id_rsa.pub [email protected]

ssh-copy-id -i .ssh/id_rsa.pub [email protected]

ssh-copy-id -i .ssh/id_rsa.pub [email protected]

The root user also needs passwordless SSH; for reference:

su - root

ssh-keygen -t rsa

ssh-copy-id -i .ssh/id_rsa.pub [email protected]

ssh-copy-id -i .ssh/id_rsa.pub [email protected]

ssh-copy-id -i .ssh/id_rsa.pub [email protected]
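To confirm the keys were distributed correctly, a non-interactive check helps (BatchMode makes ssh fail instead of prompting for a password):

```shell
# Each node should print its hostname without asking for a password
for h in 192.168.100.31 192.168.100.32 192.168.100.33; do
  ssh -o BatchMode=yes rancher@"$h" hostname \
    || echo "passwordless ssh to $h is NOT working"
done
```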

Docker installation

The rke tool currently supports only Docker v17.03.2. Keep the versions consistent, otherwise the later installation steps will fail.

1. Add the repo source:

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Remove any old Docker versions:

yum remove -y docker \
              docker-client \
              docker-client-latest \
              docker-common \
              docker-latest \
              docker-latest-logrotate \
              docker-logrotate \
              docker-selinux \
              docker-engine-selinux \
              docker-engine \
              container*

2. Select the version to install:

export docker_version=17.03.2

3. Install the required system tools:

yum install -y yum-utils device-mapper-persistent-data lvm2 bash-completion

4. Add the repository information:

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

5. Install docker-ce:

version=$(yum list docker-ce.x86_64 --showduplicates | sort -r|grep ${docker_version}|awk '{print $2}')

yum -y install --setopt=obsoletes=0 docker-ce-${version} docker-ce-selinux-${version}
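Since rke checks the Docker version, it is worth confirming that yum actually installed the pinned release before continuing:

```shell
systemctl start docker
docker version --format '{{.Server.Version}}'   # expect 17.03.2-ce
```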

6. Enable start on boot:

systemctl enable docker

7. Add a registry mirror and set the storage driver:

vim /etc/docker/daemon.json

Enter the following:

{
  "registry-mirrors": ["https://39r65dar.mirror.aliyuncs.com"],
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}

8. Restart Docker:

systemctl restart docker
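To verify that daemon.json was picked up, check the storage driver and mirror that the daemon reports:

```shell
# "Storage Driver" should read overlay2, and the mirror URL
# should appear under "Registry Mirrors"
docker info | grep -A 1 -E 'Storage Driver|Registry Mirrors'
```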

Install nginx

Install nginx on the 192.168.100.22 server to load-balance the rancher-server nodes.

Install nginx:

sudo rpm -Uvh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm

yum install nginx -y

sudo systemctl enable nginx.service

Edit the configuration file: vi /etc/nginx/nginx.conf

user nginx;
worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 8192;
}

http {
    # Gzip Settings
    gzip on;
    gzip_disable "msie6";
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";
    gzip_vary on;
    gzip_static on;
    gzip_proxied any;
    gzip_min_length 0;
    gzip_comp_level 8;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/xml application/xml application/atom+xml application/rss+xml application/xhtml+xml image/svg+xml application/font-woff text/javascript application/javascript application/x-javascript text/x-json application/json application/x-web-app-manifest+json text/css text/plain text/x-component font/opentype application/x-font-ttf application/vnd.ms-fontobject font/woff2 image/x-icon image/png image/jpeg;

    server {
        listen 80;
        return 301 https://$host$request_uri;
    }
}

stream {
    upstream rancher_servers {
        least_conn;
        server 192.168.100.31:443 max_fails=3 fail_timeout=5s;
        server 192.168.100.32:443 max_fails=3 fail_timeout=5s;
        server 192.168.100.33:443 max_fails=3 fail_timeout=5s;
    }

    server {
        listen 443;
        proxy_pass rancher_servers;
    }
}
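The stream block requires nginx to be built with the stream module (the nginx.org packages include it). Before restarting, it is worth validating the file:

```shell
nginx -t   # expect "syntax is ok" and "test is successful"
```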

Start nginx:

sudo systemctl restart nginx.service

Rancher cluster deployment

Install the required tools

Perform the following on the 192.168.100.31 server.

Install rke:

su root

wget https://www.cnrancher.com/download/rke/rke_linux-amd64

chmod +x rke_linux-amd64

mv rke_linux-amd64 /usr/bin/rke

Install kubectl:

wget https://www.cnrancher.com/download/kubectl/kubectl_amd64-linux

chmod +x kubectl_amd64-linux

mv kubectl_amd64-linux /usr/bin/kubectl

Install helm:

wget https://storage.googleapis.com/kubernetes-helm/helm-v2.12.0-linux-amd64.tar.gz

tar zxvf helm-v2.12.0-linux-amd64.tar.gz

mv linux-amd64/helm /usr/bin/helm

mv linux-amd64/tiller /usr/bin/tiller

rm -rf helm-v2.12.0-linux-amd64.tar.gz linux-amd64/
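A quick sanity check that all three binaries are on the PATH and executable:

```shell
rke --version
kubectl version --client
helm version --client   # helm v2 syntax; prints only the client version
```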

Install Kubernetes

1. Switch to the rancher user:

su - rancher

2. Create the rancher cluster configuration file:

vim rancher-cluster.yml

Enter the following:

nodes:
  - address: 192.168.100.31
    user: rancher
    role: [controlplane,worker,etcd]
  - address: 192.168.100.32
    user: rancher
    role: [controlplane,worker,etcd]
  - address: 192.168.100.33
    user: rancher
    role: [controlplane,worker,etcd]

services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h

If a previous attempt failed, clean up the old data before reinstalling:

su - root

rm -rf /var/lib/rancher/etcd/*

rm -rf /etc/kubernetes/*

su - rancher

rke remove --config ./rancher-cluster.yml

3. Bring up the cluster:

rke up --config ./rancher-cluster.yml

When it finishes, it should print: Finished building Kubernetes cluster successfully.

4. Configure environment variables:

Switch to the root user (su - root), then edit the profile:

vim /etc/profile

export KUBECONFIG=/home/rancher/kube_config_rancher-cluster.yml

Save, then run:

source /etc/profile

5. Test your connection with kubectl and confirm all nodes are in the Ready state:

[rancher@bigman-s1 ~]$ kubectl get nodes

NAME STATUS ROLES AGE VERSION

192.168.100.31 Ready controlplane,etcd,worker 3m v1.11.6

192.168.100.32 Ready controlplane,etcd,worker 3m v1.11.6

192.168.100.33 Ready controlplane,etcd,worker 3m v1.11.6

6. Check the health of the cluster pods:

[rancher@bigman-s1 ~]$ kubectl get pods --all-namespaces

NAMESPACE NAME READY STATUS RESTARTS AGE

ingress-nginx default-http-backend-797c5bc547-z4gj5 1/1 Running 0 3m

ingress-nginx nginx-ingress-controller-bvgxm 1/1 Running 0 3m

ingress-nginx nginx-ingress-controller-rjrss 1/1 Running 0 3m

ingress-nginx nginx-ingress-controller-z5nmf 1/1 Running 0 3m

kube-system canal-cwb9g 3/3 Running 0 4m

kube-system canal-lnvmt 3/3 Running 0 4m

kube-system canal-xfft6 3/3 Running 0 4m

kube-system kube-dns-7588d5b5f5-5lql6 3/3 Running 0 4m

kube-system kube-dns-autoscaler-5db9bbb766-qlskd 1/1 Running 0 4m

kube-system metrics-server-97bc649d5-vx7p7 1/1 Running 0 4m

kube-system rke-ingress-controller-deploy-job-ghz5d 0/1 Completed 0 3m

kube-system rke-kubedns-addon-deploy-job-snkfq 0/1 Completed 0 4m

kube-system rke-metrics-addon-deploy-job-kzlwb 0/1 Completed 0 4m

kube-system rke-network-plugin-deploy-job-4f8ms 0/1 Completed 0 4m

Save copies of kube_config_rancher-cluster.yml and rancher-cluster.yml; you will need these files to maintain and upgrade the Rancher instance.

Helm

Use Helm to install the tiller service on the cluster for managing charts. Because RKE enables RBAC by default, we must use kubectl to create a ServiceAccount and a ClusterRoleBinding so that tiller has permission to deploy to the cluster.

1. Create the ServiceAccount in the kube-system namespace:

kubectl -n kube-system create serviceaccount tiller

2. Create a ClusterRoleBinding granting the tiller account access to the cluster:

kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller

3. Install the Helm server (Tiller):

helm init --service-account tiller --tiller-image registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.12.0 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

4. Install the Tiller canary version:

helm init --service-account tiller --canary-image

The image must be switched to a domestic mirror (you may need to helm delete and re-run init):

export TILLER_TAG=v2.12.0 ;

kubectl --namespace=kube-system set image deployments/tiller-deploy tiller=hongxiaolu/tiller:$TILLER_TAG
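After swapping the image, wait for the tiller deployment to roll out and confirm that the client and server versions match:

```shell
kubectl -n kube-system rollout status deployment/tiller-deploy
helm version   # client and server should both report v2.12.0
```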

Install Rancher with helm

Add the chart repository

Use the helm repo add command to add the Rancher chart repository; consult the Rancher tags and chart versions to choose the Helm repository branch you want (latest or stable).

helm repo add rancher-stable https://releases.rancher.com/server-charts/stable

Install the certificate manager

1. cert-manager is only required for certificates auto-generated by Rancher or issued by Let's Encrypt. If you bring your own certificate, specify it with the ingress.tls.source=secret parameter and skip this step.

helm install stable/cert-manager \
  --name cert-manager \
  --namespace kube-system

Rancher auto-generated certificates

By default, Rancher auto-generates a CA root certificate and uses cert-manager to issue the certificate for accessing the Rancher server UI.

The only requirement is to set hostname to the domain name used to access Rancher; this SSL configuration requires the certificate manager installed above.

helm install rancher-stable/rancher \
  --name rancher \
  --namespace cattle-system \
  --set hostname=hi.rancher.cn

hi.rancher.cn is the domain name used later to access Rancher; add the mapping to /etc/hosts (on all hosts):

vim /etc/hosts

192.168.100.22 hi.rancher.cn

Because the mapping is added via the hosts file, we also need to add a host alias (/etc/hosts) to the agent pods:

kubectl -n cattle-system patch deployments cattle-cluster-agent --patch '{
  "spec": {
    "template": {
      "spec": {
        "hostAliases": [
          {
            "hostnames": ["hi.rancher.cn"],
            "ip": "192.168.100.22"
          }
        ]
      }
    }
  }
}'

kubectl -n cattle-system patch daemonsets cattle-node-agent --patch '{
  "spec": {
    "template": {
      "spec": {
        "hostAliases": [
          {
            "hostnames": ["hi.rancher.cn"],
            "ip": "192.168.100.22"
          }
        ]
      }
    }
  }
}'
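To confirm the patches landed, you can read the hostAliases back from the API:

```shell
kubectl -n cattle-system get deployment cattle-cluster-agent \
  -o jsonpath='{.spec.template.spec.hostAliases}'
kubectl -n cattle-system get daemonset cattle-node-agent \
  -o jsonpath='{.spec.template.spec.hostAliases}'
```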

Log in to the Rancher management UI

1. Add the mapping to /etc/hosts (on all hosts):

vim /etc/hosts

192.168.100.22 hi.rancher.cn

2. Log in at https://hi.rancher.cn using the domain name.


Enter admin/admin, then set the admin user's password.

3. After logging in, you can see the Kubernetes cluster that was created.


Install rancher-cli

1. Download the rancher-cli tool:

wget https://releases.rancher.com/cli2/v2.0.6/rancher-linux-amd64-v2.0.6.tar.gz

tar zxvf rancher-linux-amd64-v2.0.6.tar.gz

2. Install the binary onto the PATH:

mv rancher-v2.0.6/rancher /usr/bin/rancher

rm -rf rancher-v2.0.6/

3. Test the login

Create a new user and obtain a token:


Log in with the new user's token:

rancher login https://hi.rancher.cn/v3 --token token-jpf2f:sjmptntdn6k7rf9mqz7k7c9w77q6pfxmxmr7fvtdjwswbprpjhzvq8

Other notes

Docker xfs ftype issue

When running docker info on an xfs filesystem, Docker checks the ftype value; if ftype=0, a warning appears.

The warning reads:

WARNING: overlay: the backing xfs filesystem is formatted without d_type support, which leads to incorrect behavior.

Reformat the filesystem with ftype=1 to enable d_type support.

Running without d_type support will not be supported in future releases.

This must be fixed, otherwise containers may later exit abnormally. For the details of why Docker cares so much about the ftype value, consult the official documentation.

Since Docker is installed on the system disk by default, reformatting that partition and remounting is not an option. My environment happened to have a spare disk, so I moved Docker onto a new partition reformatted with ftype=1.

1. Check the partition layout:

[root@bigman-s1 ~]# df

Filesystem 1K-blocks Used Available Use% Mounted on

devtmpfs 32858088 0 32858088 0% /dev

tmpfs 32871880 8 32871872 1% /dev/shm

tmpfs 32871880 61024 32810856 1% /run

tmpfs 32871880 0 32871880 0% /sys/fs/cgroup

/dev/mapper/centos-root 598661528 81105212 517556316 14% /

/dev/mapper/centos-grid0 1227397576 66398088 1160999488 6% /grid0

2. Reformat the disk (back up your data first):

umount /dev/mapper/centos-grid0

mkfs.xfs -n ftype=1 -f /dev/mapper/centos-grid0

mount /dev/mapper/centos-grid0 /grid0

xfs_info /grid0

3. Change the Docker data directory:

vim /usr/lib/systemd/system/docker.service

ExecStart=/usr/bin/dockerd --graph /grid0/docker

4. Set the Docker storage-driver:

{
  "registry-mirrors": ["https://39r65dar.mirror.aliyuncs.com"],
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}

5. Restart Docker:

systemctl disable docker

systemctl enable docker

systemctl daemon-reload

systemctl restart docker

docker info

Local Docker registry

1. Start the local registry service:

docker run -d -p 5000:5000 --restart=always --name registry registry:2

2. Update the configuration:

vi /etc/docker/daemon.json

{
  "registry-mirrors": ["https://39r65dar.mirror.aliyuncs.com", "http://192.168.100.21:5000"],
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "insecure-registries": ["192.168.100.21:5000"]
}
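Docker must be restarted for the new daemon.json to apply; a round-trip push against the local registry then verifies it (busybox is used here only as an arbitrary small test image):

```shell
systemctl restart docker
docker pull busybox
docker tag busybox 192.168.100.21:5000/busybox
docker push 192.168.100.21:5000/busybox
curl http://192.168.100.21:5000/v2/_catalog   # the repository list should include "busybox"
```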

