Learn k8s with Me (3): A Deep Dive into k8s Resources




Author: DevOps旭

Source: DevOps探路者

1. What are k8s resources?

In day-to-day operation of Kubernetes, administrators habitually refer to everything in k8s as a resource: pods, deployments, services and so on. By maintaining and scheduling these resources, k8s manages the entire cluster.
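
As a quick illustration of what "everything is a resource" means in practice (these commands are standard kubectl features, not part of the original walkthrough), you can ask the apiserver to list every resource type it serves:

<code># List every resource type this cluster's apiserver serves, with short names and kinds
kubectl api-resources
# Limit the list to namespaced resources only
kubectl api-resources --namespaced=true</code>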

2. Getting to know the pod

The pod is the smallest unit of management in Kubernetes and wraps a group of containers. Under the k8s management philosophy you never maintain a single container directly; you always deploy and operate on pods. The number of containers in a pod is flexible: one is fine, and so are several.


So what makes this design attractive? First, recall the container management philosophy: a container should run only one process (child processes aside). If an application needs several processes, should we run several processes inside one container, or several containers side by side on the same node?

Running several processes in one container is certainly doable: a startup script can bring the processes up in the right dependency order. But this creates a problem. A container is judged alive by whether its first process is still running, so keeping every process in a multi-process container healthy becomes a real challenge; solving it makes the image heavier and heavier, which runs against the lightweight nature of containers, and it also complicates log collection, data persistence and more. So this is not a good option. What about several containers on one node, then? That is exactly the pod's management philosophy: the containers are confined to the same pod and share its network, UTS and IPC namespaces (PID namespace sharing is optional), so the whole pod can be anchored and managed through a single pause container.
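
To make the shared-namespace idea concrete, here is a minimal two-container pod sketch (the pod name, container names and sidecar command are illustrative, not taken from the original article). Because both containers sit behind the same pause container they share one network namespace, so the sidecar can reach nginx on 127.0.0.1 without any Service in between:

<code># Minimal sketch of a two-container pod; names, images and the command are illustrative only
apiVersion: v1
kind: Pod
metadata:
  name: two-containers-demo
spec:
  containers:
  - name: web
    image: nginx
  - name: sidecar
    image: busybox
    # Both containers share the pod's network namespace (held by the pause container),
    # so the sidecar can fetch the nginx welcome page over localhost.
    command: ["sh", "-c", "while true; do wget -qO- http://127.0.0.1; sleep 5; done"]</code>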

With all that said, how do we actually create a pod? Kubernetes provides a very convenient tool for this: kubectl, a command-line client that talks to the apiserver.

<code>kubectl run nginx --image=nginx</code>

That one simple command creates a pod. Besides that, a pod can also be created from a YAML file; the simplest manifest looks like this:

<code>apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx-demo</code>

This manifest targets the v1 version of the core Kubernetes API group, declares the resource kind as Pod, and names the pod nginx-demo.
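
If any field in a manifest is unclear, kubectl explain prints the built-in API documentation for it (a generic kubectl feature, not specific to this demo), for example:

<code>kubectl explain pod.spec.restartPolicy
kubectl explain pod.spec.containers.imagePullPolicy</code>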

We can create the pod and check it with the following commands:

<code>[root@k8s01 yaml]# kubectl apply -f nginx-demo.yml
[root@k8s01 yaml]# kubectl  get po -o wide 
NAME    READY   STATUS    RESTARTS   AGE     IP           NODE    NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          4m32s   10.244.1.9   k8s02              </code>

A container's life cycle is short, but by setting a restart policy (restartPolicy) on the pod we can have the kubelet restart its containers:

<code>Always: restart the container whenever it terminates, regardless of the exit code
OnFailure: restart the container only when it terminates with a non-zero exit code
Never: never restart the container</code>

Let's update the pod's YAML file accordingly:

<code>apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
spec:
  restartPolicy: Always
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx-demo</code>

Then delete the pod and recreate it:

<code>[root@k8s01 yaml]# kubectl  delete pod nginx
[root@k8s01 yaml]# kubectl  apply -f nginx-demo.yml 
[root@k8s01 yaml]# kubectl  get po -o wide 
NAME    READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          52s   10.244.1.11   k8s02              </code>

The pod has been recreated and is running on node k8s02. Because the official nginx image is missing many common commands, we cannot exec into the container and kill the nginx process from inside; instead we kill the process on the node hosting the pod to simulate a container failure:

<code>[root@k8s02 ~]# ps -ef | grep nginx
root      58895  58880  0 02:59 ?        00:00:00 nginx: master process nginx -g daemon off;
101       58947  58895  0 02:59 ?        00:00:00 nginx: worker process
root      59071  49835  0 03:00 pts/0    00:00:00 grep --color=auto nginx
[root@k8s02 ~]# kill  58895

# Back on node k8s01 we can see:
[root@k8s01 yaml]# kubectl  get po  -o wide 
NAME    READY   STATUS      RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
nginx   0/1     Completed   0          53s   10.244.1.11   k8s02              
[root@k8s01 yaml]# kubectl  get po  -o wide 
NAME    READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
nginx   1/1     Running   1          55s   10.244.1.11   k8s02              
# nginx has been restarted successfully; now let's look at the pod's events
[root@k8s01 yaml]# kubectl  describe pod nginx
Name:         nginx
Namespace:    default
Priority:     0
Node:         k8s02/192.168.1.32
Start Time:   Sun, 06 Sep 2020 02:59:31 +0800
Labels:       
Annotations:  
Status:       Running
IP:           10.244.1.11
IPs:
  IP:  10.244.1.11
Containers:
  nginx:
    Container ID:   docker://cf21ee868641ba2da52321e16fe7e43a0aca61b7ebcb0c4a4d62ecb4a3f9787a
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:b0ad43f7ee5edbc0effbc14645ae7055e21bc1973aee5150745632a24a752661
    Port:           
    Host Port:      
    State:          Running
      Started:      Sun, 06 Sep 2020 03:00:24 +0800
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 06 Sep 2020 02:59:48 +0800
      Finished:     Sun, 06 Sep 2020 03:00:20 +0800
    Ready:          True
    Restart Count:  1
    Environment:    
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hdhjf (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-hdhjf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-hdhjf
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age                From            Message
  ----    ------     ----               ----            -------
  Normal  Scheduled  63s                                Successfully assigned default/nginx to k8s02
  Normal  Pulled     47s                kubelet, k8s02  Successfully pulled image "nginx" in 16.098712681s
  Normal  Pulling    14s (x2 over 63s)  kubelet, k8s02  Pulling image "nginx"
  Normal  Created    11s (x2 over 47s)  kubelet, k8s02  Created container nginx
  Normal  Started    11s (x2 over 47s)  kubelet, k8s02  Started container nginx
  Normal  Pulled     11s                kubelet, k8s02  Successfully pulled image "nginx" in 3.162238195s
  # The kubelet's restart of the nginx container is clearly visible in these events</code>

So the kubelet can restart the containers in a pod. But what happens to this policy if the node itself fails? Let's stop kubelet and kube-proxy on k8s02 in turn to simulate a failure of that node:

<code>[root@k8s02 ~]# systemctl  stop kubelet 
[root@k8s02 ~]# systemctl  stop kube-proxy 
[root@k8s02 ~]# systemctl  status kube-proxy 
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: inactive (dead) since 日 2020-09-06 03:06:03 CST; 23s ago
  Process: 971 ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS (code=killed, signal=TERM)
 Main PID: 971 (code=killed, signal=TERM)
​
9月 05 23:52:04 k8s02 systemd[1]: Ignoring invalid environment assignment '--proxy-mode=ipvs': /opt/kubernetes/cfg/kube-proxy.conf
9月 05 23:52:04 k8s02 systemd[1]: Started Kubernetes Proxy.
9月 05 23:52:16 k8s02 kube-proxy[971]: E0905 23:52:16.561493     971 node.go:125] Failed to retrieve node info: Get "https...timeout
9月 05 23:52:23 k8s02 kube-proxy[971]: E0905 23:52:23.654714     971 node.go:125] Failed to retrieve node info: nodes "k8s...r scope
9月 06 03:06:03 k8s02 systemd[1]: Stopping Kubernetes Proxy...
9月 06 03:06:03 k8s02 systemd[1]: Stopped Kubernetes Proxy.
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s02 ~]# systemctl  status kubelet
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: inactive (dead) since 日 2020-09-06 03:05:57 CST; 35s ago
  Process: 1183 ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS (code=exited, status=0/SUCCESS)
 Main PID: 1183 (code=exited, status=0/SUCCESS)
​
9月 05 23:52:30 k8s02 kubelet[1183]: E0905 23:52:30.456897    1183 remote_runtime.go:113] RunPodSandbox from runtime service fail...
9月 05 23:52:30 k8s02 kubelet[1183]: E0905 23:52:30.456938    1183 kuberuntime_sandbox.go:69] CreatePodSandbox for pod "nginx-679...
9月 05 23:52:30 k8s02 kubelet[1183]: E0905 23:52:30.456951    1183 kuberuntime_manager.go:730] createPodSandbox for pod "nginx-67...
9月 05 23:52:30 k8s02 kubelet[1183]: E0905 23:52:30.457009    1183 pod_workers.go:191] Error syncing pod ee15155c-faab-424...685b)" 
9月 06 02:44:26 k8s02 kubelet[1183]: E0906 02:44:26.124263    1183 remote_runtime.go:329] ContainerStatus "4413a8d21a2b72b...68fb93c
9月 06 02:44:26 k8s02 kubelet[1183]: E0906 02:44:26.124934    1183 remote_runtime.go:329] ContainerStatus "35eee7e6a06d70f...91c626b
9月 06 02:51:40 k8s02 kubelet[1183]: E0906 02:51:40.490991    1183 remote_runtime.go:329] ContainerStatus "6489db11518634b...332343e
9月 06 02:51:41 k8s02 kubelet[1183]: E0906 02:51:41.660419    1183 kubelet_pods.go:1250] Failed killing the pod "nginx": f...32343e"
9月 06 03:05:57 k8s02 systemd[1]: Stopping Kubernetes Kubelet...
9月 06 03:05:57 k8s02 systemd[1]: Stopped Kubernetes Kubelet.
Hint: Some lines were ellipsized, use -l to show in full.</code>

Then let's observe from node k8s01:

<code>[root@k8s01 yaml]# kubectl  get node 
NAME    STATUS     ROLES    AGE   VERSION
k8s01   Ready         9d    v1.19.0
k8s02   NotReady      9d    v1.19.0
k8s03   Ready         9d    v1.19.0</code>

Node k8s02 is now reported as failed. What about the pod?

<code>[root@k8s01 yaml]# kubectl  get po -o wide 
NAME    READY   STATUS    RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES
nginx   1/1     Running   1          8m18s   10.244.1.11   k8s02              
[root@k8s01 yaml]# kubectl exec -it nginx sh 
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server: error dialing backend: dial tcp 192.168.1.32:10250: connect: connection refused</code>

Now what happens if we kill the pod's process again?

<code>[root@k8s02 ~]# ps -ef | grep nginx
root      59156  59141  0 03:00 ?        00:00:00 nginx: master process nginx -g daemon off;
101       59203  59156  0 03:00 ?        00:00:00 nginx: worker process
root      61301  49835  0 03:10 pts/0    00:00:00 grep --color=auto nginx
[root@k8s02 ~]# kill  59156
# Viewed from k8s01:
[root@k8s01 yaml]# kubectl  get po -o wide 
NAME    READY   STATUS        RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
nginx   1/1     Terminating   1          12m   10.244.1.11   k8s02              </code>

The pod is being deleted. Let's check its events:

<code>[root@k8s01 yaml]# kubectl  describe pod nginx
Name:                      nginx
Namespace:                 default
Priority:                  0
Node:                      k8s02/192.168.1.32
Start Time:                Sun, 06 Sep 2020 02:59:31 +0800
Labels:                    
Annotations:               
Status:                    Terminating (lasts 48s)
Termination Grace Period:  30s
IP:                        10.244.1.11
IPs:
  IP:  10.244.1.11
Containers:
  nginx:
    Container ID:   docker://cf21ee868641ba2da52321e16fe7e43a0aca61b7ebcb0c4a4d62ecb4a3f9787a
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:b0ad43f7ee5edbc0effbc14645ae7055e21bc1973aee5150745632a24a752661
    Port:           
    Host Port:      
    State:          Running
      Started:      Sun, 06 Sep 2020 03:00:24 +0800
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 06 Sep 2020 02:59:48 +0800
      Finished:     Sun, 06 Sep 2020 03:00:20 +0800
    Ready:          True
    Restart Count:  1
    Environment:    
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hdhjf (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-hdhjf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-hdhjf
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason        Age                From             Message
  ----     ------        ----               ----             -------
  Normal   Scheduled     13m                                 Successfully assigned default/nginx to k8s02
  Normal   Pulled        13m                kubelet, k8s02   Successfully pulled image "nginx" in 16.098712681s
  Normal   Pulling       12m (x2 over 13m)  kubelet, k8s02   Pulling image "nginx"
  Normal   Created       12m (x2 over 13m)  kubelet, k8s02   Created container nginx
  Normal   Started       12m (x2 over 13m)  kubelet, k8s02   Started container nginx
  Normal   Pulled        12m                kubelet, k8s02   Successfully pulled image "nginx" in 3.162238195s
  Warning  NodeNotReady  6m23s              node-controller  Node is not ready</code>

Now let's bring the node services back and see whether the pod recovers:

<code>[root@k8s02 ~]# systemctl  start kubelet
[root@k8s02 ~]# systemctl  start kube-proxy
[root@k8s01 yaml]# kubectl  get po -o wide 
No resources found in default namespace.
[root@k8s01 yaml]# </code>

As you can see, the node failure kept the pod from recovering automatically, and even after the node came back the pod was not restored. The pod's own fault-recovery ability is clearly limited, and in practice this would cause plenty of problems. So how should we handle it?

3. Getting to know the Deployment

3.1 Automatic failover with a Deployment

To get automatic failover for pods we need to meet another key Kubernetes resource: the Deployment. The Deployment is an extremely powerful controller provided by Kubernetes for managing stateless applications: through it we can have pods scheduled, rolled out in upgrades, and scaled up or down. So how do we create a Deployment? Let's start with the simplest one possible:

<code>apiVersion: apps/v1     # API group and version
kind: Deployment        # resource kind is Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1           # one replica
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx    # the nginx image
        name: nginx</code>

Let's create this resource:

<code>[root@k8s01 yaml]# kubectl  apply -f nginx-deployment.yaml 
[root@k8s01 yaml]# kubectl  get po -o wide 
NAME                     READY   STATUS    RESTARTS   AGE    IP            NODE    NOMINATED NODE   READINESS GATES
nginx-6799fc88d8-9lsjl   1/1     Running   0          109s   10.244.0.15   k8s01              </code>

The pod was automatically placed on node k8s01. What happens if we now simulate a kubelet failure on that node?

<code>[root@k8s01 yaml]# systemctl  stop kubelet 
[root@k8s01 yaml]# systemctl  stop kube-proxy
[root@k8s01 yaml]# ps -ef | grep nginx
root      70693  70678  0 03:36 ?        00:00:00 nginx: master process nginx -g daemon off;
101       70732  70693  0 03:36 ?        00:00:00 nginx: worker process
root      71641  50125  0 03:39 pts/0    00:00:00 grep --color=auto nginx
[root@k8s01 yaml]# kill 70693</code>

Now let's take a look at node k8s01:

<code>[root@k8s01 yaml]# kubectl  get node 
NAME    STATUS     ROLES    AGE   VERSION
k8s01   NotReady      9d    v1.19.0
k8s02   Ready         9d    v1.19.0
k8s03   Ready         9d    v1.19.0
[root@k8s01 yaml]# kubectl  describe pod nginx-6799fc88d8-9lsjl 
Name:         nginx-6799fc88d8-9lsjl
Namespace:    default
Priority:     0
Node:         k8s01/192.168.1.31
Start Time:   Sun, 06 Sep 2020 03:36:23 +0800
Labels:       app=nginx
              pod-template-hash=6799fc88d8
Annotations:  
Status:       Running
IP:           10.244.0.15
IPs:
  IP:           10.244.0.15
Controlled By:  ReplicaSet/nginx-6799fc88d8
Containers:
  nginx:
    Container ID:   docker://f86cb1313c120b7797ac843a17f23a3551de7e868cbfe8fd24ade70de1ede843
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:b0ad43f7ee5edbc0effbc14645ae7055e21bc1973aee5150745632a24a752661
    Port:           
    Host Port:      
    State:          Running
      Started:      Sun, 06 Sep 2020 03:36:26 +0800
    Ready:          True
    Restart Count:  0
    Environment:    
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hdhjf (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-hdhjf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-hdhjf
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason        Age    From             Message
  ----     ------        ----   ----             -------
  Normal   Scheduled     4m42s                   Successfully assigned default/nginx-6799fc88d8-9lsjl to k8s01
  Normal   Pulling       4m42s  kubelet, k8s01   Pulling image "nginx"
  Normal   Pulled        4m40s  kubelet, k8s01   Successfully pulled image "nginx" in 2.073509979s
  Normal   Created       4m40s  kubelet, k8s01   Created container nginx
  Normal   Started       4m40s  kubelet, k8s01   Started container nginx
  Warning  NodeNotReady  68s    node-controller  Node is not ready
  [root@k8s01 yaml]# kubectl  get po -o wide 
NAME                     READY   STATUS              RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
nginx-6799fc88d8-9lsjl   1/1     Terminating         1          13m   10.244.0.15   k8s01              
nginx-6799fc88d8-dvcj7   0/1     ContainerCreating   0          3s            k8s02              </code>

To our delight, after roughly five minutes (controlled by pod-eviction-timeout, default 5m0s) the pod on k8s01 was deleted automatically and a replacement was scheduled and started on k8s02: the pod was moved to a healthy node. In production, however, losing the only replica for five minutes is usually unacceptable. What other options do we have?
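
As a side note, the five-minute window also shows up in the pod's default tolerations seen in the describe output above (node.kubernetes.io/not-ready ... for 300s). If waiting that long is unacceptable, one option is to shorten those tolerations in the pod template; a minimal sketch, with 30 seconds chosen purely as an example value:

<code>    spec:
      tolerations:
      - key: node.kubernetes.io/not-ready
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 30   # evict after 30s on a not-ready node instead of the default 300s
      - key: node.kubernetes.io/unreachable
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 30
      containers:
      - image: nginx
        name: nginx</code>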

3.2 Multiple pod replicas with a Deployment

Looking back at the Deployment's YAML file, there is a line that sets the replica count. What happens if we change it?

<code>[root@k8s01 yaml]# vim nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
​
[root@k8s01 yaml]# kubectl  apply -f nginx-deployment.yaml 
deployment.apps/nginx configured
[root@k8s01 yaml]# kubectl  get pod -o wide 
NAME                     READY   STATUS    RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES
nginx-6799fc88d8-dvcj7   1/1     Running   0          7m55s   10.244.1.13   k8s02              
nginx-6799fc88d8-j9l4v   1/1     Running   0          22s     10.244.0.16   k8s01              
nginx-6799fc88d8-v48rj   1/1     Running   0          22s     10.244.2.15   k8s03              </code>

To our delight, the pod replica count has gone from 1 to 3. How was this achieved?

<code>[root@k8s01 yaml]# kubectl  describe deployment nginx 
Name:                   nginx
Namespace:              default
CreationTimestamp:      Sun, 06 Sep 2020 03:36:23 +0800
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nginx
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx
    Port:         
    Host Port:    
    Environment:  
    Mounts:       
  Volumes:        
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  
NewReplicaSet:   nginx-6799fc88d8 (3/3 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  21m   deployment-controller  Scaled up replica set nginx-6799fc88d8 to 1
  Normal  ScalingReplicaSet  77s   deployment-controller  Scaled up replica set nginx-6799fc88d8 to 3</code>

The Deployment's events show that deployment-controller scaled the nginx ReplicaSet to 3. The ReplicaSet is the Kubernetes controller that actually creates pods from the template and keeps the desired number of replicas running.
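
You can inspect that ReplicaSet directly; for example (commands only, output omitted):

<code># The Deployment does not create pods itself; it owns a ReplicaSet,
# and the ReplicaSet keeps the desired number of pod replicas running.
kubectl get replicaset -l app=nginx
kubectl describe replicaset nginx-6799fc88d8</code>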

3.3 Rolling upgrades with a Deployment

Beyond that, this core resource can also perform rolling upgrades, and the rollout rate can be controlled, mainly through the following two parameters:

<code>maxSurge: the maximum number of pod instances allowed above the desired replica count during the rollout
maxUnavailable: the maximum number of pods that may be unavailable at any moment during the rollout</code>

Let's simulate an upgrade. First, create a YAML file for the new version:

<code>apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 0
  selector:
    matchLabels:
      app: nginx
  replicas: 8
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.12.1
        name: nginx</code>

Now start the upgrade:

<code># First scale the replica count to 8 to amplify the effect and make the rolling upgrade easier to observe
[root@k8s01 yaml]# kubectl  scale deployment nginx --replicas=8
[root@k8s01 yaml]# kubectl  get pod -o wide 
NAME                     READY   STATUS    RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES
nginx-6799fc88d8-72kkv   1/1     Running   0          4m41s   10.244.1.14   k8s02              
nginx-6799fc88d8-7tl5d   1/1     Running   0          4m41s   10.244.1.15   k8s02              
nginx-6799fc88d8-dvcj7   1/1     Running   0          29m     10.244.1.13   k8s02              
nginx-6799fc88d8-j9l4v   1/1     Running   0          22m     10.244.0.16   k8s01              
nginx-6799fc88d8-jhwt6   1/1     Running   0          4m41s   10.244.0.17   k8s01              
nginx-6799fc88d8-m4wxm   1/1     Running   0          4m41s   10.244.2.16   k8s03              
nginx-6799fc88d8-mg6jl   1/1     Running   0          4m41s   10.244.0.18   k8s01              
nginx-6799fc88d8-v48rj   1/1     Running   0          22m     10.244.2.15   k8s03              
# Run the upgrade
[root@k8s01 yaml]# kubectl  apply -f nginx-deployment-update.yaml 
deployment.apps/nginx configured
# Rolling upgrade in progress
[root@k8s01 yaml]# kubectl  get pod -o wide
NAME                     READY   STATUS              RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
nginx-599c4c9ccc-4z7nn   0/1     ContainerCreating   0          15s           k8s02              
nginx-599c4c9ccc-kbr6v   0/1     ContainerCreating   0          15s           k8s01              
nginx-6799fc88d8-72kkv   1/1     Running             0          10m   10.244.1.14   k8s02              
nginx-6799fc88d8-7tl5d   1/1     Running             0          10m   10.244.1.15   k8s02              
nginx-6799fc88d8-dvcj7   1/1     Running             0          35m   10.244.1.13   k8s02              
nginx-6799fc88d8-j9l4v   1/1     Running             0          28m   10.244.0.16   k8s01              
nginx-6799fc88d8-jhwt6   1/1     Running             0          10m   10.244.0.17   k8s01              
nginx-6799fc88d8-m4wxm   1/1     Running             0          10m   10.244.2.16   k8s03              
nginx-6799fc88d8-mg6jl   1/1     Running             0          10m   10.244.0.18   k8s01              
nginx-6799fc88d8-v48rj   1/1     Running             0          28m   10.244.2.15   k8s03              
# Rolling upgrade finished
[root@k8s01 yaml]# kubectl  get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES
nginx-599c4c9ccc-2f4fc   1/1     Running   0          2m15s   10.244.2.17   k8s03              
nginx-599c4c9ccc-4cckr   1/1     Running   0          46s     10.244.0.20   k8s01              
nginx-599c4c9ccc-4vh5f   1/1     Running   0          32s     10.244.1.18   k8s02              
nginx-599c4c9ccc-4z7nn   1/1     Running   0          4m4s    10.244.1.16   k8s02              
nginx-599c4c9ccc-87hf7   1/1     Running   0          28s     10.244.0.21   k8s01              
nginx-599c4c9ccc-kbr6v   1/1     Running   0          4m4s    10.244.0.19   k8s01              
nginx-599c4c9ccc-mk6c2   1/1     Running   0          74s     10.244.1.17   k8s02              
nginx-599c4c9ccc-q4wtg   1/1     Running   0          41s     10.244.2.18   k8s03              </code>

The rolling upgrade of nginx has finished. Let's look at the Deployment's events:

<code>[root@k8s01 yaml]# kubectl  describe  deployment nginx
Name:                   nginx
Namespace:              default
CreationTimestamp:      Sun, 06 Sep 2020 03:36:23 +0800
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 2
Selector:               app=nginx
Replicas:               8 desired | 8 updated | 8 total | 8 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  0 max unavailable, 2 max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.12.1
    Port:         
    Host Port:    
    Environment:  
    Mounts:       
  Volumes:        
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  
NewReplicaSet:   nginx-599c4c9ccc (8/8 replicas created)
Events:
  Type    Reason             Age                From                   Message
  ----    ------             ----               ----                   -------
  Normal  ScalingReplicaSet  53m                deployment-controller  Scaled up replica set nginx-6799fc88d8 to 1
  Normal  ScalingReplicaSet  32m                deployment-controller  Scaled up replica set nginx-6799fc88d8 to 3
  Normal  ScalingReplicaSet  15m                deployment-controller  Scaled up replica set nginx-6799fc88d8 to 8
  Normal  ScalingReplicaSet  5m                 deployment-controller  Scaled up replica set nginx-599c4c9ccc to 2
  Normal  ScalingReplicaSet  3m11s              deployment-controller  Scaled down replica set nginx-6799fc88d8 to 7
  Normal  ScalingReplicaSet  3m11s              deployment-controller  Scaled up replica set nginx-599c4c9ccc to 3
  Normal  ScalingReplicaSet  2m10s              deployment-controller  Scaled up replica set nginx-599c4c9ccc to 4
  Normal  ScalingReplicaSet  2m10s              deployment-controller  Scaled down replica set nginx-6799fc88d8 to 6
  Normal  ScalingReplicaSet  102s               deployment-controller  Scaled down replica set nginx-6799fc88d8 to 5
  Normal  ScalingReplicaSet  102s               deployment-controller  Scaled up replica set nginx-599c4c9ccc to 5
  Normal  ScalingReplicaSet  97s                deployment-controller  Scaled down replica set nginx-6799fc88d8 to 4
  Normal  ScalingReplicaSet  97s                deployment-controller  Scaled up replica set nginx-599c4c9ccc to 6
  Normal  ScalingReplicaSet  65s (x6 over 88s)  deployment-controller  (combined from similar events): Scaled down replica set nginx-6799fc88d8 to 0</code>

From the events we can see that Kubernetes, via deployment-controller, first scaled the ReplicaSet nginx-599c4c9ccc up to 2; once those pods were up it scaled nginx-6799fc88d8 down to 7 (the exact pace depends on how quickly the new pods start), and it kept alternating this way until nginx-599c4c9ccc reached 8 and nginx-6799fc88d8 reached 0, at which point the rolling upgrade was complete.
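
A convenient way to watch this hand-over as it happens is kubectl rollout status, which blocks until the new ReplicaSet is fully rolled out; watching the ReplicaSets themselves shows the two sides scaling in opposite directions:

<code># Blocks and prints progress until the rollout completes (or fails)
kubectl rollout status deployment nginx
# Watch the old and new ReplicaSets scale in opposite directions
kubectl get replicaset -l app=nginx -w</code>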

Beyond that, a Deployment can also roll a release back:

<code>[root@k8s01 yaml]# kubectl  rollout history deployment nginx
deployment.apps/nginx 
REVISION  CHANGE-CAUSE
1         
2         
[root@k8s01 yaml]# kubectl  rollout undo deployment nginx
deployment.apps/nginx rolled back
[root@k8s01 yaml]# kubectl  get pod -o wide 
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
nginx-6799fc88d8-4wn62   1/1     Running   0          79s   10.244.1.19   k8s02              
nginx-6799fc88d8-5rz78   1/1     Running   0          47s   10.244.0.24   k8s01              
nginx-6799fc88d8-ckdfx   1/1     Running   0          60s   10.244.2.19   k8s03              
nginx-6799fc88d8-f6dr7   1/1     Running   0          51s   10.244.1.21   k8s02              
nginx-6799fc88d8-ghhp2   1/1     Running   0          55s   10.244.2.20   k8s03              
nginx-6799fc88d8-msl22   1/1     Running   0          55s   10.244.0.23   k8s01              
nginx-6799fc88d8-qmcxq   1/1     Running   0          79s   10.244.0.22   k8s01              
nginx-6799fc88d8-wvmw9   1/1     Running   0          60s   10.244.1.20   k8s02              </code>

The Deployment has been rolled back to the previous version. Let's look at its events:

<code>[root@k8s01 yaml]# kubectl  describe deployment nginx
Name:                   nginx
Namespace:              default
CreationTimestamp:      Sun, 06 Sep 2020 03:36:23 +0800
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 3
Selector:               app=nginx
Replicas:               8 desired | 8 updated | 8 total | 8 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  0 max unavailable, 2 max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx
    Port:         
    Host Port:    
    Environment:  
    Mounts:       
  Volumes:        
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  
NewReplicaSet:   nginx-6799fc88d8 (8/8 replicas created)
Events:
  Type    Reason             Age                 From                   Message
  ----    ------             ----                ----                   -------
  Normal  ScalingReplicaSet  41m                 deployment-controller  Scaled up replica set nginx-6799fc88d8 to 8
  Normal  ScalingReplicaSet  31m                 deployment-controller  Scaled up replica set nginx-599c4c9ccc to 2
  Normal  ScalingReplicaSet  29m                 deployment-controller  Scaled down replica set nginx-6799fc88d8 to 7
  Normal  ScalingReplicaSet  29m                 deployment-controller  Scaled up replica set nginx-599c4c9ccc to 3
  Normal  ScalingReplicaSet  28m                 deployment-controller  Scaled up replica set nginx-599c4c9ccc to 4
  Normal  ScalingReplicaSet  28m                 deployment-controller  Scaled down replica set nginx-6799fc88d8 to 6
  Normal  ScalingReplicaSet  27m                 deployment-controller  Scaled down replica set nginx-6799fc88d8 to 5
  Normal  ScalingReplicaSet  27m                 deployment-controller  Scaled up replica set nginx-599c4c9ccc to 5
  Normal  ScalingReplicaSet  27m                 deployment-controller  Scaled down replica set nginx-6799fc88d8 to 4
  Normal  ScalingReplicaSet  27m                 deployment-controller  Scaled up replica set nginx-599c4c9ccc to 6
  Normal  ScalingReplicaSet  118s                deployment-controller  Scaled up replica set nginx-6799fc88d8 to 2
  Normal  ScalingReplicaSet  99s                 deployment-controller  Scaled down replica set nginx-599c4c9ccc to 6
  Normal  ScalingReplicaSet  99s                 deployment-controller  Scaled up replica set nginx-6799fc88d8 to 4
  Normal  ScalingReplicaSet  99s                 deployment-controller  Scaled down replica set nginx-599c4c9ccc to 7
  Normal  ScalingReplicaSet  99s (x2 over 58m)   deployment-controller  Scaled up replica set nginx-6799fc88d8 to 3
  Normal  ScalingReplicaSet  94s                 deployment-controller  Scaled down replica set nginx-599c4c9ccc to 5
  Normal  ScalingReplicaSet  94s                 deployment-controller  Scaled up replica set nginx-6799fc88d8 to 5
  Normal  ScalingReplicaSet  94s                 deployment-controller  Scaled down replica set nginx-599c4c9ccc to 4
  Normal  ScalingReplicaSet  94s                 deployment-controller  Scaled up replica set nginx-6799fc88d8 to 6
  Normal  ScalingReplicaSet  73s (x12 over 27m)  deployment-controller  (combined from similar events): Scaled down replica set nginx-599c4c9ccc to 0</code>

Following the same strategy as the rolling upgrade, the Deployment returned to the earlier version.
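
Note that kubectl rollout undo goes back to the previous revision by default; if several revisions are kept you can also target one explicitly (revision 1 below is just an example):

<code># Roll back to a specific revision rather than simply the previous one
kubectl rollout undo deployment nginx --to-revision=1</code>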

The Deployment is clearly one of the most important resources in Kubernetes. Later posts will analyse it in more detail and look for best practices for this controller.

