Container Orchestration System K8s: Node Taints and Pod Tolerations

  In the previous post we looked at how the kube-scheduler works in k8s and how pod scheduling policies are defined; for a refresher, see //www.cnblogs.com/qiuhom-1874/p/14243312.html. Today we will talk about node taints and pod tolerations in k8s.

  What is a node taint?

  A node taint is somewhat like a label or an annotation on a node: all three are metadata describing the node, and a taint is defined in much the same key/value form. Unlike a node label, however, a taint's key/value data also carries an effect, which describes what the taint does on that node. In k8s a taint can have one of three effects. The first is NoSchedule, which refuses to schedule intolerant pods onto the node. The second is PreferNoSchedule, which means the scheduler tries to avoid placing such pods on the node but may still do so. The third is NoExecute, which not only refuses to schedule intolerant pods onto the node but also evicts intolerant pods already running there; it is stricter than NoSchedule. In short, a taint is a node attribute that expresses which pods the node repels.
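Taints live in the node object's spec. A minimal sketch of how taints might look on a node object (the taint keys and values here are hypothetical, chosen only for illustration):

```yaml
# Hypothetical node spec fragment: taints is a list of
# key/value/effect entries (value may be omitted).
apiVersion: v1
kind: Node
metadata:
  name: node01.k8s.org
spec:
  taints:
  - key: dedicated            # hypothetical key
    value: gpu                # optional value
    effect: NoSchedule        # reject intolerant pods at scheduling time
  - key: maintenance          # hypothetical key
    effect: NoExecute         # also evict intolerant pods already running
```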

  Pod tolerations for node taints

  As the name suggests, for a pod to run on a node that carries taints, the pod must tolerate those taints; such a definition is called the pod's toleration of node taints. A toleration is defined in the pod and describes how to match node taints. There are two matching modes: equality matching and existence matching. Equality matching means the pod's toleration must equal the taint in all of its attributes, where a taint's attributes are its key, value, and effect; that is, the toleration must have the same key, value, and effect as the taint. Its operator is Equal. Existence matching means the toleration only needs to match the taint's key and effect, with value left out of the comparison; as long as key and effect match, the taint is tolerated. Its operator is Exists.
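The two operators can be sketched side by side. Assuming a node carries the taint test=test:NoSchedule, either of the tolerations below would match it (this is only an illustrative fragment of a pod spec):

```yaml
tolerations:
# Equality matching: key, value, and effect must all equal the taint's.
- key: test
  operator: Equal
  value: test
  effect: NoSchedule
# Existence matching: only key and effect are compared; value is ignored.
- key: test
  operator: Exists
  effect: NoSchedule
```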

  The relationship between node taints and pod tolerations

  Tip: as the figure above shows, only pods that tolerate a node's taints can be scheduled onto that node; a pod that cannot tolerate them will never be scheduled there (unless the taint's effect is PreferNoSchedule).

  Managing node taints

  Syntax for adding taints to a node

Usage:
  kubectl taint NODE NAME KEY_1=VAL_1:TAINT_EFFECT_1 ... KEY_N=VAL_N:TAINT_EFFECT_N [options]

  Tip: use the kubectl taint node command to add taints to a node; just specify the node name followed by the taints. Multiple taints can be given, separated by spaces.

  Example: add the taint test=test:NoSchedule to node01

[root@master01 ~]# kubectl taint node node01.k8s.org test=test:NoSchedule
node/node01.k8s.org tainted
[root@master01 ~]#

  View a node's taints

[root@master01 ~]# kubectl describe node node01.k8s.org |grep Taint
Taints:             test=test:NoSchedule
[root@master01 ~]# 

  Remove a taint

[root@master01 ~]# kubectl describe node node01.k8s.org |grep Taint
Taints:             test=test:NoSchedule
[root@master01 ~]# kubectl taint node node01.k8s.org test:NoSchedule-
node/node01.k8s.org untainted
[root@master01 ~]# kubectl describe node node01.k8s.org |grep Taint  
Taints:             <none>
[root@master01 ~]# 

  Tip: to remove a taint you can append "-" to the taint's key and effect, which removes that specific taint; or append "-" to the key alone, which removes all taints with that key.

  Defining pod tolerations

  Example: create a pod that tolerates the taint node-role.kubernetes.io/master:NoSchedule on a node

[root@master01 ~]# cat pod-demo-taints.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo
  labels:
    app: db
spec:
  containers:
  - name: redis
    image: redis:4-alpine
    ports:
    - name: redis
      containerPort: 6379
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
[root@master01 ~]# 

  Tip: a pod's tolerations for node taints are defined with the tolerations field, which is a list of objects. The key field names the taint key to match and must equal the taint's key. The operator field specifies how the toleration matches the taint; it has only two values, Equal and Exists. The effect field gives the effect to match and usually takes one of three values: NoSchedule, PreferNoSchedule, or NoExecute; it must equal the taint's effect. The manifest above says the redis-demo pod can tolerate a node carrying the taint node-role.kubernetes.io/master:NoSchedule.

  Apply the manifest

[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo created
[root@master01 ~]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo   1/1     Running   0          7s    10.244.4.35   node04.k8s.org   <none>           <none>
[root@master01 ~]# 

  Tip: the pod is running on node04. Note that a toleration only means the pod may run on a node with that taint, not that it must run there; it can just as well run on nodes without the taint.

  Verification: delete the pod, add the taint test:NoSchedule to node01 through node04, re-apply the manifest, and see whether the pod still runs

[root@master01 ~]# kubectl delete -f pod-demo-taints.yaml
pod "redis-demo" deleted
[root@master01 ~]# kubectl taint node node01.k8s.org test:NoSchedule
node/node01.k8s.org tainted
[root@master01 ~]# kubectl taint node node02.k8s.org test:NoSchedule 
node/node02.k8s.org tainted
[root@master01 ~]# kubectl taint node node03.k8s.org test:NoSchedule 
node/node03.k8s.org tainted
[root@master01 ~]# kubectl taint node node04.k8s.org test:NoSchedule 
node/node04.k8s.org tainted
[root@master01 ~]# kubectl describe node node01.k8s.org |grep Taints
Taints:             test:NoSchedule
[root@master01 ~]# kubectl describe node node02.k8s.org |grep Taints 
Taints:             test:NoSchedule
[root@master01 ~]# kubectl describe node node03.k8s.org |grep Taints 
Taints:             test:NoSchedule
[root@master01 ~]# kubectl describe node node04.k8s.org |grep Taints 
Taints:             test:NoSchedule
[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo created
[root@master01 ~]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE               NOMINATED NODE   READINESS GATES
redis-demo   1/1     Running   0          18s   10.244.0.14   master01.k8s.org   <none>           <none>
[root@master01 ~]# 

  Tip: the pod was scheduled onto the master node, because it can tolerate the master's taint; it cannot tolerate the taints on the other nodes, so the master is the only place it can run.

  Remove the toleration from the pod definition, re-apply the manifest, and see whether the pod still runs

[root@master01 ~]# kubectl delete pod redis-demo 
pod "redis-demo" deleted
[root@master01 ~]# cat pod-demo-taints.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo
  labels:
    app: db
spec:
  containers:
  - name: redis
    image: redis:4-alpine
    ports:
    - name: redis
      containerPort: 6379
[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo created
[root@master01 ~]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
redis-demo   0/1     Pending   0          6s    <none>   <none>   <none>           <none>
[root@master01 ~]# 

  Tip: the pod is Pending, because it cannot tolerate any node's taint; every node repels it.

  Example: define a toleration with equality matching

[root@master01 ~]# cat pod-demo-taints.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo
  labels:
    app: db
spec:
  containers:
  - name: redis
    image: redis:4-alpine
    ports:
    - name: redis
      containerPort: 6379
  tolerations:
  - key: test
    operator: Equal
    value: test
    effect: NoSchedule

[root@master01 ~]# 

  Tip: an equality-matching toleration must also specify the taint's value attribute.

  Delete the old pod and apply the manifest

[root@master01 ~]# kubectl delete pod redis-demo
pod "redis-demo" deleted
[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo created
[root@master01 ~]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
redis-demo   0/1     Pending   0          4s    <none>   <none>   <none>           <none>
[root@master01 ~]# 

  Tip: after applying the manifest the pod is Pending, because no node satisfies the pod's toleration, so it cannot be scheduled anywhere.

  Verification: change node01's taint to test=test:NoSchedule

[root@master01 ~]# kubectl describe node node01.k8s.org |grep Taints
Taints:             test:NoSchedule
[root@master01 ~]# kubectl taint node node01.k8s.org test=test:NoSchedule --overwrite 
node/node01.k8s.org modified
[root@master01 ~]# kubectl describe node node01.k8s.org |grep Taints                 
Taints:             test=test:NoSchedule
[root@master01 ~]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo   1/1     Running   0          4m46s   10.244.1.44   node01.k8s.org   <none>           <none>
[root@master01 ~]# 

  Tip: once node01's taint is changed to test=test:NoSchedule, the pod is scheduled onto node01.

  Verification: change node01's taint back to test:NoSchedule and see whether the pod is evicted

[root@master01 ~]# kubectl taint node node01.k8s.org test:NoSchedule --overwrite     
node/node01.k8s.org modified
[root@master01 ~]# kubectl describe node node01.k8s.org |grep Taints                 
Taints:             test:NoSchedule
[root@master01 ~]# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo   1/1     Running   0          7m27s   10.244.1.44   node01.k8s.org   <none>           <none>
[root@master01 ~]# 

  Tip: after the taint is changed back to test:NoSchedule the pod is not evicted, which shows that a NoSchedule taint only takes effect at scheduling time and has no effect on pods already placed.

  Example: define a pod toleration of test:PreferNoSchedule

[root@master01 ~]# cat pod-demo-taints.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo1
  labels:
    app: db
spec:
  containers:
  - name: redis
    image: redis:4-alpine
    ports:
    - name: redis
      containerPort: 6379
  tolerations:
  - key: test
    operator: Exists
    effect: PreferNoSchedule

[root@master01 ~]# 

  Apply the manifest

[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo1 created
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo    1/1     Running   0          11m   10.244.1.44   node01.k8s.org   <none>           <none>
redis-demo1   0/1     Pending   0          6s    <none>        <none>           <none>           <none>
[root@master01 ~]# 

  Tip: the pod is Pending. The reason is that every node still carries the taint test:NoSchedule, which this toleration (whose effect is PreferNoSchedule) does not match, so the pod cannot be scheduled anywhere.

  Add the taint test:PreferNoSchedule to node02

[root@master01 ~]# kubectl describe node node02.k8s.org |grep Taints 
Taints:             test:NoSchedule
[root@master01 ~]# kubectl taint node node02.k8s.org test:PreferNoSchedule 
node/node02.k8s.org tainted
[root@master01 ~]# kubectl describe node node02.k8s.org |grep -A 1 Taints
Taints:             test:NoSchedule
                    test:PreferNoSchedule
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo    1/1     Running   0          18m     10.244.1.44   node01.k8s.org   <none>           <none>
redis-demo1   0/1     Pending   0          6m21s   <none>        <none>           <none>           <none>
[root@master01 ~]# 

  Tip: node02 now carries two taints, and the pod still does not run, because node02 also has the taint test:NoSchedule, which the pod's toleration cannot tolerate.

  Verification: change the taints on node01, node03, and node04 to test:PreferNoSchedule, change the pod's toleration to test:NoSchedule, re-apply the manifest, and watch how the pod is scheduled

[root@master01 ~]# kubectl taint node node01.k8s.org test:NoSchedule-     
node/node01.k8s.org untainted
[root@master01 ~]# kubectl taint node node03.k8s.org test:NoSchedule- 
node/node03.k8s.org untainted
[root@master01 ~]# kubectl taint node node04.k8s.org test:NoSchedule- 
node/node04.k8s.org untainted
[root@master01 ~]# kubectl taint node node01.k8s.org test:PreferNoSchedule
node/node01.k8s.org tainted
[root@master01 ~]# kubectl taint node node03.k8s.org test:PreferNoSchedule  
node/node03.k8s.org tainted
[root@master01 ~]# kubectl taint node node04.k8s.org test:PreferNoSchedule 
node/node04.k8s.org tainted
[root@master01 ~]# kubectl describe node node01.k8s.org |grep -A 1 Taints 
Taints:             test:PreferNoSchedule
Unschedulable:      false
[root@master01 ~]# kubectl describe node node02.k8s.org |grep -A 1 Taints 
Taints:             test:NoSchedule
                    test:PreferNoSchedule
[root@master01 ~]# kubectl describe node node03.k8s.org |grep -A 1 Taints 
Taints:             test:PreferNoSchedule
Unschedulable:      false
[root@master01 ~]# kubectl describe node node04.k8s.org |grep -A 1 Taints 
Taints:             test:PreferNoSchedule
Unschedulable:      false
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo    1/1     Running   0          31m   10.244.1.44   node01.k8s.org   <none>           <none>
redis-demo1   1/1     Running   0          19m   10.244.1.45   node01.k8s.org   <none>           <none>
[root@master01 ~]# kubectl delete pod --all
pod "redis-demo" deleted
pod "redis-demo1" deleted
[root@master01 ~]# cat pod-demo-taints.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo1
  labels:
    app: db
spec:
  containers:
  - name: redis
    image: redis:4-alpine
    ports:
    - name: redis
      containerPort: 6379
  tolerations:
  - key: test
    operator: Exists
    effect: NoSchedule

[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo1 created
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo1   1/1     Running   0          5s    10.244.4.36   node04.k8s.org   <none>           <none>
[root@master01 ~]# 

  Tip: once the test:NoSchedule taints were removed from node01, node03, and node04, the pending redis-demo1 pod was scheduled onto node01, node01's taint being the first one removed. After changing the pod's toleration to test:NoSchedule and re-applying the manifest, the pod was scheduled onto node04. Note that a toleration whose effect is NoSchedule does not actually match a PreferNoSchedule taint; the pod can land on these nodes because PreferNoSchedule is only a soft preference and never blocks scheduling outright.

  Example: define a pod toleration of test:NoExecute

[root@master01 ~]# cat pod-demo-taints.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo2
  labels:
    app: db
spec:
  containers:
  - name: redis
    image: redis:4-alpine
    ports:
    - name: redis
      containerPort: 6379
  tolerations:
  - key: test
    operator: Exists
    effect: NoExecute
[root@master01 ~]# 

  Apply the manifest

[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo2 created
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo1   1/1     Running   0          35m   10.244.4.36   node04.k8s.org   <none>           <none>
redis-demo2   1/1     Running   0          5s    10.244.4.38   node04.k8s.org   <none>           <none>
[root@master01 ~]# 

  Tip: the pod was scheduled onto node04 and runs there; node04 only carries the taint test:PreferNoSchedule, which, being a soft preference, does not block the pod even though the toleration's effect is NoExecute.

  Verification: change every node's taint to test:NoSchedule, delete the existing pods, re-apply the manifest, and see whether the pod still runs

[root@master01 ~]# kubectl taint node node01.k8s.org test-
node/node01.k8s.org untainted
[root@master01 ~]# kubectl taint node node02.k8s.org test- 
node/node02.k8s.org untainted
[root@master01 ~]# kubectl taint node node03.k8s.org test- 
node/node03.k8s.org untainted
[root@master01 ~]# kubectl taint node node04.k8s.org test- 
node/node04.k8s.org untainted
[root@master01 ~]# kubectl taint node node01.k8s.org test:NoSchedule
node/node01.k8s.org tainted
[root@master01 ~]# kubectl taint node node02.k8s.org test:NoSchedule 
node/node02.k8s.org tainted
[root@master01 ~]# kubectl taint node node03.k8s.org test:NoSchedule 
node/node03.k8s.org tainted
[root@master01 ~]# kubectl taint node node04.k8s.org test:NoSchedule 
node/node04.k8s.org tainted
[root@master01 ~]# kubectl describe node node01.k8s.org |grep -A 1 Taints
Taints:             test:NoSchedule
Unschedulable:      false
[root@master01 ~]# kubectl describe node node02.k8s.org |grep -A 1 Taints 
Taints:             test:NoSchedule
Unschedulable:      false
[root@master01 ~]# kubectl describe node node03.k8s.org |grep -A 1 Taints 
Taints:             test:NoSchedule
Unschedulable:      false
[root@master01 ~]# kubectl describe node node04.k8s.org |grep -A 1 Taints 
Taints:             test:NoSchedule
Unschedulable:      false
[root@master01 ~]# kubectl delete pod --all
pod "redis-demo1" deleted
pod "redis-demo2" deleted
[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo2 created
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
redis-demo2   0/1     Pending   0          6s    <none>   <none>   <none>           <none>
[root@master01 ~]# 

  Tip: the pod is Pending, which shows that a toleration with effect NoExecute cannot tolerate a taint whose effect is NoSchedule.

  Delete the pod, change every node's taint to test:NoExecute, change the pod's toleration to NoSchedule, then apply the manifest and watch how the pod is scheduled

[root@master01 ~]# kubectl delete pod --all
pod "redis-demo2" deleted
[root@master01 ~]# kubectl taint node node01.k8s.org test-               
node/node01.k8s.org untainted
[root@master01 ~]# kubectl taint node node02.k8s.org test- 
node/node02.k8s.org untainted
[root@master01 ~]# kubectl taint node node03.k8s.org test- 
node/node03.k8s.org untainted
[root@master01 ~]# kubectl taint node node04.k8s.org test- 
node/node04.k8s.org untainted
[root@master01 ~]# kubectl taint node node01.k8s.org test:NoExecute
node/node01.k8s.org tainted
[root@master01 ~]# kubectl taint node node02.k8s.org test:NoExecute 
node/node02.k8s.org tainted
[root@master01 ~]# kubectl taint node node03.k8s.org test:NoExecute 
node/node03.k8s.org tainted
[root@master01 ~]# kubectl taint node node04.k8s.org test:NoExecute 
node/node04.k8s.org tainted
[root@master01 ~]# kubectl describe node node01.k8s.org |grep -A 1 Taints
Taints:             test:NoExecute
Unschedulable:      false
[root@master01 ~]# kubectl describe node node02.k8s.org |grep -A 1 Taints 
Taints:             test:NoExecute
Unschedulable:      false
[root@master01 ~]# kubectl describe node node03.k8s.org |grep -A 1 Taints 
Taints:             test:NoExecute
Unschedulable:      false
[root@master01 ~]# kubectl describe node node04.k8s.org |grep -A 1 Taints 
Taints:             test:NoExecute
Unschedulable:      false
[root@master01 ~]# cat pod-demo-taints.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo2
  labels:
    app: db
spec:
  containers:
  - name: redis
    image: redis:4-alpine
    ports:
    - name: redis
      containerPort: 6379
  tolerations:
  - key: test
    operator: Exists
    effect: NoSchedule
[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo2 created
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
redis-demo2   0/1     Pending   0          8s    <none>   <none>   <none>           <none>
[root@master01 ~]# 

  Tip: as the demonstration shows, a toleration with effect NoSchedule likewise cannot tolerate a taint whose effect is NoExecute.

  Delete the pod and change its toleration to test:NoExecute

[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE    IP       NODE     NOMINATED NODE   READINESS GATES
redis-demo2   0/1     Pending   0          5m5s   <none>   <none>   <none>           <none>
[root@master01 ~]# kubectl delete pod --all
pod "redis-demo2" deleted
[root@master01 ~]# cat pod-demo-taints.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo2
  labels:
    app: db
spec:
  containers:
  - name: redis
    image: redis:4-alpine
    ports:
    - name: redis
      containerPort: 6379
  tolerations:
  - key: test
    operator: Exists
    effect: NoExecute
[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo2 created
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo2   1/1     Running   0          6s    10.244.4.43   node04.k8s.org   <none>           <none>
[root@master01 ~]# 

  Change node04's taint to test:NoSchedule and see whether the pod keeps running

[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo2   1/1     Running   0          4m38s   10.244.4.43   node04.k8s.org   <none>           <none>
[root@master01 ~]# kubectl taint node node04.k8s.org test-
node/node04.k8s.org untainted
[root@master01 ~]# kubectl get pods -o wide               
NAME          READY   STATUS    RESTARTS   AGE    IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo2   1/1     Running   0          8m2s   10.244.4.43   node04.k8s.org   <none>           <none>
[root@master01 ~]# kubectl taint node node04.k8s.org test:NoSchedule
node/node04.k8s.org tainted
[root@master01 ~]# kubectl describe node node04.k8s.org |grep -A 1 Taints
Taints:             test:NoSchedule
Unschedulable:      false
[root@master01 ~]# kubectl get pods -o wide                              
NAME          READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo2   1/1     Running   0          8m25s   10.244.4.43   node04.k8s.org   <none>           <none>
[root@master01 ~]# 

  Tip: changing the taint from NoExecute to NoSchedule does not evict the existing pod.

  Change the pods' tolerations to test:NoSchedule and apply the manifest again

[root@master01 ~]# cat pod-demo-taints.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo3
  labels:
    app: db
spec:
  containers:
  - name: redis
    image: redis:4-alpine
    ports:
    - name: redis
      containerPort: 6379
  tolerations:
  - key: test
    operator: Exists
    effect: NoSchedule
---
apiVersion: v1
kind: Pod
metadata:
  name: redis-demo4
  labels:
    app: db
spec:
  containers:
  - name: redis
    image: redis:4-alpine
    ports:
    - name: redis
      containerPort: 6379
  tolerations:
  - key: test
    operator: Exists
    effect: NoSchedule
[root@master01 ~]# kubectl apply -f pod-demo-taints.yaml
pod/redis-demo3 created
pod/redis-demo4 created
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo2   1/1     Running   0          14m   10.244.4.43   node04.k8s.org   <none>           <none>
redis-demo3   1/1     Running   0          4s    10.244.4.45   node04.k8s.org   <none>           <none>
redis-demo4   1/1     Running   0          4s    10.244.4.46   node04.k8s.org   <none>           <none>
[root@master01 ~]# 

  Tip: both new pods were scheduled onto node04, because their toleration test:NoSchedule matches only node04's taint test:NoSchedule; the other nodes carry test:NoExecute, which these pods cannot tolerate.

  Change node04's taint to NoExecute and see whether the pods are evicted

[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo2   1/1     Running   0          17m     10.244.4.43   node04.k8s.org   <none>           <none>
redis-demo3   1/1     Running   0          2m32s   10.244.4.45   node04.k8s.org   <none>           <none>
redis-demo4   1/1     Running   0          2m32s   10.244.4.46   node04.k8s.org   <none>           <none>
[root@master01 ~]# kubectl describe node node04.k8s.org |grep -A 1 Taints
Taints:             test:NoSchedule
Unschedulable:      false
[root@master01 ~]# kubectl taint node node04.k8s.org test-
node/node04.k8s.org untainted
[root@master01 ~]# kubectl taint node node04.k8s.org test:NoExecute
node/node04.k8s.org tainted
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS        RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo2   1/1     Running       0          18m     10.244.4.43   node04.k8s.org   <none>           <none>
redis-demo3   0/1     Terminating   0          3m43s   10.244.4.45   node04.k8s.org   <none>           <none>
redis-demo4   0/1     Terminating   0          3m43s   10.244.4.46   node04.k8s.org   <none>           <none>
[root@master01 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
redis-demo2   1/1     Running   0          18m   10.244.4.43   node04.k8s.org   <none>           <none>
[root@master01 ~]# 

  Tip: after node04's taint is changed to test:NoExecute, the pods whose tolerations do not match the NoExecute effect are evicted; a NoExecute taint evicts every pod on the node that cannot tolerate it.

  Create a Deployment whose pod template tolerates test:NoExecute, with an eviction delay (tolerationSeconds) of 10 seconds

[root@master01 ~]# cat deploy-demo-taint.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-demo
spec:
  replicas: 3
  selector:
     matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:4-alpine
        ports:
        - name: redis
          containerPort: 6379
      tolerations:
      - key: test
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 10
   
[root@master01 ~]# 

  Tip: the tolerationSeconds field specifies how long the pod may keep running on the node after a matching taint appears before it is evicted; the field may only be used in tolerations whose effect is NoExecute, and cannot be used with the other effects.

  Apply the manifest

[root@master01 ~]# kubectl apply -f deploy-demo-taint.yaml
deployment.apps/deploy-demo created
[root@master01 ~]# kubectl get pods -o wide -w
NAME                           READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
deploy-demo-79b89f9847-9zk8j   1/1     Running   0          7s    10.244.2.71   node02.k8s.org   <none>           <none>
deploy-demo-79b89f9847-h8zlc   1/1     Running   0          7s    10.244.3.61   node03.k8s.org   <none>           <none>
deploy-demo-79b89f9847-shscr   1/1     Running   0          7s    10.244.1.62   node01.k8s.org   <none>           <none>
redis-demo2                    1/1     Running   0          54m   10.244.4.43   node04.k8s.org   <none>           <none>
deploy-demo-79b89f9847-h8zlc   1/1     Terminating   0          10s   10.244.3.61   node03.k8s.org   <none>           <none>
deploy-demo-79b89f9847-shscr   1/1     Terminating   0          10s   10.244.1.62   node01.k8s.org   <none>           <none>
deploy-demo-79b89f9847-2x8w6   0/1     Pending       0          0s    <none>        <none>           <none>           <none>
deploy-demo-79b89f9847-2x8w6   0/1     Pending       0          0s    <none>        node03.k8s.org   <none>           <none>
deploy-demo-79b89f9847-lhltv   0/1     Pending       0          0s    <none>        <none>           <none>           <none>
deploy-demo-79b89f9847-9zk8j   1/1     Terminating   0          10s   10.244.2.71   node02.k8s.org   <none>           <none>
deploy-demo-79b89f9847-2x8w6   0/1     ContainerCreating   0          0s    <none>        node03.k8s.org   <none>           <none>
deploy-demo-79b89f9847-lhltv   0/1     Pending             0          0s    <none>        node02.k8s.org   <none>           <none>
deploy-demo-79b89f9847-lhltv   0/1     ContainerCreating   0          0s    <none>        node02.k8s.org   <none>           <none>
deploy-demo-79b89f9847-w8xjw   0/1     Pending             0          0s    <none>        <none>           <none>           <none>
deploy-demo-79b89f9847-w8xjw   0/1     Pending             0          0s    <none>        node01.k8s.org   <none>           <none>
deploy-demo-79b89f9847-w8xjw   0/1     ContainerCreating   0          0s    <none>        node01.k8s.org   <none>           <none>
deploy-demo-79b89f9847-shscr   1/1     Terminating         0          10s   10.244.1.62   node01.k8s.org   <none>           <none>
deploy-demo-79b89f9847-h8zlc   1/1     Terminating         0          10s   10.244.3.61   node03.k8s.org   <none>           <none>
deploy-demo-79b89f9847-9zk8j   1/1     Terminating         0          10s   10.244.2.71   node02.k8s.org   <none>           <none>
deploy-demo-79b89f9847-shscr   0/1     Terminating         0          11s   10.244.1.62   node01.k8s.org   <none>           <none>
deploy-demo-79b89f9847-2x8w6   0/1     ContainerCreating   0          1s    <none>        node03.k8s.org   <none>           <none>
deploy-demo-79b89f9847-lhltv   0/1     ContainerCreating   0          1s    <none>        node02.k8s.org   <none>           <none>
deploy-demo-79b89f9847-w8xjw   0/1     ContainerCreating   0          1s    <none>        node01.k8s.org   <none>           <none>
deploy-demo-79b89f9847-h8zlc   0/1     Terminating         0          11s   10.244.3.61   node03.k8s.org   <none>           <none>
deploy-demo-79b89f9847-2x8w6   1/1     Running             0          1s    10.244.3.62   node03.k8s.org   <none>           <none>
deploy-demo-79b89f9847-9zk8j   0/1     Terminating         0          11s   10.244.2.71   node02.k8s.org   <none>           <none>
deploy-demo-79b89f9847-lhltv   1/1     Running             0          1s    10.244.2.72   node02.k8s.org   <none>           <none>
deploy-demo-79b89f9847-w8xjw   1/1     Running             0          2s    10.244.1.63   node01.k8s.org   <none>           <none>
deploy-demo-79b89f9847-h8zlc   0/1     Terminating         0          15s   10.244.3.61   node03.k8s.org   <none>           <none>
deploy-demo-79b89f9847-h8zlc   0/1     Terminating         0          15s   10.244.3.61   node03.k8s.org   <none>           <none>
^C[root@master01 ~]# 

  Tip: each pod runs on its node for only 10 seconds before being evicted; and because this is a Deployment, the evicted pods are immediately recreated, repeating the cycle.

  Summary: a taint with effect NoSchedule only rejects new pods and never evicts existing ones; a pod that tolerates the taint may (but need not) run on the node, while a pod that does not will never be scheduled there. A taint with effect PreferNoSchedule also never evicts; it is only a preference, so when no other node satisfies a pod's tolerations, the pod can still end up running on such a node. A taint with effect NoExecute immediately evicts every pod already on the node that cannot tolerate it; for pods that do tolerate it, omitting tolerationSeconds means they tolerate the taint indefinitely, while specifying tolerationSeconds means they are evicted once that period expires.
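As a closing note, the matching rules also allow broader tolerations: per the Kubernetes documentation, leaving effect empty matches every effect of the given key, and an empty key with operator Exists matches every taint. A sketch of both forms:

```yaml
tolerations:
# Matches taints with key "test" and any effect
# (NoSchedule, PreferNoSchedule, or NoExecute).
- key: test
  operator: Exists
# Empty key with Exists: tolerates every taint on every node.
- operator: Exists
```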