Container Orchestration with K8s: Basic Usage of Volumes
- December 24, 2020
- Notes
- k8s, K8s volume basics, Kubernetes, cloud computing, container orchestration
In the previous post we covered Ingress resources on k8s; for a refresher, see //www.cnblogs.com/qiuhom-1874/p/14167581.html. Today let's talk about volumes on k8s.
Before discussing volumes in k8s, let's review volumes in Docker. A Docker image is built in layers, and every layer is read-only, which means its data cannot be modified. Only when an image runs as a container is a writable layer added on top, and once that container is deleted, the data in the writable layer is deleted with it. To persist container data, Docker uses volumes. Docker manages volumes in two ways: in the first, the user explicitly mounts a host directory (which may itself be a mount of some external storage system) into a container directory; this is called a bind mount. In the second, Docker itself maintains the directory that gets mounted into the container; this is a Docker-managed volume. Either way, a volume directly associates a host directory or file with the container, and it solves the problem of persisting data produced during a container's lifetime beyond the container's termination. K8s has the same problem to solve, except that it deals with pods. A pod is the smallest schedulable unit in k8s, and once a pod is deleted, the containers running inside it are deleted too. How, then, can data produced by a pod's containers be persisted? To answer that, let's first look at how a pod is put together.
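To make the two management styles concrete, here is a minimal sketch using the standard Docker CLI (the paths and names are made up for illustration):

# Bind mount: the user explicitly picks the host directory
docker run -d --name web-bind -v /data/web:/usr/share/nginx/html nginx:1.14-alpine

# Docker-managed volume: Docker allocates and tracks the host-side directory itself
docker volume create webdata
docker run -d --name web-managed -v webdata:/usr/share/nginx/html nginx:1.14-alpine
docker volume inspect webdata    # prints the host mountpoint under /var/lib/docker/volumes/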
Note: a pod in k8s can run one or more containers. When it runs several, one of them is the main container and the others exist to assist it; we call those sidecars. Regardless of how many containers a pod runs, at the very bottom there is always a pause container, whose main job is to provide the pod's infrastructure. All containers in the same pod share the pause container's network namespace as well as its IPC and UTS namespaces. So to provide a storage volume to the containers in a pod, the volume is first attached to the pause container, and the other containers then mount the volume from pause, as shown in the figure below.
Note: as the figure above shows, the pause container can attach storage A or storage B. Once pause attaches some storage, every other container in the same pod can mount the directory or file that pause has attached. For k8s, storage is not an internal component; it is an external system. This means that for k8s to use an external storage system, the pause container must have a driver that matches that storage system. Since all containers on a host share the host's kernel, if the host kernel carries the driver for a given storage system, pause can use that driver to attach the corresponding storage.
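On a node whose runtime is Docker, these infrastructure containers are easy to spot (a quick check, assuming the Docker CLI is available on the node; under the dockershim runtime of this era, sandbox containers are named k8s_POD_...):

[root@node03 ~]# docker ps --filter "name=k8s_POD" --format "{{.Names}}\t{{.Image}}"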
Volume types
As noted above, to use a storage volume on k8s the node must carry the driver for the corresponding storage system, and every pod running on that node can then use that storage. But how does a pod actually consume it, and how do we pass parameters to the driver? On k8s everything is an object, so to use a storage volume the driver must likewise be abstracted into a k8s resource, and when we use it we simply instantiate that resource as an object. To reduce the complexity of consuming storage, k8s ships a set of built-in storage interfaces; different storage types expose different interfaces and take different parameters. Beyond the built-ins, k8s also supports user-defined storage through the CSI interface.
View the volume interfaces k8s supports:
[root@master01 ~]# kubectl explain pod.spec.volumes
KIND:     Pod
VERSION:  v1

RESOURCE: volumes <[]Object>

DESCRIPTION:
     List of volumes that can be mounted by containers belonging to the pod.
     More info: //kubernetes.io/docs/concepts/storage/volumes

     Volume represents a named volume in a pod that may be accessed by any
     container in the pod.

FIELDS:
   awsElasticBlockStore <Object>
     AWSElasticBlockStore represents an AWS Disk resource that is attached to a
     kubelet's host machine and then exposed to the pod. More info:
     //kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore

   azureDisk <Object>
     AzureDisk represents an Azure Data Disk mount on the host and bind mount
     to the pod.

   azureFile <Object>
     AzureFile represents an Azure File Service mount on the host and bind
     mount to the pod.

   cephfs <Object>
     CephFS represents a Ceph FS mount on the host that shares a pod's lifetime

   cinder <Object>
     Cinder represents a cinder volume attached and mounted on kubelets host
     machine. More info: //examples.k8s.io/mysql-cinder-pd/README.md

   configMap <Object>
     ConfigMap represents a configMap that should populate this volume

   csi <Object>
     CSI (Container Storage Interface) represents ephemeral storage that is
     handled by certain external CSI drivers (Beta feature).

   downwardAPI <Object>
     DownwardAPI represents downward API about the pod that should populate
     this volume

   emptyDir <Object>
     EmptyDir represents a temporary directory that shares a pod's lifetime.
     More info: //kubernetes.io/docs/concepts/storage/volumes#emptydir

   ephemeral <Object>
     Ephemeral represents a volume that is handled by a cluster storage driver
     (Alpha feature). The volume's lifecycle is tied to the pod that defines
     it - it will be created before the pod starts, and deleted when the pod
     is removed.

     Use this if: a) the volume is only needed while the pod runs, b) features
     of normal volumes like restoring from snapshot or capacity tracking are
     needed, c) the storage driver is specified through a storage class, and
     d) the storage driver supports dynamic volume provisioning through a
     PersistentVolumeClaim (see EphemeralVolumeSource for more information on
     the connection between this volume type and PersistentVolumeClaim).

     Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes
     that persist for longer than the lifecycle of an individual pod.

     Use CSI for light-weight local ephemeral volumes if the CSI driver is
     meant to be used that way - see the documentation of the driver for more
     information.

     A pod can use both types of ephemeral volumes and persistent volumes at
     the same time.

   fc <Object>
     FC represents a Fibre Channel resource that is attached to a kubelet's
     host machine and then exposed to the pod.

   flexVolume <Object>
     FlexVolume represents a generic volume resource that is
     provisioned/attached using an exec based plugin.

   flocker <Object>
     Flocker represents a Flocker volume attached to a kubelet's host machine.
     This depends on the Flocker control service being running

   gcePersistentDisk <Object>
     GCEPersistentDisk represents a GCE Disk resource that is attached to a
     kubelet's host machine and then exposed to the pod. More info:
     //kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk

   gitRepo <Object>
     GitRepo represents a git repository at a particular revision. DEPRECATED:
     GitRepo is deprecated. To provision a container with a git repo, mount an
     EmptyDir into an InitContainer that clones the repo using git, then mount
     the EmptyDir into the Pod's container.

   glusterfs <Object>
     Glusterfs represents a Glusterfs mount on the host that shares a pod's
     lifetime. More info: //examples.k8s.io/volumes/glusterfs/README.md

   hostPath <Object>
     HostPath represents a pre-existing file or directory on the host machine
     that is directly exposed to the container. This is generally used for
     system agents or other privileged things that are allowed to see the host
     machine. Most containers will NOT need this. More info:
     //kubernetes.io/docs/concepts/storage/volumes#hostpath

   iscsi <Object>
     ISCSI represents an ISCSI Disk resource that is attached to a kubelet's
     host machine and then exposed to the pod. More info:
     //examples.k8s.io/volumes/iscsi/README.md

   name <string> -required-
     Volume's name. Must be a DNS_LABEL and unique within the pod. More info:
     //kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

   nfs <Object>
     NFS represents an NFS mount on the host that shares a pod's lifetime More
     info: //kubernetes.io/docs/concepts/storage/volumes#nfs

   persistentVolumeClaim <Object>
     PersistentVolumeClaimVolumeSource represents a reference to a
     PersistentVolumeClaim in the same namespace. More info:
     //kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims

   photonPersistentDisk <Object>
     PhotonPersistentDisk represents a PhotonController persistent disk
     attached and mounted on kubelets host machine

   portworxVolume <Object>
     PortworxVolume represents a portworx volume attached and mounted on
     kubelets host machine

   projected <Object>
     Items for all in one resources secrets, configmaps, and downward API

   quobyte <Object>
     Quobyte represents a Quobyte mount on the host that shares a pod's
     lifetime

   rbd <Object>
     RBD represents a Rados Block Device mount on the host that shares a pod's
     lifetime. More info: //examples.k8s.io/volumes/rbd/README.md

   scaleIO <Object>
     ScaleIO represents a ScaleIO persistent volume attached and mounted on
     Kubernetes nodes.

   secret <Object>
     Secret represents a secret that should populate this volume. More info:
     //kubernetes.io/docs/concepts/storage/volumes#secret

   storageos <Object>
     StorageOS represents a StorageOS volume attached and mounted on
     Kubernetes nodes.

   vsphereVolume <Object>
     VsphereVolume represents a vSphere volume attached and mounted on
     kubelets host machine

[root@master01 ~]#
Note: the help output above shows that k8s supports quite a few storage interfaces, each a distinct type. Roughly, they fall into cloud storage, distributed storage, network storage, temporary storage, node-local storage, special-purpose storage, user-defined storage, and so on: awsElasticBlockStore, azureDisk, azureFile, gcePersistentDisk, vsphereVolume, and cinder count as cloud storage; cephfs, glusterfs, and rbd as distributed storage; nfs, iscsi, and fc as network storage; emptyDir as temporary storage; hostPath and local as node-local storage; csi as user-defined storage; configMap, secret, and downwardAPI as special-purpose storage; and persistentVolumeClaim for persistent volume claims.
Using volumes
Example: create a pod with a hostPath volume
[root@master01 ~]# cat hostPath-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: vol-hostpath-demo
  namespace: default
spec:
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    volumeMounts:
    - name: webhtml
      mountPath: /usr/share/nginx/html
      readOnly: true
  volumes:
  - name: webhtml
    hostPath:
      path: /vol/html/
      type: DirectoryOrCreate
[root@master01 ~]#
Note: the manifest above creates a pod named vol-hostpath-demo whose container, nginx, runs the nginx:1.14-alpine image, and defines one volume named webhtml of type hostPath. Volumes are defined with the volumes field under spec, whose value is a list of objects. name is required; it names the volume and serves as the identifier containers reference when mounting it. We then use the field for the desired volume type to select the storage interface; hostPath selects the hostPath interface. This type takes two parameters: path specifies a directory or file path on the host, and type specifies what to do when that path does not exist on the host. type accepts seven values: DirectoryOrCreate means path must be a directory, created if it does not yet exist on the host; Directory means path must be an existing directory; FileOrCreate means path must be a file, created if it does not exist; File means path must be an existing file; Socket means path must be an existing UNIX socket; CharDevice means path must be an existing character device; BlockDevice means path must be an existing block device. A FileOrCreate variant is sketched below.
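For instance, mounting a single file with FileOrCreate might look like the following fragment (hypothetical name and path, not part of the demo above):

  volumes:
  - name: nginx-conf
    hostPath:
      path: /etc/nginx/conf.d/default.conf   # must be (or will be created as) a file
      type: FileOrCreate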
Apply the manifest
[root@master01 ~]# kubectl apply -f hostPath-demo.yaml
pod/vol-hostpath-demo created
[root@master01 ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
myapp-6479b786f5-9d4mh   1/1     Running   1          47h
myapp-6479b786f5-k252c   1/1     Running   1          47h
vol-hostpath-demo        1/1     Running   0          11s
[root@master01 ~]# kubectl describe pod/vol-hostpath-demo
Name:         vol-hostpath-demo
Namespace:    default
Priority:     0
Node:         node03.k8s.org/192.168.0.46
Start Time:   Wed, 23 Dec 2020 23:14:35 +0800
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.244.3.92
IPs:
  IP:  10.244.3.92
Containers:
  nginx:
    Container ID:   docker://eb8666714b8697457ce2a88271a4615f836873b4729b6a0938776e3d527c6536
    Image:          nginx:1.14-alpine
    Image ID:       docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Wed, 23 Dec 2020 23:14:37 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from webhtml (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xvd4c (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  webhtml:
    Type:          HostPath (bare host directory volume)
    Path:          /vol/html/
    HostPathType:  DirectoryOrCreate
  default-token-xvd4c:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-xvd4c
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  43s   default-scheduler  Successfully assigned default/vol-hostpath-demo to node03.k8s.org
  Normal  Pulled     42s   kubelet            Container image "nginx:1.14-alpine" already present on machine
  Normal  Created    41s   kubelet            Created container nginx
  Normal  Started    41s   kubelet            Started container nginx
[root@master01 ~]#
Note: the pod mounts the webhtml volume read-only; the webhtml volume's type is HostPath and its path is /vol/html/.
Check which node the pod is running on
[root@master01 ~]# kubectl get pod vol-hostpath-demo -o wide
NAME                READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
vol-hostpath-demo   1/1     Running   0          3m39s   10.244.3.92   node03.k8s.org   <none>           <none>
[root@master01 ~]#
On node03, check whether the directory was created
[root@node03 ~]# ll /
total 16
lrwxrwxrwx.   1 root root    7 Sep 15 20:33 bin -> usr/bin
dr-xr-xr-x.   5 root root 4096 Sep 15 20:39 boot
drwxr-xr-x   20 root root 3180 Dec 23 23:10 dev
drwxr-xr-x.  80 root root 8192 Dec 23 23:10 etc
drwxr-xr-x.   2 root root    6 Nov  5  2016 home
lrwxrwxrwx.   1 root root    7 Sep 15 20:33 lib -> usr/lib
lrwxrwxrwx.   1 root root    9 Sep 15 20:33 lib64 -> usr/lib64
drwxr-xr-x.   2 root root    6 Nov  5  2016 media
drwxr-xr-x.   2 root root    6 Nov  5  2016 mnt
drwxr-xr-x.   4 root root   35 Dec  8 14:25 opt
dr-xr-xr-x  141 root root    0 Dec 23 23:09 proc
dr-xr-x---.   4 root root  213 Dec 21 22:46 root
drwxr-xr-x   26 root root  780 Dec 23 23:13 run
lrwxrwxrwx.   1 root root    8 Sep 15 20:33 sbin -> usr/sbin
drwxr-xr-x.   2 root root    6 Nov  5  2016 srv
dr-xr-xr-x   13 root root    0 Dec 23 23:09 sys
drwxrwxrwt.   9 root root  251 Dec 23 23:11 tmp
drwxr-xr-x.  13 root root  155 Sep 15 20:33 usr
drwxr-xr-x.  19 root root  267 Sep 15 20:38 var
drwxr-xr-x    3 root root   18 Dec 23 23:14 vol
[root@node03 ~]# ll /vol
total 0
drwxr-xr-x 2 root root 6 Dec 23 23:14 html
[root@node03 ~]# ll /vol/html/
total 0
[root@node03 ~]#
Note: the /vol/html/ directory has been created on the node, and it is currently empty.
Create a page file in that directory on the node, then access the pod to see whether the page is served
[root@node03 ~]# echo "this is test page from node03 /vol/html/test.html" > /vol/html/test.html
[root@node03 ~]# cat /vol/html/test.html
this is test page from node03 /vol/html/test.html
[root@node03 ~]# exit
logout
Connection to node03 closed.
[root@master01 ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          47h     10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          47h     10.244.4.21    node04.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          7m45s   10.244.3.92    node03.k8s.org   <none>           <none>
[root@master01 ~]# curl 10.244.3.92/test.html
this is test page from node03 /vol/html/test.html
[root@master01 ~]#
Note: after creating the page file on the node, it is served correctly through the pod.
Test: delete the pod and see whether the directory on the node is removed
[root@master01 ~]# kubectl delete -f hostPath-demo.yaml
pod "vol-hostpath-demo" deleted
[root@master01 ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
myapp-6479b786f5-9d4mh   1/1     Running   1          47h
myapp-6479b786f5-k252c   1/1     Running   1          47h
[root@master01 ~]# ssh node03
Last login: Wed Dec 23 23:18:51 2020 from master01
[root@node03 ~]# ll /vol/html/
total 4
-rw-r--r-- 1 root root 50 Dec 23 23:22 test.html
[root@node03 ~]# exit
logout
Connection to node03 closed.
[root@master01 ~]#
Note: after the pod is deleted, the directory on the node is not removed; the page file is intact.
Test: re-apply the manifest, access the pod again, and see whether the page file is still served
[root@master01 ~]# kubectl apply -f hostPath-demo.yaml
pod/vol-hostpath-demo created
[root@master01 ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          47h   10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          47h   10.244.4.21    node04.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          7s    10.244.3.93    node03.k8s.org   <none>           <none>
[root@master01 ~]# curl 10.244.3.93/test.html
this is test page from node03 /vol/html/test.html
[root@master01 ~]#
Note: the pod was scheduled onto node03 again, and the page file we created is served. But what if we explicitly pin this pod to node02 — would it still be able to serve that file?
Test: pin the pod to node02.k8s.org
[root@master01 ~]# cat hostPath-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: vol-hostpath-demo
  namespace: default
spec:
  nodeName: node02.k8s.org
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    volumeMounts:
    - name: webhtml
      mountPath: /usr/share/nginx/html
      readOnly: true
  volumes:
  - name: webhtml
    hostPath:
      path: /vol/html/
      type: DirectoryOrCreate
[root@master01 ~]#
Note: to pin a pod to a particular node, set the nodeName field under spec to that node's hostname.
Delete the old pod and apply the new manifest
[root@master01 ~]# kubectl delete pod/vol-hostpath-demo
pod "vol-hostpath-demo" deleted
[root@master01 ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
myapp-6479b786f5-9d4mh   1/1     Running   1          47h
myapp-6479b786f5-k252c   1/1     Running   1          47h
[root@master01 ~]# kubectl apply -f hostPath-demo.yaml
pod/vol-hostpath-demo created
[root@master01 ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          47h   10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          47h   10.244.4.21    node04.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          8s    10.244.2.100   node02.k8s.org   <none>           <none>
[root@master01 ~]#
Note: after applying the new manifest, the pod runs on node02.
Access the pod and check whether test.html can still be served
[root@master01 ~]# curl 10.244.2.100/test.html
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.14.2</center>
</body>
</html>
[root@master01 ~]#
Note: now the page cannot be served, and the reason is simple: a hostPath volume maps a directory or file on the host into the pause container, from which the pod's containers mount it. This kind of volume cannot cross nodes, so a pod on node02 obviously cannot see a file that exists only on node03. To use hostPath volumes, then, we must either pin the pod to a node, or else create the same files or directories on every k8s node; a label-based alternative is sketched below.
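As an aside, instead of hard-coding nodeName we could steer the pod with a label selector, which at least lets any prepared node qualify (a sketch; the label key and value are made up):

spec:
  nodeSelector:
    has-web-html: "true"

# then label the nodes that have /vol/html/ prepared, e.g.:
# kubectl label node node03.k8s.org has-web-html=true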
Example: create a pod with an emptyDir volume
[root@master01 ~]# cat emptyDir-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: vol-emptydir-demo
  namespace: default
spec:
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    volumeMounts:
    - name: web-cache-dir
      mountPath: /usr/share/nginx/html
      readOnly: true
  - name: alpine
    image: alpine
    volumeMounts:
    - name: web-cache-dir
      mountPath: /nginx/html
    command: ["/bin/sh", "-c"]
    args:
    - while true; do echo $(hostname) $(date) >> /nginx/html/index.html; sleep 10; done
  volumes:
  - name: web-cache-dir
    emptyDir:
      medium: Memory
      sizeLimit: "10Mi"
[root@master01 ~]#
Note: the manifest above defines a pod named vol-emptydir-demo running two containers, one named nginx and one named alpine. Both containers mount the same volume, named web-cache-dir, whose type is emptyDir, as shown in the figure below. To define an emptyDir volume, under spec.volumes we use name to name the volume and the emptyDir field to select the type. emptyDir has two properties: medium selects the storage medium, where Memory means RAM backs the volume and the default value "" means the node's default medium; sizeLimit caps the volume's size and defaults to empty, meaning no limit.
Note: as the figure shows, the pod runs two containers. The alpine container appends the hostname and current time to /nginx/html/index.html every 10 seconds, while the nginx container mounts the same emptyDir volume as its web root. In short, alpine writes data into /nginx/html/index.html, and nginx serves that file from its web directory.
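For comparison, a disk-backed emptyDir needs no parameters at all; note also that a Memory-backed emptyDir is mounted as tmpfs, so whatever is written to it consumes the node's RAM:

  volumes:
  - name: web-cache-dir
    emptyDir: {}    # default medium: backed by the node's disk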
Apply the manifest
[root@master01 ~]# kubectl apply -f emptyDir-demo.yaml
pod/vol-emptydir-demo created
[root@master01 ~]# kubectl get pods -o wide
NAME                     READY   STATUS              RESTARTS   AGE   IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running             1          2d    10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running             1          2d    10.244.4.21    node04.k8s.org   <none>           <none>
vol-emptydir-demo        0/2     ContainerCreating   0          8s    <none>         node03.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running             0          72m   10.244.2.100   node02.k8s.org   <none>           <none>
[root@master01 ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          2d    10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          2d    10.244.4.21    node04.k8s.org   <none>           <none>
vol-emptydir-demo        2/2     Running   0          16s   10.244.3.94    node03.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          72m   10.244.2.100   node02.k8s.org   <none>           <none>
[root@master01 ~]# kubectl describe pod vol-emptydir-demo
Name:         vol-emptydir-demo
Namespace:    default
Priority:     0
Node:         node03.k8s.org/192.168.0.46
Start Time:   Thu, 24 Dec 2020 00:46:56 +0800
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.244.3.94
IPs:
  IP:  10.244.3.94
Containers:
  nginx:
    Container ID:   docker://58af9ef80800fb22543d1c80be58849f45f3d62f3b44101dbca024e0761cead5
    Image:          nginx:1.14-alpine
    Image ID:       docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 24 Dec 2020 00:46:57 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from web-cache-dir (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xvd4c (ro)
  alpine:
    Container ID:  docker://327f110a10e8ef9edb5f86b5cb3dad53e824010b52b1c2a71d5dbecab6f49f05
    Image:         alpine
    Image ID:      docker-pullable://alpine@sha256:3c7497bf0c7af93428242d6176e8f7905f2201d8fc5861f45be7a346b5f23436
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
    Args:
      while true; do echo $(hostname) $(date) >> /nginx/html/index.html; sleep 10; done
    State:          Running
      Started:      Thu, 24 Dec 2020 00:47:07 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /nginx/html from web-cache-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xvd4c (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  web-cache-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  10Mi
  default-token-xvd4c:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-xvd4c
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  51s   default-scheduler  Successfully assigned default/vol-emptydir-demo to node03.k8s.org
  Normal  Pulled     51s   kubelet            Container image "nginx:1.14-alpine" already present on machine
  Normal  Created    51s   kubelet            Created container nginx
  Normal  Started    50s   kubelet            Started container nginx
  Normal  Pulling    50s   kubelet            Pulling image "alpine"
  Normal  Pulled     40s   kubelet            Successfully pulled image "alpine" in 10.163157508s
  Normal  Created    40s   kubelet            Created container alpine
  Normal  Started    40s   kubelet            Started container alpine
[root@master01 ~]#
Note: the pod is running with two containers inside. The nginx container mounts the web-cache-dir volume read-only and the alpine container mounts it read-write; the volume's type is emptyDir.
Access the pod and see whether the index.html inside the volume is served
[root@master01 ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          2d      10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          2d      10.244.4.21    node04.k8s.org   <none>           <none>
vol-emptydir-demo        2/2     Running   0          4m38s   10.244.3.94    node03.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          77m     10.244.2.100   node02.k8s.org   <none>           <none>
[root@master01 ~]# curl 10.244.3.94
vol-emptydir-demo Wed Dec 23 16:47:07 UTC 2020
vol-emptydir-demo Wed Dec 23 16:47:17 UTC 2020
vol-emptydir-demo Wed Dec 23 16:47:27 UTC 2020
vol-emptydir-demo Wed Dec 23 16:47:37 UTC 2020
vol-emptydir-demo Wed Dec 23 16:47:47 UTC 2020
vol-emptydir-demo Wed Dec 23 16:47:57 UTC 2020
vol-emptydir-demo Wed Dec 23 16:48:07 UTC 2020
vol-emptydir-demo Wed Dec 23 16:48:17 UTC 2020
vol-emptydir-demo Wed Dec 23 16:48:27 UTC 2020
vol-emptydir-demo Wed Dec 23 16:48:37 UTC 2020
vol-emptydir-demo Wed Dec 23 16:48:47 UTC 2020
vol-emptydir-demo Wed Dec 23 16:48:57 UTC 2020
vol-emptydir-demo Wed Dec 23 16:49:07 UTC 2020
vol-emptydir-demo Wed Dec 23 16:49:17 UTC 2020
vol-emptydir-demo Wed Dec 23 16:49:27 UTC 2020
vol-emptydir-demo Wed Dec 23 16:49:37 UTC 2020
vol-emptydir-demo Wed Dec 23 16:49:47 UTC 2020
vol-emptydir-demo Wed Dec 23 16:49:57 UTC 2020
vol-emptydir-demo Wed Dec 23 16:50:07 UTC 2020
vol-emptydir-demo Wed Dec 23 16:50:17 UTC 2020
vol-emptydir-demo Wed Dec 23 16:50:27 UTC 2020
vol-emptydir-demo Wed Dec 23 16:50:37 UTC 2020
vol-emptydir-demo Wed Dec 23 16:50:47 UTC 2020
vol-emptydir-demo Wed Dec 23 16:50:57 UTC 2020
vol-emptydir-demo Wed Dec 23 16:51:07 UTC 2020
vol-emptydir-demo Wed Dec 23 16:51:17 UTC 2020
vol-emptydir-demo Wed Dec 23 16:51:27 UTC 2020
vol-emptydir-demo Wed Dec 23 16:51:37 UTC 2020
vol-emptydir-demo Wed Dec 23 16:51:47 UTC 2020
[root@master01 ~]#
Note: index.html is served, and its content is being generated dynamically by the alpine container. From this example it is easy to see that containers inside the same pod can share a single volume.
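We can verify the sharing from the nginx container's side as well, with a standard kubectl exec, reading the mounted file directly rather than over HTTP:

[root@master01 ~]# kubectl exec vol-emptydir-demo -c nginx -- tail -3 /usr/share/nginx/html/index.html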
Example: create a pod with an nfs volume
[root@master01 ~]# cat nfs-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: vol-nfs-demo
  namespace: default
spec:
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    volumeMounts:
    - name: webhtml
      mountPath: /usr/share/nginx/html
      readOnly: true
  volumes:
  - name: webhtml
    nfs:
      path: /data/html/
      server: 192.168.0.99
[root@master01 ~]#
Note: to define an nfs volume, spec.volumes.nfs must include the path field, which specifies the exported path on the NFS server's filesystem, and the server field, which specifies the NFS server's address. Before using NFS as a pod's backing store, the NFS server itself must be prepared and the directory exported.
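The complete field list for this volume type (including an optional readOnly flag) can be checked the same way we inspected volumes earlier:

[root@master01 ~]# kubectl explain pod.spec.volumes.nfs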
Prepare the NFS server: install the nfs-utils package on the 192.168.0.99 host
[root@docker_registry ~]# ip a|grep 192.168.0.99
    inet 192.168.0.99/24 brd 192.168.0.255 scope global enp3s0
[root@docker_registry ~]# yum install nfs-utils -y
Loaded plugins: fastestmirror, langpacks
Repository epel is listed more than once in the configuration
Repository epel-debuginfo is listed more than once in the configuration
Repository epel-source is listed more than once in the configuration
base                                                        | 3.6 kB  00:00:00
docker-ce-stable                                            | 3.5 kB  00:00:00
epel                                                        | 4.7 kB  00:00:00
extras                                                      | 2.9 kB  00:00:00
kubernetes/signature                                        |  844 B  00:00:00
kubernetes/signature                                        | 1.4 kB  00:00:00 !!!
mariadb-main                                                | 2.9 kB  00:00:00
mariadb-maxscale                                            | 2.4 kB  00:00:00
mariadb-tools                                               | 2.9 kB  00:00:00
mongodb-org                                                 | 2.5 kB  00:00:00
proxysql_repo                                               | 2.9 kB  00:00:00
updates                                                     | 2.9 kB  00:00:00
(1/6): docker-ce-stable/x86_64/primary_db                   |  51 kB  00:00:00
(2/6): kubernetes/primary                                   |  83 kB  00:00:01
(3/6): mongodb-org/primary_db                               |  26 kB  00:00:01
(4/6): epel/x86_64/updateinfo                               | 1.0 MB  00:00:02
(5/6): updates/7/x86_64/primary_db                          | 4.7 MB  00:00:01
(6/6): epel/x86_64/primary_db                               | 6.9 MB  00:00:02
Determining fastest mirrors
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
kubernetes                                                             612/612
Resolving Dependencies
--> Running transaction check
---> Package nfs-utils.x86_64 1:1.3.0-0.66.el7_8 will be updated
---> Package nfs-utils.x86_64 1:1.3.0-0.68.el7 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

===============================================================================
 Package          Arch          Version                   Repository     Size
===============================================================================
Updating:
 nfs-utils        x86_64        1:1.3.0-0.68.el7          base          412 k

Transaction Summary
===============================================================================
Upgrade  1 Package

Total download size: 412 k
Downloading packages:
No Presto metadata available for base
nfs-utils-1.3.0-0.68.el7.x86_64.rpm                         | 412 kB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : 1:nfs-utils-1.3.0-0.68.el7.x86_64                           1/2
  Cleanup    : 1:nfs-utils-1.3.0-0.66.el7_8.x86_64                         2/2
  Verifying  : 1:nfs-utils-1.3.0-0.68.el7.x86_64                           1/2
  Verifying  : 1:nfs-utils-1.3.0-0.66.el7_8.x86_64                         2/2

Updated:
  nfs-utils.x86_64 1:1.3.0-0.68.el7

Complete!
[root@docker_registry ~]#
Create the /data/html directory
[root@docker_registry ~]# mkdir /data/html -pv
mkdir: created directory '/data/html'
[root@docker_registry ~]#
Export the directory so the k8s cluster nodes can access it
[root@docker_registry ~]# cat /etc/exports
/data/html 192.168.0.0/24 (rw,no_root_squash)
[root@docker_registry ~]#
Note: this entry is intended to share /data/html read-write, without root squashing, to all hosts on the 192.168.0.0/24 network. Be aware, though, that the space between the client and the option list matters in /etc/exports: written this way, exportfs treats "192.168.0.0/24" and "(rw,no_root_squash)" as two separate clients, so the options actually apply to the wildcard (every host), which is why showmount below reports the export as (everyone). To restrict the options to the subnet, write 192.168.0.0/24(rw,no_root_squash) with no space, as shown below.
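The tightened entry, plus a reload with exportfs (its -r flag re-syncs everything in /etc/exports without restarting the service), would look like this:

/data/html 192.168.0.0/24(rw,no_root_squash)

[root@docker_registry ~]# exportfs -rav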
Start nfs
[root@docker_registry ~]# systemctl start nfs
[root@docker_registry ~]# ss -tnl
State      Recv-Q Send-Q  Local Address:Port    Peer Address:Port
LISTEN     0      128         127.0.0.1:1514               *:*
LISTEN     0      128                 *:111                *:*
LISTEN     0      128                 *:20048              *:*
LISTEN     0      64                  *:42837              *:*
LISTEN     0      5       192.168.122.1:53                 *:*
LISTEN     0      128                 *:22                 *:*
LISTEN     0      128      192.168.0.99:631                *:*
LISTEN     0      100         127.0.0.1:25                 *:*
LISTEN     0      64                  *:2049               *:*
LISTEN     0      128                 *:59396              *:*
LISTEN     0      128                :::34922             :::*
LISTEN     0      128                :::111               :::*
LISTEN     0      128                :::20048             :::*
LISTEN     0      128                :::80                :::*
LISTEN     0      128                :::22                :::*
LISTEN     0      100               ::1:25                :::*
LISTEN     0      128                :::443               :::*
LISTEN     0      128                :::4443              :::*
LISTEN     0      64                 :::2049              :::*
LISTEN     0      64                 :::36997             :::*
[root@docker_registry ~]#
Note: NFS listens on TCP port 2049; after starting the service, make sure that port is in the LISTEN state. With that, the NFS server is ready.
Install the nfs-utils package on the k8s nodes to provide the driver they need to mount NFS
yum install nfs-utils -y
Verify: on node01, check that the directory exported by the NFS server can be mounted
[root@node01 ~]# showmount -e 192.168.0.99
Export list for 192.168.0.99:
/data/html (everyone)
[root@node01 ~]# mount -t nfs 192.168.0.99:/data/html /mnt
[root@node01 ~]# mount |grep /data/html
192.168.0.99:/data/html on /mnt type nfs4 (rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.0.44,local_lock=none,addr=192.168.0.99)
[root@node01 ~]# umount /mnt
[root@node01 ~]# mount |grep /data/html
[root@node01 ~]#
Note: node01 can see the NFS server's export and mount it without problems. Once the remaining nodes have finished installing nfs-utils, we can apply the manifest on the master.
Apply the manifest
[root@master01 ~]# kubectl apply -f nfs-demo.yaml
pod/vol-nfs-demo created
[root@master01 ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE    IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          2d1h   10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          2d1h   10.244.4.21    node04.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          141m   10.244.2.100   node02.k8s.org   <none>           <none>
vol-nfs-demo             1/1     Running   0          10s    10.244.3.101   node03.k8s.org   <none>           <none>
[root@master01 ~]# kubectl describe pod vol-nfs-demo
Name:         vol-nfs-demo
Namespace:    default
Priority:     0
Node:         node03.k8s.org/192.168.0.46
Start Time:   Thu, 24 Dec 2020 01:55:51 +0800
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.244.3.101
IPs:
  IP:  10.244.3.101
Containers:
  nginx:
    Container ID:   docker://72227e3a94622a4ea032a1ab0d7d353aef167d5a0e80c3739e774050eaea3914
    Image:          nginx:1.14-alpine
    Image ID:       docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 24 Dec 2020 01:55:52 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from webhtml (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xvd4c (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  webhtml:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.0.99
    Path:      /data/html/
    ReadOnly:  false
  default-token-xvd4c:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-xvd4c
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  28s   default-scheduler  Successfully assigned default/vol-nfs-demo to node03.k8s.org
  Normal  Pulled     27s   kubelet            Container image "nginx:1.14-alpine" already present on machine
  Normal  Created    27s   kubelet            Created container nginx
  Normal  Started    27s   kubelet            Started container nginx
[root@master01 ~]#
Note: the pod is running normally, and its container has mounted the exported directory.
Create an index.html file in the exported directory on the NFS server
[root@docker_registry ~]# cd /data/html
[root@docker_registry html]# echo "this is test file from nfs server ip addr is 192.168.0.99" > index.html
[root@docker_registry html]# cat index.html
this is test file from nfs server ip addr is 192.168.0.99
[root@docker_registry html]#
Access the pod and check whether the file's content is served
[root@master01 ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE    IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          2d2h   10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          2d2h   10.244.4.21    node04.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          145m   10.244.2.100   node02.k8s.org   <none>           <none>
vol-nfs-demo             1/1     Running   0          4m6s   10.244.3.101   node03.k8s.org   <none>           <none>
[root@master01 ~]# curl 10.244.3.101
this is test file from nfs server ip addr is 192.168.0.99
[root@master01 ~]#
Note: the file's content is served through the pod.
Delete the pod
[root@master01 ~]# kubectl delete -f nfs-demo.yaml
pod "vol-nfs-demo" deleted
[root@master01 ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE    IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          2d2h   10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          2d2h   10.244.4.21    node04.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          149m   10.244.2.100   node02.k8s.org   <none>           <none>
[root@master01 ~]#
Pin the pod to node02.k8s.org, re-apply the manifest to recreate the pod, and access it again: is the file still served?
[root@master01 ~]# cat nfs-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: vol-nfs-demo
  namespace: default
spec:
  nodeName: node02.k8s.org
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    volumeMounts:
    - name: webhtml
      mountPath: /usr/share/nginx/html
      readOnly: true
  volumes:
  - name: webhtml
    nfs:
      path: /data/html/
      server: 192.168.0.99
[root@master01 ~]# kubectl apply -f nfs-demo.yaml
pod/vol-nfs-demo created
[root@master01 ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE    IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          2d2h   10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          2d2h   10.244.4.21    node04.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          151m   10.244.2.100   node02.k8s.org   <none>           <none>
vol-nfs-demo             1/1     Running   0          8s     10.244.2.101   node02.k8s.org   <none>           <none>
[root@master01 ~]# curl 10.244.2.101
this is test file from nfs server ip addr is 192.168.0.99
[root@master01 ~]#
Note: with the pod pinned to node02, the file on the NFS server is still served through the pod. These tests show that an nfs volume is decoupled from the pod's lifecycle and persists the data produced by the pod's containers to the NFS server, across nodes. Of course, NFS here is a single point of failure: if the NFS server goes down, the pods lose their backing storage and the data on it is at risk. For external storage we should therefore pick a system that k8s supports and that provides data redundancy, such as cephfs or glusterfs.