K8s Cluster Environment Setup

1. Environment Planning

1.1 Cluster Types

Kubernetes clusters broadly fall into two categories: single master with multiple nodes, and multiple masters with multiple nodes.

  • Single master, multiple nodes: one master node and several worker nodes; simple to set up, but the master is a single point of failure, so it suits test environments
  • Multiple masters, multiple nodes: several master nodes and several worker nodes; more work to set up, but highly available, so it suits production environments

(diagram: single-master vs. multi-master cluster topologies)

1.2 Installation Methods

Kubernetes can be deployed in several ways; the mainstream options today are kubeadm, minikube, and binary packages.

Note: we need to set up a Kubernetes cluster without too much hassle, so we use the kubeadm approach here.

1.3 Environment Preparation

(diagram: prepared environment topology, one master and two nodes)

Role     IP address        Components
master   192.168.111.100   docker, kubectl, kubeadm, kubelet
node1    192.168.111.101   docker, kubectl, kubeadm, kubelet
node2    192.168.111.102   docker, kubectl, kubeadm, kubelet

2. Environment Setup

Note:

This setup uses three Linux machines (one master, two nodes); the walkthrough below runs CentOS Stream 8 (the kubeadm method requires CentOS 7.5 or later). Docker is installed on each machine, followed by kubeadm (1.25.4), kubelet (1.25.4) and kubectl (1.25.4).

2.1 Host Installation

  • When installing the virtual machines, pay attention to the following settings:

  • Operating system environment: 2 CPUs, 2 GB RAM, 50 GB disk, CentOS 7+

  • Language: Simplified Chinese / English

  • Software selection: Infrastructure Server

  • Partitioning: automatic / manual

  • Network configuration: set the network address information as follows

  • IP addresses: 192.168.111.(100, 101, 102)

  • Subnet mask: 255.255.255.0

  • Default gateway: 192.168.111.254

  • DNS: 8.8.8.8

  • Hostname settings:

  • Master node: master

  • Node: node1

  • Node: node2

2.2 Environment Initialization

  1. Check the operating system version

    # This kubeadm-based installation requires CentOS 7.5 or later
    [root@master ~]#cat /etc/redhat-release
    CentOS Stream release 8
    
  2. Host name resolution (do this on all three nodes)

    # To let cluster nodes reach each other by name, configure host name resolution here; an internal DNS server is recommended in production
    [root@master ~]#cat >> /etc/hosts << EOF
    > 192.168.111.100 master.example.com master
    > 192.168.111.101 node1.example.com node1
    > 192.168.111.102 node2.example.com node2
    > EOF
    
    [root@master ~]#scp /etc/hosts [email protected]:/etc/hosts
    The authenticity of host '192.168.111.101 (192.168.111.101)' can't be established.
    ECDSA key fingerprint is SHA256:0UQKIYmXwgllRaiKyKIR8RaO8bzS7GGb5180xGHoiMI.
    Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
    Warning: Permanently added '192.168.111.101' (ECDSA) to the list of known hosts.
    [email protected]'s password: 
    hosts                                                                                       100%  280   196.1KB/s   00:00    
    [root@master ~]#scp /etc/hosts [email protected]:/etc/hosts
    The authenticity of host '192.168.111.102 (192.168.111.102)' can't be established.
    ECDSA key fingerprint is SHA256:0UQKIYmXwgllRaiKyKIR8RaO8bzS7GGb5180xGHoiMI.
    Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
    Warning: Permanently added '192.168.111.102' (ECDSA) to the list of known hosts.
    [email protected]'s password: 
    hosts                            
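
    # Optional quick check (a minimal sketch, assuming all three hosts are already up):
    # confirm every short name resolves and answers
    [root@master ~]#for h in master node1 node2; do ping -c 1 -W 1 $h > /dev/null && echo "$h ok"; done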
    
  3. Time synchronization

    # Kubernetes requires node clocks across the cluster to be closely synchronized; here the chronyd service syncs time over the network, while an internal time server is recommended in production
    - on the master node
    [root@master ~]#vim /etc/chrony.conf
    local stratum 10                 # keep serving time even when no upstream source is reachable
    allow 192.168.111.0/24           # added: let the nodes below sync from this host (subnet taken from the plan above)
    [root@master ~]#systemctl restart chronyd.service
    [root@master ~]#systemctl enable chronyd.service
    [root@master ~]#hwclock -w
    
    - on node1
    [root@node1 ~]#vim /etc/chrony.conf
    server master.example.com iburst
    ...
    [root@node1 ~]#systemctl restart chronyd.service 
    [root@node1 ~]#systemctl enable chronyd.service 
    [root@node1 ~]#hwclock -w
    
    - on node2
    [root@node2 ~]#vim /etc/chrony.conf 
    server master.example.com iburst
    ...
    [root@node2 ~]#systemctl restart chronyd.service 
    [root@node2 ~]#systemctl enable chronyd.service
    [root@node2 ~]#hwclock -w
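
    # Optional check on each node: verify the node is actually syncing from master.example.com
    [root@node2 ~]#chronyc sources -v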
    
  4. Disable firewalld, SELinux and postfix (on all three nodes)

    # Turn off the firewall, SELinux and postfix; configure all 3 hosts
    - on the master node
    [root@master ~]#systemctl disable --now firewalld
    [root@master ~]#sed -i 's/enforcing/disabled/' /etc/selinux/config
    [root@master ~]#setenforce 0
    [root@master ~]#systemctl stop postfix
    [root@master ~]#systemctl disable postfix
    
    - on node1
    [root@node1 ~]#systemctl disable --now firewalld
    [root@node1 ~]#sed -i 's/enforcing/disabled/' /etc/selinux/config
    [root@node1 ~]#setenforce 0
    [root@node1 ~]#systemctl stop postfix
    [root@node1 ~]#systemctl disable postfix
    
    - on node2
    [root@node2 ~]#systemctl disable --now firewalld
    [root@node2 ~]#sed -i 's/enforcing/disabled/' /etc/selinux/config
    [root@node2 ~]#setenforce 0
    [root@node2 ~]#systemctl stop postfix
    [root@node2 ~]#systemctl disable postfix
    
  5. Disable the swap partition (on all three nodes)

    - on the master node
    [root@master ~]#vim /etc/fstab # comment out the swap line
    [root@master ~]#swapoff -a
    
    - on node1
    [root@node1 ~]#vim /etc/fstab # comment out the swap line
    [root@node1 ~]#swapoff -a
    
    - on node2
    [root@node2 ~]#vim /etc/fstab # comment out the swap line
    [root@node2 ~]#swapoff -a
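
    # Optional sketch, run on each node: comment out the swap entry non-interactively instead of
    # editing /etc/fstab by hand, then confirm swap usage now reads 0
    [root@node2 ~]#sed -ri '/\sswap\s/s/^/#/' /etc/fstab
    [root@node2 ~]#free -m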
    
  6. Enable IP forwarding and adjust kernel parameters (on all three nodes)

    - on the master node
    [root@master ~]#vim /etc/sysctl.d/k8s.conf
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    [root@master ~]#modprobe   br_netfilter
    [root@master ~]#sysctl -p  /etc/sysctl.d/k8s.conf
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    
    - on node1
    [root@node1 ~]#vim /etc/sysctl.d/k8s.conf
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    [root@node1 ~]#modprobe   br_netfilter
    [root@node1 ~]#sysctl -p  /etc/sysctl.d/k8s.conf
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    
    - on node2
    [root@node2 ~]#vim /etc/sysctl.d/k8s.conf
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    [root@node2 ~]#modprobe   br_netfilter
    [root@node2 ~]#sysctl -p  /etc/sysctl.d/k8s.conf
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
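
    # Optional, on each node: a module loaded with modprobe does not survive a reboot; a
    # modules-load.d entry (file name is arbitrary) makes br_netfilter load automatically at boot
    [root@node2 ~]#echo br_netfilter > /etc/modules-load.d/k8s.conf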
    
  7. Configure IPVS (on all three nodes)

    - on the master node
    [root@master ~]#vim /etc/sysconfig/modules/ipvs.modules
    #!/bin/bash
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    
    [root@master ~]#chmod +x /etc/sysconfig/modules/ipvs.modules
    [root@master ~]#bash /etc/sysconfig/modules/ipvs.modules
    [root@master ~]#lsmod | grep -e ip_vs
    ip_vs_sh               16384  0
    ip_vs_wrr              16384  0
    ip_vs_rr               16384  0
    ip_vs                 172032  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
    nf_conntrack          172032  1 ip_vs
    nf_defrag_ipv6         20480  2 nf_conntrack,ip_vs
    libcrc32c              16384  3 nf_conntrack,xfs,ip_vs
    [root@master ~]#reboot
    
    - on node1
    [root@node1 ~]#vim /etc/sysconfig/modules/ipvs.modules
    #!/bin/bash
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    
    [root@node1 ~]#chmod +x /etc/sysconfig/modules/ipvs.modules
    [root@node1 ~]#bash /etc/sysconfig/modules/ipvs.modules
    [root@node1 ~]#lsmod | grep -e ip_vs
    ip_vs_sh               16384  0
    ip_vs_wrr              16384  0
    ip_vs_rr               16384  0
    ip_vs                 172032  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
    nf_conntrack          172032  1 ip_vs
    nf_defrag_ipv6         20480  2 nf_conntrack,ip_vs
    libcrc32c              16384  3 nf_conntrack,xfs,ip_vs
    [root@node1 ~]#reboot
    
    - on node2
    [root@node2 ~]#vim /etc/sysconfig/modules/ipvs.modules
    #!/bin/bash
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    
    [root@node2 ~]#chmod +x /etc/sysconfig/modules/ipvs.modules
    [root@node2 ~]#bash /etc/sysconfig/modules/ipvs.modules
    [root@node2 ~]#lsmod | grep -e ip_vs
    ip_vs_sh               16384  0
    ip_vs_wrr              16384  0
    ip_vs_rr               16384  0
    ip_vs                 172032  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
    nf_conntrack          172032  1 ip_vs
    nf_defrag_ipv6         20480  2 nf_conntrack,ip_vs
    libcrc32c              16384  3 nf_conntrack,xfs,ip_vs
    [root@node2 ~]#reboot
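
    # Optional: ipvsadm (if available in the enabled repos) makes it easier to inspect IPVS rules
    # once kube-proxy uses them; after the reboot, re-check that the modules are still loaded
    [root@node2 ~]#dnf -y install ipvsadm
    [root@node2 ~]#lsmod | grep -e ip_vs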
    
  8. Passwordless SSH authentication

    [root@master ~]#ssh-keygen 
    Generating public/private rsa key pair.
    Enter file in which to save the key (/root/.ssh/id_rsa): 
    Enter passphrase (empty for no passphrase): 
    Enter same passphrase again: 
    Your identification has been saved in /root/.ssh/id_rsa.
    Your public key has been saved in /root/.ssh/id_rsa.pub.
    The key fingerprint is:
    SHA256:VcZ6m+gceBJxwysFWwM08526KiBoSt9qdbDQoMSx3kU root@master
    The key's randomart image is:
    +---[RSA 3072]----+
    |...  E .*+o.o    |
    | o...   .*==..   |
    |... o.  .+o+o    |
    |.....o  o.o..    |
    | o .. o S+.o o   |
    |.o. .o .o +.o    |
    |+ ..o..  =..     |
    |.  o ..  .o      |
    |  ...  ..        |
    +----[SHA256]-----+
    [root@master ~]#ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1
    /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
    The authenticity of host 'node1 (192.168.111.101)' can't be established.
    ECDSA key fingerprint is SHA256:0UQKIYmXwgllRaiKyKIR8RaO8bzS7GGb5180xGHoiMI.
    Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
    /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
    /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    root@node1's password: 
    
    Number of key(s) added: 1
    
    Now try logging into the machine, with:   "ssh 'root@node1'"
    and check to make sure that only the key(s) you wanted were added.
    
    [root@master ~]#ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2
    /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
    The authenticity of host 'node2 (192.168.111.102)' can't be established.
    ECDSA key fingerprint is SHA256:0UQKIYmXwgllRaiKyKIR8RaO8bzS7GGb5180xGHoiMI.
    Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
    /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
    /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    root@node2's password: 
    
    Number of key(s) added: 1
    
    Now try logging into the machine, with:   "ssh 'root@node2'"
    and check to make sure that only the key(s) you wanted were added.
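
    # Quick check: each command should print the remote hostname without asking for a password
    [root@master ~]#for h in node1 node2; do ssh root@$h hostname; done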
    

2.3 Install Docker

  1. Switch to mirror repositories

    - on the master node
    [root@master /etc/yum.repos.d]#curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
    [root@master /etc/yum.repos.d]# dnf -y install epel-release
    [root@master /etc/yum.repos.d]#wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    
    - on node1
    [root@node1 /etc/yum.repos.d]#curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
    [root@node1 /etc/yum.repos.d]# dnf -y install epel-release
    [root@node1 /etc/yum.repos.d]#wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    
    - on node2
    [root@node2 /etc/yum.repos.d]#curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
    [root@node2 /etc/yum.repos.d]# dnf -y install epel-release
    [root@node2 /etc/yum.repos.d]#wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    
  2. Install docker-ce

    - on the master node
    [root@master ~]# dnf -y install docker-ce --allowerasing
    [root@master ~]# systemctl restart docker
    [root@master ~]# systemctl enable docker
    
    - on node1
    [root@node1 ~]# dnf -y install docker-ce --allowerasing
    [root@node1 ~]# systemctl restart docker
    [root@node1 ~]# systemctl enable docker
    
    - on node2
    [root@node2 ~]# dnf -y install docker-ce --allowerasing
    [root@node2 ~]# systemctl restart docker
    [root@node2 ~]# systemctl enable docker
    
  3. Add a configuration file to set up a Docker registry mirror (accelerator)

    - on the master node
    [root@master ~]#cat > /etc/docker/daemon.json << EOF
     {
       "registry-mirrors": ["//6vrrj6n2.mirror.aliyuncs.com"],
       "exec-opts": ["native.cgroupdriver=systemd"],
       "log-driver": "json-file",
       "log-opts": {
         "max-size": "100m"
       },
       "storage-driver": "overlay2"
     }
     EOF
    [root@master ~]#systemctl daemon-reload
    [root@master ~]#systemctl  restart docker
    
    - on node1
    [root@node1 ~]#cat > /etc/docker/daemon.json << EOF
     {
       "registry-mirrors": ["//6vrrj6n2.mirror.aliyuncs.com"],
       "exec-opts": ["native.cgroupdriver=systemd"],
       "log-driver": "json-file",
       "log-opts": {
         "max-size": "100m"
       },
       "storage-driver": "overlay2"
     }
     EOF
    [root@node1 ~]#systemctl daemon-reload
    [root@node1 ~]#systemctl  restart docker
    
    - on node2
    [root@node2 ~]#cat > /etc/docker/daemon.json << EOF
     {
       "registry-mirrors": ["//6vrrj6n2.mirror.aliyuncs.com"],
       "exec-opts": ["native.cgroupdriver=systemd"],
       "log-driver": "json-file",
       "log-opts": {
         "max-size": "100m"
       },
       "storage-driver": "overlay2"
     }
     EOF
    [root@node2 ~]#systemctl daemon-reload
    [root@node2 ~]#systemctl  restart docker
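
    # Optional check on each node: confirm Docker picked up the systemd cgroup driver and the mirror
    [root@node2 ~]#docker info | grep -iA1 -e 'cgroup driver' -e 'registry mirrors'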
    

2.4 Install Kubernetes Components

  1. The Kubernetes packages are hosted abroad and download slowly, so switch to a Chinese mirror source

    - on the master node
    [root@master ~]#cat > /etc/yum.repos.d/kubernetes.repo << EOF
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    
    - on node1
    [root@node1 ~]#cat > /etc/yum.repos.d/kubernetes.repo << EOF
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    
    - on node2
    [root@node2 ~]#cat > /etc/yum.repos.d/kubernetes.repo << EOF
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    
  2. Install the kubeadm, kubelet and kubectl tools

    - on the master node
    [root@master ~]#dnf  -y  install kubeadm  kubelet  kubectl
    [root@master ~]#systemctl  restart  kubelet
    [root@master ~]#systemctl  enable  kubelet
    
    - on node1
    [root@node1 ~]#dnf  -y  install kubeadm  kubelet  kubectl
    [root@node1 ~]#systemctl  restart  kubelet
    [root@node1 ~]#systemctl  enable  kubelet
    
    - on node2
    [root@node2 ~]#dnf  -y  install kubeadm  kubelet  kubectl
    [root@node2 ~]#systemctl  restart  kubelet
    [root@node2 ~]#systemctl  enable  kubelet
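
    # Note: kubelet keeps restarting until the node is initialized or joined below; that is expected.
    # Optional check on each node that the installed version matches the one passed to kubeadm init
    # later (v1.25.4); to pin it explicitly, versioned packages such as kubeadm-1.25.4 could be
    # installed instead (package naming assumed to follow the repo's usual <name>-<version> scheme)
    [root@node2 ~]#kubeadm version -o short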
    
  3. Configure containerd

    # To make sure cluster initialization and joining succeed later, containerd must be configured via /etc/containerd/config.toml; do this on all nodes
    - on the master node
    [root@master ~]#containerd config default > /etc/containerd/config.toml
    # Change the k8s image registry in /etc/containerd/config.toml to registry.aliyuncs.com/google_containers
    [root@master ~]#vim /etc/containerd/config.toml
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
    # Then restart and enable the containerd service
    [root@master ~]#systemctl   restart  containerd
    [root@master ~]#systemctl   enable  containerd
    
    # To make sure cluster initialization and joining succeed later, containerd must be configured via /etc/containerd/config.toml; do this on all nodes
    - on node1
    [root@node1 ~]#containerd config default > /etc/containerd/config.toml
    # Change the k8s image registry in /etc/containerd/config.toml to registry.aliyuncs.com/google_containers
    [root@node1 ~]#vim /etc/containerd/config.toml
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
    # Then restart and enable the containerd service
    [root@node1 ~]#systemctl   restart  containerd
    [root@node1 ~]#systemctl   enable  containerd
    
    # To make sure cluster initialization and joining succeed later, containerd must be configured via /etc/containerd/config.toml; do this on all nodes
    - on node2
    [root@node2 ~]#containerd config default > /etc/containerd/config.toml
    # Change the k8s image registry in /etc/containerd/config.toml to registry.aliyuncs.com/google_containers
    [root@node2 ~]#vim /etc/containerd/config.toml
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
    # Then restart and enable the containerd service
    [root@node2 ~]#systemctl   restart  containerd
    [root@node2 ~]#systemctl   enable  containerd
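
    # Optional sketch, run on each node: the same sandbox_image change can be made non-interactively;
    # the regex simply rewrites whatever default image the generated config contains
    [root@node2 ~]#sed -ri 's#(sandbox_image = ).*#\1"registry.aliyuncs.com/google_containers/pause:3.6"#' /etc/containerd/config.toml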
    
  4. Deploy the k8s master (control-plane) node

    - on the master node
    [root@master ~]#kubeadm init \
      --apiserver-advertise-address=192.168.111.100 \
      --image-repository registry.aliyuncs.com/google_containers \
      --kubernetes-version v1.25.4 \
      --service-cidr=10.96.0.0/12 \
      --pod-network-cidr=10.244.0.0/16
    # It is recommended to save the kubeadm init output to a file
    [root@master ~]#vim k8s 
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Alternatively, if you are the root user, you can run:
    
      export KUBECONFIG=/etc/kubernetes/admin.conf
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 192.168.111.100:6443 --token eav8jn.zj2muv0thd7e8dad \
    	--discovery-token-ca-cert-hash sha256:b38f8a6a6302e25c0bcba2a67c13b234fd0b9fdd8b0c0645154c79edf6555e09 
    
    [root@master ~]#vim /etc/profile.d/k8s.sh
    export KUBECONFIG=/etc/kubernetes/admin.conf
    [root@master ~]#source /etc/profile.d/k8s.sh
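
    # If the join command printed above is lost, or the token expires (24 hours by default),
    # a fresh one can be generated on the master at any time:
    [root@master ~]#kubeadm token create --print-join-command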
    
  5. Install a pod network add-on

    - on the master node
    [root@master ~]#wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
    [root@master ~]#kubectl apply -f kube-flannel.yml 
    namespace/kube-flannel created
    clusterrole.rbac.authorization.k8s.io/flannel created
    clusterrolebinding.rbac.authorization.k8s.io/flannel created
    serviceaccount/flannel created
    configmap/kube-flannel-cfg created
    daemonset.apps/kube-flannel-ds created
    [root@master ~]#kubectl get nodes
    NAME     STATUS     ROLES           AGE     VERSION
    master   NotReady   control-plane   6m41s   v1.25.4
    [root@master ~]#kubectl get nodes
    NAME     STATUS   ROLES           AGE     VERSION
    master   Ready    control-plane   7m10s   v1.25.4
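
    # Optional check: the master turns Ready once the flannel pod in the kube-flannel namespace is Running
    [root@master ~]#kubectl get pods -n kube-flannel -o wide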
    
  6. Join the worker nodes to the k8s cluster

    - on node1
    [root@node1 ~]#kubeadm join 192.168.111.100:6443 --token eav8jn.zj2muv0thd7e8dad \
    > --discovery-token-ca-cert-hash sha256:b38f8a6a6302e25c0bcba2a67c13b234fd0b9fdd8b0c0645154c79edf6555e09 
    
    - on node2
    [root@node2 ~]#kubeadm join 192.168.111.100:6443 --token eav8jn.zj2muv0thd7e8dad \
    > --discovery-token-ca-cert-hash sha256:b38f8a6a6302e25c0bcba2a67c13b234fd0b9fdd8b0c0645154c79edf6555e09 
    
  7. Check node status with kubectl get nodes

    - on the master node
    [root@master ~]#kubectl get nodes
    NAME     STATUS     ROLES           AGE     VERSION
    master   Ready      control-plane   9m37s   v1.25.4
    node1    NotReady   <none>          51s     v1.25.4
    node2    NotReady   <none>          31s     v1.25.4
    [root@master ~]#kubectl get nodes
    NAME     STATUS   ROLES           AGE     VERSION
    master   Ready    control-plane   9m57s   v1.25.4
    node1    Ready    <none>          71s     v1.25.4
    node2    Ready    <none>          51s     v1.25.4
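
    # Optional check: the control-plane, coredns and kube-proxy pods should all be Running
    [root@master ~]#kubectl get pods -n kube-system -o wide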
    
  8. Create a pod (via a deployment) running an nginx container on the cluster, then test it

    [root@master ~]#kubectl create  deployment  nginx  --image nginx
    deployment.apps/nginx created
    [root@master ~]#kubectl  get  pods
    NAME                    READY   STATUS    RESTARTS   AGE
    nginx-76d6c9b8c-z7p4l   1/1     Running   0          35s
    [root@master ~]#kubectl  expose  deployment  nginx  --port 80  --type NodePort
    service/nginx exposed
    [root@master ~]#kubectl  get  pods  -o  wide
    NAME                    READY   STATUS    RESTARTS   AGE    IP           NODE    NOMINATED NODE   READINESS GATES
    nginx-76d6c9b8c-z7p4l   1/1     Running   0          119s   10.244.1.2   node1   <none>           <none>
    [root@master ~]#kubectl  get  services
    NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
    kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        15m
    nginx        NodePort    10.109.37.202   <none>        80:31125/TCP   17s
    
  9. Test access

    (screenshot: the nginx welcome page opened in a browser via the NodePort, http://192.168.111.100:31125)
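
    A command-line alternative to the browser test (a sketch; the NodePort 31125 comes from the service listing above, and any node's IP works):

    [root@master ~]#curl http://192.168.111.100:31125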

  10. Modify the default web page

    [root@master ~]# kubectl exec -it pod/nginx-76d6c9b8c-z7p4l -- /bin/bash
    root@nginx-76d6c9b8c-z7p4l:/# cd /usr/share/nginx/html/
    echo "zhaoshulin" > index.html
    

(screenshot: the nginx page now shows the updated index.html content)