Quickly Deploying a K8S Cluster with kubeadm
- October 5, 2019
- Notes
1. Kubernetes Overview
1.1 What Is Kubernetes
- Kubernetes is a container cluster management system open-sourced by Google in 2014, commonly abbreviated as K8S.
- K8S is used to deploy, scale, and manage containerized applications.
- K8S provides container orchestration, resource scheduling, elastic scaling, deployment management, service discovery, and a range of other features.
- The goal of Kubernetes is to make deploying containerized applications simple and efficient.
1.2 Kubernetes Features
- Self-healing
  - Restarts failed containers when a node fails and replaces or redeploys them to maintain the expected number of replicas; kills containers that fail health checks and withholds client requests from containers that are not yet ready, ensuring online services are not interrupted.
- Elastic scaling
  - Scales application instances up and down quickly via commands, the UI, or automatically based on CPU usage, ensuring high availability during business peaks and reclaiming resources during off-peak periods to run services at minimal cost (see the command sketch after this list).
- Automated rollouts and rollbacks
  - K8S updates applications with a rolling-update strategy, updating one Pod at a time rather than deleting all Pods at once; if a problem occurs during the update, the change is rolled back so the upgrade does not affect the business.
- Service discovery and load balancing
  - K8S provides a unified access entry point (an internal IP address and a DNS name) for a group of containers and load-balances across all associated containers, so users do not need to worry about container IPs.
- Secret and configuration management
  - Manages secrets and application configuration without exposing sensitive data inside images, improving the security of sensitive data; commonly used configuration can also be stored in K8S for applications to consume.
- Storage orchestration
  - Mounts external storage systems, whether local storage, public cloud storage (such as AWS), or network storage (such as NFS, GlusterFS, or Ceph), as part of the cluster's resources, greatly improving storage flexibility.
- Batch processing
  - Provides one-off tasks and scheduled tasks to cover batch data processing and analysis scenarios.
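As a concrete illustration of elastic scaling and automated rollouts/rollbacks, the sketch below uses plain kubectl commands against a hypothetical Deployment named web (the image tags are examples too, and kubectl autoscale additionally needs a metrics source such as metrics-server); it is only a sketch, not part of the deployment steps that follow.
$ kubectl create deployment web --image=nginx:1.16                     # "web" is a made-up example Deployment
$ kubectl scale deployment web --replicas=5                            # manual elastic scaling
$ kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80   # automatic scaling based on CPU usage
$ kubectl set image deployment/web nginx=nginx:1.17                    # rolling update, one batch of Pods at a time
$ kubectl rollout status deployment/web                                # watch the rollout progress
$ kubectl rollout undo deployment/web                                  # roll back if the update misbehaves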
1.3 Kubernetes Cluster Architecture and Components

1.4 Kubernetes Cluster Components
1.4.1 Master Components
- kube-apiserver
  - The Kubernetes API server is the unified entry point of the cluster and the coordinator of all components. It exposes a RESTful API; all create, delete, update, query, and watch operations on object resources are handled by the API server and then persisted to etcd.
- kube-controller-manager
  - Handles the routine background tasks of the cluster. Each resource has a corresponding controller, and the controller manager is responsible for managing these controllers.
- kube-scheduler
  - Selects a Node for newly created Pods according to the scheduling algorithm. It can be deployed anywhere: on the same node as other components or on a different one.
- etcd
  - A distributed key-value store used to save cluster state data, such as Pod and Service object information.
1.4.2 Node Components
- kubelet
  - The kubelet is the Master's agent on each Node. It manages the lifecycle of the containers running on the local machine, including creating containers, mounting volumes for Pods, downloading secrets, and reporting container and node status. The kubelet turns each Pod into a set of containers.
- kube-proxy
  - Implements the Pod network proxy on each Node, maintaining network rules and performing Layer 4 load balancing.
- docker or rkt (Rocket)
  - The container engine that runs the containers.
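Once the cluster from section 2 is up, these components can be observed directly; a few read-only commands, as a quick sketch:
$ kubectl get componentstatuses            # health of scheduler, controller-manager, and etcd
$ kubectl get pods -n kube-system -o wide  # master components and kube-proxy run as Pods in kube-system
$ systemctl status kubelet                 # the kubelet runs as a systemd service on every node
$ docker ps                                # the container engine actually running the containers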
1.5 Kubernetes Core Concepts

- Pod
  - The smallest deployable unit
  - A group of one or more containers
  - Containers in a Pod share the same network namespace
  - Pods are ephemeral
- Controllers
  - ReplicaSet: ensures the expected number of Pod replicas
  - Deployment: stateless application deployment
  - StatefulSet: stateful application deployment
  - DaemonSet: ensures every Node runs one copy of the same Pod
  - Job: one-off tasks
  - Cronjob: scheduled tasks
  - Controllers are higher-level objects that deploy and manage Pods.
- Service
  - Prevents losing track of Pods
  - Defines an access policy for a group of Pods
- Label: a tag attached to a resource, used to associate, query, and filter objects
- Namespaces: namespaces that logically isolate objects
- Annotations: free-form notes attached to objects
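A minimal manifest ties several of these concepts together: a Deployment (a Controller) manages Pods selected by a Label, and a Service defines stable access to them. The sketch below is purely illustrative (the name web-demo and the image tag are made up) and can be applied once the cluster from section 2 is running:
$ cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment            # Controller: stateless application deployment
metadata:
  name: web-demo
  namespace: default        # Namespace: logical isolation
spec:
  replicas: 2               # expected number of Pod replicas
  selector:
    matchLabels:
      app: web-demo         # Label used to associate Pods with this controller
  template:
    metadata:
      labels:
        app: web-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.16
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service               # Service: unified access entry for the Pods above
metadata:
  name: web-demo
spec:
  selector:
    app: web-demo           # selects the Pods by Label
  ports:
  - port: 80
    targetPort: 80
EOF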
2. Deploying a K8S Cluster Quickly with kubeadm
2.1 The Three Deployment Methods Officially Provided by Kubernetes
- minikube
Minikube is a tool that quickly runs a single-node Kubernetes cluster locally. It is intended only for users trying out Kubernetes or doing day-to-day development. Deployment guide: https://kubernetes.io/docs/setup/minikube/
- kubeadm
Kubeadm is also a tool; it provides kubeadm init and kubeadm join for quickly deploying a Kubernetes cluster. Deployment guide: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
- Binary packages
Recommended: download the release binaries from the official site and deploy each component manually to assemble a Kubernetes cluster. Download: https://github.com/kubernetes/kubernetes/releases
2.2 Environment Preparation for Installing kubeadm
Perform the following operations on all three nodes.
2.2.1 Environment Requirements
Environment: CentOS 7.4+
Hardware requirements: CPU >= 2 cores, memory >= 2 GB
2.2.2 Environment Roles
IP | Role | Installed Software
---|---|---
192.168.73.138 | k8s-master | kube-apiserver kube-scheduler kube-controller-manager docker flannel kubelet
192.168.73.139 | k8s-node01 | kubelet kube-proxy docker flannel
192.168.73.140 | k8s-node02 | kubelet kube-proxy docker flannel
2.2.3 Environment Initialization
PS: Perform all of the following operations on all three nodes.
1. Disable the firewall and SELinux
$ systemctl stop firewalld && systemctl disable firewalld
$ sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config && setenforce 0
2. Disable the swap partition
$ swapoff -a    # temporary
$ sed -ri '/ swap / s/^(.*)$/#\1/g' /etc/fstab    # permanent
3. Set the hostnames on 192.168.73.138, 192.168.73.139, and 192.168.73.140 respectively, and configure hosts
$ hostnamectl set-hostname k8s-master    # run on 192.168.73.138
$ hostnamectl set-hostname k8s-node01    # run on 192.168.73.139
$ hostnamectl set-hostname k8s-node02    # run on 192.168.73.140
4. Add the following hosts entries on all hosts
$ cat >> /etc/hosts << EOF
192.168.73.138 k8s-master
192.168.73.139 k8s-node01
192.168.73.140 k8s-node02
EOF
5. Tune the kernel so that bridged IPv4 traffic is passed to the iptables chains
$ cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl --system
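These keys only exist once the br_netfilter kernel module is loaded; if sysctl --system reports that they cannot be found, load the module first and check again (a small sketch):
$ modprobe br_netfilter
$ lsmod | grep br_netfilter
$ sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables    # both should print 1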
6. Set the system time zone and synchronize with a time server
# yum install -y ntpdate
# ntpdate time.windows.com
2.2.4 Installing Docker
$ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
$ yum -y install docker-ce-18.06.1.ce-3.el7
$ curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io
$ systemctl enable docker && systemctl start docker
$ docker --version
Docker version 18.06.1-ce, build e68fc7a
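The kubeadm join output in section 2.4 will warn that Docker is using the cgroupfs cgroup driver while systemd is recommended. Optionally you can switch the driver now via /etc/docker/daemon.json; this is only a sketch, assuming the registry mirror above is the one you want to keep (set_mirror.sh may already have written this file, so merge rather than blindly overwrite):
$ cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["http://f1361db2.m.daocloud.io"]
}
EOF
$ systemctl daemon-reload && systemctl restart docker
$ docker info | grep -i cgroup    # should now report: Cgroup Driver: systemd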
2.2.5 Add the Kubernetes YUM Repository
$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2.2.6 Install kubeadm, kubelet, and kubectl
This step must be performed on all hosts. Because new versions are released frequently, a specific version is pinned here.
$ yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0
$ systemctl enable kubelet
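A quick check that the pinned versions were actually installed (a sketch; the exact package release strings may differ in your environment):
$ kubeadm version -o short            # should print v1.15.0
$ kubectl version --client --short    # should print Client Version: v1.15.0
$ rpm -q kubelet kubeadm kubectl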
2.3 Deploy the Kubernetes Master
Run this only on the Master node; change the apiserver address to your own master's address.
[root@k8s-master ~]# kubeadm init \
  --apiserver-advertise-address=192.168.73.138 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.15.0 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
Because the default image registry k8s.gcr.io is not reachable from China, the Aliyun mirror registry is specified here.
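If you prefer to download the images before running kubeadm init (the init output below also hints at this), kubeadm can pre-pull them with the same repository and version flags; an optional step:
[root@k8s-master ~]# kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.0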
Output of kubeadm init:
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.4.34]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.4.34 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.4.34 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
......(omitted)
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.73.138:6443 --token 2nm5l9.jtp4zwnvce4yt4oj --discovery-token-ca-cert-hash sha256:12f628a21e8d4a7262f57d4f21bc85f8802bb717dd6f513bf9d33f254fea3e89
Follow the instructions in the output:
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
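At this point kubectl on the master can talk to the cluster. A quick sanity check (the master will report NotReady until the network plugin from section 2.5 is installed):
[root@k8s-master ~]# kubectl cluster-info
[root@k8s-master ~]# kubectl get nodes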
2.4 Join the Kubernetes Nodes
Run this on both Node machines.
Use kubeadm join to register each Node with the Master.
The full kubeadm join command was already generated in the kubeadm init output above.
[root@k8s-node01 ~]# kubeadm join 192.168.73.138:6443 --token 2nm5l9.jtp4zwnvce4yt4oj --discovery-token-ca-cert-hash sha256:12f628a21e8d4a7262f57d4f21bc85f8802bb717dd6f513bf9d33f254fea3e89
Output:
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
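The bootstrap token printed by kubeadm init expires after 24 hours by default. If you add a node later and the token is no longer valid, you can generate a fresh join command on the master (a sketch):
[root@k8s-master ~]# kubeadm token list                          # inspect existing tokens and their TTL
[root@k8s-master ~]# kubeadm token create --print-join-command   # prints a ready-to-run kubeadm join command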
2.5 Install the Network Plugin
Run this only on the Master node.
[root@k8s-master ~]# wget https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
Change the image address (the default image may not be pullable; make sure you can reach the quay.io registry, otherwise modify it as follows):
[root@k8s-master ~]# vim kube-flannel.yml
Open the file and replace the image on lines 106 and 120 with lizhenliang/flannel:v0.11.0-amd64; the verification further below shows the expected result.
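Alternatively, the same replacement can be done with sed instead of editing by hand (a sketch; it assumes the original image name in this revision of the manifest is quay.io/coreos/flannel:v0.11.0-amd64):
[root@k8s-master ~]# sed -i 's#quay.io/coreos/flannel:v0.11.0-amd64#lizhenliang/flannel:v0.11.0-amd64#g' kube-flannel.yml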
[root@k8s-master ~]# cat -n kube-flannel.yml | grep lizhenliang/flannel:v0.11.0-amd64
   106             image: lizhenliang/flannel:v0.11.0-amd64
   120             image: lizhenliang/flannel:v0.11.0-amd64
[root@k8s-master ~]# kubectl apply -f kube-flannel.yml
[root@k8s-master ~]# ps -ef | grep flannel
root       2032   2013  0 21:00 ?        00:00:00 /opt/bin/flanneld --ip-masq --kube-subnet-mgr
2.6 Check the Cluster Node Status
Check the cluster's node status. After the network plugin is installed, the output should look like the following; continue with the later steps only after every node is Ready.
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   37m     v1.15.0
k8s-node01   Ready    <none>   5m22s   v1.15.0
k8s-node02   Ready    <none>   5m18s   v1.15.0
[root@k8s-master ~]# kubectl get pod -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-bccdc95cf-6pdgv                 1/1     Running   0          80m
coredns-bccdc95cf-f845x                 1/1     Running   0          80m
etcd-k8s-master                         1/1     Running   0          80m
kube-apiserver-k8s-master               1/1     Running   0          79m
kube-controller-manager-k8s-master      1/1     Running   0          80m
kube-flannel-ds-amd64-chpz8             1/1     Running   0          70m
kube-flannel-ds-amd64-jx56v             1/1     Running   0          70m
kube-flannel-ds-amd64-tsgvv             1/1     Running   0          70m
kube-proxy-d5b7l                        1/1     Running   0          80m
kube-proxy-f7v46                        1/1     Running   0          75m
kube-proxy-wqhsj                        1/1     Running   0          78m
kube-scheduler-k8s-master               1/1     Running   0          80m
kubernetes-dashboard-8499f49758-6f6ct   1/1     Running   0          42m
Only when every Pod shows READY 1/1 can you continue with the subsequent steps. If the flannel Pods are not healthy, check network connectivity and then redo the steps:
kubectl delete -f kube-flannel.yml
then download the manifest again with wget, modify the image address again, and finally
kubectl apply -f kube-flannel.yml
2.7 Test the Kubernetes Cluster
Create a pod in the Kubernetes cluster and expose a port, then verify that it can be accessed normally:
[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[root@k8s-master ~]# kubectl get pods,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-554b9c67f9-wf5lm   1/1     Running   0          24s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.1.0.1       <none>        443/TCP        39m
service/nginx        NodePort    10.1.224.251   <none>        80:32039/TCP   9s
Access URL: http://NodeIP:Port, in this example: http://192.168.73.138:32039
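You can also check from the command line. Since the Service is of type NodePort, the port is open on every node (32039 is the port assigned above and will differ in your environment):
$ curl -I http://192.168.73.138:32039
$ curl -I http://192.168.73.139:32039    # the NodePort is reachable on the worker nodes as well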

2.8 Deploy the Dashboard
[root@k8s-master ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
[root@k8s-master ~]# vim kubernetes-dashboard.yaml
Changes:
109     spec:
110       containers:
111       - name: kubernetes-dashboard
112         image: lizhenliang/kubernetes-dashboard-amd64:v1.10.1   # change this line
......
157 spec:
158   type: NodePort       # add this line
159   ports:
160     - port: 443
161       targetPort: 8443
162       nodePort: 30001  # add this line
163   selector:
164     k8s-app: kubernetes-dashboard
[root@k8s-master ~]# kubectl apply -f kubernetes-dashboard.yaml
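To confirm that the Dashboard Pod is running and the NodePort Service was created (a quick check; the Pod name suffix will differ in your environment):
[root@k8s-master ~]# kubectl get pods -n kube-system | grep kubernetes-dashboard
[root@k8s-master ~]# kubectl get svc kubernetes-dashboard -n kube-system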
Access it in Firefox at https://NodeIP:30001 (Chrome cannot open it because the certificate is not trusted).

Create a service account and bind it to the default cluster-admin cluster role:
[root@k8s-master ~]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@k8s-master ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
[root@k8s-master ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name:         dashboard-admin-token-zbn9f
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 40259d83-3b4f-4acc-a4fb-43018de7fc19
Type:  kubernetes.io/service-account-token
Data
====
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4temJuOWYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNDAyNTlkODMtM2I0Zi00YWNjLWE0ZmItNDMwMThkZTdmYzE5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.E0hGAkeQxd6K-YpPgJmNTv7Sn_P_nzhgCnYXGc9AeXd9k9qAcO97vBeOV-pH518YbjrOAx_D6CKIyP07aCi_3NoPlbbyHtcpRKFl-lWDPdg8wpcIefcpbtS6uCOrpaJdCJjWFcAEHdvcfmiFpdVVT7tUZ2-eHpRTUQ5MDPF-c2IOa9_FC9V3bf6XW6MSCZ_7-fOF4MnfYRa8ucltEIhIhCAeDyxlopSaA5oEbopjaNiVeJUGrKBll8Edatc7-wauUIJXAN-dZRD0xTULPNJ1BsBthGQLyFe8OpL5n_oiHM40tISJYU_uQRlMP83SfkOpbiOpzuDT59BBJB57OQtl3w
ca.crt:     1025 bytes


Fixing access from other browsers
[root@k8s-master ~]# cd /etc/kubernetes/pki/
[root@k8s-master pki]# mkdir ui
[root@k8s-master pki]# cp apiserver.crt ui/
[root@k8s-master pki]# cp apiserver.key ui/
[root@k8s-master pki]# cd ui/
[root@k8s-master ui]# mv apiserver.crt dashboard.pem
[root@k8s-master ui]# mv apiserver.key dashboard-key.pem
[root@k8s-master ui]# kubectl delete secret kubernetes-dashboard-certs -n kube-system
[root@k8s-master ui]# kubectl create secret generic kubernetes-dashboard-certs --from-file=./ -n kube-system
[root@k8s-master ~]# vim kubernetes-dashboard.yaml    # go back to the directory containing this yaml and edit it
In kubernetes-dashboard.yaml, add the following two certificate lines under the dashboard container's args:
        - --tls-key-file=dashboard-key.pem
        - --tls-cert-file=dashboard.pem
[root@k8s-master ~]# kubectl apply -f kubernetes-dashboard.yaml
If the dashboard-admin service account and cluster role binding were not created earlier, create them and fetch the login token again:
[root@k8s-master ~]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@k8s-master ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
[root@k8s-master ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
(The output is the same dashboard-admin-token secret and token shown in the previous step.)
The knowledge and images used in this article come from the guidance of instructor Li Zhenliang (李振良); my thanks to him.