【K8S】Installing the Dashboard on K8S 1.18.2 (based on kubernetes-dashboard 2.0.0)
Preface
The K8S cluster has been deployed successfully, but how do we manage it visually? Don't worry: in this article we will set up kubernetes-dashboard to solve exactly that problem.
For installing the K8S cluster itself, see 《【K8S】Installing a K8S Cluster Based on a Single Master Node》.
For installing Metrics-Server, see 《【K8S】Deploying the Metrics-Server Service on K8s》.
Installing and deploying the dashboard
1. Check that the pods are running
[root@binghe101 ~]# kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-5b8b769fcd-l2tmm 1/1 Running 2 15h 172.18.203.71 binghe101 <none> <none>
kube-system calico-node-7b7fx 1/1 Running 2 15h 192.168.175.102 binghe102 <none> <none>
kube-system calico-node-8krsl 1/1 Running 2 15h 192.168.175.101 binghe101 <none> <none>
kube-system coredns-546565776c-rd2zr 1/1 Running 2 15h 172.18.203.72 binghe101 <none> <none>
kube-system coredns-546565776c-x8r7l 1/1 Running 2 15h 172.18.203.73 binghe101 <none> <none>
kube-system etcd-binghe101 1/1 Running 2 15h 192.168.175.101 binghe101 <none> <none>
kube-system kube-apiserver-binghe101 1/1 Running 3 15h 192.168.175.101 binghe101 <none> <none>
kube-system kube-controller-manager-binghe101 1/1 Running 3 15h 192.168.175.101 binghe101 <none> <none>
kube-system kube-proxy-cgq5n 1/1 Running 2 15h 192.168.175.102 binghe102 <none> <none>
kube-system kube-proxy-qnffb 1/1 Running 2 15h 192.168.175.101 binghe101 <none> <none>
kube-system kube-scheduler-binghe101 1/1 Running 3 15h 192.168.175.101 binghe101 <none> <none>
kube-system metrics-server-57bc7f4584-cwsn8 1/1 Running 0 109m 172.18.229.68 binghe102 <none> <none>
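Before moving on, it is worth confirming that every pod really is in the Running state. An optional quick check (not part of the original steps) is to filter for anything that is not Running; on a healthy cluster this should print nothing:
# List pods that are not in the Running phase
kubectl get pods -A --field-selector=status.phase!=Running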
2. Download the recommended.yaml file
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
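As a quick sanity check after the download, you can look at the container images referenced by the manifest; for the v2.0.0 release you should see the kubernetesui/dashboard:v2.0.0 image (plus the metrics-scraper image):
# Show the container images referenced by the manifest
grep "image:" recommended.yaml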
3. Modify the recommended.yaml file
vim recommended.yaml
The changes to make are shown below.
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort       # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000  # added
  selector:
    k8s-app: kubernetes-dashboard
---
# Many browsers refuse the auto-generated certificate, so we create our own below
# and comment out the kubernetes-dashboard-certs Secret declaration here
#apiVersion: v1
#kind: Secret
#metadata:
#  labels:
#    k8s-app: kubernetes-dashboard
#  name: kubernetes-dashboard-certs
#  namespace: kubernetes-dashboard
#type: Opaque
---
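Before applying anything, you can optionally validate the edited manifest on the client side; kubectl 1.18 supports --dry-run=client for this:
# Validate the modified manifest without creating any resources
kubectl create -f recommended.yaml --dry-run=client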
4. Create the certificate
mkdir dashboard-certs
cd dashboard-certs/
# Create the namespace
kubectl create namespace kubernetes-dashboard
# Create the key file
openssl genrsa -out dashboard.key 2048
# Create the certificate signing request
openssl req -days 36000 -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'
# Self-sign the certificate (pass -days here as well, otherwise the signed certificate defaults to only 30 days)
openssl x509 -req -days 36000 -in dashboard.csr -signkey dashboard.key -out dashboard.crt
# Create the kubernetes-dashboard-certs secret
kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard
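As an optional sanity check, you can inspect the certificate and confirm that the Secret contains both files before continuing:
# Show the subject and validity period of the self-signed certificate
openssl x509 -in dashboard.crt -noout -subject -dates
# Confirm the Secret exists and holds dashboard.key and dashboard.crt
kubectl describe secret kubernetes-dashboard-certs -n kubernetes-dashboard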
5. Install the dashboard
kubectl create -f ~/recommended.yaml
Note: you may see an error like the following here.
Error from server (AlreadyExists): error when creating "./recommended.yaml": namespaces "kubernetes-dashboard" already exists
This happens because we already created the kubernetes-dashboard namespace while creating the certificate, so this error message can simply be ignored.
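If you would rather not see the error at all, an alternative (not used in the original steps) is kubectl apply, which tolerates resources that already exist:
kubectl apply -f ~/recommended.yaml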
6. Check the installation result
[root@binghe101 ~]# kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-5b8b769fcd-l2tmm 1/1 Running 2 15h 172.18.203.71 binghe101 <none> <none>
kube-system calico-node-7b7fx 1/1 Running 2 15h 192.168.175.102 binghe102 <none> <none>
kube-system calico-node-8krsl 1/1 Running 2 15h 192.168.175.101 binghe101 <none> <none>
kube-system coredns-546565776c-rd2zr 1/1 Running 2 15h 172.18.203.72 binghe101 <none> <none>
kube-system coredns-546565776c-x8r7l 1/1 Running 2 15h 172.18.203.73 binghe101 <none> <none>
kube-system etcd-binghe101 1/1 Running 2 15h 192.168.175.101 binghe101 <none> <none>
kube-system kube-apiserver-binghe101 1/1 Running 3 15h 192.168.175.101 binghe101 <none> <none>
kube-system kube-controller-manager-binghe101 1/1 Running 3 15h 192.168.175.101 binghe101 <none> <none>
kube-system kube-proxy-cgq5n 1/1 Running 2 15h 192.168.175.102 binghe102 <none> <none>
kube-system kube-proxy-qnffb 1/1 Running 2 15h 192.168.175.101 binghe101 <none> <none>
kube-system kube-scheduler-binghe101 1/1 Running 3 15h 192.168.175.101 binghe101 <none> <none>
kube-system metrics-server-57bc7f4584-cwsn8 1/1 Running 0 133m 172.18.229.68 binghe102 <none> <none>
kubernetes-dashboard dashboard-metrics-scraper-6b4884c9d5-qccwt 1/1 Running 0 102s 172.18.229.75 binghe102 <none> <none>
kubernetes-dashboard kubernetes-dashboard-7b544877d5-s8cgd 1/1 Running 0 102s 172.18.229.74 binghe102 <none> <none>
[root@binghe101 ~]# kubectl get service -n kubernetes-dashboard -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
dashboard-metrics-scraper ClusterIP 10.96.249.138 <none> 8000/TCP 2m21s k8s-app=dashboard-metrics-scraper
kubernetes-dashboard NodePort 10.96.219.128 <none> 443:30000/TCP 2m21s k8s-app=kubernetes-dashboard
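If the dashboard pods are not yet Running, you can watch them come up and confirm the Service is backed by endpoints; both checks are optional:
# Watch the dashboard pods until they reach Running (Ctrl+C to stop)
kubectl get pods -n kubernetes-dashboard -w
# Confirm the Service has pod endpoints behind it
kubectl get endpoints -n kubernetes-dashboard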
7. Create the dashboard administrator
Create the dashboard-admin.yaml file.
vim dashboard-admin.yaml
The content of the file is as follows.
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kubernetes-dashboard
After saving and exiting, run the following command to create the administrator.
kubectl create -f ./dashboard-admin.yaml
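If you prefer not to maintain a YAML file for this, an equivalent imperative command (which creates the same ServiceAccount, just without the k8s-app label) would be:
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard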
8. Grant permissions to the user
Create the dashboard-admin-bind-cluster-role.yaml file.
vim dashboard-admin-bind-cluster-role.yaml
The content of the file is as follows.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin-bind-cluster-role
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: dashboard-admin
    namespace: kubernetes-dashboard
After saving and exiting, run the following command to grant the permissions.
kubectl create -f ./dashboard-admin-bind-cluster-role.yaml
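The same binding can also be created imperatively (again without the label) if you find that more convenient:
kubectl create clusterrolebinding dashboard-admin-bind-cluster-role --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin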
9. View and copy the user token
Run the following command on the command line.
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
The actual output looks like this.
[root@binghe101 ~]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
Name: dashboard-admin-token-p8tng
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: dashboard-admin
kubernetes.io/service-account.uid: c3640b5f-cd92-468c-ba01-c886290c41ca
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IlVsRVBqTG5RNC1oTlpDS2xMRXF2cFIxWm44ZXhWeXlBRG5SdXpmQXpDdWcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tcDh0bmciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYzM2NDBiNWYtY2Q5Mi00NjhjLWJhMDEtYzg4NjI5MGM0MWNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.XOrXofgbk5EDa8COxOkv31mYwciUGXcBD9TQrb6QTOfT2W4eEpAAZUzKYzSmxLeHMqvu_IUIUF2mU5Lt6wN3L93C2NLfV9jqaopfq0Q5GjgWNgGRZAgsuz5W3v_ntlKz0_VW3a7ix3QQSrEWLBF6YUPrzl8p3r8OVWpDUndjx-OXEw5pcYQLH1edy-tpQ6Bc8S1BnK-d4Zf-ZuBeH0X6orZKhdSWhj9WQDJUx6DBpjx9DUc9XecJY440HVti5hmaGyfd8v0ofgtdsSE7q1iizm-MffJpcp4PGnUU3hy1J-XIP0M-8SpAyg2Pu_-mQvFfoMxIPEEzpOrckfC1grlZ3g
As you can see, the token value is:
eyJhbGciOiJSUzI1NiIsImtpZCI6IlVsRVBqTG5RNC1oTlpDS2xMRXF2cFIxWm44ZXhWeXlBRG5SdXpmQXpDdWcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tcDh0bmciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYzM2NDBiNWYtY2Q5Mi00NjhjLWJhMDEtYzg4NjI5MGM0MWNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.XOrXofgbk5EDa8COxOkv31mYwciUGXcBD9TQrb6QTOfT2W4eEpAAZUzKYzSmxLeHMqvu_IUIUF2mU5Lt6wN3L93C2NLfV9jqaopfq0Q5GjgWNgGRZAgsuz5W3v_ntlKz0_VW3a7ix3QQSrEWLBF6YUPrzl8p3r8OVWpDUndjx-OXEw5pcYQLH1edy-tpQ6Bc8S1BnK-d4Zf-ZuBeH0X6orZKhdSWhj9WQDJUx6DBpjx9DUc9XecJY440HVti5hmaGyfd8v0ofgtdsSE7q1iizm-MffJpcp4PGnUU3hy1J-XIP0M-8SpAyg2Pu_-mQvFfoMxIPEEzpOrckfC1grlZ3g
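If you only need the raw token (for example, to copy it straight into the login form), a shorter variant that should work on K8S 1.18 is to read it from the ServiceAccount's token Secret directly; the jsonpath lookup below assumes the default token Secret that 1.18 creates for the ServiceAccount:
# Print only the decoded token of the dashboard-admin ServiceAccount
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get serviceaccount dashboard-admin -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode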
View the dashboard UI
Open https://192.168.175.101:30000 in a browser, as shown below.
Here we choose the Token login method and enter the token obtained on the command line, as shown below.
After clicking Sign in, we land on the dashboard, as shown below.
Since we installed the Metrics-Server service in 《【K8S】Deploying the Metrics-Server Service on K8s》, we can also see the CPU and memory usage of the node servers, as shown below.
With that, dashboard 2.0.0 has been installed successfully.
Final words
If you found this article helpful, search for and follow the WeChat official account "冰河技術" to learn all kinds of programming techniques with Binghe.
Finally, here is a link to a comprehensive K8S knowledge map:
https://www.processon.com/view/link/5ac64532e4b00dc8a02f05eb?spm=a2c4e.10696291.0.0.6ec019a4bYSFIw#map
I hope this helps everyone take fewer detours while learning K8S.