Appendix 024. Kubernetes v1.18.3 High-Availability Deployment, Architecture II
- June 16, 2020
- Notes
- dashboard, HAProxy, helm, Ingress, keepalived, kubeadm, Kubernetes, Longhorn, metrics, cluster, high availability
Introduction to kubeadm
kubeadm Overview
kubeadm provides kubeadm init and kubeadm join as best-practice "fast paths" for creating Kubernetes clusters.
kubeadm Features
kubeadm performs the actions necessary to bring up a minimum viable cluster; it only bootstraps the cluster and does not provision machines or install add-ons.
Solution Description
- This solution deploys Kubernetes 1.18.3 with kubeadm;
- etcd is stacked, i.e. co-located on the master nodes;
- Keepalived: provides a highly available VIP;
- HAProxy: runs as a systemd service and reverse-proxies to port 6443 on the three masters (a minimal configuration sketch follows this list);
- Other major components deployed:
- Metrics: cluster resource metrics;
- Dashboard: the Kubernetes graphical UI;
- Helm: the Kubernetes package manager;
- Ingress: Kubernetes service exposure;
- Longhorn: dynamic storage for Kubernetes.
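The actual HAProxy configuration is generated later by the hakek8s.sh script (see "Creating the Configuration Files" below). Purely as a sketch of the idea, with an assumed frontend port of 16443 and illustrative section names, the proxying part looks roughly like this:
# Illustrative sketch only -- the real /etc/haproxy/haproxy.cfg is generated by hakek8s.sh
frontend k8s-apiserver
    mode tcp
    bind *:16443                             # assumed VIP-facing port
    default_backend k8s-masters
backend k8s-masters
    mode tcp
    balance roundrobin
    server master01 172.24.8.71:6443 check   # the three kube-apiservers
    server master02 172.24.8.72:6443 check
    server master03 172.24.8.73:6443 check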
Deployment Planning
Node Planning
Hostname | IP | Role | Services |
---|---|---|---|
master01 | 172.24.8.71 | Kubernetes master node | docker, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kubelet, metrics, calico, HAProxy, Keepalived |
master02 | 172.24.8.72 | Kubernetes master node | docker, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kubelet, metrics, calico, HAProxy, Keepalived |
master03 | 172.24.8.73 | Kubernetes master node | docker, etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kubelet, metrics, calico, HAProxy, Keepalived |
worker01 | 172.24.8.74 | Kubernetes worker node 1 | docker, kubelet, kube-proxy, calico |
worker02 | 172.24.8.75 | Kubernetes worker node 2 | docker, kubelet, kube-proxy, calico |
worker03 | 172.24.8.76 | Kubernetes worker node 3 | docker, kubelet, kube-proxy, calico |
VIP | 172.24.8.100 | Keepalived virtual IP | |
High availability in Kubernetes mainly means a highly available control plane, i.e. multiple sets of master components and etcd members, with worker nodes reaching the masters through a load balancer.
Characteristics of the stacked topology, in which etcd is co-located with the master components:
- Requires fewer machines
- Simple to deploy and manage
- Easy to scale horizontally
- Higher risk: if one host goes down, a master and an etcd member are lost together, so cluster redundancy takes a comparatively large hit.
Tip: this lab uses HA Architecture II, described above, to implement Kubernetes high availability.
Initial Preparation
[root@master01 ~]# hostnamectl set-hostname master01 # set each node's own hostname in turn
[root@master01 ~]# cat >> /etc/hosts << EOF
172.24.8.71 master01
172.24.8.72 master02
172.24.8.73 master03
172.24.8.74 worker01
172.24.8.75 worker02
172.24.8.76 worker03
EOF
[root@master01 ~]# vi k8sinit.sh
#!/bin/sh
# Initialize the machine. This needs to be executed on every machine.
# Install docker
useradd -m docker
yum -y install yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["//dbzucv6w.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
]
}
EOF
systemctl restart docker
systemctl enable docker
systemctl status docker
# Disable the SELinux.
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# Turn off and disable the firewalld.
systemctl stop firewalld
systemctl disable firewalld
# Modify related kernel parameters & Disable the swap.
cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.tcp_tw_recycle = 0
vm.swappiness = 0
vm.overcommit_memory = 1
vm.panic_on_oom = 0
net.ipv6.conf.all.disable_ipv6 = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf >&/dev/null
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
modprobe br_netfilter
# Add ipvs modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
modprobe -- nf_conntrack
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
# Install rpm
yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget
# Update kernel
rpm --import http://down.linuxsb.com:8888/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://down.linuxsb.com:8888/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
yum --disablerepo="*" --enablerepo="elrepo-kernel" install -y kernel-ml
sed -i 's/^GRUB_DEFAULT=.*/GRUB_DEFAULT=0/' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
yum update -y
# Reboot the machine.
# reboot
Tip: some features may require a newer kernel; for the kernel upgrade procedure see "018. Upgrading the Linux Kernel".
On kernels 4.19 and later, the nf_conntrack_ipv4 module has been renamed to nf_conntrack.
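Optionally, once a node has rebooted onto the new kernel, a quick sanity check (not part of the original steps) confirms that the sysctl settings, IPVS modules, and swap changes took effect:
[root@master01 ~]# uname -r                 # should report the kernel-ml version
[root@master01 ~]# sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables   # both should be 1
[root@master01 ~]# lsmod | grep -E 'ip_vs|nf_conntrack'   # IPVS/conntrack modules should be listed
[root@master01 ~]# free -m                  # the Swap line should show 0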
SSH Trust Configuration
To make it easier to distribute files and run commands remotely, this lab configures an SSH trust relationship (passwordless login) from master01 to every other node.
[root@master01 ~]# ssh-keygen -f ~/.ssh/id_rsa -N ''
[root@master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@master01
[root@master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@master02
[root@master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@master03
[root@master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@worker01
[root@master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@worker02
[root@master01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@worker03
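Optionally, verify that passwordless login works before proceeding (a quick check, not part of the original steps):
[root@master01 ~]# for all_name in master01 master02 master03 worker01 worker02 worker03
do
ssh root@${all_name} "hostname"   # should print each hostname without a password prompt
done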
Other Preparation
[root@master01 ~]# vi environment.sh
#!/bin/sh
#****************************************************************#
# ScriptName: environment.sh
# Author: xhy
# Create Date: 2020-05-30 16:30
# Modify Author: xhy
# Modify Date: 2020-06-15 17:55
# Version:
#***************************************************************#
# Array of cluster MASTER node IPs
export MASTER_IPS=(172.24.8.71 172.24.8.72 172.24.8.73)
# Hostnames corresponding to the MASTER IPs
export MASTER_NAMES=(master01 master02 master03)
# Array of cluster NODE (worker) IPs
export NODE_IPS=(172.24.8.74 172.24.8.75 172.24.8.76)
# Hostnames corresponding to the NODE IPs
export NODE_NAMES=(worker01 worker02 worker03)
# Array of all cluster machine IPs
export ALL_IPS=(172.24.8.71 172.24.8.72 172.24.8.73 172.24.8.74 172.24.8.75 172.24.8.76)
# Hostnames corresponding to all cluster IPs
export ALL_NAMES=(master01 master02 master03 worker01 worker02 worker03)
[root@master01 ~]# source environment.sh
[root@master01 ~]# chmod +x *.sh
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
do
echo ">>> ${all_ip}"
scp -rp /etc/hosts root@${all_ip}:/etc/hosts
scp -rp k8sinit.sh root@${all_ip}:/root/
ssh root@${all_ip} "bash /root/k8sinit.sh"
done
Cluster Deployment
Required Component Packages
The following packages must be installed on every machine:
- kubeadm: the command used to bootstrap the cluster;
- kubelet: the agent that runs on every node in the cluster and starts pods and containers;
- kubectl: the command-line tool used to talk to the cluster.
kubeadm cannot install or manage kubelet or kubectl, so you must ensure their versions meet the requirements of the Kubernetes control plane that kubeadm will install; a version mismatch can lead to unexpected errors or problems.
For detailed installation of these components, see "Appendix 001. Introduction to and Usage of kubectl".
Tip: for the versions of all components compatible with Kubernetes 1.18, see https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md.
Installation
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
do
echo ">>> ${all_ip}"
ssh root@${all_ip} "cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF"
ssh root@${all_ip} "yum install -y kubeadm-1.18.3-0.x86_64 kubelet-1.18.3-0.x86_64 kubectl-1.18.3-0.x86_64 --disableexcludes=kubernetes"
ssh root@${all_ip} "systemctl enable kubelet"
done
[root@master01 ~]# yum list kubelet --showduplicates # list the available versions
Tip: the above needs to be run only on Master01, which automates installation across all nodes. Do not start kubelet at this point: it is started automatically during initialization, and starting it now produces errors that can safely be ignored.
Note: three dependencies are installed at the same time: cri-tools, kubernetes-cni, and socat:
socat: a dependency of kubelet;
cri-tools: the command-line tool for the CRI (Container Runtime Interface);
kubernetes-cni: the CNI network plugin binaries.
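Optionally, confirm that every node received the pinned 1.18.3 packages (a quick check, not part of the original steps):
[root@master01 ~]# for all_ip in ${ALL_IPS[@]}
do
echo ">>> ${all_ip}"
ssh root@${all_ip} "kubeadm version -o short && kubectl version --client --short"   # both should print v1.18.3
done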
Deploying the High-Availability Components
Installing HAProxy
[root@master01 ~]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
ssh root@${master_ip} "yum -y install gcc gcc-c++ make libnl libnl-devel libnfnetlink-devel openssl-devel wget openssh-clients systemd-devel zlib-devel pcre-devel libnl3-devel"
ssh root@${master_ip} "wget //down.linuxsb.com:8888/software/haproxy-2.1.6.tar.gz"
ssh root@${master_ip} "tar -zxvf haproxy-2.1.6.tar.gz"
ssh root@${master_ip} "cd haproxy-2.1.6/ && make ARCH=x86_64 TARGET=linux-glibc USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 PREFIX=/usr/local/haprpxy && make install PREFIX=/usr/local/haproxy"
ssh root@${master_ip} "cp /usr/local/haproxy/sbin/haproxy /usr/sbin/"
ssh root@${master_ip} "useradd -r haproxy && usermod -G haproxy haproxy"
ssh root@${master_ip} "mkdir -p /etc/haproxy && cp -r /root/haproxy-2.1.6/examples/errorfiles/ /usr/local/haproxy/"
done
Installing Keepalived
[root@master01 ~]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
ssh root@${master_ip} "yum -y install gcc gcc-c++ make libnl libnl-devel libnfnetlink-devel openssl-devel"
ssh root@${master_ip} "wget //down.linuxsb.com:8888/software/keepalived-2.0.20.tar.gz"
ssh root@${master_ip} "tar -zxvf keepalived-2.0.20.tar.gz"
ssh root@${master_ip} "cd keepalived-2.0.20/ && ./configure --sysconf=/etc --prefix=/usr/local/keepalived && make && make install"
done
Tip: the above needs to be run only on Master01, which automates the installation on all master nodes.
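Optionally, confirm both builds on every master (a quick check, not part of the original steps; the versions should report 2.1.6 and 2.0.20 respectively):
[root@master01 ~]# for master_ip in ${MASTER_IPS[@]}
do
echo ">>> ${master_ip}"
ssh root@${master_ip} "haproxy -v && keepalived --version"
done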
Creating the Configuration Files
[root@master01 ~]# wget http://down.linuxsb.com:8888/hakek8s.sh # fetch the automated deployment script
[root@master01 ~]# chmod u+x hakek8s.sh
[root@master01 ~]# vi hakek8s.sh
#!/bin/sh
#****************************************************************#
# ScriptName: hakek8s.sh
# Author: xhy
# Create Date: 2020-06-08 20:00
# Modify Author: xhy
# Modify Date: 2020-06-15 18:15
# Version: v2
#***************************************************************#
#######################################
# set variables below to create the config files, all files will create at ./config directory
#######################################
# master keepalived virtual ip address
export K8SHA_VIP=172.24.8.100
# master01 ip address
export K8SHA_IP1=172.24.8.71
# master02 ip address
export K8SHA_IP2=172.24.8.72
# master03 ip address
export K8SHA_IP3=172.24.8.73
# master01 hostname
export K8SHA_HOST1=master01
# master02 hostname
export K8SHA_HOST2=master02
# master03 hostname
export K8SHA_HOST3=master03
# master01 network interface name
export K8SHA_NETINF1=eth0
# master02 network interface name
export K8SHA_NETINF2=eth0
# master03 network interface name
export K8SHA_NETINF3=eth0
# keepalived auth_pass config
export K8SHA_KEEPALIVED_AUTH=412f7dc3bfed32194d1600c483e10ad1d
# kubernetes CIDR pod subnet
export K8SHA_PODCIDR=10.10.0.0
# kubernetes CIDR svc subnet
export K8SHA_SVCCIDR=10.20.0.0
[root@master01 ~]# ./hakek8s.sh
Explanation: the above needs to be run only on Master01. Running hakek8s.sh generates the following configuration files:
- kubeadm-config.yaml: the kubeadm initialization configuration, in the current directory
- keepalived: the Keepalived configuration, in /etc/keepalived on each master node (see the sketch after this list)
- haproxy: the HAProxy configuration, in /etc/haproxy/ on each master node
- calico.yaml: the Calico network component manifest, in the config/calico/ directory
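For orientation, the generated /etc/keepalived/keepalived.conf on master01 typically looks roughly like the sketch below. This is an illustration only: the state, virtual_router_id, and priority values are assumptions rather than the script's actual output, and the authoritative file is the one hakek8s.sh writes.
# Illustrative sketch only -- the real file is generated by hakek8s.sh
vrrp_instance VI_1 {
    state MASTER                 # assumed BACKUP on master02/master03
    interface eth0               # K8SHA_NETINF1
    virtual_router_id 51         # assumed; must match on all three masters
    priority 102                 # assumed lower priorities on the other masters
    authentication {
        auth_type PASS
        auth_pass 412f7dc3bfed32194d1600c483e10ad1d   # K8SHA_KEEPALIVED_AUTH
    }
    virtual_ipaddress {
        172.24.8.100             # K8SHA_VIP, the cluster entry point
    }
}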