k8s cluster networking (12) - flannel udp overlay network setup
- April 1, 2020
- Notes
In the previous article we looked at pod-to-pod communication in the flannel vxlan overlay network of a k8s cluster. In this article we set up a flannel udp overlay network, so that a later article can analyze pod-to-pod communication over it.
For the flannel udp overlay network we first need to stop the docker, flanneld, kubelet, kube-proxy, kube-apiserver, kube-scheduler, and kube-controller-manager services set up in earlier articles, and then write the new network configuration into the etcd cluster installed previously.
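As a minimal sketch (assuming the same systemd unit names used throughout this series), stopping everything before reconfiguring looks like this:
# on every worker node
systemctl stop kubelet kube-proxy docker flanneld
# on the master node
systemctl stop kube-apiserver kube-scheduler kube-controller-manager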
Update the etcd configuration:
- "Backend": {"Type":"udp"}==>表示为udp类型网络
- "Port ": 8285==>表示udp数据处理端口为8285
etcdctl --ca-file /etc/etcd/ca.crt --cert-file /etc/etcd/etcd-client.crt --key-file /etc/etcd/etcd-client.key set /cloudnetwork/config '{ "Network": "10.1.0.0/16", "SubnetLen": 24, "Backend": {"Type":"udp", "Port": 8285} }'
etcdctl --ca-file /etc/etcd/ca.crt --cert-file /etc/etcd/etcd-client.crt --key-file /etc/etcd/etcd-client.key get /cloudnetwork/config
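If the write succeeded, the get command simply echoes the stored document back, so the output should look like this:
{ "Network": "10.1.0.0/16", "SubnetLen": 24, "Backend": {"Type":"udp", "Port": 8285} }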

Start the flanneld service on all nodes and check the generated network:
systemctl start flanneld
systemctl status flanneld
etcdctl --ca-file /etc/etcd/ca.crt --cert-file /etc/etcd/etcd-client.crt --key-file /etc/etcd/etcd-client.key ls /cloudnetwork/subnets
etcdctl --ca-file /etc/etcd/ca.crt --cert-file /etc/etcd/etcd-client.crt --key-file /etc/etcd/etcd-client.key get /cloudnetwork/subnets/10.1.55.0-24
etcdctl --ca-file /etc/etcd/ca.crt --cert-file /etc/etcd/etcd-client.crt --key-file /etc/etcd/etcd-client.key get /cloudnetwork/subnets/10.1.74.0-24
etcdctl --ca-file /etc/etcd/ca.crt --cert-file /etc/etcd/etcd-client.crt --key-file /etc/etcd/etcd-client.key get /cloudnetwork/subnets/10.1.82.0-24


We can see that:
- subnet 10.1.82.0/24 is allocated on node 172.20.11.41
- subnet 10.1.55.0/24 is allocated on node 172.20.11.42
- subnet 10.1.74.0/24 is allocated on node 172.20.11.43
On each of the 3 nodes, check the allocated subnet and the docker network configuration file:
ip addr|grep 41
etcdctl --ca-file /etc/etcd/ca.crt --cert-file /etc/etcd/etcd-client.crt --key-file /etc/etcd/etcd-client.key get /cloudnetwork/subnets/10.1.82.0-24
cat /var/run/flannel/docker
ip addr|grep 42
etcdctl --ca-file /etc/etcd/ca.crt --cert-file /etc/etcd/etcd-client.crt --key-file /etc/etcd/etcd-client.key get /cloudnetwork/subnets/10.1.55.0-24
cat /var/run/flannel/docker
ip addr|grep 43
etcdctl --ca-file /etc/etcd/ca.crt --cert-file /etc/etcd/etcd-client.crt --key-file /etc/etcd/etcd-client.key get /cloudnetwork/subnets/10.1.74.0-24
cat /var/run/flannel/docker
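On node 172.20.11.41, for example, the docker env file generated by flannel should look roughly like the lines below (a sketch only: the exact variable names and the --bip address depend on how the env file was generated in the earlier setup articles, while --mtu reflects the udp overlay):
DOCKER_OPT_BIP="--bip=10.1.82.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1472"
DOCKER_NETWORK_OPTIONS=" --bip=10.1.82.1/24 --ip-masq=true --mtu=1472"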



Here we notice that the maximum transmission unit (MTU) is 1472 rather than the usual 1500. This is because the flannel udp overlay network carries the layer-3 IP packet inside the payload of a udp packet, so the effective MTU shrinks.
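The number is easy to verify (assuming a standard 1500-byte MTU on the physical interface): the outer encapsulation adds a 20-byte IP header plus an 8-byte udp header, so 1500 - 20 - 8 = 1472 bytes are left for the inner packet. The value can also be read straight off the flannel0 TUN device that flanneld creates in udp mode:
ip link show flannel0 | grep mtu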
Start the docker service on all worker nodes:
systemctl start docker
systemctl status docker

Start the kubelet service on all worker nodes:
systemctl start kubelet
systemctl status kubelet

Start the kube-proxy service on all worker nodes:
systemctl start kube-proxy
systemctl status kube-proxy

Start the kube-apiserver service on the master node:
systemctl start kube-apiserver
systemctl status kube-apiserver

Start the kube-scheduler service on the master node:
systemctl start kube-scheduler
systemctl status kube-scheduler

Start the kube-controller-manager service on the master node:
systemctl start kube-controller-manager
systemctl status kube-controller-manager
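With all services back up, a quick sanity check (assuming kubectl is already configured against this cluster, as in the earlier articles) is to confirm that the worker nodes come back as Ready:
kubectl get nodes -o wide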

Open udp port 8285 on all worker nodes:
firewall-cmd --permanent --zone=public --add-port=8285/udp
firewall-cmd --reload
firewall-cmd --list-all

flannel udp mode encapsulates traffic in udp packets; according to the etcd configuration above, udp port 8285 is used to receive that data, so the port has to be opened on every worker node.
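One way to double-check that flanneld is actually bound to that port on a node (just one possible check, using ss):
ss -lunp | grep 8285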
Check that the pods deployed in earlier articles are in the Ready state:
kubectl get pods -o wide --all-namespaces

Access the kube-dashboard deployed in an earlier article:
https://172.20.11.43:9092/#!/overview?namespace=default

The kube-dashboard is reachable, which also shows that resources such as deployments, pods, and services are working normally.
That's all for now; in the next article we will walk through pod-to-pod communication in the k8s cluster's flannel udp overlay network.