Kubernetes Series Tutorial (17): Exposing Services through an HAProxy-based Ingress
- January 3, 2020
- Notes
1. HAProxy Ingress Controller
1.1 Introduction to HAProxy Ingress
How HAProxy Ingress watches the k8s cluster and builds the HAProxy configuration
Much like NGINX Ingress, HAProxy watches the Kubernetes API to learn the state of the Pods behind a Service and dynamically updates the haproxy configuration file, providing layer-7 load balancing.

The HAProxy Ingress controller has the following characteristics:
- Fast: carefully built on top of the battle-tested HAProxy load balancer, so performance is assured.
- Reliable: trusted by sysadmins on clusters as big as 1,000 namespaces, 2,000 domains and 3,000 ingress objects.
- Highly customizable: 100+ configuration options and growing.
1.2 Installing the HAProxy Ingress Controller
Installing haproxy ingress is fairly simple: the project provides an installation YAML manifest. Download the file first and review the Kubernetes resources it declares. The resource types are:
- ServiceAccount, bound to the RBAC roles below for authentication and authorization
- RBAC objects: Role, ClusterRole, RoleBinding and ClusterRoleBinding
- Deployment: a default backend application server, with an associated Service
- Service: the Service in front of the default backend
- DaemonSet: the core HAProxy controller, which references the ServiceAccount and the ConfigMap; it defines a nodeSelector on the label role: ingress-controller, so it only runs on specifically labeled nodes
- ConfigMap: used for custom haproxy ingress configuration
Installation manifest: https://haproxy-ingress.github.io/resources/haproxy-ingress.yaml
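If you want to confirm quickly which resource kinds the manifest defines before applying it, a simple grep over the downloaded file works; this is just a convenience sketch, not part of the official instructions:

```bash
# Download the manifest and list the resource kinds it contains
wget https://haproxy-ingress.github.io/resources/haproxy-ingress.yaml
grep -E '^kind:' haproxy-ingress.yaml | sort | uniq -c
```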
1. Create the namespace. haproxy ingress is deployed into the ingress-controller namespace, so create it first:
[root@node-1 ~]# kubectl create namespace ingress-controller
namespace/ingress-controller created
[root@node-1 ~]# kubectl get namespaces ingress-controller -o yaml
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: "2019-12-27T09:56:04Z"
  name: ingress-controller
  resourceVersion: "13946553"
  selfLink: /api/v1/namespaces/ingress-controller
  uid: ea70b2f7-efe4-43fd-8ce9-3b917b09b533
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
2. Install the haproxy ingress controller:
[root@node-1 ~]# wget https://haproxy-ingress.github.io/resources/haproxy-ingress.yaml
[root@node-1 ~]# kubectl apply -f haproxy-ingress.yaml
serviceaccount/ingress-controller created
clusterrole.rbac.authorization.k8s.io/ingress-controller created
role.rbac.authorization.k8s.io/ingress-controller created
clusterrolebinding.rbac.authorization.k8s.io/ingress-controller created
rolebinding.rbac.authorization.k8s.io/ingress-controller created
deployment.apps/ingress-default-backend created
service/ingress-default-backend created
configmap/haproxy-ingress created
daemonset.apps/haproxy-ingress created
3. Check the installation. Looking at the core haproxy ingress DaemonSet, no Pods have been deployed yet, because the manifest defines a nodeSelector; the nodes therefore need to be given the matching label first.
[root@node-1 ~]# kubectl get daemonsets -n ingress-controller
NAME              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR             AGE
haproxy-ingress   0         0         0       0            0           role=ingress-controller   5m51s
4. Label the nodes so that the Pods managed by the DaemonSet can be scheduled onto them. In production, pin the haproxy ingress role to dedicated nodes as appropriate; when several nodes serve as entry points, a load balancer in front of them provides a single access point. Since this article focuses on exploring haproxy ingress itself, no such load balancer is deployed; readers can add one according to their own situation. node-1 and node-2 are used as examples:
[root@node-1 ~]# kubectl label node node-1 role=ingress-controller
node/node-1 labeled
[root@node-1 ~]# kubectl label node node-2 role=ingress-controller
node/node-2 labeled

# Check the labels
[root@node-1 ~]# kubectl get nodes --show-labels
NAME     STATUS   ROLES    AGE    VERSION   LABELS
node-1   Ready    master   104d   v1.15.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-1,kubernetes.io/os=linux,node-role.kubernetes.io/master=,role=ingress-controller
node-2   Ready    <none>   104d   v1.15.3   app=web,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-2,kubernetes.io/os=linux,label=test,role=ingress-controller
node-3   Ready    <none>   104d   v1.15.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-3,kubernetes.io/os=linux
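To double-check which nodes the DaemonSet will actually match, the same label can be used as a selector; the sketch below also shows how a label can be removed again if the wrong node was labeled (node-3 here is only an illustration):

```bash
# List only the nodes carrying the label used by the DaemonSet's nodeSelector
kubectl get nodes -l role=ingress-controller

# Remove the label again if it was applied by mistake (a trailing "-" deletes a label)
kubectl label node node-3 role-
```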
5. Check the DaemonSet again: it is now fully deployed, with Pods scheduled onto node-1 and node-2.
[root@node-1 ~]# kubectl get daemonsets -n ingress-controller
NAME              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR             AGE
haproxy-ingress   2         2         2       2            2           role=ingress-controller   15m
[root@node-1 ~]# kubectl get pods -n ingress-controller -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
haproxy-ingress-bdns8   1/1     Running   0          2m27s   10.254.100.102   node-2   <none>           <none>
haproxy-ingress-d5rnl   1/1     Running   0          2m31s   10.254.100.101   node-1   <none>           <none>
The haproxy ingress installation also deploys a backend service through a Deployment. This default backend is required; without it the ingress controller cannot start. It can be confirmed in the Deployment list:
[root@node-1 ~]# kubectl get deployments -n ingress-controller
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
ingress-default-backend   1/1     1            1           18m
6. Check the haproxy ingress logs. They show that the multiple haproxy ingress instances achieve high availability (HA) through leader election.
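The log output is not reproduced here; a minimal sketch of how it can be inspected (the grep pattern is only a guess at how the election messages are worded):

```bash
# Read the controller logs on one of the ingress pods and filter for election-related messages
kubectl logs -n ingress-controller haproxy-ingress-bdns8 | grep -iE 'leader|election'
```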

The other resources, including the ServiceAccount, ClusterRole and ConfigMap, can be verified individually. At this point the HAProxy ingress controller deployment is complete. Two other deployment methods are also available.
2. Using haproxy ingress
2.1 haproxy ingress basics
Once the Ingress controller is deployed, Ingress rules need to be defined so that the controller can discover the Pod resources behind a Service. This section introduces how Ingress is used with the HAProxy Ingress Controller.
1. Prepare the environment: create a Deployment and expose its port.
# Create the application and expose its port
[root@node-1 haproxy-ingress]# kubectl run haproxy-ingress-demo --image=nginx:1.7.9 --port=80 --replicas=1 --expose
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
service/haproxy-ingress-demo created
deployment.apps/haproxy-ingress-demo created

# Check the Deployment
[root@node-1 haproxy-ingress]# kubectl get deployments haproxy-ingress-demo
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
haproxy-ingress-demo   1/1     1            1           10s

# Check the Service
[root@node-1 haproxy-ingress]# kubectl get services haproxy-ingress-demo
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
haproxy-ingress-demo   ClusterIP   10.106.199.102   <none>        80/TCP    17s
2. Create an Ingress rule. If several ingress controllers are installed, the kubernetes.io/ingress.class annotation can be used to select haproxy as the controller:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: haproxy-ingress-demo
  labels:
    ingresscontroller: haproxy
  annotations:
    kubernetes.io/ingress.class: haproxy
spec:
  rules:
  - host: www.happylau.cn
    http:
      paths:
      - path: /
        backend:
          serviceName: haproxy-ingress-demo
          servicePort: 80
3. Apply the Ingress rule and view its details. The Events log shows that the controller has picked up the change and updated its configuration:
[root@node-1 haproxy-ingress]# kubectl apply -f ingress-demo.yaml
ingress.extensions/haproxy-ingress-demo created

# View the details
[root@node-1 haproxy-ingress]# kubectl describe ingresses haproxy-ingress-demo
Name:             haproxy-ingress-demo
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host             Path  Backends
  ----             ----  --------
  www.happylau.cn
                   /   haproxy-ingress-demo:80 (10.244.2.166:80)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"haproxy"},"labels":{"ingresscontroller":"haproxy"},"name":"haproxy-ingress-demo","namespace":"default"},"spec":{"rules":[{"host":"www.happylau.cn","http":{"paths":[{"backend":{"serviceName":"haproxy-ingress-demo","servicePort":80},"path":"/"}]}}]}}
  kubernetes.io/ingress.class:  haproxy
Events:
  Type    Reason  Age   From                Message
  ----    ------  ----  ----                -------
  Normal  CREATE  27s   ingress-controller  Ingress default/haproxy-ingress-demo
  Normal  CREATE  27s   ingress-controller  Ingress default/haproxy-ingress-demo
  Normal  UPDATE  20s   ingress-controller  Ingress default/haproxy-ingress-demo
  Normal  UPDATE  20s   ingress-controller  Ingress default/haproxy-ingress-demo
4. Test the Ingress rule. The domain can be added to the hosts file; here we simply use curl with --resolve, pointing at node-1 or node-2 (either works):
[root@node-1 haproxy-ingress]# curl http://www.happylau.cn --resolve www.happylau.cn:80:10.254.100.101
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
5. The test succeeds. Next, look inside the haproxy ingress controller at the configuration it generated for this rule:
[root@node-1 ~]# kubectl exec -it haproxy-ingress-bdns8 -n ingress-controller /bin/sh

# View the configuration file
/etc/haproxy # cat /etc/haproxy/haproxy.cfg
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#   HAProxy Ingress Controller
#   --------------------------
#   This file is automatically updated, do not edit
#
#

# Global configuration
global
    daemon
    nbthread 2
    cpu-map auto:1/1-2 0-1
    stats socket /var/run/haproxy-stats.sock level admin expose-fd listeners
    maxconn 2000
    hard-stop-after 10m
    lua-load /usr/local/etc/haproxy/lua/send-response.lua
    lua-load /usr/local/etc/haproxy/lua/auth-request.lua
    tune.ssl.default-dh-param 2048
    ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK
    ssl-default-bind-options no-sslv3 no-tls-tickets

# Default settings
defaults
    log global
    maxconn 2000
    option redispatch
    option dontlognull
    option http-server-close
    option http-keep-alive
    timeout client          50s
    timeout client-fin      50s
    timeout connect         5s
    timeout http-keep-alive 1m
    timeout http-request    5s
    timeout queue           5s
    timeout server          50s
    timeout server-fin      50s
    timeout tunnel          1h

# Backend servers, linked to the backend Pods through the Service's endpoint discovery
backend default_haproxy-ingress-demo_80
    mode http
    balance roundrobin
    acl https-request ssl_fc
    http-request set-header X-Original-Forwarded-For %[hdr(x-forwarded-for)] if { hdr(x-forwarded-for) -m found }
    http-request del-header x-forwarded-for
    option forwardfor
    http-response set-header Strict-Transport-Security "max-age=15768000"
    server srv001 10.244.2.166:80 weight 1 check inter 2s   # the backend Pod's address
    server srv002 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv003 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv004 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv005 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv006 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv007 127.0.0.1:1023 disabled weight 1 check inter 2s

# The default backend created at install time; required for the initial installation
backend _default_backend
    mode http
    balance roundrobin
    http-request set-header X-Original-Forwarded-For %[hdr(x-forwarded-for)] if { hdr(x-forwarded-for) -m found }
    http-request del-header x-forwarded-for
    option forwardfor
    server srv001 10.244.2.165:8080 weight 1 check inter 2s
    server srv002 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv003 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv004 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv005 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv006 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv007 127.0.0.1:1023 disabled weight 1 check inter 2s

backend _error413
    mode http
    errorfile 400 /usr/local/etc/haproxy/errors/413.http
    http-request deny deny_status 400

backend _error495
    mode http
    errorfile 400 /usr/local/etc/haproxy/errors/495.http
    http-request deny deny_status 400

backend _error496
    mode http
    errorfile 400 /usr/local/etc/haproxy/errors/496.http
    http-request deny deny_status 400

# Frontend listening on port 80, with an HTTPS redirect rule; the hosts it serves are defined in /etc/haproxy/maps/_global_http_front.map
frontend _front_http
    mode http
    bind *:80
    http-request set-var(req.base) base,lower,regsub(:[0-9]+/,/)
    http-request redirect scheme https if { var(req.base),map_beg(/etc/haproxy/maps/_global_https_redir.map,_nomatch) yes }
    http-request set-header X-Forwarded-Proto http
    http-request del-header X-SSL-Client-CN
    http-request del-header X-SSL-Client-DN
    http-request del-header X-SSL-Client-SHA1
    http-request del-header X-SSL-Client-Cert
    http-request set-var(req.backend) var(req.base),map_beg(/etc/haproxy/maps/_global_http_front.map,_nomatch)
    use_backend %[var(req.backend)] unless { var(req.backend) _nomatch }
    default_backend _default_backend

# Frontend listening on port 443; the host names are mapped in /etc/haproxy/maps/_front001_host.map
frontend _front001
    mode http
    bind *:443 ssl alpn h2,http/1.1 crt /ingress-controller/ssl/default-fake-certificate.pem
    http-request set-var(req.hostbackend) base,lower,regsub(:[0-9]+/,/),map_beg(/etc/haproxy/maps/_front001_host.map,_nomatch)
    http-request set-header X-Forwarded-Proto https
    http-request del-header X-SSL-Client-CN
    http-request del-header X-SSL-Client-DN
    http-request del-header X-SSL-Client-SHA1
    http-request del-header X-SSL-Client-Cert
    use_backend %[var(req.hostbackend)] unless { var(req.hostbackend) _nomatch }
    default_backend _default_backend

# Stats listener
listen stats
    mode http
    bind *:1936
    stats enable
    stats uri /
    no log
    option forceclose
    stats show-legends

# Health check endpoint
frontend healthz
    mode http
    bind *:10253
    monitor-uri /healthz
Look at the host name map file, which maps the frontend host names to the backends they forward to:
/etc/haproxy/maps # cat /etc/haproxy/maps/_global_http_front.map
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#   HAProxy Ingress Controller
#   --------------------------
#   This file is automatically updated, do not edit
#
#
www.happylau.cn/ default_haproxy-ingress-demo_80
With this basic configuration, layer-7 load balancing on top of haproxy is in place: the haproxy ingress controller dynamically learns the Service backend rules through the Kubernetes API and writes them into the haproxy.cfg configuration file.
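To check the rendered rule for a single host without paging through the whole file, you can grep inside the controller pod; a minimal sketch using the pod name from the earlier output:

```bash
# Show the host-to-backend map entry and the first lines of the generated backend section
kubectl exec -n ingress-controller haproxy-ingress-bdns8 -- \
  sh -c 'grep happylau /etc/haproxy/maps/_global_http_front.map; \
         grep -A 10 "backend default_haproxy-ingress-demo_80" /etc/haproxy/haproxy.cfg'
```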
2.2 Dynamic updates and load balancing
Backend Pods change constantly. Through the Service's endpoint discovery, haproxy ingress detects these changes, dynamically updates haproxy.cfg and applies the new configuration (in practice without restarting the haproxy service). This section demonstrates the dynamic update and load-balancing behavior.
1. Dynamic update: as an example, scale the Pod replicas from replicas=1 up to 3.
[root@node-1 ~]# kubectl scale --replicas=3 deployment haproxy-ingress-demo
deployment.extensions/haproxy-ingress-demo scaled
[root@node-1 ~]# kubectl get deployments haproxy-ingress-demo
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
haproxy-ingress-demo   3/3     3            3           43m

# Check the IP addresses of the Pods after scaling out
[root@node-1 ~]# kubectl get pods -o wide
NAME                                   READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
haproxy-ingress-demo-5d487d4fc-5pgjt   1/1     Running   0          43m   10.244.2.166   node-3   <none>           <none>
haproxy-ingress-demo-5d487d4fc-pst2q   1/1     Running   0          18s   10.244.0.52    node-1   <none>           <none>
haproxy-ingress-demo-5d487d4fc-sr8tm   1/1     Running   0          18s   10.244.1.149   node-2   <none>           <none>
2. Look at the haproxy configuration file: the backend server list has automatically discovered the newly added Pod addresses.
backend default_haproxy-ingress-demo_80
    mode http
    balance roundrobin
    acl https-request ssl_fc
    http-request set-header X-Original-Forwarded-For %[hdr(x-forwarded-for)] if { hdr(x-forwarded-for) -m found }
    http-request del-header x-forwarded-for
    option forwardfor
    http-response set-header Strict-Transport-Security "max-age=15768000"
    server srv001 10.244.2.166:80 weight 1 check inter 2s
    # newly discovered Pod addresses
    server srv002 10.244.0.52:80 weight 1 check inter 2s
    server srv003 10.244.1.149:80 weight 1 check inter 2s
    server srv004 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv005 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv006 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv007 127.0.0.1:1023 disabled weight 1 check inter 2s
3. Check the haproxy ingress logs. They report "HAProxy updated without needing to reload", i.e. the change was applied dynamically without restarting the haproxy service. Since version 1.8, HAProxy has been able to update its configuration at runtime, which suits microservice scenarios; see the referenced documentation for details.
[root@node-1 ~]# kubectl logs haproxy-ingress-bdns8 -n ingress-controller -f
I1227 12:21:11.523066       6 controller.go:274] Starting HAProxy update id=20
I1227 12:21:11.561001       6 instance.go:162] HAProxy updated without needing to reload. Commands sent: 3
I1227 12:21:11.561057       6 controller.go:325] Finish HAProxy update id=20: ingress=0.149764ms writeTmpl=37.738947ms total=37.888711ms
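These no-reload updates go through the HAProxy runtime API on the stats socket declared in the global section shown earlier. The sketch below assumes socat is available inside the container, which may not be true for every image version:

```sh
# From a shell inside the haproxy-ingress pod: dump the current state of all backend servers
echo "show servers state" | socat stdio unix-connect:/var/run/haproxy-stats.sock
```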
4. Next, test load balancing. To make the effect visible, write different content into each Pod:
[root@node-1 ~]# kubectl exec -it haproxy-ingress-demo-5d487d4fc-5pgjt /bin/bash
root@haproxy-ingress-demo-5d487d4fc-5pgjt:/# echo "web-1" > /usr/share/nginx/html/index.html

[root@node-1 ~]# kubectl exec -it haproxy-ingress-demo-5d487d4fc-pst2q /bin/bash
root@haproxy-ingress-demo-5d487d4fc-pst2q:/# echo "web-2" > /usr/share/nginx/html/index.html

[root@node-1 ~]# kubectl exec -it haproxy-ingress-demo-5d487d4fc-sr8tm /bin/bash
root@haproxy-ingress-demo-5d487d4fc-sr8tm:/# echo "web-3" > /usr/share/nginx/html/index.html
5. Verify the load balancing. haproxy uses the roundrobin scheduling algorithm, so the rotation is clearly visible:
[root@node-1 ~]# curl http://www.happylau.cn --resolve www.happylau.cn:80:10.254.100.102
web-1
[root@node-1 ~]# curl http://www.happylau.cn --resolve www.happylau.cn:80:10.254.100.102
web-2
[root@node-1 ~]# curl http://www.happylau.cn --resolve www.happylau.cn:80:10.254.100.102
web-3
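To exercise the rotation several times in one go, a small loop is convenient; with roundrobin the responses should keep cycling through web-1, web-2 and web-3:

```bash
# Send six requests in a row against the ingress node
for i in $(seq 1 6); do
  curl -s http://www.happylau.cn --resolve www.happylau.cn:80:10.254.100.102
done
```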
This section verified the haproxy ingress controller's ability to apply configuration changes dynamically: unlike the nginx ingress controller, it can pick up new configuration without reloading the service process, which is a significant advantage in microservice scenarios. It also demonstrated the ingress load-balancing behavior with a concrete example.
2.3 Name-based virtual hosts
This section demonstrates name-based virtual hosting with haproxy ingress: two virtual hosts, news.happylau.cn and sports.happylau.cn, forward requests to haproxy-1 and haproxy-2 respectively.
1. Prepare the test environment: create two applications, haproxy-1 and haproxy-2, and expose their service ports.
[root@node-1 ~]# kubectl run haproxy-1 --image=nginx:1.7.9 --port=80 --replicas=1 --expose=true
[root@node-1 ~]# kubectl run haproxy-2 --image=nginx:1.7.9 --port=80 --replicas=1 --expose=true

# Check the Deployments
[root@node-1 ~]# kubectl get deployments
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
haproxy-1   1/1     1            1           39s
haproxy-2   1/1     1            1           36s

# Check the Services
[root@node-1 ~]# kubectl get services
NAME        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
haproxy-1   ClusterIP   10.100.239.114   <none>        80/TCP    55s
haproxy-2   ClusterIP   10.100.245.28    <none>        80/TCP    52s
2. Define the Ingress rule, declaring the two hosts and forwarding their requests to the corresponding Services:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: haproxy-ingress-virtualhost
  annotations:
    kubernetes.io/ingress.class: haproxy
spec:
  rules:
  - host: news.happylau.cn
    http:
      paths:
      - path: /
        backend:
          serviceName: haproxy-1
          servicePort: 80
  - host: sports.happylau.cn
    http:
      paths:
      - path: /
        backend:
          serviceName: haproxy-2
          servicePort: 80

# Apply the ingress rule and list it
[root@node-1 haproxy-ingress]# kubectl apply -f ingress-virtualhost.yaml
ingress.extensions/haproxy-ingress-virtualhost created
[root@node-1 haproxy-ingress]# kubectl get ingresses haproxy-ingress-virtualhost
NAME                          HOSTS                                 ADDRESS   PORTS   AGE
haproxy-ingress-virtualhost   news.happylau.cn,sports.happylau.cn             80      8s

# View the ingress rule details
[root@node-1 haproxy-ingress]# kubectl describe ingresses haproxy-ingress-virtualhost
Name:             haproxy-ingress-virtualhost
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host                Path  Backends
  ----                ----  --------
  news.happylau.cn
                      /   haproxy-1:80 (10.244.2.168:80)
  sports.happylau.cn
                      /   haproxy-2:80 (10.244.2.169:80)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"haproxy"},"name":"haproxy-ingress-virtualhost","namespace":"default"},"spec":{"rules":[{"host":"news.happylau.cn","http":{"paths":[{"backend":{"serviceName":"haproxy-1","servicePort":80},"path":"/"}]}},{"host":"sports.happylau.cn","http":{"paths":[{"backend":{"serviceName":"haproxy-2","servicePort":80},"path":"/"}]}}]}}
  kubernetes.io/ingress.class:  haproxy
Events:
  Type    Reason  Age   From                Message
  ----    ------  ----  ----                -------
  Normal  CREATE  37s   ingress-controller  Ingress default/haproxy-ingress-virtualhost
  Normal  CREATE  37s   ingress-controller  Ingress default/haproxy-ingress-virtualhost
  Normal  UPDATE  20s   ingress-controller  Ingress default/haproxy-ingress-virtualhost
  Normal  UPDATE  20s   ingress-controller  Ingress default/haproxy-ingress-virtualhost
3. Test the virtual host configuration, either by resolving directly with curl (--resolve) or by adding entries to the hosts file.

4. Look at the configuration files: both the frontend map and the backend sections of haproxy.cfg have been updated.
# Contents of /etc/haproxy/haproxy.cfg

# haproxy-1 backend
backend default_haproxy-1_80
    mode http
    balance roundrobin
    acl https-request ssl_fc
    http-request set-header X-Original-Forwarded-For %[hdr(x-forwarded-for)] if { hdr(x-forwarded-for) -m found }
    http-request del-header x-forwarded-for
    option forwardfor
    http-response set-header Strict-Transport-Security "max-age=15768000"
    server srv001 10.244.2.168:80 weight 1 check inter 2s
    server srv002 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv003 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv004 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv005 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv006 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv007 127.0.0.1:1023 disabled weight 1 check inter 2s

# haproxy-2 backend
backend default_haproxy-2_80
    mode http
    balance roundrobin
    acl https-request ssl_fc
    http-request set-header X-Original-Forwarded-For %[hdr(x-forwarded-for)] if { hdr(x-forwarded-for) -m found }
    http-request del-header x-forwarded-for
    option forwardfor
    http-response set-header Strict-Transport-Security "max-age=15768000"
    server srv001 10.244.2.169:80 weight 1 check inter 2s
    server srv002 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv003 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv004 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv005 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv006 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv007 127.0.0.1:1023 disabled weight 1 check inter 2s

# The related map file
/ # cat /etc/haproxy/maps/_global_http_front.map
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
#
#   HAProxy Ingress Controller
#   --------------------------
#   This file is automatically updated, do not edit
#
#
news.happylau.cn/ default_haproxy-1_80
sports.happylau.cn/ default_haproxy-2_80
2.4 Automatic URL redirection
haproxy ingress can redirect requests automatically. This is configured through annotations: the ingress.kubernetes.io/ssl-redirect annotation (false by default) redirects HTTP to HTTPS when set to true. The same setting can also be placed into the ConfigMap to make the redirect the default behavior for all ingresses. This article uses the annotation, redirecting HTTP access to HTTPS.
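For the ConfigMap variant, the setting would go into the haproxy-ingress ConfigMap created during installation; the sketch below assumes the global ssl-redirect key mirrors the per-ingress annotation:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress
  namespace: ingress-controller
data:
  ssl-redirect: "true"   # assumed global key: redirect HTTP to HTTPS for all ingresses by default
```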
1. Define the Ingress rule and set ingress.kubernetes.io/ssl-redirect to enable the redirect:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: haproxy-ingress-virtualhost
  annotations:
    kubernetes.io/ingress.class: haproxy
    ingress.kubernetes.io/ssl-redirect: "true"   # enable the HTTP-to-HTTPS redirect (annotation values must be quoted strings)
spec:
  rules:
  - host: news.happylau.cn
    http:
      paths:
      - path: /
        backend:
          serviceName: haproxy-1
          servicePort: 80
  - host: sports.happylau.cn
    http:
      paths:
      - path: /
        backend:
          serviceName: haproxy-2
          servicePort: 80
2. Apply the rule and verify the redirect: a plain HTTP request to either host should now be answered with a redirect to the HTTPS scheme.
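A minimal way to verify it from the command line; the 302 response and Location header shown in the comments are the expected result of the redirect rule, not captured output:

```bash
# A plain HTTP request should now be redirected to the HTTPS scheme
curl -I http://news.happylau.cn --resolve news.happylau.cn:80:10.254.100.101
# Expected (roughly):
#   HTTP/1.1 302 Found
#   location: https://news.happylau.cn/
```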
2.5 TLS encryption
haproxy ingress ships with a default fake certificate (the default-fake-certificate.pem seen in the 443 frontend earlier). To serve a host with its own certificate, create the certificate, store it in a TLS Secret and reference the Secret from the Ingress rule, as the following steps show.
1. Generate a self-signed certificate and private key:
[root@node-1 haproxy-ingress]# openssl req -x509 -newkey rsa:2048 -nodes -days 365 -keyout tls.key -out tls.crt
Generating a 2048 bit RSA private key
...........+++
.......+++
writing new private key to 'tls.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:CN
State or Province Name (full name) []:GD
Locality Name (eg, city) [Default City]:ShenZhen
Organization Name (eg, company) [Default Company Ltd]:Tencent
Organizational Unit Name (eg, section) []:HappyLau
Common Name (eg, your name or your server's hostname) []:www.happylau.cn
Email Address []:[email protected]
2. Create a Secret holding the certificate and private key:
[root@node-1 haproxy-ingress]# kubectl create secret tls haproxy-tls --cert=tls.crt --key=tls.key
secret/haproxy-tls created
[root@node-1 haproxy-ingress]# kubectl describe secrets haproxy-tls
Name:         haproxy-tls
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  kubernetes.io/tls

Data
====
tls.crt:  1424 bytes
tls.key:  1704 bytes
3. Write the Ingress rule, referencing the Secret through the tls section:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: haproxy-ingress-virtualhost
  annotations:
    kubernetes.io/ingress.class: haproxy
spec:
  tls:
  - hosts:
    - news.happylau.cn
    - sports.happylau.cn
    secretName: haproxy-tls
  rules:
  - host: news.happylau.cn
    http:
      paths:
      - path: /
        backend:
          serviceName: haproxy-1
          servicePort: 80
  - host: sports.happylau.cn
    http:
      paths:
      - path: /
        backend:
          serviceName: haproxy-2
          servicePort: 80
4. Apply the configuration and view the details: the TLS section now shows the certificate associated with the hosts.
[root@node-1 haproxy-ingress]# kubectl apply -f ingress-virtualhost.yaml
ingress.extensions/haproxy-ingress-virtualhost configured
[root@node-1 haproxy-ingress]# kubectl describe ingresses haproxy-ingress-virtualhost
Name:             haproxy-ingress-virtualhost
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<none>)
TLS:
  haproxy-tls terminates news.happylau.cn,sports.happylau.cn
Rules:
  Host                Path  Backends
  ----                ----  --------
  news.happylau.cn
                      /   haproxy-1:80 (10.244.2.168:80)
  sports.happylau.cn
                      /   haproxy-2:80 (10.244.2.169:80)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"haproxy"},"name":"haproxy-ingress-virtualhost","namespace":"default"},"spec":{"rules":[{"host":"news.happylau.cn","http":{"paths":[{"backend":{"serviceName":"haproxy-1","servicePort":80},"path":"/"}]}},{"host":"sports.happylau.cn","http":{"paths":[{"backend":{"serviceName":"haproxy-2","servicePort":80},"path":"/"}]}}],"tls":[{"hosts":["news.happylau.cn","sports.happylau.cn"],"secretName":"haproxy-tls"}]}}
  kubernetes.io/ingress.class:  haproxy
Events:
  Type    Reason  Age                From                Message
  ----    ------  ----               ----                -------
  Normal  CREATE  37m                ingress-controller  Ingress default/haproxy-ingress-virtualhost
  Normal  CREATE  37m                ingress-controller  Ingress default/haproxy-ingress-virtualhost
  Normal  UPDATE  7s (x2 over 37m)   ingress-controller  Ingress default/haproxy-ingress-virtualhost
  Normal  UPDATE  7s (x2 over 37m)   ingress-controller  Ingress default/haproxy-ingress-virtualhost
5. Test HTTPS access to the site: the hosts are now served securely over HTTPS.
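The HTTPS test output is not reproduced here; a minimal sketch of how it can be verified from the command line (-k is needed because the certificate is self-signed, and the openssl call only prints the subject of the certificate served for the SNI name):

```bash
# HTTPS request against node-1, ignoring the self-signed certificate warning
curl -k https://news.happylau.cn --resolve news.happylau.cn:443:10.254.100.101

# Inspect which certificate is presented for the SNI name
openssl s_client -connect 10.254.100.101:443 -servername news.happylau.cn </dev/null 2>/dev/null \
  | openssl x509 -noout -subject
```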

References
Official installation guide: https://haproxy-ingress.github.io/docs/getting-started/
haproxy ingress configuration reference: https://www.haproxy.com/documentation/hapee/1-7r2/traffic-management/k8s-image-controller/
When your talent cannot yet support your ambition, settle down and study.
Appendix
# ServiceAccount used for RBAC, bound to the roles below
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-controller
  namespace: ingress-controller
---
# ClusterRole: the cluster-scoped resources the controller may access, and with which verbs
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
# Role: namespace-scoped permissions
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: ingress-controller
  namespace: ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
      - create
      - update
---
# ClusterRoleBinding: associates the ServiceAccount with the ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-controller
subjects:
  - kind: ServiceAccount
    name: ingress-controller
    namespace: ingress-controller
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: ingress-controller
---
# RoleBinding: associates the ServiceAccount with the Role
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: ingress-controller
  namespace: ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-controller
subjects:
  - kind: ServiceAccount
    name: ingress-controller
    namespace: ingress-controller
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: ingress-controller
---
# Default backend application; haproxy ingress requires this associated application
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: ingress-default-backend
  name: ingress-default-backend
  namespace: ingress-controller
spec:
  selector:
    matchLabels:
      run: ingress-default-backend
  template:
    metadata:
      labels:
        run: ingress-default-backend
    spec:
      containers:
        - name: ingress-default-backend
          image: gcr.io/google_containers/defaultbackend:1.0
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
---
# Service definition for the default backend application
apiVersion: v1
kind: Service
metadata:
  name: ingress-default-backend
  namespace: ingress-controller
spec:
  ports:
    - port: 8080
  selector:
    run: ingress-default-backend
---
# haproxy ingress ConfigMap, used for custom configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress
  namespace: ingress-controller
---
# The core haproxy ingress DaemonSet
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: ingress-controller
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      run: haproxy-ingress
  template:
    metadata:
      labels:
        run: haproxy-ingress
    spec:
      hostNetwork: true                       # use the host's network namespace
      nodeSelector:                           # node selector: schedule only onto nodes carrying this label
        role: ingress-controller
      serviceAccountName: ingress-controller  # ServiceAccount for RBAC authentication and authorization
      containers:
        - name: haproxy-ingress
          image: quay.io/jcmoraisjr/haproxy-ingress
          args:
            - --default-backend-service=$(POD_NAMESPACE)/ingress-default-backend
            - --configmap=$(POD_NAMESPACE)/haproxy-ingress
            - --sort-backends
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
            - name: stat
              containerPort: 1936
          livenessProbe:
            httpGet:
              path: /healthz
              port: 10253
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace