How Docker's Default Bridge Network Works

1. Starting a Docker Container

Typically, starting a container, say a simple nginx service, looks like this:

docker run -d --rm nginx:XXX

OK, the container is up, but it can't be reached from outside the host. For that we need a port mapping, like this:

docker run -d --rm -p 80:80 nginx:XXX

Now external machines can reach the service inside the container through the mapped host port. So how exactly does a packet travel through the NIC, netfilter, routing, network namespaces, the Linux virtual bridge, and veth pair devices to reach the right process inside the container?

2. How Docker Isolates Network Resources

2.1 Network Namespace

Linux network namespaces: Linux can isolate system resources through namespaces, and that includes network resources. Let's start with a small experiment; readers already familiar with this can skip ahead.
Create two network namespaces, net1 and net2:

# create net1
ip netns add net1
# create net2
ip netns add net2
# list the namespaces
ip netns list
# run a command inside a namespace: ip netns exec [namespace] [command]
ip netns exec net1 ip addr

The output looks like this:

[root@localhost ~]# ip netns exec net1 ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
[root@localhost ~]# 

OK: we can't see any of the host's network configuration; the namespace starts out completely empty. That's the effect of a network namespace.

2.2 Communication Between Network Namespaces

Docker containers can reach one another, which means namespaces can be connected. So how is that done?
First, meet the veth pair. It's a Linux virtual network device: virtual, as in simulated, it doesn't physically exist. veth devices always come in pairs (hence "veth pair"), and data sent into one end comes out of the other. Sharp readers will have noticed: two ends, with data flowing between them, that's basically a network cable. Exactly right. Taking it one step further, net1 and net2 above are essentially two separate hosts (at least as far as network resources go), so if we connect them with a virtual cable (a veth pair), they should be able to talk. Bingo, the theory checks out; let's try it, continuing in the environment from the previous experiment.
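To make the "virtual cable" idea concrete, here is a toy Python model (an analogy, not kernel code): a veth pair is two endpoints wired so that whatever one end transmits, the other end receives.

```python
from collections import deque

class _End:
    def __init__(self, tx, rx):
        self._tx, self._rx = tx, rx
    def send(self, frame):
        self._tx.append(frame)     # egress on this end...
    def recv(self):
        return self._rx.popleft()  # ...is ingress on the peer end

class VethPair:
    """Toy model of a veth pair: a crossed pipe. Frames sent into one
    end come out of the other, like two PCs joined by a cable."""
    def __init__(self):
        a_to_b, b_to_a = deque(), deque()
        self.a = _End(a_to_b, b_to_a)
        self.b = _End(b_to_a, a_to_b)

pair = VethPair()
pair.a.send("ICMP echo request")
print(pair.b.recv())  # ICMP echo request
```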
First create a veth pair, veth1 and veth2:

ip link add veth1 type veth peer name veth2

Then use the pair to connect the two namespaces:

ip link set veth1 netns net1
ip link set veth2 netns net2

Enter net1, then configure and bring up the virtual NIC:

[root@localhost ~]# ip netns exec net1 bash
[root@localhost ~]# ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
7: veth1@if6: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0e:ad:28:91:48:51 brd ff:ff:ff:ff:ff:ff link-netnsid 1
[root@localhost ~]# ip addr add 10.0.0.1/24 dev veth1
[root@localhost ~]# ip link set dev veth1 up
[root@localhost ~]# 

Enter net2 and do the same:

[root@localhost ~]# ip netns exec net2 bash
[root@localhost ~]# ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
6: veth2@if7: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 4a:4a:a2:0a:cc:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0
[root@localhost ~]# ip addr add 10.0.0.2/24 dev veth2
[root@localhost ~]# ip link set dev veth2 up
[root@localhost ~]# 

OK. In theory this is now equivalent to two directly connected hosts (as if you ran a cable straight from one PC into another). Let's ping:

# enter net1 and ping net2's address
[root@localhost ~]# ip netns exec net1 bash
[root@localhost ~]# ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.069 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=0.055 ms
64 bytes from 10.0.0.2: icmp_seq=4 ttl=64 time=0.065 ms
^C
--- 10.0.0.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3001ms
rtt min/avg/max/mdev = 0.055/0.065/0.073/0.010 ms
[root@localhost ~]# 

It works. Now let's bring veth2 in net2 down (equivalent to unplugging the other PC's cable):

[root@localhost ~]# ip netns exec net2 bash
[root@localhost ~]# ip link set dev veth2 down
[root@localhost ~]# exit
exit
[root@localhost ~]# ip netns exec net1 bash
[root@localhost ~]# ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
^C
--- 10.0.0.2 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 3999ms

[root@localhost ~]#

As expected, completely unreachable (we did unplug the cable, after all).

2.3 A Better Option Than Point-to-Point Cables (veth pairs): the Bridge

The attentive reader will have spotted a problem: with many containers on one machine, all needing to reach each other, do we really connect every pair of namespaces with its own veth pair? That would take n(n-1)/2 cables, which is absurd.
Look at how the real world solves this: an office network segment has plenty of hosts, and wiring them pairwise would run out of ports, so where do the cables go? A switch. The switch delivers each frame to the right port, like a sorting hub: every PC just plugs in and it works. Here we introduce another Linux virtual network device, the bridge, which you can think of as a layer-2 switch. No matter how many containers we add, each one connects to the bridge with a single veth pair and they can all reach each other. In Docker, that bridge is docker0. On a host running Docker you can see it:

[root@localhost ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:9b:21:ef brd ff:ff:ff:ff:ff:ff
    inet 192.168.144.128/24 brd 192.168.144.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::2ac9:5d64:5e4b:6619/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:91:d3:be:6c brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:91ff:fed3:be6c/64 scope link 
       valid_lft forever preferred_lft forever
5: veth299c707@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 8a:eb:25:8f:78:f0 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::88eb:25ff:fe8f:78f0/64 scope link 
       valid_lft forever preferred_lft forever

Docker creates the docker0 bridge at startup. When a container starts, Docker creates a veth pair: one end is placed inside the container and named eth0, while the other stays on the host, named veth*****, and is attached to the bridge.
We can reproduce this by hand: create a bridge br0, create two namespaces net3 and net4, and connect them to the bridge with veth pairs.

# create namespaces net3 and net4
ip netns add net3
ip netns add net4

# create two veth pairs: veth*in goes into a namespace, veth*out onto the bridge
ip link add veth3in type veth peer name veth3out
ip link add veth4in type veth peer name veth4out

# connect net3 and assign an IP
ip link set veth3in netns net3
ip netns exec net3 ip addr add 10.0.0.3/24 dev veth3in
ip netns exec net3 ip link set dev veth3in up

# connect net4 and assign an IP
ip link set veth4in netns net4
ip netns exec net4 ip addr add 10.0.0.4/24 dev veth4in
ip netns exec net4 ip link set dev veth4in up

# try pinging now, before the bridge is attached (it won't work)

# create the br0 bridge
ip link add name br0 type bridge

# attach the veth*out ends to the bridge
ip link set dev veth3out master br0
ip link set dev veth3out up
ip link set dev veth4out master br0
ip link set dev veth4out up

# the bridge itself is still down; check whether ping works yet

# bring up the bridge
ip link set br0 up
# now net3 can finally ping net4
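Conceptually, the bridge behaves like a learning switch: it records which port each source MAC was seen on, forwards frames to the known port, and floods when the destination is still unknown. A toy Python model of that idea (the port and MAC names are made up for illustration; this is not how the kernel implements it):

```python
class ToyBridge:
    """Toy model of what br0/docker0 does at layer 2."""
    def __init__(self, ports):
        self.ports = set(ports)
        self.fdb = {}  # MAC -> port (forwarding database)

    def handle(self, in_port, src_mac, dst_mac):
        self.fdb[src_mac] = in_port              # learn the sender's port
        if dst_mac in self.fdb:
            return [self.fdb[dst_mac]]           # known destination: one port
        return sorted(self.ports - {in_port})    # unknown: flood the rest

br = ToyBridge(["veth3out", "veth4out", "br0"])
print(br.handle("veth3out", "MAC3", "MAC4"))  # ['br0', 'veth4out'] (flood)
br.handle("veth4out", "MAC4", "MAC3")         # bridge learns MAC4's port
print(br.handle("veth3out", "MAC3", "MAC4"))  # ['veth4out']
```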

What if the host itself wants to reach the namespaces on this bridge? Add another veth pair, assign an IP to one end, and plug the other into the bridge? That would work, but it isn't necessary. The br0 device we see in ip addr isn't really "the bridge itself"; it is just one port of the bridge, facing the host. We can assign an IP to that port and talk on this layer-2 network directly, like this:

[root@localhost ~]# ip addr add 10.0.0.254/24 dev br0
[root@localhost ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.144.2   0.0.0.0         UG    100    0        0 ens33
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 br0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.144.0   0.0.0.0         255.255.255.0   U     100    0        0 ens33
[root@localhost ~]# ping 10.0.0.4
PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms
64 bytes from 10.0.0.4: icmp_seq=2 ttl=64 time=0.051 ms
64 bytes from 10.0.0.4: icmp_seq=3 ttl=64 time=0.048 ms
^C
--- 10.0.0.4 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2024ms
rtt min/avg/max/mdev = 0.048/0.055/0.068/0.012 ms
[root@localhost ~]# 

This is why, on a host with Docker installed, we see the docker0 bridge with the IP 172.17.0.1.

2.4 Recap

This section covers everything that happens when you run a container without a port mapping. To summarize: using network namespaces, a bridge, and veth pairs, Docker builds a simple layer-2 network. Each container gets its own namespace, and thus its own isolated network stack and IP; each namespace is connected to the docker0 bridge through a veth pair, which gives containers connectivity with each other and with the host. It really is that simple.

3. How Docker Port Mapping Works

Docker implements port mapping with netfilter, a packet-processing framework in the kernel. It can act on packets at several hook points: before routing, after routing, before delivery to user space, after leaving user space, and so on, doing things including but not limited to filtering, source NAT, and destination NAT. Its user-space face is the iptables tool and the firewall. People usually say Docker's port mapping is "done with iptables", but iptables is just the user-space tool that installs the rules; netfilter does the actual work. Either phrasing is close enough. iptables itself is too big a topic to cover here; if you're rusty on it, I strongly recommend this blog series [//www.zsythink.net/archives/tag/iptables/].
Everything below assumes you are reasonably comfortable reading iptables rules.
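As a rough sketch of the hook order (a simplified model, not kernel code): which hooks a packet traverses depends on whether the routing decision says the destination is local.

```python
# Simplified netfilter hook order for an incoming packet.
# DNAT happens in PREROUTING (before routing); SNAT in POSTROUTING.
def hooks(dst_is_local: bool) -> list:
    path = ["PREROUTING"]                        # before the routing decision
    if dst_is_local:
        path += ["INPUT"]                        # delivered to a local process
    else:
        path += ["FORWARD", "POSTROUTING"]       # routed onward to another host/container
    return path

print(hooks(dst_is_local=False))  # ['PREROUTING', 'FORWARD', 'POSTROUTING']
```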

3.1 How Packets Get Into and Out of the Container

First, start a container that maps a service on port 8000 to the host's port 8000:

# any service will do, as long as a port is mapped
docker run -d -p 8000:8000 centos:7 python -m SimpleHTTPServer

Once the port is confirmed open and reachable from outside, let's look at what Docker configured in iptables:

# inspect the nat table (the table responsible for address translation)
[root@localhost ~]# iptables -t nat -nvL
Chain PREROUTING (policy ACCEPT 32 packets, 4465 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    2   112 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 25 packets, 3913 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 184 packets, 13926 bytes)
 pkts bytes target     prot opt in     out     source               destination         
   10   740 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 185 packets, 14010 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 MASQUERADE  all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match src-type LOCAL
    0     0 MASQUERADE  all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0           
    0     0 MASQUERADE  tcp  --  *      *       172.17.0.2           172.17.0.2           tcp dpt:8000

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8000 to:172.17.0.2:8000
[root@localhost ~]# 

In the PREROUTING chain (before routing) there is a jump to a custom DOCKER chain, which holds one rule. Translated into plain language: "for any TCP packet whose destination port is 8000, rewrite the destination address to 172.17.0.2 and the destination port to 8000". So the original packet src_ip:src_port->dst_ip:8000 becomes src_ip:src_port->172.17.0.2:8000 before the routing decision is made. Now look at the routing table:

[root@localhost ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.144.2   0.0.0.0         UG    100    0        0 ens33
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 br0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.144.0   0.0.0.0         255.255.255.0   U     100    0        0 ens33

The new destination matches the 172.17.0.0 route, so the packet is sent out through Iface docker0, the bridge's port on the host (since the destination is no longer a local address, the packet goes through the FORWARD chain rather than INPUT). From there it travels over the veth pair into the container.
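That routing decision is a longest-prefix match over the table. A toy Python model of the lookup, using the routes shown above:

```python
import ipaddress

# Toy longest-prefix match over the route table printed by `route -n`.
routes = [
    ("0.0.0.0/0",        "ens33"),   # default route
    ("10.0.0.0/24",      "br0"),
    ("172.17.0.0/16",    "docker0"),
    ("192.168.144.0/24", "ens33"),
]

def lookup(dst: str) -> str:
    ip = ipaddress.ip_address(dst)
    # among all matching prefixes, the most specific (longest) wins
    best = max((ipaddress.ip_network(net) for net, _ in routes
                if ip in ipaddress.ip_network(net)),
               key=lambda n: n.prefixlen)
    return dict(routes)[str(best)]

print(lookup("172.17.0.2"))  # docker0
print(lookup("8.8.8.8"))     # ens33 (falls through to the default route)
```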

3.2 Verifying with Packet Captures

Seeing is believing. Unless your project ships tomorrow, if you have the spare time to read my article you should also go capture some packets yourself.
Capture on the physical NIC and on docker0 at the same time:

[root@localhost ~]# tcpdump -i ens33 -n -vvv tcp port 8000
tcpdump: listening on ens33, link-type EN10MB (Ethernet), capture size 262144 bytes
00:31:38.768744 IP (tos 0x0, ttl 64, id 28655, offset 0, flags [DF], proto TCP (6), length 60)
    192.168.144.129.47106 > 192.168.144.128.irdmi: Flags [S], cksum 0x3e4b (correct), seq 2908588551, win 29200, options [mss 1460,sackOK,TS val 12707304 ecr 0,nop,wscale 7], length 0
00:31:38.768941 IP (tos 0x0, ttl 63, id 0, offset 0, flags [DF], proto TCP (6), length 60)
    192.168.144.128.irdmi > 192.168.144.129.47106: Flags [S.], cksum 0xa281 (incorrect -> 0x90e0), seq 2883195023, ack 2908588552, win 28960, options [mss 1460,sackOK,TS val 12710174 ecr 12707304,nop,wscale 7], length 0
00:31:38.772124 IP (tos 0x0, ttl 64, id 28656, offset 0, flags [DF], proto TCP (6), length 52)
    192.168.144.129.47106 > 192.168.144.128.irdmi: Flags [.], cksum 0x2fe5 (correct), seq 1, ack 1, win 229, options [nop,nop,TS val 12707307 ecr 12710174], length 0
00:31:38.772185 IP (tos 0x0, ttl 64, id 28657, offset 0, flags [DF], proto TCP (6), length 136)
    192.168.144.129.47106 > 192.168.144.128.irdmi: Flags [P.], cksum 0x0f8c (correct), seq 1:85, ack 1, win 229, options [nop,nop,TS val 12707307 ecr 12710174], length 84
00:31:38.772281 IP (tos 0x0, ttl 63, id 57777, offset 0, flags [DF], proto TCP (6), length 52)
    192.168.144.128.irdmi > 192.168.144.129.47106: Flags [.], cksum 0xa279 (incorrect -> 0x2f90), seq 1, ack 85, win 227, options [nop,nop,TS val 12710177 ecr 12707307], length 0
00:31:38.779728 IP (tos 0x0, ttl 63, id 57778, offset 0, flags [DF], proto TCP (6), length 69)
    192.168.144.128.irdmi > 192.168.144.129.47106: Flags [P.], cksum 0xa28a (incorrect -> 0x6fab), seq 1:18, ack 85, win 227, options [nop,nop,TS val 12710184 ecr 12707307], length 17
00:31:38.780273 IP (tos 0x0, ttl 63, id 57779, offset 0, flags [DF], proto TCP (6), length 990)
    192.168.144.128.irdmi > 192.168.144.129.47106: Flags [FP.], cksum 0xa623 (incorrect -> 0x8580), seq 18:956, ack 85, win 227, options [nop,nop,TS val 12710185 ecr 12707307], length 938
00:31:38.780359 IP (tos 0x0, ttl 64, id 28658, offset 0, flags [DF], proto TCP (6), length 52)
    192.168.144.129.47106 > 192.168.144.128.irdmi: Flags [.], cksum 0x2f6d (correct), seq 85, ack 18, win 229, options [nop,nop,TS val 12707316 ecr 12710184], length 0
00:31:38.780566 IP (tos 0x0, ttl 64, id 28659, offset 0, flags [DF], proto TCP (6), length 52)
    192.168.144.129.47106 > 192.168.144.128.irdmi: Flags [.], cksum 0x2bb3 (correct), seq 85, ack 957, win 243, options [nop,nop,TS val 12707316 ecr 12710185], length 0
00:31:38.780806 IP (tos 0x0, ttl 64, id 28660, offset 0, flags [DF], proto TCP (6), length 52)
    192.168.144.129.47106 > 192.168.144.128.irdmi: Flags [F.], cksum 0x2bb2 (correct), seq 85, ack 957, win 243, options [nop,nop,TS val 12707316 ecr 12710185], length 0
00:31:38.780864 IP (tos 0x0, ttl 63, id 55149, offset 0, flags [DF], proto TCP (6), length 52)
    192.168.144.128.irdmi > 192.168.144.129.47106: Flags [.], cksum 0x2bc1 (correct), seq 957, ack 86, win 227, options [nop,nop,TS val 12710186 ecr 12707316], length 0
^C
[root@localhost ~]# tcpdump -i docker0 -n -vvv tcp port 8000
tcpdump: listening on docker0, link-type EN10MB (Ethernet), capture size 262144 bytes
00:31:38.768858 IP (tos 0x0, ttl 63, id 28655, offset 0, flags [DF], proto TCP (6), length 60)
    192.168.144.129.47106 > 172.17.0.2.irdmi: Flags [S], cksum 0xe360 (correct), seq 2908588551, win 29200, options [mss 1460,sackOK,TS val 12707304 ecr 0,nop,wscale 7], length 0
00:31:38.768929 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
    172.17.0.2.irdmi > 192.168.144.129.47106: Flags [S.], cksum 0xfd6b (incorrect -> 0x35f6), seq 2883195023, ack 2908588552, win 28960, options [mss 1460,sackOK,TS val 12710174 ecr 12707304,nop,wscale 7], length 0
00:31:38.772152 IP (tos 0x0, ttl 63, id 28656, offset 0, flags [DF], proto TCP (6), length 52)
    192.168.144.129.47106 > 172.17.0.2.irdmi: Flags [.], cksum 0xd4fa (correct), seq 1, ack 1, win 229, options [nop,nop,TS val 12707307 ecr 12710174], length 0
00:31:38.772190 IP (tos 0x0, ttl 63, id 28657, offset 0, flags [DF], proto TCP (6), length 136)
    192.168.144.129.47106 > 172.17.0.2.irdmi: Flags [P.], cksum 0xb4a1 (correct), seq 1:85, ack 1, win 229, options [nop,nop,TS val 12707307 ecr 12710174], length 84
00:31:38.772270 IP (tos 0x0, ttl 64, id 57777, offset 0, flags [DF], proto TCP (6), length 52)
    172.17.0.2.irdmi > 192.168.144.129.47106: Flags [.], cksum 0xfd63 (incorrect -> 0xd4a5), seq 1, ack 85, win 227, options [nop,nop,TS val 12710177 ecr 12707307], length 0
00:31:38.779674 IP (tos 0x0, ttl 64, id 57778, offset 0, flags [DF], proto TCP (6), length 69)
    172.17.0.2.irdmi > 192.168.144.129.47106: Flags [P.], cksum 0xfd74 (incorrect -> 0x14c1), seq 1:18, ack 85, win 227, options [nop,nop,TS val 12710184 ecr 12707307], length 17
00:31:38.780084 IP (tos 0x0, ttl 64, id 57779, offset 0, flags [DF], proto TCP (6), length 990)
    172.17.0.2.irdmi > 192.168.144.129.47106: Flags [FP.], cksum 0x010e (incorrect -> 0x2a96), seq 18:956, ack 85, win 227, options [nop,nop,TS val 12710185 ecr 12707307], length 938
00:31:38.780389 IP (tos 0x0, ttl 63, id 28658, offset 0, flags [DF], proto TCP (6), length 52)
    192.168.144.129.47106 > 172.17.0.2.irdmi: Flags [.], cksum 0xd482 (correct), seq 85, ack 18, win 229, options [nop,nop,TS val 12707316 ecr 12710184], length 0
00:31:38.780578 IP (tos 0x0, ttl 63, id 28659, offset 0, flags [DF], proto TCP (6), length 52)
    192.168.144.129.47106 > 172.17.0.2.irdmi: Flags [.], cksum 0xd0c8 (correct), seq 85, ack 957, win 243, options [nop,nop,TS val 12707316 ecr 12710185], length 0
00:31:38.780818 IP (tos 0x0, ttl 63, id 28660, offset 0, flags [DF], proto TCP (6), length 52)
    192.168.144.129.47106 > 172.17.0.2.irdmi: Flags [F.], cksum 0xd0c7 (correct), seq 85, ack 957, win 243, options [nop,nop,TS val 12707316 ecr 12710185], length 0
00:31:38.780847 IP (tos 0x0, ttl 64, id 55149, offset 0, flags [DF], proto TCP (6), length 52)
    172.17.0.2.irdmi > 192.168.144.129.47106: Flags [.], cksum 0xd0d6 (correct), seq 957, ack 86, win 227, options [nop,nop,TS val 12710186 ecr 12707316], length 0
^C
11 packets captured
11 packets received by filter
0 packets dropped by kernel

You can clearly see the destination address 192.168.144.128:8000 being rewritten to 172.17.0.2:8000 before routing, and on the way back, netfilter's connection tracking automatically rewrites the reply's source address back to 192.168.144.128:8000.
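That reverse rewrite works because when the DNAT happens, the kernel records the connection's original tuple and automatically un-translates replies. A toy Python sketch of that bookkeeping (the dictionary and function names are made up for illustration; real conntrack is far more involved):

```python
# Toy model: conntrack remembers the original destination when the
# DNAT happens, keyed by the client side of the connection, and
# restores it on the reply's source.
conntrack = {}

def dnat_and_track(pkt):
    orig_dst = (pkt["dst_ip"], pkt["dst_port"])
    conntrack[(pkt["src_ip"], pkt["src_port"])] = orig_dst  # remember
    return {**pkt, "dst_ip": "172.17.0.2"}                  # rewrite dst

def untranslate_reply(reply):
    ip, port = conntrack[(reply["dst_ip"], reply["dst_port"])]
    return {**reply, "src_ip": ip, "src_port": port}        # restore src

inbound = {"src_ip": "192.168.144.129", "src_port": 47106,
           "dst_ip": "192.168.144.128", "dst_port": 8000}
dnat_and_track(inbound)
reply = {"src_ip": "172.17.0.2", "src_port": 8000,
         "dst_ip": "192.168.144.129", "dst_port": 47106}
print(untranslate_reply(reply)["src_ip"])  # 192.168.144.128
```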

3.3 Adding a Port Mapping by Hand

Since Docker itself implements the mapping with iptables, we can of course achieve the same effect by typing the rule ourselves. Let's also expose the container on host port 9000:

iptables -t nat -A DOCKER -p tcp --dport 9000 -j DNAT --to-destination 172.17.0.2:8000
# it works, smooth as silk
[root@localhost ~]# curl 192.168.144.128:9000
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><html>
<title>Directory listing for /</title>
<body>
<h2>Directory listing for /</h2>
<hr>
<ul>
<li><a href=".dockerenv">.dockerenv</a>
<li><a href="anaconda-post.log">anaconda-post.log</a>
<li><a href="bin/">bin@</a>
<li><a href="dev/">dev/</a>
<li><a href="etc/">etc/</a>
<li><a href="home/">home/</a>
<li><a href="lib/">lib@</a>
<li><a href="lib64/">lib64@</a>
<li><a href="media/">media/</a>
<li><a href="mnt/">mnt/</a>
<li><a href="opt/">opt/</a>
<li><a href="proc/">proc/</a>
<li><a href="root/">root/</a>
<li><a href="run/">run/</a>
<li><a href="sbin/">sbin@</a>
<li><a href="srv/">srv/</a>
<li><a href="sys/">sys/</a>
<li><a href="tmp/">tmp/</a>
<li><a href="usr/">usr/</a>
<li><a href="var/">var/</a>
</ul>
<hr>
</body>
</html>

3.4 Recap

Docker's forwarding setup is fairly simple; read through the relevant rules patiently and you can trace exactly where the packets go. As for requests originating inside the container, how their SNAT is done, and why the local docker-proxy process exists at all, those are worth thinking through on your own; I'll stop rambling here.
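As a starting point for that exercise: the MASQUERADE rule for 172.17.0.0/16 with an out-interface other than docker0, already visible in the POSTROUTING output earlier, is what rewrites the source of container-originated traffic to the host's address. A toy Python sketch of its effect (HOST_IPS is a made-up stand-in for the host's interface addresses):

```python
import ipaddress

HOST_IPS = {"ens33": "192.168.144.128"}           # assumed host address
DOCKER_NET = ipaddress.ip_network("172.17.0.0/16")

def masquerade(pkt, out_if):
    # matches: source in 172.17.0.0/16, out-interface != docker0
    if out_if != "docker0" and ipaddress.ip_address(pkt["src_ip"]) in DOCKER_NET:
        return {**pkt, "src_ip": HOST_IPS[out_if]}  # hide behind the host
    return pkt

pkt = {"src_ip": "172.17.0.2", "dst_ip": "8.8.8.8"}
print(masquerade(pkt, "ens33")["src_ip"])  # 192.168.144.128
```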