Docker Advanced Topics for Test Engineers
- March 10, 2021
- Notes
Study notes based on the Bilibili channel 狂神說Java (//space.bilibili.com/95256449/)
IV. Docker Images
1. What Is an Image
An image is a lightweight, executable, standalone software package that bundles a runtime environment together with the software built on top of it. It contains everything needed to run that software: code, runtime libraries, environment variables, and configuration files.
Package the application and its environment into an image, and it can be run directly.
2. How Image Layering Works
A Docker image actually consists of stacked filesystem layers; this layered filesystem is called UnionFS. The CentOS we install into a virtual machine is several GB, so why is Docker's only about 200 MB?
For a stripped-down OS, the rootfs can be very small: it only needs to contain the most basic commands, tools, and libraries, because the container uses the host's kernel directly and only has to supply its own rootfs. For different Linux distributions the bootfs is essentially the same while the rootfs differs, so different distributions can share the bootfs. This is also why a VM boots in minutes while a container starts in seconds!
3. Understanding the Layers
[root@ecs-x-large-2-linux-20200305213344 ~]# docker pull redis:5.0
Trying to pull repository docker.io/library/redis ...
5.0: Pulling from docker.io/library/redis
45b42c59be33: Already exists
5ce2e937bf62: Already exists
2a031498ff58: Already exists
ec50b60c87ea: Pull complete
2bf0c804a5c0: Pull complete
6a3615492950: Pull complete
Digest: sha256:6ba62effb31d8d74e6e2dec4b7ef9c8985e7fcc85c4f179e13f622f5785a4135
Status: Downloaded newer image for docker.io/redis:5.0
Why do Docker images use this layered structure?
The biggest benefit is resource sharing. If multiple images are built from the same base image, the host only needs to keep one copy of that base image on disk and load one copy into memory to serve all the containers, and every layer of an image can be shared.
Summary:
Every Docker image starts from a base image layer; whenever you modify it or add new content, a new image layer is created on top of the current one. Docker images are read-only. When a container starts, a new writable layer is loaded on top of the image. This layer is what we usually call the container layer; everything below it is an image layer.
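The layering summary above can be modeled with plain directories. This is only a toy sketch of union-mount behavior (real Docker uses overlay2 mounts, not cp):

```shell
# Toy model: image layers stacked bottom-up, with a writable
# container layer on top. Upper layers shadow lower ones.
mkdir -p layer1 layer2 container merged
echo "from layer1"           > layer1/a.txt
echo "from layer2"           > layer2/b.txt
echo "modified in container" > container/a.txt   # shadows layer1/a.txt

# Merge in order, so later (upper) layers win, like a union filesystem:
cp -r layer1/. merged/
cp -r layer2/. merged/
cp -r container/. merged/

cat merged/a.txt   # modified in container
cat merged/b.txt   # from layer2
```

The read-only image layers are never touched; only the top "container" directory changes, which mirrors how the writable container layer works.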
Inspect an image's layer information
Command: docker inspect <image ID or image name>
[root@ecs-x-large-2-linux-20200305213344 ~]# docker inspect d00afcde654e
[
{
"Id": "sha256:d00afcde654e3125384d52fb872c88986d2046fa598a12abcee52ff0d98e7562",
"RepoTags": [
"docker.io/redis:5.0"
],
"RepoDigests": [
"docker.io/redis@sha256:6ba62effb31d8d74e6e2dec4b7ef9c8985e7fcc85c4f179e13f622f5785a4135"
],
"Parent": "",
"Comment": "",
"Created": "2021-03-02T23:29:46.396151327Z",
"Container": "6a7820655f2592fdc2b254036170652520beb98f79a41e6aedc17987ccec3829",
"ContainerConfig": {
"Hostname": "6a7820655f25",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"6379/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"GOSU_VERSION=1.12",
"REDIS_VERSION=5.0.12",
"REDIS_DOWNLOAD_URL=//download.redis.io/releases/redis-5.0.12.tar.gz",
"REDIS_DOWNLOAD_SHA=7040eba5910f7c3d38f05ea5a1d88b480488215bdbd2e10ec70d18380108e31e"
],
"Cmd": [
"/bin/sh",
"-c",
"#(nop) ",
"CMD [\"redis-server\"]"
],
"Image": "sha256:f43399b52be67a391b4bf53e210c55002a2bce5e4fa5f1021d4dc9725ec7f537",
"Volumes": {
"/data": {}
},
"WorkingDir": "/data",
"Entrypoint": [
"docker-entrypoint.sh"
],
"OnBuild": null,
"Labels": {}
},
"DockerVersion": "19.03.12",
"Author": "",
"Config": {
"Hostname": "",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"6379/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"GOSU_VERSION=1.12",
"REDIS_VERSION=5.0.12",
"REDIS_DOWNLOAD_URL=//download.redis.io/releases/redis-5.0.12.tar.gz",
"REDIS_DOWNLOAD_SHA=7040eba5910f7c3d38f05ea5a1d88b480488215bdbd2e10ec70d18380108e31e"
],
"Cmd": [
"redis-server"
],
"Image": "sha256:f43399b52be67a391b4bf53e210c55002a2bce5e4fa5f1021d4dc9725ec7f537",
"Volumes": {
"/data": {}
},
"WorkingDir": "/data",
"Entrypoint": [
"docker-entrypoint.sh"
],
"OnBuild": null,
"Labels": null
},
"Architecture": "amd64",
"Os": "linux",
"Size": 98358570,
"VirtualSize": 98358570,
"GraphDriver": {
"Name": "overlay2",
"Data": {
"LowerDir": "/var/lib/docker/overlay2/343be33bc297acdf8bc2b57b335c025ea76b8d1263548ba269c0aefb81aaf28d/diff:/var/lib/docker/overlay2/3302ce8415cd3a8a1e1e9753eebbb38df5b15cc02fef109e30be41f4310ee810/diff:/var/lib/docker/overlay2/44c8b45db6fd63960703e604f43a4acc5633f09a3a91a8d7263ad2f9bfd0d038/diff:/var/lib/docker/overlay2/5eb368e142c6079aa1f507149216281ca79b5df08ba19bad51390d74dfbf3c1f/diff:/var/lib/docker/overlay2/219cf0492ba08d03dc4f2a5649ec1124fff82ebe22c6f9a0a26ccf303be0e0d1/diff",
"MergedDir": "/var/lib/docker/overlay2/d38f31592715a55459f4556623786c5878014bf8ffdcc1e88506069e32ba75dc/merged",
"UpperDir": "/var/lib/docker/overlay2/d38f31592715a55459f4556623786c5878014bf8ffdcc1e88506069e32ba75dc/diff",
"WorkDir": "/var/lib/docker/overlay2/d38f31592715a55459f4556623786c5878014bf8ffdcc1e88506069e32ba75dc/work"
}
},
"RootFS": {
"Type": "layers",
"Layers": [ # image layer information
"sha256:9eb82f04c782ef3f5ca25911e60d75e441ce0fe82e49f0dbf02c81a3161d1300",
"sha256:f973e3e0e07c6e9f9418a6dd0c453cd70c7fb87a0826172275883ab4bdb61bf4",
"sha256:c16b4f3a3f99ebbcd59795b54faf4cdf2e00ee09b85124fda5d0746d64237ca6",
"sha256:01b7eeecc774b7669892f89fc8b84eea781263448978a411f0f429b867410fc5",
"sha256:f2df42e57d5eef289656ef8aad072d2828a61e93833e2928a789a88bc2bc1cbc",
"sha256:b537eb7339bcbff729ebdc63a0f910b39ae3d5540663a74f55081b62e92f66e3"
]
}
}
]
V. Container Data Volumes
1. What Is a Data Volume
Docker's philosophy is to package the application and its environment into an image, but if a container is deleted, the data inside it is lost. We need a way to persist container data and to share data between containers. That is what data volumes provide: they guarantee persistence by mounting a directory inside the container onto a directory on the host. From then on we only need to edit files locally, and the container syncs automatically!
2. Using Data Volumes
Option 1: mount a specified host path
docker run -it -v <host dir>:<container dir> -p <host port>:<container port>
#1. Start a centos container with the mount
[root@ecs-x-large-2-linux-20200305213344 ~]# docker run -it -v /home/ceshi:/home 300e315adb2f /bin/bash
#2. Inspect the container's mount information
[root@ecs-x-large-2-linux-20200305213344 ~]# docker inspect 242dbee6c4a9
},
"Mounts": [
{
"Type": "bind",
"Source": "/home/ceshi",
"Destination": "/home",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
],
#3. Test file synchronization
#Create a file on the host
[root@ecs-x-large-2-linux-20200305213344 ~]# cd /home/ceshi/
[root@ecs-x-large-2-linux-20200305213344 ceshi]# echo "11111" > fanxiang.java
[root@ecs-x-large-2-linux-20200305213344 ceshi]# ls
fanxiang.java
[root@ecs-x-large-2-linux-20200305213344 ceshi]# cat fanxiang.java
11111
#Check whether the file exists in the container's directory
[root@ecs-x-large-2-linux-20200305213344 ceshi]# docker exec -it 242dbee6c4a9 /bin/bash
[root@242dbee6c4a9 /]# ls
bin dev etc home lib lib64 lost+found media mnt opt proc root run sbin srv sys tmp usr var
[root@242dbee6c4a9 /]# cd home/
[root@242dbee6c4a9 home]# ls
fanxiang.java
[root@242dbee6c4a9 home]# cat fanxiang.java
11111
#Modify the file inside the container and check whether the change syncs to the host
[root@242dbee6c4a9 home]# vi fanxiang.java
[root@242dbee6c4a9 home]# cat fanxiang.java
11111
22222
[root@242dbee6c4a9 home]#
#Check the file on the host
[root@ecs-x-large-2-linux-20200305213344 ceshi]# cat /home/ceshi/fanxiang.java
11111
22222
[root@ecs-x-large-2-linux-20200305213344 ceshi]#
Option 2: named volume mount
[root@ecs-x-large-2-linux-20200305213344 ceshi]# docker run -d --name nginx02 -P -v nginx-fanxiang:/etc/nginx 35c43ace9216
2cbc89399189416b4a4c04e1d20fc945eb9bffff3e3f400d4d7cacb45701281a
#List the created volumes
[root@ecs-x-large-2-linux-20200305213344 ceshi]# docker volume ls
DRIVER VOLUME NAME
local bb91e16d78d66b188bba86c8f5646b10c93b955953695737b1162fca0fbef279
local nginx-fanxiang #the named volume
#Inspect the volume's details
[root@ecs-x-large-2-linux-20200305213344 ceshi]# docker inspect nginx-fanxiang
[
{
"Driver": "local",
"Labels": null,
"Mountpoint": "/var/lib/docker/volumes/nginx-fanxiang/_data",
"Name": "nginx-fanxiang",
"Options": {},
"Scope": "local"
}
]
#Unless a host directory is specified, every volume created by a container lives under /var/lib/docker/volumes/xxxx/_data
#If a host directory is specified, docker volume ls will not show it.
Option 3: anonymous volume mount
[root@ecs-x-large-2-linux-20200305213344 ceshi]# docker run -d --name nginx01 -P -v /etc/nginx 35c43ace9216
5c0d290079eb097bdc50a8bc66cc4a044fc35ae7826f7c2e934c0bd86f9b8544
[root@ecs-x-large-2-linux-20200305213344 ceshi]# docker volume ls
DRIVER VOLUME NAME
local bb91e16d78d66b188bba86c8f5646b10c93b955953695737b1162fca0fbef279
Summary
# Three mount types: anonymous, named, and specified-path
-v <container path>                 # anonymous mount
-v <volume name>:<container path>   # named mount
-v /<host path>:<container path>    # specified-path mount; docker volume ls will not show it
# Append :ro or :rw after the container path to change read/write permissions
ro # readonly: read-only
rw # readwrite: read-write
docker run -d -P --name nginx05 -v juming:/etc/nginx:ro nginx
docker run -d -P --name nginx05 -v juming:/etc/nginx:rw nginx
# ro means the path can only be modified from the host side; the container cannot write to it!
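The three -v forms above can be told apart purely by their shape. A simplified sketch of the rule (Docker's real parser also handles mode suffixes and platform-specific paths):

```shell
# Classify a -v argument by its shape (simplified sketch).
classify_mount() {
  case "$1" in
    /*:*) echo "bind (specified host path)" ;;  # /host/path:/container/path
    *:*)  echo "named volume" ;;                # volname:/container/path
    *)    echo "anonymous volume" ;;            # /container/path only
  esac
}
classify_mount "/home/ceshi:/home"          # bind (specified host path)
classify_mount "nginx-fanxiang:/etc/nginx"  # named volume
classify_mount "/etc/nginx"                 # anonymous volume
```

This is also why a leading / matters: a name starting with / is treated as a host path, anything else before the colon is a volume name.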
Data Volume Containers
#Flag: --volumes-from
#Hands-on
[root@ecs-x-large-2-linux-20200305213344 ~]# docker run -d -P --name nginx01 -v /home/ceshimu -v /home/ceshi:/var 35c43ace9216
e7039f7e2285f4280ed097fce518832c4ddbaac9de7cf803ba0e3a68cd559616
[root@ecs-x-large-2-linux-20200305213344 ~]# docker run -d -P --name nginx02 --volumes-from nginx01 35c43ace9216
cae26f23da995802cf827b247a1b85364e925716a312c215a01729a8526be11b
[root@ecs-x-large-2-linux-20200305213344 ~]# docker run -d -P --name nginx03 --volumes-from nginx01 35c43ace9216
65d9eb1bd803a24fe16f22c65bcb8d1ea2ab300833801e11e7d334d1014ea422
#Now all three containers share the /home/ceshimu directory, giving us file and data sharing between containers.
#This also passes configuration between containers; a data volume's lifetime lasts until no container uses it anymore.
#But once data has been persisted to the host, the local copy is never deleted!
VI. Dockerfile
1. Introduction to Dockerfile
A Dockerfile is the file used to build an image; it is a script of command-line instructions. The build workflow:
- write a dockerfile
- docker build it into an image
- docker run the image
- docker push the image to Docker Hub
Because many official images are bare-bones base packages with a lot of functionality missing, we often build our own images with a dockerfile;
2. The Dockerfile Build Process
Basics:
- every reserved keyword is an instruction, and instructions must be uppercase;
- instructions execute from top to bottom;
- # marks a comment;
- every instruction creates and commits a new image layer;
The dockerfile is really developer-facing: from now on, whenever we release a project or make an image, we write a dockerfile, and the file itself is quite simple;
dockerfile: the build file, defining every step and the source code;
docker image: the image produced by building the dockerfile, the product that is ultimately published and run;
docker container: a container is simply the image running and providing a service;
Common Dockerfile Instructions
FROM # the base image; everything is built from here
MAINTAINER # who wrote the image: name + email
RUN # commands to run while the image is being built
ADD # add content, e.g. a tomcat tarball; archives are extracted automatically
WORKDIR # the image's working directory
VOLUME # directory to mount
EXPOSE # declare the port to expose
CMD # the command to run when the container starts; only the last CMD takes effect, and it can be replaced by runtime arguments
ENTRYPOINT # the command to run when the container starts; runtime arguments are appended to it
ONBUILD # instructions triggered when another Dockerfile builds FROM this image
COPY # like ADD: copy files into the image
ENV # set environment variables at build time!
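As a quick reference, here is a hypothetical Dockerfile sketch exercising most of the instructions above (the file names app.tar.gz and start.sh are made-up placeholders, not from this tutorial):

```dockerfile
# Hypothetical example covering the common instructions
FROM centos                     # base image
MAINTAINER fanxiang<[email protected]>
# set a build-time environment variable
ENV APP_HOME /usr/local/app
# working directory inside the image
WORKDIR $APP_HOME
# plain copy into the image
COPY start.sh .
# copy AND auto-extract the tarball
ADD app.tar.gz .
# run at build time; creates a new layer
RUN yum -y install vim
# anonymous volume mount point
VOLUME ["/data"]
# declared port
EXPOSE 8080
# fixed startup command; docker run arguments get appended after it
ENTRYPOINT ["sh", "start.sh"]
# default arguments; replaced entirely by docker run arguments
CMD ["--default-arg"]
```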
Hands-On Test
#1. Write our own centos image
vim dockerfile
FROM centos
MAINTAINER fanxiang<[email protected]>
ENV MYPATH /usr/local
WORKDIR $MYPATH
RUN yum -y install vim
EXPOSE 80
CMD echo $MYPATH
CMD echo "---end---"
CMD /bin/bash
#2. Build the image
[root@ecs-x-large-2-linux-20200305213344 ~]# docker build -f dockerfile -t mycentos:0.1.1 .
Sending build context to Docker daemon 24.06 kB
Step 1/9 : FROM centos
---> 300e315adb2f
Step 2/9 : MAINTAINER fanxiang<[email protected]>
---> Running in c194f8179cc6
---> 49f504155f74
Removing intermediate container c194f8179cc6
Step 3/9 : ENV MYPATH /usr/local
---> Running in 49486765253a
---> 72d62d727c0c
Removing intermediate container 49486765253a
Step 4/9 : WORKDIR $MYPATH
---> a2830ad521ca
Removing intermediate container a508b7e1f11c
Step 5/9 : RUN yum -y install vim
---> Running in 9b78249d5c31
#3. Run the image as a container
[root@ecs-x-large-2-linux-20200305213344 ~]# docker run -it ba7d934dc97e /bin/bash
[root@a60302927139 local]# pwd
/usr/local
Comparing CMD and ENTRYPOINT
CMD # the command to run when the container starts; only the last CMD takes effect, and it can be replaced by runtime arguments
ENTRYPOINT # the command to run when the container starts; runtime arguments are appended to it
#Hands-on
#Test CMD
#1. Write the image file
FROM centos
CMD ["ls","-a"]
#2. Build the image
[root@ecs-x-large-2-linux-20200305213344 ~]# docker build -f dockerfile-cmd -t centos-cmd:1.0.0 .
Sending build context to Docker daemon 25.09 kB
Step 1/2 : FROM centos
---> 300e315adb2f
Step 2/2 : CMD ls -a
---> Running in 6d49ee474c06
---> 2c8c1dc0ba19
Removing intermediate container 6d49ee474c06
Successfully built 2c8c1dc0ba19
#3. Run the image
root@ecs-x-large-2-linux-20200305213344 ~]# docker run 2c8c1dc0ba19
.
..
.dockerenv
bin
dev
etc
home
lib
lib64
#4. Now append -l, expecting the command to become ls -al
[root@ecs-x-large-2-linux-20200305213344 ~]# docker run 2c8c1dc0ba19 -l
container_linux.go:235: starting container process caused "exec: \"-l\": executable file not found in $PATH"
/usr/bin/docker-current: Error response from daemon: oci runtime error: container_linux.go:235: starting container process caused "exec: \"-l\": executable file not found in $PATH".
#With CMD, the appended -l replaces the entire CMD ["ls","-a"], leaving just ["-l"]; since -l is not an executable, it errors out;
#Test ENTRYPOINT
#1. Write the image file
FROM centos
ENTRYPOINT ["ls","-a"]
#2. Build the image
[root@ecs-x-large-2-linux-20200305213344 ~]# docker build -f dockerfile-en -t centos-en:1.1.1 .
Sending build context to Docker daemon 26.11 kB
Step 1/2 : FROM centos
---> 300e315adb2f
Step 2/2 : ENTRYPOINT ls -a
---> Running in 49c8f33c0272
---> da1b02c99aec
Removing intermediate container 49c8f33c0272
Successfully built da1b02c99aec
#3. Run the image
[root@ecs-x-large-2-linux-20200305213344 ~]# docker run da1b02c99aec
.
..
.dockerenv
bin
dev
etc
home
#4. Run the image with an argument appended
[root@ecs-x-large-2-linux-20200305213344 ~]# docker run da1b02c99aec -l
total 72
drwxr-xr-x. 1 root root 4096 Mar 9 09:00 .
drwxr-xr-x. 1 root root 4096 Mar 9 09:00 ..
-rwxr-xr-x. 1 root root 0 Mar 9 09:00 .dockerenv
lrwxrwxrwx. 1 root root 7 Nov 3 15:22 bin -> usr/bin
drwxr-xr-x. 5 root root 340 Mar 9 09:00 dev
drwxr-xr-x. 1 root root 4096 Mar 9 09:00 etc
drwxr-xr-x. 2 root root 4096 Nov 3 15:22 home
lrwxrwxrwx. 1 root root 7 Nov 3 15:22 lib -> usr/
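The behavior seen in the two experiments above can be modeled with a small shell function. This is a simplified sketch (the real rules also distinguish shell form from exec form):

```shell
# Simplified model of how Docker builds the container's startup command:
# final command = ENTRYPOINT + (runtime args if given, else CMD)
assemble() {
  entrypoint=$1; cmd=$2; runtime_args=$3
  if [ -n "$runtime_args" ]; then
    set -- $entrypoint $runtime_args   # runtime args REPLACE CMD
  else
    set -- $entrypoint $cmd
  fi
  echo "$*"
}
assemble ""      "ls -a" ""     # CMD image, no args        -> ls -a
assemble ""      "ls -a" "-l"   # CMD image, docker run -l  -> -l  (not a binary: error!)
assemble "ls -a" ""      "-l"   # ENTRYPOINT image, -l      -> ls -a -l
```

The middle case is exactly the "-l": executable file not found failure seen above; the last case is why the ENTRYPOINT image happily ran ls -al.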
Dockerfile comprehensive hands-on: build a tomcat image
1. Put the tomcat and jdk tarballs in the current directory
-rwxrwxrwx 1 root root 10515248 3月 9 17:06 apache-tomcat-8.5.63.tar.gz #tomcat
-rwxrwxrwx 1 root root 143142634 12月 2 13:21 jdk-8u271-linux-x64.tar.gz #jdk
drwxrwxrwx 2 3434 3434 4096 7月 22 2020 node_exporter-0.18.1.linux-amd64
-rwxrwxrwx 1 root root 8083296 3月 18 2020 node_exporter-0.18.1.linux-amd64.tar.gz
-rwxrwxrwx 1 root root 20143759 12月 7 2015 Python-3.5.1.tgz
drwxrwxrwx 2 root root 4096 6月 28 2018 scripts
2. Write the dockerfile
FROM centos
MAINTAINER cheng<[email protected]>
# copy a file into the image
COPY README /usr/local/README
# copy and auto-extract the tarballs
ADD jdk-8u231-linux-x64.tar.gz /usr/local/
ADD apache-tomcat-9.0.35.tar.gz /usr/local/
RUN yum -y install vim
# set environment variables (the PATH separator is :)
ENV MYPATH /usr/local
WORKDIR $MYPATH
ENV JAVA_HOME /usr/local/jdk1.8.0_231
ENV CATALINA_HOME /usr/local/apache-tomcat-9.0.35
ENV PATH $PATH:$JAVA_HOME/bin:$CATALINA_HOME/lib
# declare the exposed port
EXPOSE 8080
# default command: start tomcat, then tail the log to keep the container alive
CMD /usr/local/apache-tomcat-9.0.35/bin/startup.sh && tail -F /usr/local/apache-tomcat-9.0.35/logs/catalina.out
3. Build the image
# If the dockerfile uses the default name Dockerfile, -f is not needed
$ docker build -t mytomcat:0.1 .
4. Run the image
$ docker run -d -p 8080:8080 --name tomcat01 -v /home/kuangshen/build/tomcat/test:/usr/local/apache-tomcat-9.0.35/webapps/test -v /home/kuangshen/build/tomcat/tomcatlogs/:/usr/local/apache-tomcat-9.0.35/logs mytomcat:0.1
5. Access test
6. Publish the project (since we mounted volumes, we can write the project locally and it is published directly!)
Result: the project deploys successfully and is directly accessible!
Our workflow from now on: master writing Dockerfiles! Everything afterwards is published and run as Docker images!
Publish your own image to Docker Hub
1. Address: //hub.docker.com/
2. Make sure the account can log in
3. Log in
4. Push the image
# The push fails because, without a username prefix, it defaults to the official library
# Fixes
# Option 1: include your Docker Hub username at build time, then the push goes to your own repository
$ docker build -t chengcoder/mytomcat:0.1 .
# Option 2: use docker tag, then push again
$ docker tag <image id> chengcoder/mytomcat:1.0
VII. Docker Networking
1. Understanding docker0
[root@ecs-x-large-2-linux-20200305213344 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo #local loopback address
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether fa:16:3e:87:2c:ab brd ff:ff:ff:ff:ff:ff #Alibaba Cloud private network address
inet 192.168.0.158/24 brd 192.168.0.255 scope global noprefixroute dynamic eth0
valid_lft 75530sec preferred_lft 75530sec
inet6 fe80::f816:3eff:fe87:2cab/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:95:d5:1c:dd brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0 #the docker0 address
valid_lft forever preferred_lft forever
inet6 fe80::42:95ff:fed5:1cdd/64 scope link
valid_lft forever preferred_lft forever
Three networks in total.
2. How Docker Handles Container Network Requests
#Run a tomcat container
[root@ecs-x-large-2-linux-20200305213344 ~]# docker run -d --name tomcat01 tomcat
3a030b178e509a29f76c9fe6cd0fe70ba644983c35807c18b61466f2efa63461
#Check the host's current network state
[root@ecs-x-large-2-linux-20200305213344 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether fa:16:3e:87:2c:ab brd ff:ff:ff:ff:ff:ff
inet 192.168.0.158/24 brd 192.168.0.255 scope global noprefixroute dynamic eth0
valid_lft 75211sec preferred_lft 75211sec
inet6 fe80::f816:3eff:fe87:2cab/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:95:d5:1c:dd brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:95ff:fed5:1cdd/64 scope link
valid_lft forever preferred_lft forever
#A new interface pair has appeared: 59 - 58
59: vethece8904@if58: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 22:70:2a:91:fb:f9 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::2070:2aff:fe91:fbf9/64 scope link
valid_lft forever preferred_lft forever
#Enter the container and check its network
[root@ecs-x-large-2-linux-20200305213344 ~]# docker exec -it 3a030b178e50 /bin/bash
root@3a030b178e50:/usr/local/tomcat# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
# 58 - 59
58: eth0@if59: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.2/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe11:2/64 scope link
valid_lft forever preferred_lft forever
#Test whether the host and the container can ping each other
They can ping each other.
How it works:
1. Every time we start a Docker container, Docker assigns it an IP. As soon as Docker is installed, a docker0 bridge exists; the technology behind it is veth-pair!
2. Start another container, and another pair of interfaces appears. The NICs these containers bring always come in pairs: a veth-pair is a pair of virtual device interfaces, one end attached to the protocol stack and the other attached to its peer. Because of this property, a veth-pair acts as a bridge connecting virtual network devices; OpenStack, links between Docker containers, and OVS links all use veth-pair.
3. Now let's test whether tomcat01 and tomcat02 can ping each other.
**Conclusion**:
tomcat01 and tomcat02 share one router: docker0. Unless a network is specified, every container is routed through docker0, and Docker assigns each container a default available IP. Docker uses Linux bridging; on the host, docker0 acts as the bridge for Docker containers.
3. Using --link
Problem: we want to reach a container, say a mysql container, by name, so that when it restarts and its IP changes we don't have to edit our connection configuration. Can we access the mysql container by name?
Containers can reach each other through docker0 by IP, but not by name; that is why --link exists;
[root@ecs-x-large-2-linux-20200305213344 ~]# docker run -d --name tomcat02 tomcat
815f89b080aa83ba93e023237a962fa327198dfeef8e1b770156fd03717b21fc
[root@ecs-x-large-2-linux-20200305213344 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
815f89b080aa tomcat "catalina.sh run" 5 seconds ago Up 4 seconds 8080/tcp tomcat02
3a030b178e50 tomcat "catalina.sh run" 18 minutes ago Up 18 minutes 8080/tcp tomcat01
[root@ecs-x-large-2-linux-20200305213344 ~]# docker exec 815f89b080aa ping tomcat01
ping: tomcat01: Name or service not known #cannot ping by name
[root@ecs-x-large-2-linux-20200305213344 ~]# docker exec 3a030b178e50 ping tomcat02
ping: tomcat02: Name or service not known
[root@ecs-x-large-2-linux-20200305213344 ~]#
#But pinging by IP works
[root@ecs-x-large-2-linux-20200305213344 ~]# docker exec 815f89b080aa ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
60: eth0@if61: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.3/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe11:3/64 scope link
valid_lft forever preferred_lft forever
[root@ecs-x-large-2-linux-20200305213344 ~]# docker exec 3a030b178e50 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
58: eth0@if59: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.2/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe11:2/64 scope link
valid_lft forever preferred_lft forever
[root@ecs-x-large-2-linux-20200305213344 ~]# docker exec 815f89b080aa ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.058 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.045 ms
64 bytes from 172.17.0.2: icmp_seq=3 ttl=64 time=0.055 ms
64 bytes from 172.17.0.2: icmp_seq=4 ttl=64 time=0.050 ms
64 bytes from 172.17.0.2: icmp_seq=5 ttl=64 time=0.055 ms
Using --link
#Run tomcat03 with --link tomcat02
[root@ecs-x-large-2-linux-20200305213344 ~]# docker run -d --name tomcat03 --link tomcat02 tomcat
cb7d633736d4fa52916950e2477ec229dd2a4193e2d2121a55fa42a16471840f
#tomcat03 can ping tomcat02 by name
[root@ecs-x-large-2-linux-20200305213344 ~]# docker exec -it tomcat03 ping tomcat02
PING tomcat02 (172.17.0.3) 56(84) bytes of data.
64 bytes from tomcat02 (172.17.0.3): icmp_seq=1 ttl=64 time=0.078 ms
64 bytes from tomcat02 (172.17.0.3): icmp_seq=2 ttl=64 time=0.043 ms
64 bytes from tomcat02 (172.17.0.3): icmp_seq=3 ttl=64 time=0.049 ms
#But tomcat02 cannot ping tomcat03
[root@ecs-x-large-2-linux-20200305213344 ~]# docker exec -it tomcat02 ping tomcat03
ping: tomcat03: Name or service not known
#How it works
#Check tomcat03's hosts file
[root@ecs-x-large-2-linux-20200305213344 ~]# docker exec -it tomcat03 cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.3 tomcat02 815f89b080aa #tomcat02 is present
172.17.0.4 cb7d633736d4
#Check tomcat02's hosts file
[root@ecs-x-large-2-linux-20200305213344 ~]# docker exec -it tomcat02 cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes #no tomcat03 entry anywhere
ff02::2 ip6-allrouters
172.17.0.3 815f89b080aa
--link essentially just adds a name-to-IP mapping to the hosts file. --link is no longer recommended, and neither is relying on docker0 directly.
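What --link does can be sketched without Docker at all: it is nothing more than a one-way hosts entry (the IPs below are the ones from the transcript above):

```shell
# Simulate what --link writes into the *linking* container's hosts file.
hosts_file=$(mktemp)
printf '127.0.0.1 localhost\n' > "$hosts_file"
# docker run --name tomcat03 --link tomcat02 ... appends a line like:
printf '172.17.0.3 tomcat02 815f89b080aa\n' >> "$hosts_file"

# Resolve a name the way a hosts-file lookup would (simplified):
resolve() {
  awk -v name="$2" '{ for (i = 2; i <= NF; i++) if ($i == name) { print $1; exit } }' "$1"
}
resolve "$hosts_file" tomcat02   # 172.17.0.3 : tomcat03 can find tomcat02
resolve "$hosts_file" tomcat03   # empty      : the reverse mapping was never added
```

The asymmetry of the ping results above falls out directly: only the container started with --link gets the extra line.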
4. Custom Networks
[root@ecs-x-large-2-linux-20200305213344 ~]# docker network
Commands:
connect Connect a container to a network
create Create a network
disconnect Disconnect a container from a network
inspect Display detailed information on one or more networks
ls List networks
prune Remove all unused networks
rm Remove one or more networks
List all Docker networks
[root@ecs-x-large-2-linux-20200305213344 ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
049b2d455318 bridge bridge local
ea9aabe8b492 host host local
4c9f7b958e9b none null local
bridge: bridged docker0 (the default; networks we create ourselves also use bridge mode)
none: no network configured, rarely used
host: share the host's network
Hands-On
# A plain docker run uses --net bridge by default, and that bridge is docker0:
# docker run -d -P --name tomcat01 tomcat
#   is equivalent to
# docker run -d -P --name tomcat01 --net bridge tomcat
# docker0's drawback: containers cannot reach each other by name. --link can patch that, but it is cumbersome!
#Create a custom network
[root@ecs-x-large-2-linux-20200305213344 ~]# docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet
a8ba53cc0f3b5b37e972fd57012d68fb78f95d2b9b4c3ef3316717687887ae6d
[root@ecs-x-large-2-linux-20200305213344 ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
049b2d455318 bridge bridge local
ea9aabe8b492 host host local
a8ba53cc0f3b mynet bridge local #the custom network
4c9f7b958e9b none null local
#Inspect the custom network's details
[root@ecs-x-large-2-linux-20200305213344 ~]# docker inspect mynet
[
{
"Name": "mynet",
"Id": "a8ba53cc0f3b5b37e972fd57012d68fb78f95d2b9b4c3ef3316717687887ae6d",
"Created": "2021-03-09T19:16:52.302646691+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.0.0/16",
"Gateway": "192.168.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Containers": {},
"Options": {},
"Labels": {}
}
]
#Start two tomcat containers on mynet
[root@ecs-x-large-2-linux-20200305213344 ~]# docker run -d --name tomcat01 --net mynet tomcat
e14ddd601d9713635adab96caef5b054c263eff6f37c9d052e512ef6efb1015b
[root@ecs-x-large-2-linux-20200305213344 ~]# docker run -d --name tomcat02 --net mynet tomcat
ff97ba52684433594ca09511a34842401b9d7f0027729f251e23b12f183a182b
[root@ecs-x-large-2-linux-20200305213344 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ff97ba526844 tomcat "catalina.sh run" 4 seconds ago Up 4 seconds 8080/tcp tomcat02
e14ddd601d97 tomcat "catalina.sh run" 11 seconds ago Up 10 seconds 8080/tcp tomcat01
#Inspect mynet again
[root@ecs-x-large-2-linux-20200305213344 ~]# docker inspect mynet
[
{
"Name": "mynet",
"Id": "a8ba53cc0f3b5b37e972fd57012d68fb78f95d2b9b4c3ef3316717687887ae6d",
"Created": "2021-03-09T19:16:52.302646691+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.0.0/16",
"Gateway": "192.168.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Containers": {
"e14ddd601d9713635adab96caef5b054c263eff6f37c9d052e512ef6efb1015b": {
"Name": "tomcat01",
"EndpointID": "f7d9b56bdef9d5739d016f8495c619273ebabbb069ff63bfa45be36a3579ca47",
"MacAddress": "02:42:c0:a8:00:02",
"IPv4Address": "192.168.0.2/16",
"IPv6Address": ""
},
"ff97ba52684433594ca09511a34842401b9d7f0027729f251e23b12f183a182b": {
"Name": "tomcat02",
"EndpointID": "699eb4fdba2f17d4a3750451139380f2c5ce7ceb2ed65f66a6e748c87c4ff405",
"MacAddress": "02:42:c0:a8:00:03",
"IPv4Address": "192.168.0.3/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
#On the custom network, containers can ping each other by name
[root@ecs-x-large-2-linux-20200305213344 ~]# docker exec -it tomcat01 ping tomcat02
PING tomcat02 (192.168.0.3) 56(84) bytes of data.
64 bytes from tomcat02.mynet (192.168.0.3): icmp_seq=1 ttl=64 time=0.045 ms
64 bytes from tomcat02.mynet (192.168.0.3): icmp_seq=2 ttl=64 time=0.046 ms
64 bytes from tomcat02.mynet (192.168.0.3): icmp_seq=3 ttl=64 time=0.040 ms
^C
--- tomcat02 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.040/0.043/0.046/0.008 ms
[root@ecs-x-large-2-linux-20200305213344 ~]# docker exec -it tomcat02 ping tomcat01
PING tomcat01 (192.168.0.2) 56(84) bytes of data.
64 bytes from tomcat01.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.029 ms
64 bytes from tomcat01.mynet (192.168.0.2): icmp_seq=2 ttl=64 time=0.034 ms
^C
--- tomcat01 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1ms
rtt min/avg/max/mdev = 0.029/0.031/0.034/0.006 ms
#Summary
Our custom network maintains the name-to-container mapping for us; this is the recommended approach;
5. Connecting Networks
#Test connectivity across two different networks (start a tomcat03 on the default docker0)
[root@ecs-x-large-2-linux-20200305213344 ~]# docker run -d --name tomcat03 tomcat
70fb08bab733dd100a748ba587f5e72b36045fc8575ebd4fb2ae6f5bd695652f
[root@ecs-x-large-2-linux-20200305213344 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
70fb08bab733 tomcat "catalina.sh run" 4 seconds ago Up 4 seconds 8080/tcp tomcat03
ff97ba526844 tomcat "catalina.sh run" 7 minutes ago Up 7 minutes 8080/tcp tomcat02
e14ddd601d97 tomcat "catalina.sh run" 7 minutes ago Up 7 minutes 8080/tcp tomcat01
#Test tomcat03 pinging tomcat01
[root@ecs-x-large-2-linux-20200305213344 ~]# docker exec -it tomcat03 ping tomcat01
ping: tomcat01: Name or service not known
#To connect tomcat03 to tomcat01, add tomcat03 to the mynet network
docker network connect mynet tomcat03
#Test pinging between tomcat03 and tomcat01
[root@ecs-x-large-2-linux-20200305213344 ~]# docker exec -it tomcat03 ping tomcat01
PING tomcat01 (192.168.0.2) 56(84) bytes of data.
64 bytes from tomcat01.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.045 ms
64 bytes from tomcat01.mynet (192.168.0.2): icmp_seq=2 ttl=64 time=0.033 ms
64 bytes from tomcat01.mynet (192.168.0.2): icmp_seq=3 ttl=64 time=0.053 ms
^C
--- tomcat01 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.033/0.043/0.053/0.011 ms
[root@ecs-x-large-2-linux-20200305213344 ~]# docker exec -it tomcat01 ping tomcat03
PING tomcat03 (192.168.0.4) 56(84) bytes of data.
64 bytes from tomcat03.mynet (192.168.0.4): icmp_seq=1 ttl=64 time=0.025 ms
64 bytes from tomcat03.mynet (192.168.0.4): icmp_seq=2 ttl=64 time=0.045 ms
^C
--- tomcat03 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.025/0.035/0.045/0.010 ms
#Conclusion
To reach containers across networks, use docker network connect to attach the container to the other network;
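The rule can be summed up as a toy membership model: containers resolve each other by name only when they share at least one network, and docker network connect simply adds a second membership. Pure shell, no Docker needed:

```shell
# can_reach "<networks of A>" "<networks of B>" -> yes/no
can_reach() {
  for a in $1; do
    for b in $2; do
      [ "$a" = "$b" ] && { echo yes; return; }
    done
  done
  echo no
}
tomcat01_nets="mynet"
tomcat03_nets="bridge"
can_reach "$tomcat01_nets" "$tomcat03_nets"   # no

tomcat03_nets="bridge mynet"   # after: docker network connect mynet tomcat03
can_reach "$tomcat01_nets" "$tomcat03_nets"   # yes
```

After the connect, tomcat03 has one IP on docker0 and a second IP on mynet, which is why both pings above succeed.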