
Configuring the four Docker container network modes and creating kernel network namespaces on Linux

Table of Contents

    • Docker's four network modes
      • bridge mode
      • container mode
      • host mode
      • none mode
    • Creating network namespaces
      • Creating a namespace with ip netns
      • Operating on a netns
      • Transferring devices
        • veth pair
        • Creating a veth pair
        • Communication between Network Namespaces
        • Renaming a veth device
    • Configuring the four network modes
      • bridge mode configuration
      • none mode configuration
      • container mode configuration
      • host mode configuration
    • Common container operations
      • Manually specifying the DNS server a container uses
      • Manually injecting hostname-to-IP mappings into /etc/hosts
      • Exposing container ports
      • Customizing the network properties of the docker0 bridge
      • Connecting to Docker remotely
      • Creating a custom Docker bridge

Docker's four network modes

Network mode   Configuration                    Description
host           --network host                   The container shares the host's Network namespace
container      --network container:NAME_OR_ID   The container shares another container's Network namespace
none           --network none                   The container has its own Network namespace, but no network setup is done for it (no veth pair, no bridge attachment, no IP configuration, etc.)
bridge         --network bridge                 The default mode

bridge mode

When the Docker daemon starts, it creates a virtual bridge named docker0 on the host, and containers started on this host are attached to it. The virtual bridge behaves much like a physical switch, so all containers on the host end up on the same layer-2 network through it.

Docker allocates an IP address for the container from the docker0 subnet and sets docker0's IP address as the container's default gateway. It creates a pair of virtual interfaces (a veth pair) on the host, places one end inside the new container and names it eth0 (the container's NIC), and leaves the other end on the host with a name like vethxxx, attaching it to the docker0 bridge. You can inspect this with the brctl show command.
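
For example, once a container is running you can list which veth interfaces are attached to docker0. This is only a minimal sketch: brctl comes from the bridge-utils package, and the veth names will differ on every host.

brctl show docker0            # bridge members, requires bridge-utils
ip link show master docker0   # equivalent listing using iproute2 only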

bridge is Docker's default network mode; if you omit the --network option, you get bridge mode. When you use docker run -p, Docker actually installs DNAT rules in iptables to implement port forwarding. You can view them with iptables -t nat -vnL.

#Confirm that IP forwarding is enabled
[[email protected] ~]# cat /proc/sys/net/ipv4/ip_forward
1
#Create a container
[[email protected] ~]# docker run -it nginx /bin/bash
[[email protected] ~]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
b3843c8ae4d9        nginx               "/bin/bash"         36 minutes ago      Up 36 minutes       80/tcp              bold_spence

[[email protected] ~]# iptables -t nat -vnL
Chain PREROUTING (policy ACCEPT 72 packets, 13929 bytes)
pkts bytes target     prot opt in     out     source               destination         
   5   260 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 72 packets, 13929 bytes)
pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 707 packets, 52946 bytes)
pkts bytes target     prot opt in     out     source               destination         
   0     0 DOCKER     all  --  *      *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 707 packets, 52946 bytes)
pkts bytes target     prot opt in     out     source               destination         
   0     0 MASQUERADE  all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0           

Chain DOCKER (2 references)
pkts bytes target     prot opt in     out     source               destination         
   0     0 RETURN     all  --  docker0 *       0.0.0.0/0            0.0.0.0/0           
#Create a centos container and access the nginx container from it
[[email protected] ~]# docker run -it centos /bin/bash
[[email protected] /]# curl http://172.17.0.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
   body {
       width: 35em;
       margin: 0 auto;
       font-family: Tahoma, Verdana, Arial, sans-serif;
   }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/" target="_blank" rel="external nofollow"  target="_blank" rel="external nofollow" >nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/" target="_blank" rel="external nofollow"  target="_blank" rel="external nofollow" >nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
           

container mode

In this mode, the newly created container shares a Network Namespace with an existing container rather than with the host. The new container does not create its own NIC or configure its own IP; instead it shares the IP address, port range, and so on with the specified container. Apart from networking, the two containers remain isolated in everything else, such as the filesystem and the process list. Processes in the two containers can communicate over the lo loopback interface.

#Create a centos container in container mode
[[email protected] ~]# docker run -it --network=container:8491d65371b4 centos /bin/bash

#The centos container has no network settings of its own
[[email protected] ~]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
92ed3b3c2687        centos              "/bin/bash"              3 minutes ago       Up 3 minutes                            jovial_snyder
8491d65371b4        nginx               "nginx -g 'daemon of…"   9 minutes ago       Up 9 minutes        80/tcp              optimistic_montalcini
[[email protected] ~]# docker inspect 92ed3b3c2687
...
"NetworkSettings": {
            "Bridge": "",
            "SandboxID": "",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {},
            "SandboxKey": "",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": {}
        }
...

#Create a file inside the centos container
[[email protected] /]# ls  
bin  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
[[email protected] /]# mkdir 123
[[email protected] /]# ls
123  dev  home	lib64	    media  opt	 root  sbin  sys  usr
bin  etc  lib	lost+found  mnt    proc  run   srv   tmp  var

#The directory 123 does not appear inside the nginx container
[email protected]:/# ls
bin  boot  dev	etc  home  lib	lib64  media  mnt  opt	proc  root  run  sbin  srv  sys  tmp  usr  var
           

host mode

If a container is started in host mode, it does not get its own Network Namespace; it shares one with the host. The container does not virtualize its own NIC or configure its own IP; it uses the host's IP address and ports. In other respects, such as the filesystem and the process list, the container is still isolated from the host.

A host-mode container can communicate with the outside world directly using the host's IP address, and services inside the container can use the host's ports without any NAT. The biggest advantage of host mode is better network performance, but ports already in use on the Docker host can no longer be used, and network isolation is poor.

#The host is not listening on port 80
[[email protected] ~]# ss -antl
State       Recv-Q Send-Q         Local Address:Port                        Peer Address:Port              
LISTEN      0      128                        *:22                                     *:*                  
LISTEN      0      100                127.0.0.1:25                                     *:*                  
LISTEN      0      5                          *:873                                    *:*                  
LISTEN      0      128                       :::22                                    :::*                  
LISTEN      0      100                      ::1:25                                    :::*                  
LISTEN      0      5                         :::873                                   :::*                  
#Create an nginx container in host mode
[[email protected] ~]# docker run -d --network=host nginx
b52a048ed2e01e0ba25896b8b7e724789d91e6f634cc4dfb7421a8d9f8ef1e57
#Now the host is listening on port 80
[[email protected] ~]# ss -antl
State       Recv-Q Send-Q         Local Address:Port                        Peer Address:Port              
LISTEN      0      128                        *:80                                     *:*                  
LISTEN      0      128                        *:22                                     *:*                  
LISTEN      0      100                127.0.0.1:25                                     *:*                  
LISTEN      0      5                          *:873                                    *:*                  
LISTEN      0      128                       :::22                                    :::*                  
LISTEN      0      100                      ::1:25                                    :::*                  
LISTEN      0      5                         :::873                                   :::*   
           

Accessing the host's IP also reaches nginx.


none mode

In none mode a Docker container has its own Network Namespace, but Docker performs no network configuration for it at all. In other words, the container has no NIC, IP, routes, and so on; we have to add a NIC, configure an IP address, etc. ourselves.

In this mode the container only has the lo loopback interface and no other NIC. none mode is selected at container creation time with --network none. A container of this type has no way to reach the network, and the closed-off network provides good isolation for security.

Typical use cases:

Starting a container to process data, for example converting data formats

Background computation and processing jobs

#Create a centos container in none mode
[[email protected] ~]# docker run --network=none -it centos /bin/bash
[[email protected] /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
#No IP address
#View the container's details
[[email protected] ~]# docker inspect f4c096312345
...
            "Networks": {
                "none": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "f312ddeef6847b34befd9207049ff97c6a16fa9cb299ae2d2d97814a6226d442",
                    "EndpointID": "efde2a34fee8d5bc99340685269bdb21abe081dc5bc833c3ec97b38288d5aead",
                    "Gateway": "",
                    "IPAddress": "",
                    "IPPrefixLen": 0,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "",
                    "DriverOpts": null
                }
#No IPv4 information
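
If a none-mode container does later need connectivity, one option is to wire a veth pair into its network namespace by hand, using the ip netns tooling covered in the next section. The following is only a sketch: the container name c1, the interface names ve-host/ve-cont, and the 172.18.0.0/24 addresses are all assumptions.

# expose the container's network namespace to `ip netns` (assumed container name: c1)
pid=$(docker inspect -f '{{.State.Pid}}' c1)
mkdir -p /var/run/netns
ln -sf /proc/$pid/ns/net /var/run/netns/c1

# create a veth pair and move one end into the container's namespace
ip link add ve-host type veth peer name ve-cont
ip link set ve-cont netns c1
ip netns exec c1 ip addr add 172.18.0.2/24 dev ve-cont
ip netns exec c1 ip link set ve-cont up

# configure the host end and test connectivity
ip addr add 172.18.0.1/24 dev ve-host
ip link set ve-host up
ip netns exec c1 ping -c 2 172.18.0.1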

           

Creating network namespaces

Creating a namespace with ip netns

[[email protected] ~]# ip netns list
[[email protected] ~]# ip netns add ns0
[[email protected] ~]# ip netns list
ns0
           

A newly created Network Namespace appears under /var/run/netns/. If a namespace with the same name already exists, the command fails with the error Cannot create namespace file "/var/run/netns/ns0": File exists.

[[email protected] ~]# ls /var/run/netns/
ns0

[[email protected] ~]# ip netns add ns0
Cannot create namespace file "/var/run/netns/ns0": File exists
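
A namespace that is no longer needed can be removed with the delete subcommand (shown here only as a sketch, since ns0 is still used in the examples below):

ip netns delete ns0   # removes /var/run/netns/ns0; the namespace goes away once nothing uses it
ip netns list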
           

Each Network Namespace has its own independent NICs, routing table, ARP table, iptables rules, and other network-related resources.

Operating on a netns

The ip command provides the ip netns exec subcommand for running commands inside a given Network Namespace.

View the NIC information of the newly created Network Namespace

[[email protected] ~]# ip netns exec ns0 ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
           

As you can see, a new Network Namespace gets a lo loopback interface by default, but it starts out down. If you try to ping this lo interface now, you get Network is unreachable:

[[email protected] ~]# ip netns exec ns0 ping 127.0.0.1
connect: Network is unreachable 
           

Bring up the lo loopback interface with the following command:

[[email protected] ~]# ip netns exec ns0 ip link set lo up
[[email protected] ~]# ip netns exec ns0 ping 127.0.0.1   
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.073 ms
           

Transferring devices

We can move devices (such as a veth) between Network Namespaces. Because a device can belong to only one Network Namespace at a time, it is no longer visible in the original namespace after being moved.

veth devices are transferable, while many other devices (such as lo, vxlan, ppp, and bridge devices) cannot be moved.
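
Whether a given device can be moved is reported by the kernel as the netns-local flag. A quick way to check it is shown below; this is a sketch that assumes ethtool is installed and that a veth device named veth0 already exists (one is created in the next section).

# "netns-local: on" means the device cannot leave its namespace (e.g. lo);
# "netns-local: off" means it can be moved (e.g. a veth device)
ethtool -k lo    | grep netns-local
ethtool -k veth0 | grep netns-local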

veth pair

A veth pair (Virtual Ethernet Pair) is a pair of connected ports: every packet that enters one end of the pair comes out of the other end, and vice versa.

veth pairs were introduced to allow direct communication between different Network Namespaces; with one you can connect two Network Namespaces directly.

Creating a veth pair

[[email protected] ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:b9:3c:6d brd ff:ff:ff:ff:ff:ff
    inet 192.168.220.20/24 brd 192.168.220.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::59a:c315:2825:665f/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:fe:7e:5c:ed brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:feff:fe7e:5ced/64 scope link 
       valid_lft forever preferred_lft forever

[[email protected] ~]# ip link add type veth
[[email protected] ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:b9:3c:6d brd ff:ff:ff:ff:ff:ff
    inet 192.168.220.20/24 brd 192.168.220.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::59a:c315:2825:665f/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:fe:7e:5c:ed brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:feff:fe7e:5ced/64 scope link 
       valid_lft forever preferred_lft forever
10: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether a2:d8:72:1d:30:c2 brd ff:ff:ff:ff:ff:ff
11: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 4e:c3:09:83:e9:ee brd ff:ff:ff:ff:ff:ff
           

As you can see, the system now has a new veth pair connecting the two virtual NICs veth0 and veth1, and at this point the pair is still down.
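
As a side note, the pair can also be created with explicit names instead of the auto-assigned veth0/veth1. The sketch below uses the arbitrary names a-side and b-side:

ip link add a-side type veth peer name b-side   # create a pair with explicit names
ip link show type veth                          # list all veth devices
ip link delete a-side                           # deleting either end removes the whole pair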

Communication between Network Namespaces

Next we use the veth pair to enable communication between two different Network Namespaces. We already created a Network Namespace named ns0; now create another one named ns1.

[[email protected] ~]# ip netns add ns1
[[email protected] ~]# ip netns list
ns1
ns0
           

Then add veth0 to ns0 and veth1 to ns1

[[email protected] ~]# ip link set veth0 netns ns0
[[email protected] ~]# ip link set veth1 netns ns1
           

Then assign IP addresses to both ends of the veth pair and bring them up

[[email protected] ~]# ip netns exec ns0 ip link set veth0 up
[[email protected] ~]# ip netns exec ns0 ip addr add 10.0.0.1/24 dev veth0
[[email protected] ~]# ip netns exec ns0 ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
10: veth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether a2:d8:72:1d:30:c2 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet 10.0.0.1/24 scope global veth0
       valid_lft forever preferred_lft forever
    inet6 fe80::a0d8:72ff:fe1d:30c2/64 scope link 
       valid_lft forever preferred_lft forever

[[email protected] ~]# ip netns exec ns1 ip link set lo up
[[email protected] ~]# ip netns exec ns1 ip link set veth1 up
[[email protected] ~]# ip netns exec ns1 ip addr add 10.0.0.2/24 dev veth1
[[email protected] ~]# ip netns exec ns1 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
11: veth1@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 4e:c3:09:83:e9:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.0.2/24 scope global veth1
       valid_lft forever preferred_lft forever
    inet6 fe80::4cc3:9ff:fe83:e9ee/64 scope link 
       valid_lft forever preferred_lft forever

           

Check the state of the veth pair

[[email protected] ~]# ip netns exec ns0 ip a
10: veth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 32:c4:23:dd:a7:1c brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet 10.0.0.1/24 scope global veth0
       valid_lft forever preferred_lft forever
    inet6 fe80::30c4:23ff:fedd:a71c/64 scope link 
       valid_lft forever preferred_lft forever
       

[[email protected] ~]# ip netns exec ns1 ip a
11: veth1@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a2:52:52:cd:54:62 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.0.2/24 scope global veth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a052:52ff:fecd:5462/64 scope link 
       valid_lft forever preferred_lft forever
           

As shown above, the veth pair is up and each veth device has been assigned its IP address. Now try to reach ns0's IP address from ns1:

[[email protected] ~]# ip netns exec ns1 ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.082 ms   
           

As you can see, the veth pair successfully provides network connectivity between two different Network Namespaces.

Renaming a veth device

[[email protected] ~]# ip netns exec ns0 ip link set veth0 down
[[email protected] ~]# ip netns exec ns0 ip link set dev veth0 name eth0
[[email protected] ~]# ip netns exec ns0 ifconfig -a
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.1  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::30c4:23ff:fedd:a71c  prefixlen 64  scopeid 0x20<link>
        ether 32:c4:23:dd:a7:1c  txqueuelen 1000  (Ethernet)
        RX packets 12  bytes 928 (928.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 20  bytes 1576 (1.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
[[email protected] ~]# ip netns exec ns0 ip link set eth0 up
           

Configuring the four network modes

bridge mode configuration

[[email protected] ~]# docker run -it --name t1 --rm busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0A:00:00:02  
          inet addr:10.0.0.2  Bcast:10.0.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:508 (508.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # exit
[[email protected] ~]# docker container ls -a


# Adding --network bridge when creating a container has exactly the same effect as omitting the --network option
[[email protected] ~]# docker run -it --name t1 --network bridge --rm busybox    
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0A:00:00:02  
          inet addr:10.0.0.2  Bcast:10.0.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:508 (508.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # exit
           

none mode configuration

[[email protected] ~]# docker run -it --name t1 --network none --rm busybox      
/ # ifconfig -a
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # exit
           

container mode configuration

Start the first container

[email protected] ~]# docker run -it --name b1 --rm busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0A:00:00:02  
          inet addr:10.0.0.2  Bcast:10.0.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:508 (508.0 B)  TX bytes:0 (0.0 B)
           

Start the second container

[[email protected] ~]# docker run -it --name b2 --rm busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0A:00:00:03  
          inet addr:10.0.0.3  Bcast:10.0.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:508 (508.0 B)  TX bytes:0 (0.0 B)        
           

The container named b2 has IP address 10.0.0.3, which is different from the first container's, so the two are not sharing a network. If we change how the second container is started, b2 gets the same IP as b1; that is, they share the IP address but not the filesystem.

[[email protected] ~]# docker run -it --name b2 --rm --network container:b1 busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0A:00:00:02  
          inet addr:10.0.0.2  Bcast:10.0.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:648 (648.0 B)  TX bytes:0 (0.0 B)        
           

Now create a directory in the b1 container

/ # mkdir /tmp/data
/ # ls /tmp
data  
           

Checking the /tmp directory in the b2 container shows that this directory does not exist there, because the filesystems are isolated; only the network is shared.

Deploy a site in the b2 container

/ # echo 'hello world' > /tmp/index.html
/ # ls /tmp
index.html
/ # httpd -h /tmp
/ # netstat -antl
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       
tcp        0      0 :::80                   :::*                    LISTEN    
           

Access the site from the b1 container using the loopback address

/ # wget -O - -q 127.0.0.1:80
hello world
           

As you can see, in container mode the relationship between the containers is like that of two different processes on the same host.

host mode configuration

Start the container and specify host mode directly

[[email protected] ~]# docker run -it --name b2 --rm --network host busybox        
/ # ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:06:25:98:91  
          inet addr:10.0.0.1  Bcast:10.0.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:6ff:fe25:9891/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:55 errors:0 dropped:0 overruns:0 frame:0
          TX packets:82 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:8339 (8.1 KiB)  TX bytes:7577 (7.3 KiB)

ens33     Link encap:Ethernet  HWaddr 00:0C:29:01:78:90  
          inet addr:192.168.10.144  Bcast:192.168.10.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe01:7890/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:55301 errors:0 dropped:0 overruns:0 frame:0
          TX packets:26269 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:63769938 (60.8 MiB)  TX bytes:2672449 (2.5 MiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:42 errors:0 dropped:0 overruns:0 frame:0
          TX packets:42 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:4249 (4.1 KiB)  TX bytes:4249 (4.1 KiB)

vethffa4d46 Link encap:Ethernet  HWaddr 06:4F:68:16:6E:B0  
          inet6 addr: fe80::44f:68ff:fe16:6eb0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:648 (648.0 B)
           

If we now start an HTTP site inside this container, we can reach it in a browser directly via the host's IP address.
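
For example, reusing busybox's httpd the way the container-mode section did, a host-mode container serves directly on the host's port 80. This is a sketch and assumes port 80 on the host is still free:

/ # echo 'hello from host mode' > /tmp/index.html
/ # httpd -h /tmp
# from the host, or any machine that can reach it, using the host's own IP:
# curl http://192.168.10.144   ->  hello from host mode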

Common container operations

View the container's hostname

[[email protected] ~]# docker run -it --name t1 --network bridge --rm busybox
/ # hostname
7769d784c6da  
           

Inject a hostname when starting the container

[[email protected] ~]# docker run -it --name t1 --network bridge --hostname wangqing --rm busybox
/ # hostname
wangqing
/ # cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
10.0.0.2        wangqing    # injecting a hostname automatically adds a hostname-to-IP mapping
/ # cat /etc/resolv.conf 
# Generated by NetworkManager
search localdomain
nameserver 192.168.10.2     # the DNS server is also automatically set to the host's DNS
/ # ping www.baidu.com
PING www.baidu.com (182.61.200.7): 56 data bytes
64 bytes from 182.61.200.7: seq=0 ttl=127 time=26.073 ms
64 bytes from 182.61.200.7: seq=1 ttl=127 time=26.378 ms  
           

Manually specifying the DNS server a container uses

[[email protected] ~]# docker run -it --name t1 --network bridge --hostname wangqing --dns 114.114.114.114 --rm busybox
/ # cat /etc/resolv.conf 
search localdomain
nameserver 114.114.114.114
/ # nslookup -type=a www.baidu.com
Server:         114.114.114.114
Address:        114.114.114.114:53

Non-authoritative answer:
www.baidu.com   canonical name = www.a.shifen.com
Name:   www.a.shifen.com
Address: 182.61.200.6
Name:   www.a.shifen.com
Address: 182.61.200.7
           

Manually injecting hostname-to-IP mappings into /etc/hosts

[[email protected] ~]# docker run -it --name t1 --network bridge --hostname wangqing --add-host www.a.com:1.1.1.1 --rm busybox      
/ # cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
1.1.1.1 www.a.com
10.0.0.2        wangqing
           

Exposing container ports

docker run has a -p option that maps an application port inside the container to a port on the host, so that external hosts can reach the application in the container by accessing that host port.

The -p option can be used multiple times, and the port it exposes must be one the container is actually listening on.

The -p option takes the following formats:

  • -p <containerPort>

    Maps the specified container port to a dynamic port on all of the host's addresses

  • -p <hostPort>:<containerPort>

    Maps the container port to the specified host port

  • -p <ip>::<containerPort>

    Maps the specified container port to a dynamic port on the given host IP

  • -p <ip>:<hostPort>:<containerPort>

    Maps the specified container port to the specified port on the given host IP

A dynamic port means a random port; the actual mapping can be checked with the docker port command.

[[email protected] ~]# docker run --name web --rm -p 80 nginx
           

The command above keeps the foreground occupied, so open a new terminal to see which host port the container's port 80 was mapped to:

[[email protected] ~]# docker port web
80/tcp -> 0.0.0.0:32769
           

The container's port 80 is exposed on port 32769 of the host. Let's access that port on the host to see whether we can reach the site inside the container:

[[email protected] ~]# curl http://127.0.0.1:32769
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/" target="_blank" rel="external nofollow"  target="_blank" rel="external nofollow" >nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/" target="_blank" rel="external nofollow"  target="_blank" rel="external nofollow" >nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
           

The iptables firewall rules are generated automatically when the container is created and removed automatically when the container is deleted.
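
While the web container from above is still running, the generated rule can be inspected in the DOCKER chain. This is a sketch; the counters, the container IP, and the 32769 port will differ on your system.

iptables -t nat -vnL DOCKER
# expected to contain a rule roughly like:
#   DNAT  tcp  --  !docker0 *  0.0.0.0/0  0.0.0.0/0  tcp dpt:32769 to:<container-ip>:80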

Map the container port to a random port on a specific IP

[[email protected] ~]# docker run --name web --rm -p 192.168.10.144::80 nginx
           

Check the port mapping in another terminal

[[email protected] ~]# docker port web
80/tcp -> 192.168.10.144:32768
           

Map the container port to a specific port on the host

[[email protected] ~]# docker run --name web --rm -p 80:80 nginx
           

Check the port mapping in another terminal

[[email protected] ~]# docker port web
80/tcp -> 0.0.0.0:80
           

Customizing the network properties of the docker0 bridge

Customizing the docker0 bridge's network properties requires editing the /etc/docker/daemon.json configuration file:

{
    "bip": "192.168.1.5/24",
    "fixed-cidr": "192.168.1.5/25",
    "fixed-cidr-v6": "2001:db8::/64",
    "mtu": 1500,
    "default-gateway": "10.20.1.1",
    "default-gateway-v6": "2001:db8:abcd::89",
    "dns": ["10.20.1.2","10.20.1.3"]
}
           

The key option is bip (bridge IP), which sets the IP address of the docker0 bridge itself; the other options can be derived from that address.
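
After editing the file, restart the service and verify that docker0 picked up the new settings (a sketch):

systemctl restart docker
ip addr show docker0                                        # should now carry the configured bip address
docker network inspect bridge | grep -iE 'subnet|gateway'   # the default bridge network reflects the same values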

Connecting to Docker remotely

The dockerd daemon is the server side of Docker's client/server architecture, and by default it listens only on a Unix socket (/var/run/docker.sock). To use a TCP socket as well, edit the /etc/docker/daemon.json configuration file, add the following, and then restart the docker service:

"hosts": ["tcp://0.0.0.0:2375", "unix:///var/run/docker.sock"]
           

On the client, pass the -H|--host option to the docker command to specify which host's Docker containers you want to control:

docker -H 192.168.10.145:2375 ps
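
Note that on distributions whose docker.service unit already passes -H fd:// to dockerd, putting hosts into daemon.json can keep the daemon from starting, because the option is then specified both as a flag and in the configuration file. One common workaround, sketched below with an assumed drop-in path, is a systemd drop-in that clears the command-line -H flag:

# /etc/systemd/system/docker.service.d/override.conf   (assumed drop-in path)
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd
#
# then: systemctl daemon-reload && systemctl restart docker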
           

Creating a custom Docker bridge

Create an additional custom bridge, distinct from docker0

[[email protected] ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
413997d70707        bridge              bridge              local
0a04824fc9b6        host                host                local
4dcb8fbdb599        none                null                local

[[email protected] ~]# docker network create -d bridge --subnet "192.168.2.0/24" --gateway "192.168.2.1" br0
b340ce91fb7c569935ca495f1dc30b8c37204b2a8296c56a29253a067f5dedc9

[[email protected] ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
b340ce91fb7c        br0                 bridge              local
413997d70707        bridge              bridge              local
0a04824fc9b6        host                host                local
4dcb8fbdb599        none                null                local
           

Create a container that uses the newly created custom bridge:

[[email protected] ~]# docker run -it --name b1 --network br0 busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:02:02  
          inet addr:192.168.2.2  Bcast:192.168.2.255     Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:11 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:926 (926.0 B)  TX bytes:0 (0.0 B)
           

Create another container that uses the default bridge:

[email protected] ~]# docker run --name b2 -it busybox
/ # ls
bin   dev   etc   home  proc  root  sys   tmp   usr   var
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0A:00:00:02  
          inet addr:10.0.0.2  Bcast:10.0.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:508 (508.0 B)  TX bytes:0 (0.0 B)
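
At this point b1 (on br0, 192.168.2.0/24) and b2 (on the default bridge, 10.0.0.0/16) cannot reach each other directly. A running container can be attached to an additional network, which is one way to let them communicate. The following is a sketch, run from another terminal on the host:

docker network connect br0 b2           # attach b2 to the br0 network as a second interface
docker exec b2 ip addr                  # b2 should now also have a 192.168.2.x address
docker exec b2 ping -c 2 192.168.2.2    # 192.168.2.2 is b1's address on br0, taken from the output above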