
Five years of work experience, and you still can't say how many network modes Docker has. Can you believe it?

Author: The road of migrant workers' technology

This is a required interview question!

Some time ago, I interviewed more than a dozen senior operations engineers, each claiming five years of experience, yet none of them could explain the key points completely and clearly.


Docker container networking

Docker automatically provides three networks after installation, which you can view with the docker network ls command:

[root@localhost ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
cd97bb997b84        bridge              bridge              local
0a04824fc9b6        host                host                local
4dcb8fbdb599        none                null                local
           

Docker uses Linux bridging: a virtual bridge named docker0 is created on the host. When Docker starts a container, it assigns the container an IP address from the bridge's subnet, called the Container-IP, and the docker0 bridge serves as the default gateway for every container. Because containers on the same host attach to the same bridge, they can communicate with each other directly through their Container-IPs.
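To see this on a host, you can check the docker0 bridge address and a container's Container-IP; a quick sketch, assuming a running container named web:

[root@localhost ~]# ip addr show docker0    # the bridge's address is the containers' default gateway
[root@localhost ~]# docker inspect -f '{{.NetworkSettings.IPAddress}}' web    # the container's Container-IP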

Docker's 4 network modes

[Figure: overview of Docker's four network modes]

bridge mode

When the Docker daemon starts, it creates a virtual bridge named docker0 on the host, and the Docker containers launched on this host connect to this virtual bridge. A virtual bridge works much like a physical switch, so all the containers on the host are joined to a layer-2 network through it.

Docker assigns the container an IP from the docker0 subnet and sets docker0's IP address as the container's default gateway. It also creates a pair of virtual network interfaces, a veth pair, placing one end inside the new container and naming it eth0 (the container's NIC), and leaving the other end on the host with a name like vethxxx, which it attaches to the docker0 bridge. This can be viewed with the brctl show command.
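For example, you can list the host-side veth ends attached to docker0; a sketch (brctl comes from the bridge-utils package):

[root@localhost ~]# brctl show docker0    # host-side veth interfaces attached to the bridge
[root@localhost ~]# ip link show master docker0    # the iproute2 equivalent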

Bridge mode is Docker's default network mode; if you omit the --network parameter, bridge mode is used. When you use docker run -p, Docker actually creates DNAT rules in iptables to implement the port forwarding. You can view them with iptables -t nat -vnL.
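As a quick sketch, publishing a port and then dumping the DOCKER chain of the nat table shows the generated DNAT rule (nginx is just an example image here):

[root@localhost ~]# docker run -d --name web -p 8080:80 nginx
[root@localhost ~]# iptables -t nat -vnL DOCKER    # expect a DNAT rule mapping host port 8080 to container port 80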

The bridge mode is shown in the following diagram:

[Figure: bridge mode]

Suppose an nginx server is running in docker2 in the figure above. Let's think through a few questions:

  • Can two containers communicate directly with each other on the same host, for example, can docker1 directly access the nginx site of docker2?
  • Is it possible to access the nginx site of docker2 directly on the host?
  • How do I access this nginx site on node1 on another host?

The docker0 bridge is virtualized by the host rather than being a real network device, and it is not addressable from the external network, which means external hosts cannot reach a container directly through its Container-IP. If a container needs to be reachable from outside, you can map container ports onto the host (port mapping), i.e., enable it with the -p or -P option of docker run when creating the container; the container is then accessed via [host IP]:[host port].
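For instance, assuming an nginx image, the two publishing styles look like this:

[root@localhost ~]# docker run -d --name web1 -p 8080:80 nginx    # container port 80 -> host port 8080
[root@localhost ~]# docker run -d --name web2 -P nginx    # every EXPOSEd port -> a random host port
[root@localhost ~]# curl http://[host IP]:8080    # reaches the nginx inside web1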

container mode

This mode specifies that a newly created container shares a Network Namespace with an existing container, rather than with the host. The new container does not create its own NIC or configure its own IP; instead it shares the IP address and port range of the specified container. Apart from the network, the two containers remain isolated in other respects, such as the file system and the process list. The processes of the two containers can communicate through the lo loopback interface.

The container mode is shown in the following diagram:

[Figure: container mode]

host mode

If you start a container in host mode, the container does not get a separate Network Namespace but shares one with the host. The container does not virtualize its own NIC or configure its own IP; instead it uses the host's IP and ports. However, other aspects of the container, such as the file system and process list, are still isolated from the host.

A container in host mode can communicate with the outside world directly using the host's IP address, and services inside the container can use the host's ports directly without NAT. The biggest advantage of host mode is relatively good network performance, but ports already in use on the Docker host can no longer be used by the container, and network isolation is poor.

Host mode is shown in the following diagram:

[Figure: host mode]

none mode

In none mode, the Docker container has its own Network Namespace, but no network configuration is performed for it. In other words, the container has no NIC, no IP, no routes, and so on; we have to add NICs and configure IPs for it ourselves.

In this network mode, the container only has a lo loopback interface and no other NICs. None mode is specified at container creation with --network none. Such a container has no way to connect to the network, and this closed network guarantees the container's security.

Application scenarios

  • Start a container to process data, such as transforming data formats
  • Some calculation and processing tasks in the background

The none mode is shown in the following figure:

[Figure: none mode]
docker network inspect bridge   # view the detailed configuration of the bridge network
           

docker container network configuration

Creating namespaces in the Linux kernel

The ip netns command

You can use the ip netns command to perform various operations on a Network Namespace. The ip netns command comes from the iproute package, which is usually installed by default; if not, install it yourself.

Note: The ip netns command requires sudo permissions to modify the network configuration.

You can use the ip netns command to complete the operation related to the network namespace, and you can use ip netns help to view the command help information:

[root@localhost ~]# ip netns help
Usage: ip netns list
       ip netns add NAME
       ip netns set NAME NETNSID
       ip [-all] netns delete [NAME]
       ip netns identify [PID]
       ip netns pids NAME
       ip [-all] netns exec [NAME] cmd ...
       ip netns monitor
       ip netns list-id
           

By default, there is no Network Namespace on Linux, so the ip netns list command does not return any information.

Create a Network Namespace

Create a namespace named ns0 with the command:

[root@localhost ~]# ip netns list
[root@localhost ~]# ip netns add ns0
[root@localhost ~]# ip netns list
ns0
           

The newly created Network Namespace will appear in the /var/run/netns/ directory. If a namespace with the same name already exists, the command will report the error Cannot create namespace file "/var/run/netns/ns0": File exists.

[root@localhost ~]# ls /var/run/netns/
ns0
[root@localhost ~]# ip netns add ns0
Cannot create namespace file "/var/run/netns/ns0": File exists
           

Each Network Namespace has its own independent network interfaces, routing tables, ARP tables, iptables rules, and other network-related resources.
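You can verify this isolation by querying resources inside the namespace; a small sketch:

[root@localhost ~]# ip netns exec ns0 ip route    # empty: ns0 has its own routing table
[root@localhost ~]# ip netns exec ns0 iptables -vnL    # a separate set of iptables rules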

Operate on a Network Namespace

The ip command provides the ip netns exec subcommand, which executes commands inside a given Network Namespace.

View the NIC information of the newly created Network Namespace:

[root@localhost ~]# ip netns exec ns0 ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
           

As you can see, a lo loopback interface is created by default in a newly created Network Namespace, but it is down at this point. If you try to ping it now, you get a "Network is unreachable" error:

[root@localhost ~]# ip netns exec ns0 ping 127.0.0.1
connect: Network is unreachable

# 127.0.0.1 is the default loopback address
           

Bring up the lo loopback interface with the following command:

[root@localhost ~]# ip netns exec ns0 ip link set lo up
[root@localhost ~]# ip netns exec ns0 ping 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.029 ms
^C
--- 127.0.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1036ms
rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms
           

Transferring devices

We can transfer a device (e.g., a veth interface) between different Network Namespaces. Since a device can belong to only one Network Namespace at a time, it will no longer be visible in its original Network Namespace after the transfer.

veth devices are transferable, while many other device types (such as lo, vxlan, ppp, and bridge devices) cannot be transferred.

veth pair

A veth pair (Virtual Ethernet Pair) is a pair of connected virtual interfaces: all packets that enter one end come out of the other end, and vice versa.

veth pairs were introduced so that different Network Namespaces can communicate directly: a veth pair can connect two Network Namespaces to each other.

[Figure: a veth pair connecting two Network Namespaces]

Create a veth pair

[root@localhost ~]# ip link add type veth
[root@localhost ~]# ip a

4: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0a:f4:e2:2d:37:fb brd ff:ff:ff:ff:ff:ff
5: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 5e:7e:f6:59:f0:4f brd ff:ff:ff:ff:ff:ff
           

You can see that a veth pair has been added to the system, joining the virtual NICs veth0 and veth1, and that the pair is still in the down ("not enabled") state.
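By default the kernel numbers the two ends automatically (veth0 and veth1 here); if you prefer, you can name both ends explicitly when creating the pair:

[root@localhost ~]# ip link add veth0 type veth peer name veth1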

Implementing communication between Network Namespaces

Next let's use the veth pair to communicate between two different Network Namespaces. We've just created a Network Namespace named ns0; now let's create another one named ns1:

[root@localhost ~]# ip netns add ns1
[root@localhost ~]# ip netns list
ns1
ns0
           

Then we move veth0 into ns0 and veth1 into ns1:

[root@localhost ~]# ip link set veth0 netns ns0
[root@localhost ~]# ip link set veth1 netns ns1
           

Then we configure an IP address on each end of the veth pair and bring them up:

[root@localhost ~]# ip netns exec ns0 ip link set veth0 up
[root@localhost ~]# ip netns exec ns0 ip addr add 192.0.0.1/24 dev veth0
[root@localhost ~]# ip netns exec ns1 ip link set veth1 up
[root@localhost ~]# ip netns exec ns1 ip addr add 192.0.0.2/24 dev veth1
           

Check the status of the veth pair:

[root@localhost ~]# ip netns exec ns0 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
4: veth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0a:f4:e2:2d:37:fb brd ff:ff:ff:ff:ff:ff link-netns ns1
    inet 192.0.0.1/24 scope global veth0
       valid_lft forever preferred_lft forever
    inet6 fe80::8f4:e2ff:fe2d:37fb/64 scope link
       valid_lft forever preferred_lft forever
           
[root@localhost ~]# ip netns exec ns1 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
5: veth1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 5e:7e:f6:59:f0:4f brd ff:ff:ff:ff:ff:ff link-netns ns0
    inet 192.0.0.2/24 scope global veth1
       valid_lft forever preferred_lft forever
    inet6 fe80::5c7e:f6ff:fe59:f04f/64 scope link
       valid_lft forever preferred_lft forever
           

As you can see from the above, we have successfully brought up the veth pair and assigned an IP address to each end. Now let's try to reach each namespace from the other:

[root@localhost ~]# ip netns exec ns1 ping 192.0.0.1
PING 192.0.0.1 (192.0.0.1) 56(84) bytes of data.
64 bytes from 192.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms
64 bytes from 192.0.0.1: icmp_seq=2 ttl=64 time=0.041 ms
^C
--- 192.0.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.033/0.037/0.041/0.004 ms
[root@localhost ~]# ip netns exec ns0 ping 192.0.0.2
PING 192.0.0.2 (192.0.0.2) 56(84) bytes of data.
64 bytes from 192.0.0.2: icmp_seq=1 ttl=64 time=0.025 ms
64 bytes from 192.0.0.2: icmp_seq=2 ttl=64 time=0.025 ms
^C
--- 192.0.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1038ms
rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms
           

As you can see, the veth pair successfully implements network interaction between two different Network Namespaces.
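To clean up after the experiment, you can simply delete the namespaces; removing a namespace destroys the veth end inside it, which also removes its peer:

[root@localhost ~]# ip netns delete ns1
[root@localhost ~]# ip netns delete ns0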

Four network mode configurations

Bridge mode configuration

[root@localhost ~]# docker run -it --name ti --rm busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
          inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:12 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1032 (1.0 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
           

Adding --network bridge when creating a container has the same effect as omitting the --network option:

[root@localhost ~]# docker run -it --name t1 --network bridge --rm busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
         inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
         UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
         RX packets:8 errors:0 dropped:0 overruns:0 frame:0
         TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
         collisions:0 txqueuelen:0
         RX bytes:696 (696.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
         inet addr:127.0.0.1  Mask:255.0.0.0
         UP LOOPBACK RUNNING  MTU:65536  Metric:1
         RX packets:0 errors:0 dropped:0 overruns:0 frame:0
         TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
         collisions:0 txqueuelen:1000
         RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
           

none mode configuration

[root@localhost ~]# docker run -it --name t1 --network none --rm busybox
/ # ifconfig -a
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
           

container mode configuration

Start the first container

[root@localhost ~]# docker run -dit --name b3 busybox
af5ba32f990ebf5a46d7ecaf1eec67f1712bbef6ad7df37d52b7a8a498a592a0

[root@localhost ~]# docker exec -it b3 /bin/sh
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
          inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:11 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:906 (906.0 B)  TX bytes:0 (0.0 B)
           

Start the second container

[root@localhost ~]# docker run -it --name b2 --rm busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03
          inet addr:172.17.0.3  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:516 (516.0 B)  TX bytes:0 (0.0 B)
           

You can see that the IP address of the container named b2 is 172.17.0.3, which is not the same as the first container's IP: the two do not share a network. If we change how the second container is started, we can give b2 the same IP as the b3 container, i.e., share the network namespace while the file systems stay separate:

[root@localhost ~]# docker run -it --name b2 --rm --network container:b3 busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
          inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:14 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1116 (1.0 KiB)  TX bytes:0 (0.0 B)
           

At this point, let's create a directory in the b3 container:

/ # mkdir /tmp/data
/ # ls /tmp
data
           

If you look in the b2 container, you will find no such directory under /tmp, because the file systems are isolated; only the network is shared.

Deploy a site in the b2 container:

/ # echo 'hello world' > /tmp/index.html
/ # ls /tmp
index.html
/ # httpd -h /tmp
/ # netstat -antl
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 :::80                   :::*                    LISTEN
           

Access the site from the b3 container using the local address:

/ # wget -O - -q 172.17.0.2:80
hello world
           

host mode configuration

When starting the container, specify the network mode as host directly:

[root@localhost ~]# docker run -it --name b2 --rm --network host busybox
/ # ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:B8:7F:8E:2C
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:b8ff:fe7f:8e2c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3 errors:0 dropped:0 overruns:0 frame:0
          TX packets:20 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:116 (116.0 B)  TX bytes:1664 (1.6 KiB)

ens33     Link encap:Ethernet  HWaddr 00:0C:29:95:19:47
          inet addr:192.168.203.138  Bcast:192.168.203.255  Mask:255.255.255.0
          inet6 addr: fe80::2e61:1ea3:c05a:3d9b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:9626 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3950 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3779562 (3.6 MiB)  TX bytes:362386 (353.8 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

veth09ee47e Link encap:Ethernet  HWaddr B2:10:53:7B:66:AE
          inet6 addr: fe80::b010:53ff:fe7b:66ae/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3 errors:0 dropped:0 overruns:0 frame:0
          TX packets:19 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:158 (158.0 B)  TX bytes:1394 (1.3 KiB)
           

At this point, if we start an http site in this container, we can access it in a browser directly via the host's IP.
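A minimal sketch, using the httpd image as an example and assuming host port 80 is free: in host mode the service binds straight to the host's port, with no -p mapping involved:

[root@localhost ~]# docker run -dit --name web-host --network host httpd
[root@localhost ~]# curl http://127.0.0.1:80    # answered by the container, no NAT in between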

Common operations for containers

View the hostname of the container

[root@localhost ~]# docker run -it --name t1 --network bridge --rm busybox
/ # hostname
48cb45a0b2e7
           

Inject the hostname when the container starts

[root@localhost ~]# docker run -it --name t1 --network bridge --hostname ljl --rm busybox
/ # hostname
ljl
/ # cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.3 ljl
/ # cat /etc/resolv.conf
# Generated by NetworkManager
search localdomain
nameserver 192.168.203.2
/ # ping www.baidu.com
PING www.baidu.com (182.61.200.7): 56 data bytes
64 bytes from 182.61.200.7: seq=0 ttl=127 time=31.929 ms
64 bytes from 182.61.200.7: seq=1 ttl=127 time=41.062 ms
64 bytes from 182.61.200.7: seq=2 ttl=127 time=31.540 ms
^C
--- www.baidu.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 31.540/34.843/41.062 ms
           

Manually specify the DNS to be used by the container

[root@localhost ~]# docker run -it --name t1 --network bridge --hostname ljl --dns 114.114.114.114 --rm busybox
/ # cat /etc/resolv.conf
search localdomain
nameserver 114.114.114.114
/ # nslookup -type=a www.baidu.com
Server:  114.114.114.114
Address: 114.114.114.114:53

Non-authoritative answer:
www.baidu.com canonical name = www.a.shifen.com
Name: www.a.shifen.com
Address: 182.61.200.6
Name: www.a.shifen.com
Address: 182.61.200.7
           

Manually inject the hostname-to-IP address mapping into the /etc/hosts file

[root@localhost ~]# docker run -it --name t1 --network bridge --hostname ljl --add-host www.a.com:1.1.1.1 --rm busybox
/ # cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
1.1.1.1 www.a.com
172.17.0.3 ljl
           

Open container ports

When docker run is executed, there is a -p option that maps the application port in the container to the host, so that external hosts can access the application in the container by accessing a port on the host.

The -p option can be used multiple times, and the port it exposes must be the one that the container is actually listening on.

  • The -p option has four forms (examples follow this list):
    • -p <containerPort>
    • Maps the specified container port to a dynamic port on all of the host's addresses
    • -p <hostPort>:<containerPort>
    • Maps the container port <containerPort> to the specified host port <hostPort>
    • -p <ip>::<containerPort>
    • Maps the specified container port to a dynamic port on the host IP <ip>
    • -p <ip>:<hostPort>:<containerPort>
    • Maps the container port <containerPort> to the port <hostPort> on the host IP <ip>
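A sketch of the four forms, using nginx as a placeholder image and example addresses:

[root@localhost ~]# docker run -d --name w1 -p 80 nginx    # container 80 -> dynamic port on all host addresses
[root@localhost ~]# docker run -d --name w2 -p 8080:80 nginx    # container 80 -> host port 8080
[root@localhost ~]# docker run -d --name w3 -p 192.168.203.138::80 nginx    # container 80 -> dynamic port on one IP
[root@localhost ~]# docker run -d --name w4 -p 192.168.203.138:8081:80 nginx    # container 80 -> port 8081 on one IP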

A dynamic port is a random port; you can use the docker port command to view the actual mapping.

[root@localhost ~]# docker run -dit --name web1 -p 192.168.203.138::80 httpd
e97bc1774e40132659990090f0e98a308a7f83986610ca89037713e9af8a6b9f
[root@localhost ~]# docker ps
CONTAINER ID   IMAGE     COMMAND              CREATED          STATUS          PORTS                           NAMES
e97bc1774e40   httpd     "httpd-foreground"   6 seconds ago    Up 5 seconds    192.168.203.138:49153->80/tcp   web1
af5ba32f990e   busybox   "sh"                 48 minutes ago   Up 48 minutes                                   b3
[root@localhost ~]# ss -antl
State    Recv-Q   Send-Q        Local Address:Port        Peer Address:Port   Process
LISTEN   0        128         192.168.203.138:49153            0.0.0.0:*
LISTEN   0        128                 0.0.0.0:22               0.0.0.0:*
LISTEN   0        128                    [::]:22                  [::]:*
           

In another terminal connection, let's check which host port the container's port 80 has been mapped to:

[root@localhost ~]# docker port web1
80/tcp -> 192.168.203.138:49153
           

As you can see, port 80 of the container is exposed as port 49153 on the host. Let's access this port from the host to see whether we can reach the site inside the container:

[root@localhost ~]# curl http://192.168.203.138:49153
<html><body><h1>It works!</h1></body></html>
           

iptables firewall rules are generated automatically when the container is created and removed automatically when the container is deleted:

[root@localhost ~]# iptables -t nat -nvL
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    3   164 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    4   261 MASQUERADE  all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0
    0     0 MASQUERADE  tcp  --  *      *       172.17.0.3           172.17.0.3           tcp dpt:80

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    2   120 DOCKER     all  --  *      *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination
    1    60 RETURN     all  --  docker0 *       0.0.0.0/0            0.0.0.0/0
    1    60 DNAT       tcp  --  !docker0 *       0.0.0.0/0            192.168.203.138      tcp dpt:49153 to:172.17.0.3:80
           


Customize the network properties of the docker0 bridge

To customize the network properties of the docker0 bridge, modify the /etc/docker/daemon.json configuration file; the bip ("bridge IP") key sets docker0's address and the containers' subnet:

[root@localhost ~]# vim /etc/docker/daemon.json
{
    "registry-mirrors": ["https://4hygggbu.mirror.aliyuncs.com/"],
    "bip": "192.168.1.5/24"
}
[root@localhost ~]# systemctl restart docker
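After the restart, you can verify that the bip setting took effect; a quick check:

[root@localhost ~]# ip addr show docker0    # should now carry 192.168.1.5/24
[root@localhost ~]# docker network inspect bridge | grep Subnet    # the default bridge network's subnet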

To also expose the Docker daemon over TCP (so it can be managed remotely), add -H listeners to the ExecStart line of its systemd unit file, then restart the service:
[root@localhost ~]# vim /lib/systemd/system/docker.service

ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -H tcp://0.0.0.0:2375  -H unix:///var/run/docker.sock
[root@localhost ~]# systemctl daemon-reload
[root@localhost ~]# systemctl restart docker
           

The "-H|--host" option is passed directly to dockerd on the client specifying which host you want to control the docker container on

[root@localhost ~]# docker -H 192.168.203.138:2375 ps
CONTAINER ID   IMAGE     COMMAND              CREATED             STATUS          PORTS                           NAMES
e97bc1774e40   httpd     "httpd-foreground"   30 minutes ago      Up 11 seconds   192.168.203.138:49153->80/tcp   web1
af5ba32f990e   busybox   "sh"                 About an hour ago   Up 14 seconds                                   b3
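Alternatively, set the DOCKER_HOST environment variable on the client so that every docker command targets the remote daemon:

[root@localhost ~]# export DOCKER_HOST="tcp://192.168.203.138:2375"
[root@localhost ~]# docker ps    # now talks to the remote daemon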
           

Create a new network

[root@localhost ~]# docker network create ljl -d bridge
883eda50812bb214c04986ca110dbbcb7600eba8b033f2084cd4d750b0436e12
[root@localhost ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
0c5f4f114c27   bridge    bridge    local
8c2d14f1fb82   host      host      local
883eda50812b   ljl       bridge    local
85ed12d38815   none      null      local
           

Create an additional custom bridge, distinct from docker0

[root@localhost ~]# docker network create -d bridge --subnet "192.168.2.0/24" --gateway "192.168.2.1" br0
af9ba80deb619de3167939ec5b6d6136a45dce90907695a5bc5ed4608d188b99
[root@localhost ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
af9ba80deb61   br0       bridge    local
0c5f4f114c27   bridge    bridge    local
8c2d14f1fb82   host      host      local
883eda50812b   ljl       bridge    local
85ed12d38815   none      null      local
           

Use the newly created custom bridge to create the container:

[root@localhost ~]# docker run -it --name b1 --network br0 busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:02:02
          inet addr:192.168.2.2  Bcast:192.168.2.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:11 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:962 (962.0 B)  TX bytes:0 (0.0 B)
           

Create another container, using the default bridge network:

[root@localhost ~]# docker run --name b2 -it busybox
/ # ls
bin   dev   etc   home  proc  root  sys   tmp   usr   var
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:01:03
          inet addr:192.168.1.3  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:516 (516.0 B)  TX bytes:0 (0.0 B)
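b1 (on br0, 192.168.2.0/24) and b2 (on the default bridge) are on different subnets, so they cannot reach each other directly. One way to connect them, as a sketch, is to attach b2 to br0 as a second network:

[root@localhost ~]# docker network connect br0 b2    # b2 now also gets an address on 192.168.2.0/24

After this, b2 can reach b1 at its 192.168.2.2 address.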
           
Article source: cnblogs.com/loronoa/p/16566818.html
