1. This article covers cross-host Docker container interconnection and port mapping on CentOS 7.
Environment:
docker1: 192.168.1.230
docker2: 192.168.1.231
a. Change the hostnames of the two hosts to docker1 and docker2 respectively
<code># hostnamectl set-hostname docker1</code>
<code># reboot</code>
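The second host is renamed the same way:
<code># hostnamectl set-hostname docker2</code>
<code># reboot</code>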
b. Install Docker via yum on both docker1 and docker2 and start the service
<code>[root@docker1 ~]# yum -y install docker</code>
<code>[root@docker1 ~]# service docker start</code>
<code>Redirecting to /bin/systemctl start docker.service</code>
Confirm with <code>ps -ef | grep docker</code> that the Docker process is running.
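Optionally, enable the service so it comes back up after a reboot, and confirm the daemon responds:
<code>[root@docker1 ~]# systemctl enable docker</code>
<code>[root@docker1 ~]# docker version</code>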
c. Install Open vSwitch and its build dependencies on both docker1 and docker2
<code>[root@docker1 ~]# yum -y install openssl-devel kernel-devel</code>
<code>[root@docker1 ~]# yum groupinstall "Development Tools"</code>
<code>[root@docker1 ~]# adduser ovswitch</code>
<code>[root@docker1 ~]# su - ovswitch</code>
<code>[ovswitch@docker1 ~]$ wget </code>
<code>[ovswitch@docker1 ~]$ tar -zxvpf openvswitch-2.3.0.tar.gz</code>
<code>[ovswitch@docker1 ~]$ mkdir -p ~/rpmbuild/SOURCES</code>
<code>[ovswitch@docker1 ~]$ sed 's/openvswitch-kmod, //g' openvswitch-2.3.0/rhel/openvswitch.spec > openvswitch-2.3.0/rhel/openvswitch_no_kmod.spec</code>
<code>[ovswitch@docker1 ~]$ cp openvswitch-2.3.0.tar.gz rpmbuild/SOURCES/</code>
<code>[ovswitch@docker1 ~]$ rpmbuild -bb --without check ~/openvswitch-2.3.0/rhel/openvswitch_no_kmod.spec</code>
<code>[ovswitch@docker1 ~]$ exit</code>
<code>[root@docker1 ~]# yum localinstall /home/ovswitch/rpmbuild/RPMS/x86_64/openvswitch-2.3.0-1.x86_64.rpm</code>
<code>[root@docker1 ~]# systemctl start openvswitch.service  # start OVS</code>
<code>[root@docker1 ~]# systemctl status openvswitch.service -l  # check the service status</code>
<code>● openvswitch.service - LSB: Open vSwitch switch</code>
<code>   Loaded: loaded (/etc/rc.d/init.d/openvswitch)</code>
<code>   Active: active (running) since Fri 2016-04-22 02:37:10 EDT; 9s ago</code>
<code>     Docs: man:systemd-sysv-generator(8)</code>
<code>  Process: 24616 ExecStart=/etc/rc.d/init.d/openvswitch start (code=exited, status=0/SUCCESS)</code>
<code>   CGroup: /system.slice/openvswitch.service</code>
<code>           ├─24640 ovsdb-server: monitoring pid 24641 (healthy)</code>
<code>           ├─24641 ovsdb-server /etc/openvswitch/conf.db -vconsole:emer -vsyslog:err -vfile:info --remote=punix:/var/run/openvswitch/db.sock --private-key=db:Open_vSwitch,SSL,private_key --certificate=db:Open_vSwitch,SSL,certificate --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --no-chdir --log-file=/var/log/openvswitch/ovsdb-server.log --pidfile=/var/run/openvswitch/ovsdb-server.pid --detach --monitor</code>
<code>           ├─24652 ovs-vswitchd: monitoring pid 24653 (healthy)</code>
<code>           └─24653 ovs-vswitchd unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --mlockall --no-chdir --log-file=/var/log/openvswitch/ovs-vswitchd.log --pidfile=/var/run/openvswitch/ovs-vswitchd.pid --detach --monitor</code>
<code>Apr 22 02:37:10 docker1 openvswitch[24616]: /etc/openvswitch/conf.db does not exist ... (warning).</code>
<code>Apr 22 02:37:10 docker1 openvswitch[24616]: Creating empty database /etc/openvswitch/conf.db [ OK ]</code>
<code>Apr 22 02:37:10 docker1 openvswitch[24616]: Starting ovsdb-server [ OK ]</code>
<code>Apr 22 02:37:10 docker1 ovs-vsctl[24642]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=7.6.0</code>
<code>Apr 22 02:37:10 docker1 ovs-vsctl[24647]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=2.3.0 "external-ids:system-id=\"7469bdac-d8b0-4593-b300-fd0931eacbc2\"" "system-type=\"unknown\"" "system-version=\"unknown\""</code>
<code>Apr 22 02:37:10 docker1 openvswitch[24616]: Configuring Open vSwitch system IDs [ OK ]</code>
<code>Apr 22 02:37:10 docker1 openvswitch[24616]: Inserting openvswitch module [ OK ]</code>
<code>Apr 22 02:37:10 docker1 openvswitch[24616]: Starting ovs-vswitchd [ OK ]</code>
<code>Apr 22 02:37:10 docker1 openvswitch[24616]: Enabling remote OVSDB managers [ OK ]</code>
<code>Apr 22 02:37:10 docker1 systemd[1]: Started LSB: Open vSwitch switch.</code>
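With the service running, a quick sanity check is to list the OVS configuration, which at this point contains no bridges yet:
<code>[root@docker1 ~]# ovs-vsctl show</code>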
d. Create the bridge interfaces and routes on both docker1 and docker2
<code>[root@docker1 ~]# cat /proc/sys/net/ipv4/ip_forward</code>
<code>1</code>
<code>[root@docker1 ~]# ovs-vsctl add-br obr0</code>
<code>[root@docker1 ~]# ovs-vsctl add-port obr0 gre0 -- set Interface gre0 type=gre options:remote_ip=192.168.1.231</code>
<code>[root@docker1 ~]# brctl addbr kbr0</code>
<code>[root@docker1 ~]# brctl addif kbr0 obr0</code>
<code>[root@docker1 ~]# ip link set dev docker0 down</code>
<code>[root@docker1 ~]# ip link del dev docker0</code>
<code>[root@docker1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-kbr0</code>
<code>ONBOOT=yes</code>
<code>BOOTPROTO=static</code>
<code>IPADDR=192.168.100.10</code>
<code>NETMASK=255.255.255.0</code>
<code>GATEWAY=192.168.100.0</code>
<code>USERCTL=no</code>
<code>TYPE=Bridge</code>
<code>IPV6INIT=no</code>
<code>DEVICE=kbr0</code>
<code>[root@docker1 ~]# cat /etc/sysconfig/network-scripts/route-eth0</code>
<code>192.168.101.0/24 via 192.168.1.231 dev eth0</code>
<code>[root@docker1 ~]# systemctl restart network.service</code>
<code>[root@docker1 ~]# route -n</code>
<code>Kernel IP routing table</code>
<code>Destination Gateway Genmask Flags Metric Ref Use Iface</code>
<code>0.0.0.0 192.168.1.2 0.0.0.0 UG 100 0 0 eth0</code>
<code>169.254.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 eth0</code>
<code>169.254.0.0 0.0.0.0 255.255.0.0 U 1007 0 0 kbr0</code>
<code>192.168.1.0 0.0.0.0 255.255.255.0 U 100 0 0 eth0</code>
<code>192.168.100.0 0.0.0.0 255.255.255.0 U 0 0 0 kbr0</code>
<code>192.168.101.0 192.168.1.231 255.255.255.0 UG 0 0 0 eth0</code>
<code>192.168.101.0 192.168.1.231 255.255.255.0 UG 100 0 0 eth0</code>
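Only docker1's side is shown above. For reference, here is a sketch of the mirror configuration on docker2, assuming its containers use the 192.168.101.0/24 subnet (consistent with the route above and the ping test later) and giving kbr0 an arbitrary free address in that subnet; the GRE endpoint and the static route both point back at docker1 (192.168.1.230):
<code>[root@docker2 ~]# ovs-vsctl add-br obr0</code>
<code>[root@docker2 ~]# ovs-vsctl add-port obr0 gre0 -- set Interface gre0 type=gre options:remote_ip=192.168.1.230</code>
<code>[root@docker2 ~]# brctl addbr kbr0</code>
<code>[root@docker2 ~]# brctl addif kbr0 obr0</code>
<code>[root@docker2 ~]# ip link set dev docker0 down</code>
<code>[root@docker2 ~]# ip link del dev docker0</code>
<code>[root@docker2 ~]# vi /etc/sysconfig/network-scripts/ifcfg-kbr0</code>
<code>ONBOOT=yes</code>
<code>BOOTPROTO=static</code>
<code>IPADDR=192.168.101.10</code>
<code>NETMASK=255.255.255.0</code>
<code>USERCTL=no</code>
<code>TYPE=Bridge</code>
<code>IPV6INIT=no</code>
<code>DEVICE=kbr0</code>
<code>[root@docker2 ~]# vi /etc/sysconfig/network-scripts/route-eth0</code>
<code>192.168.100.0/24 via 192.168.1.230 dev eth0</code>
<code>[root@docker2 ~]# systemctl restart network.service</code>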
e. Bind Docker to kbr0, then pull an image and start a container to test
<code>[root@docker1 ~]# vi /etc/sysconfig/docker-network</code>
<code># /etc/sysconfig/docker-network</code>
<code>DOCKER_NETWORK_OPTIONS="-b=kbr0"</code>
<code>[root@docker1 ~]# service docker restart</code>
<code>Redirecting to /bin/systemctl restart docker.service</code>
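A quick way to confirm the daemon is now attached to kbr0 instead of docker0:
<code>[root@docker1 ~]# ps -ef | grep docker | grep kbr0</code>
<code>[root@docker1 ~]# brctl show kbr0</code>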
Pull an image:
<code>[root@docker1 ~]# docker search centos</code>
<code>[root@docker1 ~]# docker pull docker.io/nickistre/centos-lamp</code>
<code>[root@docker1 ~]# docker run -dti --name=mytest2 docker.io/nickistre/centos-lamp /bin/bash</code>
<code>[root@docker1 ~]# docker ps -l  # check the container's status</code>
<code>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES</code>
<code>118479ccdebb docker.io/nickistre/centos-lamp "/bin/bash" 16 minutes ago Up About a minute 22/tcp, 80/tcp, 443/tcp mytest1</code>
<code>[root@docker1 ~]# docker attach 118479ccdebb  # attach to the container</code>
<code>[root@118479ccdebb ~]# ifconfig  # the IP address assigned automatically to the container</code>
<code>eth0 Link encap:Ethernet HWaddr 02:42:C0:A8:64:01</code>
<code>     inet addr:192.168.100.1 Bcast:0.0.0.0 Mask:255.255.255.0</code>
<code>     inet6 addr: fe80::42:c0ff:fea8:6401/64 Scope:Link</code>
<code>     UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1</code>
<code>     RX packets:7112 errors:0 dropped:0 overruns:0 frame:0</code>
<code>     TX packets:3738 errors:0 dropped:0 overruns:0 carrier:0</code>
<code>     collisions:0 txqueuelen:0</code>
<code>     RX bytes:12175213 (11.6 MiB) TX bytes:249982 (244.1 KiB)</code>
<code>lo   Link encap:Local Loopback</code>
<code>     inet addr:127.0.0.1 Mask:255.0.0.0</code>
<code>     inet6 addr: ::1/128 Scope:Host</code>
<code>     UP LOOPBACK RUNNING MTU:65536 Metric:1</code>
<code>     RX packets:1 errors:0 dropped:0 overruns:0 frame:0</code>
<code>     TX packets:1 errors:0 dropped:0 overruns:0 carrier:0</code>
<code>     RX bytes:28 (28.0 b) TX bytes:28 (28.0 b)</code>
<code>[root@118479ccdebb ~]# ping 192.168.101.1  # 192.168.101.1 is the IP of a container on docker2</code>
<code>PING 192.168.101.1 (192.168.101.1) 56(84) bytes of data.</code>
<code>64 bytes from 192.168.101.1: icmp_seq=1 ttl=62 time=1.30 ms</code>
<code>64 bytes from 192.168.101.1: icmp_seq=2 ttl=62 time=0.620 ms</code>
<code>64 bytes from 192.168.101.1: icmp_seq=3 ttl=62 time=0.582 ms</code>
At this point, containers on different hosts can reach each other. The next question is how to access a service running inside a container through the host's IP.
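For images that already expose ports (the centos-lamp image exposes 22, 80 and 443, as shown in the docker ps output above), a host port can be published directly at run time with -p. A minimal sketch — the container name mytest3 and host port 8080 are arbitrary, and reaching Apache this way assumes the image's default startup command brings it up:
<code>[root@docker1 ~]# docker run -dti -p 8080:80 --name=mytest3 docker.io/nickistre/centos-lamp</code>
<code>[root@docker1 ~]# curl -I http://192.168.1.230:8080</code>
The Dockerfile approach in the next step builds a custom Tomcat image and maps its port the same way.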
f. Build an image with a Dockerfile
<code>[root@docker1 ~]# cat Dockerfile</code>
<code># Base image</code>
<code>FROM docker.io/nickistre/centos-lamp</code>
<code># Maintainer</code>
<code>MAINTAINER PAIPX</code>
<code># ADD/RUN instructions</code>
<code>ADD apache-tomcat-6.0.43 /usr/local/apache-tomcat-6.0.43</code>
<code>RUN cd /usr/local/ && mv apache-tomcat-6.0.43 tomcat</code>
<code>ADD jdk-6u22-linux-x64.bin /root/</code>
<code>RUN cd /root/ && chmod +x jdk-6u22-linux-x64.bin && ./jdk-6u22-linux-x64.bin && mkdir -p /usr/java/ && cp -a jdk1.6.0_22 /usr/java/jdk</code>
<code># Environment variables</code>
<code>ENV JAVA_HOME /usr/java/jdk</code>
<code>ENV CLASSPATH $CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib</code>
<code>ENV PATH $JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH:$HOME/bin</code>
<code>ENV CATALINA_HOME /usr/local/tomcat</code>
<code>ENV PATH $CATALINA_HOME/bin:$PATH</code>
<code>RUN mkdir -p "$CATALINA_HOME"</code>
<code>WORKDIR $CATALINA_HOME</code>
<code># Expose the Tomcat port</code>
<code>EXPOSE 8080</code>
<code>CMD ["catalina.sh", "run"]</code>
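Note that the ADD instructions above require the unpacked Tomcat directory and the JDK installer to sit next to the Dockerfile in the build context, roughly like this:
<code>[root@docker1 ~]# ls</code>
<code>Dockerfile  apache-tomcat-6.0.43  jdk-6u22-linux-x64.bin</code>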
Build a new image from the Dockerfile:
<code>[root@docker1 ~]# docker build -t tomcat2 .</code>
Start a container, mapping host port 8000 to container port 8080:
<code>[root@docker1 ~]# docker run -dti -p 8000:8080 --name=mytest4 tomcat2</code>
Tomcat can then be reached at http://ip:8000, where ip is the host's address (here 192.168.1.230).
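The mapping can also be checked from the host itself; the output line shown is illustrative:
<code>[root@docker1 ~]# docker port mytest4</code>
<code>8080/tcp -> 0.0.0.0:8000</code>
<code>[root@docker1 ~]# curl -I http://127.0.0.1:8000</code>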
This article was reposted from the 51CTO blog of Anonymous123. Original link: http://blog.51cto.com/woshitieren/1766830