
Building a Linux + Docker + Ansible + Kubernetes Learning Environment from Scratch (1 Master + 3 Nodes)

Contents: I. Linux System Installation · II. Ansible Installation and Configuration · III. Installing and Configuring Docker and K8s Packages

Preface

  • I've always wanted to learn K8s but had no environment for it, and K8s itself is fairly heavy. Before the semester I rented an Alibaba Cloud ECS instance (single core, 2 GB): a single-node K8s could barely be squeezed onto it, a multi-node cluster was out of the question, and the demos in the books couldn't be followed. Multiple nodes mean multi-machine operations, so this is also a chance to brush up on ansible.

  • This is a tutorial for building a learning environment from scratch on Win10, covering:
    • Installing four Linux virtual machines with VMware Workstation: one Master management node and three Node worker nodes.
    • Bridged networking, so the VMs can reach the external network and can be reached over SSH from the Win10 physical machine.
    • Passwordless SSH from the Master machine to any Node machine.
    • Configuring Ansible with the Master as the controller node, using a role for time synchronization and playbooks to install and configure Docker, K8s, and so on.
    • Installing the Docker and K8s cluster packages, network configuration, etc.
  • VMware Workstation and the Linux ISO are assumed to be already in hand, and VMware Workstation already installed; if not, both can be downloaded online.

All I ever longed for was to live out the nature that wanted to emerge from within me. Why was that so very hard? ------ Demian


I. Linux System Installation

This assumes VMware Workstation (VMware-workstation-full-15.5.6-16341506.exe) is already installed and a Linux installation ISO (CentOS-7-x86_64-DVD-1810.iso) is at hand; the versions in parentheses are the ones I used. Our approach: install one Node machine first, then obtain the remaining two Node machines and the Master machine by cloning.

1. System Installation

&&&&&&&&&&&&&&&&&& Installation steps (screenshots omitted) &&&&&&&&&&&&&&&&&&
Give the virtual machine a name and choose where to store it.
Size the memory to your host: with 8 GB of RAM give the VM 2 GB; with 16 GB, 4 GB; with 32 GB, 8 GB.
Load the ISO image stored on disk into the virtual CD drive (locate it via "Browse").
If the VM refuses to power on because the memory is too large, reduce it a little.
Click into the screen so the cursor is captured by the VM, then use the arrow keys to select the first entry.
Beginners are advised to pick "简体中文(中国)" (Simplified Chinese), then click "Continue".
Check the "Installation Summary" screen, make sure every item flagged with an exclamation mark is resolved, then click "Begin Installation" at the bottom right to start the actual install.
If the root password is too simple you have to click "Done" twice!
Create a user (name and password as you like); once filled in, click "Done" twice.
This takes a while, so go do something else... When it finishes, a "Reboot" button appears; just reboot.
Booting the system also takes some time; be patient.
Log in as root via "Not listed?", then click "Next" through the welcome pages.
Now let's tweak the shell prompt so it looks a little nicer. Just enter:

PS1="\[\033[1;32m\]┌──[\[\033[1;34m\]\u@\H\[\033[1;32m\]]-[\[\033[0;1m\]\w\[\033[1;32m\]] \n\[\033[1;32m\]└─\[\033[1;34m\]\$\[\033[0m\] "

or write it into .bashrc so it persists.

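A quick sketch of persisting the prompt: append the PS1 line to the user's .bashrc so every new shell picks it up (run as the user whose prompt you are changing):

```shell
# Append the custom prompt to ~/.bashrc so new shells pick it up
cat >> ~/.bashrc <<'EOF'
PS1="\[\033[1;32m\]┌──[\[\033[1;34m\]\u@\H\[\033[1;32m\]]-[\[\033[0;1m\]\w\[\033[1;32m\]] \n\[\033[1;32m\]└─\[\033[1;34m\]\$\[\033[0m\] "
EOF
```

Open a new terminal (or run `source ~/.bashrc`) to see the new prompt.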

2. Network Configuration

&&&&&&&&&&&&&&&&&& Network configuration steps (screenshots omitted) &&&&&&&&&&&&&&&&&&
In bridged mode you have to pick which host NIC to bridge to (the one actually used for networking), then confirm.
Configure the NIC for DHCP (automatic IP assignment); the commands are listed below. Worth noting: because the addresses are dynamic, switching to a different network changes every node's IP (they still share one subnet), and the name resolution and passwordless SSH set up later stop working and must be redone. If you only ever use one network, this doesn't matter. So the usual practice is, once DHCP has handed out addresses, to pin them as static IPs. My suggestion: wait until all machines have been cloned, then switch the IP assignment to static, and only then configure name resolution and passwordless SSH.

nmcli connection modify 'ens33' ipv4.method auto connection.autoconnect yes   # switch the NIC to DHCP (dynamic IP)
nmcli connection up 'ens33'


Configure the NIC for DHCP (automatic IP assignment):

┌──[root@node0]-[~] 
└─$ nmcli connection modify 'ens33' ipv4.method auto   connection.autoconnect yes
┌──[root@node0]-[~] 
└─$ nmcli connection up 'ens33'
連接配接已成功激活(D-Bus 活動路徑:/org/freedesktop/NetworkManager/ActiveConnection/4)
┌──[root@node0]-[~] 
└─$ ifconfig | head -2 
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.7  netmask 255.255.255.0  broadcast 192.168.1.255
┌──[root@node0]-[~] 
└─$            
┌──[root@node0]-[~] 
└─$ ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.7  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::8899:b0c7:4b50:73e0  prefixlen 64  scopeid 0x20<link>
        inet6 240e:319:707:b800:2929:3ab2:f378:715a  prefixlen 64  scopeid 0x0<global>
        ether 00:0c:29:b6:a6:52  txqueuelen 1000  (Ethernet)
        RX packets 535119  bytes 797946990 (760.9 MiB)
        RX errors 0  dropped 96  overruns 0  frame 0
        TX packets 59958  bytes 4119314 (3.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 616  bytes 53248 (52.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 616  bytes 53248 (52.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:2e:66:6d  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

┌──[root@node0]-[~] 
└─$            

Configure the NIC with a manually assigned IP, i.e. pin the DHCP-assigned address as a static IP. Do this step on every machine after cloning is finished:

nmcli connection modify 'ens33' ipv4.method manual ipv4.addresses 192.168.1.7/24 ipv4.gateway 192.168.1.1 connection.autoconnect  yes
nmcli connection up 'ens33'           

3. Cloning the Machines

&&&&&&&&&&&&&&&&&& Cloning steps (screenshots omitted) &&&&&&&&&&&&&&&&&&
Shut down the virtual machine you want to clone.
The difference between a linked clone and a full clone:
Create a linked clone  # the clone takes very little disk space, but the original VM must remain usable, otherwise the clone becomes unusable too;
Create a full clone    # the new VM has no ties to the original; deleting the original does not affect the clone
Test it: the clone can reach the external network (39.97.241 is the start of my Alibaba Cloud public IP), can reach the physical machine, and can also reach the other node.
Clone the remaining Node machine and the Master machine in the same way (not shown here). When starting the clones, if memory runs short, shut the VM down and adjust its memory accordingly.

Remember to configure the static IP on each clone:

nmcli connection modify 'ens33' ipv4.method manual ipv4.addresses 192.168.1.9/24 ipv4.gateway 192.168.1.1 connection.autoconnect yes
nmcli connection up 'ens33'

4. Name Resolution from the Management Node to the Worker Nodes

Configure name resolution on the Master so the nodes can be reached by hostname; to make life easier, you can also change each node's hostname. The entries are made in /etc/hosts:
┌──[root@master]-[~] 
└─$ vim /etc/hosts
┌──[root@master]-[~] 
└─$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.7  node0
192.168.1.9  node1
192.168.1.11 node2
192.168.1.10 master
 
┌──[root@master]-[~] 
└─$ 
           

5. Passwordless SSH from the Management Node to the Worker Nodes

Configure passwordless SSH login on the Master:
Generate a key pair with ssh-keygen, pressing Enter through every prompt.
Hand the public key out with ssh-copy-id.
Then test the passwordless login. (For convenience, node1's hostname was left unchanged, so it shows up as an IP address.)

ssh-keygen

┌──[root@node0]-[~] 
└─$ ssh
usage: ssh [-1246AaCfGgKkMNnqsTtVvXxYy] [-b bind_address] [-c cipher_spec]
           [-D [bind_address:]port] [-E log_file] [-e escape_char]
           [-F configfile] [-I pkcs11] [-i identity_file]
           [-J [user@]host[:port]] [-L address] [-l login_name] [-m mac_spec]
           [-O ctl_cmd] [-o option] [-p port] [-Q query_option] [-R address]
           [-S ctl_path] [-W host:port] [-w local_tun[:remote_tun]]
           [user@]hostname [command]
┌──[root@node0]-[~] 
└─$ ls -ls ~/.ssh/
ls: cannot access /root/.ssh/: No such file or directory
┌──[root@node0]-[~] 
└─$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:qHboVj/WfMTYCDFDZ5ISf3wEcmfsz0EXJH19U6SnxbY root@node0
The key's randomart image is:
+---[RSA 2048]----+
|      .o+.=o+.o+*|
|      ..=B +. o==|
|       ..+o.....O|
|       ... .. .=.|
|      . S. = o.E |
|     o.   o + o  |
|    +... o .     |
|   o..  + o .    |
|   ..  . . .     |
+----[SHA256]-----+           

ssh-copy-id

ssh-copy-id root@node0
ssh-copy-id root@node1
ssh-copy-id root@node2           

Passwordless login test

ssh root@node0
ssh root@node1
ssh root@node2           
At this point the Linux environment is ready, and anyone who wants to learn Linux can start from here. These are the notes I've put together along my Linux journey, with some hands-on practice, for anyone interested.

II. Ansible Installation and Configuration

For convenience we operate directly from the physical machine, and SSH is already configured. Because my host doesn't have enough memory, I can only run three VMs:

Hostname   IP             Role         Notes
master     192.168.1.10   controller   control machine
node1      192.168.1.9    node         managed host
node2      192.168.1.11   node         managed host

1. SSH to the control node (192.168.1.10), configure the yum repo, and install ansible

┌──(liruilong㉿Liruilong)-[/mnt/e/docker]
└─$ ssh root@192.168.1.10
Last login: Sat Sep 11 00:23:10 2021
┌──[root@master]-[~]
└─$ ls
anaconda-ks.cfg  initial-setup-ks.cfg  下載下傳  公共  圖檔  文檔  桌面  模闆  視訊  音樂
┌──[root@master]-[~]
└─$ cd /etc/yum.repos.d/
┌──[root@master]-[/etc/yum.repos.d]
└─$ ls
CentOS-Base.repo  CentOS-CR.repo  CentOS-Debuginfo.repo  CentOS-fasttrack.repo  CentOS-Media.repo  CentOS-Sources.repo  CentOS-Vault.repo  CentOS-x86_64-kernel.repo
┌──[root@master]-[/etc/yum.repos.d]
└─$ mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
┌──[root@master]-[/etc/yum.repos.d]
└─$ wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo           

Search for the ansible package:

┌──[root@master]-[/etc/yum.repos.d]
└─$ yum list | grep ansible
ansible-collection-microsoft-sql.noarch     1.1.0-1.el7_9              extras
centos-release-ansible-27.noarch            1-1.el7                    extras
centos-release-ansible-28.noarch            1-1.el7                    extras
centos-release-ansible-29.noarch            1-1.el7                    extras
centos-release-ansible26.noarch             1-3.el7.centos             extras
┌──[root@master]-[/etc/yum.repos.d]           

The Alibaba Cloud yum mirror does not carry the ansible package, so we need to use EPEL:

┌──[root@master]-[/etc/yum.repos.d]
└─$ wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
--2021-09-11 00:40:11--  http://mirrors.aliyun.com/repo/epel-7.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 1.180.13.237, 1.180.13.236, 1.180.13.240, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|1.180.13.237|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 664 [application/octet-stream]
Saving to: ‘/etc/yum.repos.d/epel.repo’

100%[=======================================================================================================================================================================>] 664         --.-K/s   in 0s

2021-09-11 00:40:12 (91.9 MB/s) - ‘/etc/yum.repos.d/epel.repo’ saved [664/664]

┌──[root@master]-[/etc/yum.repos.d]
└─$ yum install -y epel-release           

Search for the ansible package again, and install it:

┌──[root@master]-[/etc/yum.repos.d]
└─$ yum list|grep ansible
Existing lock /var/run/yum.pid: another copy is running as pid 12522.
Another app is currently holding the yum lock; waiting for it to exit...
  The other application is: PackageKit
    Memory :  28 M RSS (373 MB VSZ)
    Started: Sat Sep 11 00:40:41 2021 - 00:06 ago
    State  : Sleeping, pid: 12522
ansible.noarch                              2.9.25-1.el7               epel
ansible-collection-microsoft-sql.noarch     1.1.0-1.el7_9              extras
ansible-doc.noarch                          2.9.25-1.el7               epel
ansible-inventory-grapher.noarch            2.4.4-1.el7                epel
ansible-lint.noarch                         3.5.1-1.el7                epel
ansible-openstack-modules.noarch            0-20140902git79d751a.el7   epel
ansible-python3.noarch                      2.9.25-1.el7               epel
ansible-review.noarch                       0.13.4-1.el7               epel
ansible-test.noarch                         2.9.25-1.el7               epel
centos-release-ansible-27.noarch            1-1.el7                    extras
centos-release-ansible-28.noarch            1-1.el7                    extras
centos-release-ansible-29.noarch            1-1.el7                    extras
centos-release-ansible26.noarch             1-3.el7.centos             extras
kubernetes-ansible.noarch                   0.6.0-0.1.gitd65ebd5.el7   epel
python2-ansible-runner.noarch               1.0.1-1.el7                epel
python2-ansible-tower-cli.noarch            3.3.9-1.el7                epel
vim-ansible.noarch                          3.2-1.el7                  epel
┌──[root@master]-[/etc/yum.repos.d]
└─$ yum install -y  ansible           
┌──[root@master]-[/etc/yum.repos.d]
└─$ ansible --version
ansible 2.9.25
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Oct 30 2018, 23:45:53) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
┌──[root@master]-[/etc/yum.repos.d]
└─$           

View the host list:

┌──[root@master]-[/etc/yum.repos.d]
└─$ ansible 127.0.0.1 --list-hosts
  hosts (1):
    127.0.0.1
┌──[root@master]-[/etc/yum.repos.d]           

2. Ansible Environment Configuration

Here we use liruilong, the ordinary account created back when the system was installed. In production you would configure a dedicated user and not use root.

1. Write the main configuration file ansible.cfg

┌──[root@master]-[/home/liruilong]
└─$ su liruilong
[liruilong@master ~]$ pwd
/home/liruilong
[liruilong@master ~]$ mkdir ansible;cd ansible;vim ansible.cfg
[liruilong@master ansible]$ cat ansible.cfg
[defaults]
# inventory file: the list of hosts to manage
inventory=inventory
# remote user for connecting to the managed hosts
remote_user=liruilong
# roles directory
roles_path=roles
# privilege escalation settings (sudo)
[privilege_escalation]
become=True
become_method=sudo
become_user=root
become_ask_pass=False

[liruilong@master ansible]$           

2. The host inventory:

The list of managed machines; entries can be domain names, IPs, groups ([groupname]), aggregates ([groupname:children]), and you can also set usernames and passwords explicitly.

[liruilong@master ansible]$ vim inventory
[liruilong@master ansible]$ cat inventory
[nodes]
node1
node2
[liruilong@master ansible]$ ansible all --list-hosts
  hosts (2):
    node1
    node2
[liruilong@master ansible]$ ansible nodes --list-hosts
  hosts (2):
    node1
    node2
[liruilong@master ansible]$ ls
ansible.cfg  inventory
[liruilong@master ansible]$           
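A sketch of an inventory that uses the grouping features just mentioned (the extra group names here are illustrative, not part of this setup):

```ini
[nodes]
node1
node2

[masters]
master

# an aggregate group built from the two groups above
[k8s:children]
masters
nodes
```

With such a file, `ansible k8s --list-hosts` would list all three machines.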

3. Configure passwordless SSH for the liruilong user

On the master, as liruilong, configure each of the three nodes in turn:

[liruilong@master ansible]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/liruilong/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/liruilong/.ssh/id_rsa.
Your public key has been saved in /home/liruilong/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:cJ+SHgfMk00X99oCwEVPi1Rjoep7Agfz8DTjvtQv0T0 liruilong@master
The key's randomart image is:
+---[RSA 2048]----+
|         .oo*oB. |
|       o +.+ B + |
|      . B . + o .|
|       o+=+o . o |
|        SO=o .o..|
|       ..==.. .E.|
|        .+o ..  .|
|         .o.o.   |
|          o+ ..  |
+----[SHA256]-----+           
[liruilong@master ansible]$ ssh-copy-id node1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/liruilong/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
liruilong@node1's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node1'"
and check to make sure that only the key(s) you wanted were added.           

node2 and master need the same configuration:

[liruilong@master ansible]$ ssh-copy-id node2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/liruilong/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
liruilong@node2's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node2'"
and check to make sure that only the key(s) you wanted were added.

[liruilong@master ansible]$           

4. Configure privilege escalation for the ordinary user liruilong

There's a catch here: I had configured passwordless sudo on my machines, but it didn't take effect at first (the first sudo still asked for a password, and only then stopped asking), and Ansible still failed. I then found that creating a file named after the user under /etc/sudoers.d with the same grant made it work; I never figured out why.

┌──[root@node1]-[~]
└─$ visudo
┌──[root@node1]-[~]
└─$ cat /etc/sudoers | grep liruilong
liruilong       ALL=(ALL)       NOPASSWD:ALL
┌──[root@node1]-[/etc/sudoers.d]
└─$ cd /etc/sudoers.d/
┌──[root@node1]-[/etc/sudoers.d]
└─$ vim liruilong
┌──[root@node1]-[/etc/sudoers.d]
└─$ cat liruilong
liruilong  ALL=(ALL) NOPASSWD:ALL
┌──[root@node1]-[/etc/sudoers.d]
└─$           
┌──[root@node2]-[~]
└─$ vim /etc/sudoers.d/liruilong           

Set up node2 and master the same way.

5. Test Ad-hoc Commands

ansible <host-pattern> -m <module-name> [-a '<task arguments>']

[liruilong@master ansible]$ ansible all -m ping
node2 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
node1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
[liruilong@master ansible]$ ansible nodes -m command -a 'ip a list ens33'
node2 | CHANGED | rc=0 >>
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:de:77:f4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.11/24 brd 192.168.1.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 240e:319:735:db00:2b25:4eb1:f520:830c/64 scope global noprefixroute dynamic
       valid_lft 208192sec preferred_lft 121792sec
    inet6 fe80::8899:b0c7:4b50:73e0/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
node1 | CHANGED | rc=0 >>
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:94:35:31 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.9/24 brd 192.168.1.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::8899:b0c7:4b50:73e0/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::2024:5b1c:1812:f4c0/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::d310:173d:7910:9571/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[liruilong@master ansible]$           
And with that, Ansible is configured, and you can learn Ansible in this environment. These are the notes from my Ansible studies, mainly notes for the RHCE exam, with some hands-on practice, for anyone interested.

III. Installing and Configuring Docker and K8s Packages

Docker and K8s can be installed via the rhel-system-roles roles, via custom roles, or directly with playbooks; here we go with plain Ansible playbooks and build things up step by step. For Docker itself, interested readers can have a look at my container-technology study notes. Our main focus here is K8s.

1. Deploying Docker with Ansible

For the deployment you can either run a script some expert has already written, or do it yourself step by step; we take the second approach.

The machines we have:

kube-master  management node
kube-node    worker nodes

1. Configure the yum Repos on the Node Machines

Since we'll be installing packages on the node machines, their yum repos need to be configured. Ansible offers many ways to do this, e.g. the yum_repository module; here, for simplicity, we just run shell commands directly.
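For reference, the module-based alternative mentioned above would look roughly like this (a sketch, not used in this walkthrough; the repo id is made up and the baseurl is an assumption pointing at the Aliyun mirror):

```yaml
- name: configure the Aliyun base repo on the nodes
  hosts: nodes
  tasks:
    - yum_repository:
        name: aliyun-base                  # repo id (illustrative)
        description: Aliyun CentOS-7 Base
        baseurl: http://mirrors.aliyun.com/centos/$releasever/os/$basearch/
        gpgcheck: no
```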

[liruilong@master ansible]$ ansible nodes -m shell -a 'mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup;wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo'
node2 | CHANGED | rc=0 >>
--2021-09-11 11:40:20--  http://mirrors.aliyun.com/repo/Centos-7.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 1.180.13.241, 1.180.13.238, 1.180.13.237, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|1.180.13.241|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2523 (2.5K) [application/octet-stream]
Saving to: ‘/etc/yum.repos.d/CentOS-Base.repo’

     0K ..                                                    100% 3.99M=0.001s

2021-09-11 11:40:20 (3.99 MB/s) - ‘/etc/yum.repos.d/CentOS-Base.repo’ saved [2523/2523]
node1 | CHANGED | rc=0 >>
--2021-09-11 11:40:20--  http://mirrors.aliyun.com/repo/Centos-7.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 1.180.13.241, 1.180.13.238, 1.180.13.237, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|1.180.13.241|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2523 (2.5K) [application/octet-stream]
Saving to: ‘/etc/yum.repos.d/CentOS-Base.repo’

     0K ..                                                    100%  346M=0s

2021-09-11 11:40:20 (346 MB/s) - ‘/etc/yum.repos.d/CentOS-Base.repo’ saved [2523/2523]
[liruilong@master ansible]$           

With the repos configured, we need to verify:

[liruilong@master ansible]$ ansible all -m shell -a 'yum repolist | grep aliyun'
[liruilong@master ansible]$           

2. Configure Time Synchronization

For convenience we use an Ansible role here: install the RHEL system roles package, copy the timesync role into our roles directory, and create the playbook timesync.yml.

┌──[root@master]-[/home/liruilong/ansible]
└─$  yum -y install rhel-system-roles
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
base                                                                                             | 3.6 kB  00:00:00
epel                                                                                             | 4.7 kB  00:00:00
extras                                                                                           | 2.9 kB  00:00:00
updates                                                                                          | 2.9 kB  00:00:00
(1/2): epel/x86_64/updateinfo                                                                    | 1.0 MB  00:00:00
(2/2): epel/x86_64/primary_db                                                                    | 7.0 MB  00:00:01
Resolving Dependencies
There are unfinished transactions remaining. You might consider running yum-complete-transaction, or "yum-complete-transaction --cleanup-only" and "yum history redo last", first to finish them. If those don't work you'll have to try removing/installing packages by hand (maybe package-cleanup can help).
--> Running transaction check
---> Package rhel-system-roles.noarch 0:1.0.1-4.el7_9 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

========================================================================================================================
 Package                           Arch                   Version                          Repository             Size
========================================================================================================================
Installing:
 rhel-system-roles                 noarch                 1.0.1-4.el7_9                    extras                 988 k

Transaction Summary
========================================================================================================================
Install  1 Package

Total download size: 988 k
Installed size: 4.8 M
Downloading packages:
rhel-system-roles-1.0.1-4.el7_9.noarch.rpm                                                       | 988 kB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : rhel-system-roles-1.0.1-4.el7_9.noarch                                                              1/1
  Verifying  : rhel-system-roles-1.0.1-4.el7_9.noarch                                                              1/1

Installed:
  rhel-system-roles.noarch 0:1.0.1-4.el7_9

Complete!
┌──[root@master]-[/home/liruilong/ansible]
└─$ su - liruilong
Last login: Sat Sep 11 13:16:23 CST 2021 on pts/2
[liruilong@master ~]$ cd /home/liruilong/ansible/
[liruilong@master ansible]$ ls
ansible.cfg  inventory
[liruilong@master ansible]$ cp -r /usr/share/ansible/roles/rhel-system-roles.timesync roles/           
[liruilong@master ansible]$ ls
ansible.cfg  inventory  roles  timesync.yml
[liruilong@master ansible]$ cat timesync.yml
- name: timesync
  hosts: all
  vars:
    - timesync_ntp_servers:
      - hostname: 192.168.1.10
        iburst: yes
  roles:
    - rhel-system-roles.timesync
[liruilong@master ansible]$           
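Running `ansible-playbook timesync.yml` applies the role to all hosts. A quick verification playbook in the register/debug style used elsewhere in this article could look like this (a sketch; it assumes the role set up chrony, its default time provider on CentOS 7):

```yaml
- name: timesync-check
  hosts: nodes
  tasks:
    - shell: chronyc sources       # list the NTP sources each node is using
      register: out
    - debug: msg="{{out.stdout_lines}}"
```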

3. Docker Environment Initialization

Steps:
Install docker
Remove the firewalld package
Enable IP forwarding
Work around the firewall bug in this release
Restart the docker service and enable it at boot

Write the docker environment initialization playbook install_docker_playbook.yml:

- name: install docker on node1,node2
  hosts: node1,node2
  tasks:
    - yum: name=docker state=absent
    - yum: name=docker state=present
    - yum: name=firewalld state=absent
    - shell: echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
    - shell: sysctl -p
    - shell: sed -i '18 i ExecStartPost=/sbin/iptables -P FORWARD ACCEPT' /lib/systemd/system/docker.service
    - shell: cat /lib/systemd/system/docker.service
    - shell: systemctl daemon-reload
    - service: name=docker state=restarted enabled=yes           
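The sed task above splices an iptables command into docker's unit file so the FORWARD chain is reset to ACCEPT after the daemon starts (the directive systemd understands for this is ExecStartPost). Its effect can be previewed on a scratch copy of a minimal fake unit file; matching on the ExecStart directive is also more robust than the playbook's hard-coded line number:

```shell
# Preview the edit on a throwaway minimal unit file instead of the real one
cat > /tmp/docker.service.demo <<'EOF'
[Unit]
Description=Docker Application Container Engine

[Service]
ExecStart=/usr/bin/dockerd-current
EOF

# Insert the ExecStartPost line right after ExecStart
sed -i '/^ExecStart=/a ExecStartPost=/sbin/iptables -P FORWARD ACCEPT' /tmp/docker.service.demo

grep -A1 '^ExecStart=' /tmp/docker.service.demo
```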

Run the playbook:

[liruilong@master ansible]$ cat install_docker_playbook.yml
- name: install docker on node1,node2
  hosts: node1,node2
  tasks:
    - yum: name=docker state=absent
    - yum: name=docker state=present
    - yum: name=firewalld state=absent
    - shell: echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
    - shell: sysctl -p
    - shell: sed -i '18 i ExecStartPost=/sbin/iptables -P FORWARD ACCEPT' /lib/systemd/system/docker.service
    - shell: cat /lib/systemd/system/docker.service
    - shell: systemctl daemon-reload
    - service: name=docker state=restarted enabled=yes
[liruilong@master ansible]$ ls
ansible.cfg  install_docker_check.yml  install_docker_playbook.yml  inventory  roles  timesync.yml
[liruilong@master ansible]$ ansible-playbook install_docker_playbook.yml           
Execution of the docker environment initialization playbook install_docker_playbook.yml (screenshot omitted).

Next, we write a check playbook, install_docker_check.yml, to verify how the docker installation went:

- name: install_docker-check
  hosts: node1,node2
  ignore_errors: yes
  tasks:
    - shell: docker info
      register: out
    - debug: msg="{{out}}"
    - shell: systemctl -all | grep  firewalld
      register: out1
    - debug: msg="{{out1}}"
    - shell: cat /etc/sysctl.conf
      register: out2
    - debug: msg="{{out2}}"
    - shell: cat /lib/systemd/system/docker.service
      register: out3
    - debug: msg="{{out3}}"
           
[liruilong@master ansible]$ ls
ansible.cfg  install_docker_check.yml  install_docker_playbook.yml  inventory  roles  timesync.yml
[liruilong@master ansible]$ cat install_docker_check.yml
- name: install_docker-check
  hosts: node1,node2
  ignore_errors: yes
  tasks:
    - shell: docker info
      register: out
    - debug: msg="{{out}}"
    - shell: systemctl -all | grep  firewalld
      register: out1
    - debug: msg="{{out1}}"
    - shell: cat /etc/sysctl.conf
      register: out2
    - debug: msg="{{out2}}"
    - shell: cat /lib/systemd/system/docker.service
      register: out3
    - debug: msg="{{out3}}"

[liruilong@master ansible]$ ansible-playbook install_docker_check.yml               
Execution of the check playbook install_docker_check.yml (screenshot omitted).

2. Installing etcd

Install etcd (a key-value store) on the kube-master and create the network configuration:

Install etcd with yum.
Edit the etcd config file, changing the client listen address; 0.0.0.0 means listen on all hosts.
Start the service and enable it at boot.

Write the Ansible playbook install_etcd_playbook.yml:

- name: install etcd or master
  hosts: 127.0.0.1
  tasks:
    - yum: name=etcd state=present
    - lineinfile: path=/etc/etcd/etcd.conf  regexp=ETCD_LISTEN_CLIENT_URLS="http://localhost:2379" line=ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
    - shell: cat /etc/etcd/etcd.conf
      register: out
    - debug: msg="{{out}}"
    - service: name=etcd state=restarted enabled=yes
           
[liruilong@master ansible]$ ls
ansible.cfg               install_docker_playbook.yml  inventory  timesync.yml
install_docker_check.yml  install_etcd_playbook.yml    roles
[liruilong@master ansible]$ cat install_etcd_playbook.yml
- name: install etcd or master
  hosts: 127.0.0.1
  tasks:
    - yum: name=etcd state=present
    - lineinfile: path=/etc/etcd/etcd.conf  regexp=ETCD_LISTEN_CLIENT_URLS="http://localhost:2379" line=ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
    - shell: cat /etc/etcd/etcd.conf
      register: out
    - debug: msg="{{out}}"
    - service: name=etcd state=restarted enabled=yes

[liruilong@master ansible]$ ansible-playbook install_etcd_playbook.yml           
Running the Ansible playbook

install_etcd_playbook.yml

1. Create the network configuration: 10.254.0.0/16

Create the 10.254.0.0/16 network configuration in etcd:

etcdctl ls /

etcdctl mk /atomic.io/network/config '{"Network": "10.254.0.0/16", "Backend": {"Type":"vxlan"}} '

etcdctl get /atomic.io/network/config

[liruilong@master ansible]$ etcdctl ls /
[liruilong@master ansible]$ etcdctl mk /atomic.io/network/config '{"Network": "10.254.0.0/16", "Backend": {"Type":"vxlan"}} '
{"Network": "10.254.0.0/16", "Backend": {"Type":"vxlan"}}
[liruilong@master ansible]$ etcdctl ls /
/atomic.io
[liruilong@master ansible]$ etcdctl ls /atomic.io
/atomic.io/network
[liruilong@master ansible]$ etcdctl ls /atomic.io/network
/atomic.io/network/config
[liruilong@master ansible]$ etcdctl get /atomic.io/network/config
{"Network": "10.254.0.0/16", "Backend": {"Type":"vxlan"}}
[liruilong@master ansible]$           
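The JSON written above tells flannel to hand each host one subnet carved out of 10.254.0.0/16 (a /24 by default with the vxlan backend, as the lease lines in the flanneld logs show). The arithmetic is easy to check with Python's `ipaddress` module; the lease values here are the ones this cluster actually received:

```python
import ipaddress

pool = ipaddress.ip_network("10.254.0.0/16")
# flannel carves one /24 per host out of the /16 pool
host_subnets = list(pool.subnets(new_prefix=24))
print(len(host_subnets))  # 256 possible host subnets

# leases handed out in this cluster (from the flanneld logs)
for lease in ("10.254.68.0/24", "10.254.97.0/24", "10.254.59.0/24"):
    assert ipaddress.ip_network(lease).subnet_of(pool)
```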

3. Installing and configuring flannel (on all k8s machines)

flannel is a network planning service. Within a k8s cluster, it gives the Docker containers created on different node hosts virtual IP addresses that are unique across the cluster, and it builds an overlay network over those virtual IPs so that containers on different hosts can reach each other; roughly like what a VLAN does.

The kube-master management host does not run Docker, so it only needs flannel installed, configured, started, and enabled at boot.

1. Add the master node to the Ansible inventory

Because the master node also needs packages installed and configured, we add it to the inventory:

[liruilong@master ansible]$ sudo cat /etc/hosts
192.168.1.11  node2
192.168.1.9   node1
192.168.1.10  master
[liruilong@master ansible]$ ls
ansible.cfg               install_docker_playbook.yml  inventory  timesync.yml
install_docker_check.yml  install_etcd_playbook.yml    roles
[liruilong@master ansible]$ cat inventory
master
[nodes]
node1
node2

[liruilong@master ansible]$ ansible master -m ping
master | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}           

2. Install and configure flannel

Install the flannel network package
Edit the config file `/etc/sysconfig/flanneld`
Start the service (the `flannel` service must start before the `docker` service); remember to open the required port on the master node, or simply stop the firewall
Start `flannel` first, then restart `docker`
Write the playbook

install_flannel_playbook.yml

- name: install flannel or all
  hosts: all
  vars:
    group_node: nodes
  tasks:
    - yum:
        name: flannel
        state: present
    - lineinfile:
        path: /etc/sysconfig/flanneld
        regexp: FLANNEL_ETCD_ENDPOINTS="http://127.0.0.1:2379"
        line: FLANNEL_ETCD_ENDPOINTS="http://192.168.1.10:2379"
    - service:
        name: docker
        state: stopped
      when: group_node in group_names
    - service:
        name: flanneld
        state: restarted
        enabled: yes
    - service:
        name: docker
        state: restarted
      when: group_node in group_names           

Before running the playbook, stop firewalld on the master, or alternatively open port 2379.
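Port 2379 matters because every node's flanneld must reach etcd on the master. A quick way to test reachability from any machine is a plain TCP connect; a small sketch (the host and port are this cluster's etcd endpoint):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# etcd endpoint used by the flannel playbook in this setup
print(port_open("192.168.1.10", 2379))
```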

[liruilong@master ansible]$ su root
密碼:
┌──[root@master]-[/home/liruilong/ansible]
└─$ systemctl disable flanneld.service --now
Removed symlink /etc/systemd/system/multi-user.target.wants/flanneld.service.
Removed symlink /etc/systemd/system/docker.service.wants/flanneld.service.
┌──[root@master]-[/home/liruilong/ansible]
└─$ systemctl status flanneld.service
● flanneld.service - Flanneld overlay address etcd agent
   Loaded: loaded (/usr/lib/systemd/system/flanneld.service; disabled; vendor preset: disabled)
   Active: inactive (dead)

9月 12 18:34:24 master flanneld-start[50344]: I0912 18:34:24.046900   50344 manager.go:149] Using interface with name ens33 and address 192.168.1.10
9月 12 18:34:24 master flanneld-start[50344]: I0912 18:34:24.046958   50344 manager.go:166] Defaulting external address to interface address (192.168.1.10)
9月 12 18:34:24 master flanneld-start[50344]: I0912 18:34:24.056681   50344 local_manager.go:134] Found lease (10.254.68.0/24) for current IP (192..., reusing
9月 12 18:34:24 master flanneld-start[50344]: I0912 18:34:24.060343   50344 manager.go:250] Lease acquired: 10.254.68.0/24
9月 12 18:34:24 master flanneld-start[50344]: I0912 18:34:24.062427   50344 network.go:58] Watching for L3 misses
9月 12 18:34:24 master flanneld-start[50344]: I0912 18:34:24.062462   50344 network.go:66] Watching for new subnet leases
9月 12 18:34:24 master systemd[1]: Started Flanneld overlay address etcd agent.
9月 12 18:40:42 master systemd[1]: Stopping Flanneld overlay address etcd agent...
9月 12 18:40:42 master flanneld-start[50344]: I0912 18:40:42.194559   50344 main.go:172] Exiting...
9月 12 18:40:42 master systemd[1]: Stopped Flanneld overlay address etcd agent.
Hint: Some lines were ellipsized, use -l to show in full.
┌──[root@master]-[/home/liruilong/ansible]
└─$           
┌──[root@master]-[/home/liruilong/ansible]
└─$ cat install_flannel_playbook.yml
- name: install flannel or all
  hosts: all
  vars:
    group_node: nodes
  tasks:
    - yum:
        name: flannel
        state: present
    - lineinfile:
        path: /etc/sysconfig/flanneld
        regexp: FLANNEL_ETCD_ENDPOINTS="http://127.0.0.1:2379"
        line: FLANNEL_ETCD_ENDPOINTS="http://192.168.1.10:2379"
    - service:
        name: docker
        state: stopped
      when: group_node in group_names
    - service:
        name: flanneld
        state: restarted
        enabled: yes
    - service:
        name: docker
        state: restarted
      when: group_node in group_names


┌──[root@master]-[/home/liruilong/ansible]
└─$ ansible-playbook install_flannel_playbook.yml           
The playbook

install_flannel_playbook.yml

3. Testing flannel

This could also be done with Ansible's Docker modules; for convenience we use the shell module directly here.

install_flannel_check.yml

Print the docker0 bridge interface on each node machine
Run a container from the centos image on each node, named after the host
Print the resulting container ID
Print the flannel interface info on all nodes
- name: flannel config check
  hosts: all
  vars:
    nodes: nodes
  tasks:
    - block:
        - shell: ifconfig docker0 | head -2
          register: out
        - debug: msg="{{out}}"
        - shell: docker rm -f  {{inventory_hostname}}
        - shell: docker run -itd --name {{inventory_hostname}} centos
          register: out1
        - debug: msg="{{out1}}"
      when: nodes in group_names
    - shell: ifconfig flannel.1 | head -2
      register: out
    - debug: msg="{{out}}"           
[liruilong@master ansible]$ cat install_flannel_check.yml
- name: flannel config check
  hosts: all
  vars:
    nodes: nodes
  tasks:
    - block:
        - shell: ifconfig docker0 | head -2
          register: out
        - debug: msg="{{out}}"
        - shell: docker rm -f  {{inventory_hostname}}
        - shell: docker run -itd --name {{inventory_hostname}} centos
          register: out1
        - debug: msg="{{out1}}"
      when: nodes in group_names
    - shell: ifconfig flannel.1 | head -2
      register: out
    - debug: msg="{{out}}"
[liruilong@master ansible]$
[liruilong@master ansible]$ ansible-playbook install_flannel_check.yml

PLAY [flannel config check] *************************************************************************************************************************************************************************************

TASK [Gathering Facts] ******************************************************************************************************************************************************************************************
ok: [master]
ok: [node2]
ok: [node1]

TASK [shell] ****************************************************************************************************************************************************************************************************
skipping: [master]
changed: [node2]
changed: [node1]

TASK [debug] ****************************************************************************************************************************************************************************************************
skipping: [master]
ok: [node1] => {
    "msg": {
        "changed": true,
        "cmd": "ifconfig docker0 | head -2",
        "delta": "0:00:00.021769",
        "end": "2021-09-12 21:51:44.826682",
        "failed": false,
        "rc": 0,
        "start": "2021-09-12 21:51:44.804913",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450\n        inet 10.254.97.1  netmask 255.255.255.0  broadcast 0.0.0.0",
        "stdout_lines": [
            "docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450",
            "        inet 10.254.97.1  netmask 255.255.255.0  broadcast 0.0.0.0"
        ]
    }
}
ok: [node2] => {
    "msg": {
        "changed": true,
        "cmd": "ifconfig docker0 | head -2",
        "delta": "0:00:00.011223",
        "end": "2021-09-12 21:51:44.807988",
        "failed": false,
        "rc": 0,
        "start": "2021-09-12 21:51:44.796765",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450\n        inet 10.254.59.1  netmask 255.255.255.0  broadcast 0.0.0.0",
        "stdout_lines": [
            "docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450",
            "        inet 10.254.59.1  netmask 255.255.255.0  broadcast 0.0.0.0"
        ]
    }
}

TASK [shell] ****************************************************************************************************************************************************************************************************
skipping: [master]
changed: [node1]
changed: [node2]

TASK [shell] ****************************************************************************************************************************************************************************************************
skipping: [master]
changed: [node1]
changed: [node2]

TASK [debug] ****************************************************************************************************************************************************************************************************
skipping: [master]
ok: [node1] => {
    "msg": {
        "changed": true,
        "cmd": "docker run -itd --name node1 centos",
        "delta": "0:00:00.795119",
        "end": "2021-09-12 21:51:48.157221",
        "failed": false,
        "rc": 0,
        "start": "2021-09-12 21:51:47.362102",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "1c0628dcb7e772640d9eb58179efc03533e796989f7a802e230f9ebc3012845a",
        "stdout_lines": [
            "1c0628dcb7e772640d9eb58179efc03533e796989f7a802e230f9ebc3012845a"
        ]
    }
}
ok: [node2] => {
    "msg": {
        "changed": true,
        "cmd": "docker run -itd --name node2 centos",
        "delta": "0:00:00.787663",
        "end": "2021-09-12 21:51:48.194065",
        "failed": false,
        "rc": 0,
        "start": "2021-09-12 21:51:47.406402",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "1931d80f5bfffc23fef714a58ab5b009ed5e2182199b55038bb9b1ccc69ec271",
        "stdout_lines": [
            "1931d80f5bfffc23fef714a58ab5b009ed5e2182199b55038bb9b1ccc69ec271"
        ]
    }
}

TASK [shell] ****************************************************************************************************************************************************************************************************
changed: [master]
changed: [node2]
changed: [node1]

TASK [debug] ****************************************************************************************************************************************************************************************************
ok: [master] => {
    "msg": {
        "changed": true,
        "cmd": "ifconfig flannel.1 | head -2",
        "delta": "0:00:00.011813",
        "end": "2021-09-12 21:51:48.722196",
        "failed": false,
        "rc": 0,
        "start": "2021-09-12 21:51:48.710383",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450\n        inet 10.254.68.0  netmask 255.255.255.255  broadcast 0.0.0.0",
        "stdout_lines": [
            "flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450",
            "        inet 10.254.68.0  netmask 255.255.255.255  broadcast 0.0.0.0"
        ]
    }
}
ok: [node1] => {
    "msg": {
        "changed": true,
        "cmd": "ifconfig flannel.1 | head -2",
        "delta": "0:00:00.021717",
        "end": "2021-09-12 21:51:49.443800",
        "failed": false,
        "rc": 0,
        "start": "2021-09-12 21:51:49.422083",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450\n        inet 10.254.97.0  netmask 255.255.255.255  broadcast 0.0.0.0",
        "stdout_lines": [
            "flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450",
            "        inet 10.254.97.0  netmask 255.255.255.255  broadcast 0.0.0.0"
        ]
    }
}
ok: [node2] => {
    "msg": {
        "changed": true,
        "cmd": "ifconfig flannel.1 | head -2",
        "delta": "0:00:00.012259",
        "end": "2021-09-12 21:51:49.439005",
        "failed": false,
        "rc": 0,
        "start": "2021-09-12 21:51:49.426746",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450\n        inet 10.254.59.0  netmask 255.255.255.255  broadcast 0.0.0.0",
        "stdout_lines": [
            "flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450",
            "        inet 10.254.59.0  netmask 255.255.255.255  broadcast 0.0.0.0"
        ]
    }
}

PLAY RECAP ******************************************************************************************************************************************************************************************************
master                     : ok=3    changed=1    unreachable=0    failed=0    skipped=5    rescued=0    ignored=0
node1                      : ok=8    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
node2                      : ok=8    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

[liruilong@master ansible]$           

Verify that the centos container on node1 can ping the centos container on node2

[liruilong@master ansible]$ ssh node1
Last login: Sun Sep 12 21:58:49 2021 from 192.168.1.10
[liruilong@node1 ~]$ sudo docker exec -it node1 /bin/bash
[root@1c0628dcb7e7 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
17: eth0@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 02:42:0a:fe:61:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.254.97.2/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:aff:fefe:6102/64 scope link
       valid_lft forever preferred_lft forever
[root@1c0628dcb7e7 /]# exit
exit
[liruilong@node1 ~]$ exit
登出
Connection to node1 closed.
[liruilong@master ansible]$ ssh node2
Last login: Sun Sep 12 21:51:49 2021 from 192.168.1.10
[liruilong@node2 ~]$  sudo docker exec -it node2 /bin/bash
[root@1931d80f5bff /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
15: eth0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 02:42:0a:fe:3b:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.254.59.2/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:aff:fefe:3b02/64 scope link
       valid_lft forever preferred_lft forever
[root@1931d80f5bff /]# ping  10.254.97.2
PING 10.254.97.2 (10.254.97.2) 56(84) bytes of data.
64 bytes from 10.254.97.2: icmp_seq=1 ttl=62 time=99.3 ms
64 bytes from 10.254.97.2: icmp_seq=2 ttl=62 time=0.693 ms
64 bytes from 10.254.97.2: icmp_seq=3 ttl=62 time=97.6 ms
^C
--- 10.254.97.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 5ms
rtt min/avg/max/mdev = 0.693/65.879/99.337/46.100 ms
[root@1931d80f5bff /]#           

The ping succeeds. At this point the flannel network is configured, and containers on different machines can reach one another.
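Note how the addresses line up: each host's docker0 sits at .1 of its flannel /24, and containers get .2, .3, … within it, so a container IP alone tells you which host it lives on. A sketch over this cluster's leases:

```python
import ipaddress

# per-host flannel leases observed in this cluster
leases = {
    "master": "10.254.68.0/24",
    "node1":  "10.254.97.0/24",
    "node2":  "10.254.59.0/24",
}

def host_of(container_ip):
    """Map a container IP back to the host owning its flannel subnet."""
    ip = ipaddress.ip_address(container_ip)
    for host, net in leases.items():
        if ip in ipaddress.ip_network(net):
            return host
    return None

print(host_of("10.254.97.2"))  # the container created on node1
print(host_of("10.254.59.2"))  # the container created on node2
```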

4. Installing and deploying kube-master

With the network in place, we install and configure kube-master on the master management node. First, check whether the packages are available:

[liruilong@master ansible]$ yum list kubernetes-*
已加載插件:fastestmirror, langpacks
Determining fastest mirrors
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
可安裝的軟體包
kubernetes.x86_64                                                            1.5.2-0.7.git269f928.el7                                                    extras
kubernetes-ansible.noarch                                                    0.6.0-0.1.gitd65ebd5.el7                                                    epel
kubernetes-client.x86_64                                                     1.5.2-0.7.git269f928.el7                                                    extras
kubernetes-master.x86_64                                                     1.5.2-0.7.git269f928.el7                                                    extras
kubernetes-node.x86_64                                                       1.5.2-0.7.git269f928.el7                                                    extras
[liruilong@master ansible]$ ls /etc/yum.repos.d/           

If 1.10 packages were available it would be best to use those, but only 1.5 is here, so we will try 1.5 first; no yum repo with 1.10 was found.

Disable the swap partition and SELinux
Configure the k8s yum repo
Install the k8s packages
Edit the global config file /etc/kubernetes/config
Edit the master config file /etc/kubernetes/apiserver
Start the services
Verify the services with kubectl get cs
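The final verification, `kubectl get cs`, prints a small status table, which the playbook captures via `register`. Parsing that table into a dict is straightforward; a sketch over the exact text this cluster returned:

```python
# stdout of `kubectl get cs` as captured by the playbook run
raw = (
    "NAME                 STATUS    MESSAGE             ERROR\n"
    "scheduler            Healthy   ok\n"
    "controller-manager   Healthy   ok\n"
    'etcd-0               Healthy   {"health":"true"}'
)

def parse_cs(text):
    """Turn the `kubectl get cs` table into {component: status}."""
    rows = text.splitlines()[1:]          # skip the header row
    return {r.split()[0]: r.split()[1] for r in rows}

status = parse_cs(raw)
print(status)
assert all(s == "Healthy" for s in status.values())
```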

install_kube-master_playbook.yml

- name: install  kube-master  or master
  hosts: master
  tasks:
    - shell: swapoff -a
    - replace:
        path: /etc/fstab
        regexp: "/dev/mapper/centos-swap"
        replace: "#/dev/mapper/centos-swap"
    - shell: cat /etc/fstab
      register: out
    - debug: msg="{{out}}"
    - shell: getenforce
      register: out
    - debug: msg="{{out}}"
    - shell: setenforce 0
      when: out.stdout != "Disabled"
    - replace:
        path: /etc/selinux/config
        regexp: "SELINUX=enforcing"
        replace: "SELINUX=disabled"
    - shell: cat /etc/selinux/config
      register: out
    - debug: msg="{{out}}"
    - yum_repository:
        name: Kubernetes
        description: K8s aliyun yum
        baseurl: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
        gpgcheck: yes
        gpgkey: https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
        repo_gpgcheck: yes
        enabled: yes
    - yum:
        name: kubernetes-master,kubernetes-client
        state:  absent
    - yum:
        name: kubernetes-master
        state: present
    - yum:
        name: kubernetes-client
        state: present
    - lineinfile:
        path: /etc/kubernetes/config
        regexp: KUBE_MASTER="--master=http://127.0.0.1:8080"
        line: KUBE_MASTER="--master=http://192.168.1.10:8080"
    - lineinfile:
        path: /etc/kubernetes/apiserver
        regexp: KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
        line: KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
    - lineinfile:
        path: /etc/kubernetes/apiserver
        regexp: KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
        line: KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.1.10:2379"
    - lineinfile:
        path: /etc/kubernetes/apiserver
        regexp: KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
        line:   KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
    - lineinfile:
        path: /etc/kubernetes/apiserver
        regexp: KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
        line:   KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
    - service:
        name: kube-apiserver
        state: restarted
        enabled: yes
    - service:
        name: kube-controller-manager
        state: restarted
        enabled: yes
    - service:
        name: kube-scheduler
        state: restarted
        enabled: yes
    - shell: kubectl get cs
      register: out
    - debug: msg="{{out}}"           
[liruilong@master ansible]$ ansible-playbook  install_kube-master_playbook.yml
............
TASK [debug] **************************************************************************************************************************************************
ok: [master] => {
    "msg": {
        "changed": true,
        "cmd": "kubectl get cs",
        "delta": "0:00:05.653524",
        "end": "2021-09-12 23:44:58.030756",
        "failed": false,
        "rc": 0,
        "start": "2021-09-12 23:44:52.377232",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "NAME                 STATUS    MESSAGE             ERROR\nscheduler            Healthy   ok                  \ncontroller-manager   Healthy   ok                  \netcd-0               Healthy   {\"health\":\"true\"}   ",
        "stdout_lines": [
            "NAME                 STATUS    MESSAGE             ERROR",
            "scheduler            Healthy   ok                  ",
            "controller-manager   Healthy   ok                  ",
            "etcd-0               Healthy   {\"health\":\"true\"}   "
        ]
    }
}

PLAY RECAP ****************************************************************************************************************************************************
master                     : ok=13   changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

[liruilong@master ansible]$ cat install_kube-master_playbook.yml
- name: install  kube-master  or master
  hosts: master
  tasks:
    - shell: swapoff -a
    - replace:
        path: /etc/fstab
        regexp: "/dev/mapper/centos-swap"
        replace: "#/dev/mapper/centos-swap"
    - shell: cat /etc/fstab
      register: out
    - debug: msg="{{out}}"
    - shell: getenforce
      register: out
    - debug: msg="{{out}}"
    - shell: setenforce 0
      when: out.stdout != "Disabled"
    - replace:
        path: /etc/selinux/config
        regexp: "SELINUX=enforcing"
        replace: "SELINUX=disabled"
    - shell: cat /etc/selinux/config
      register: out
    - debug: msg="{{out}}"
    - yum_repository:
        name: Kubernetes
        description: K8s aliyun yum
        baseurl: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
        gpgcheck: yes
        gpgkey: https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
        repo_gpgcheck: yes
        enabled: yes
    - yum:
        name: kubernetes-master,kubernetes-client
        state:  absent
    - yum:
        name: kubernetes-master
        state: present
    - yum:
        name: kubernetes-client
        state: present
    - lineinfile:
        path: /etc/kubernetes/config
        regexp: KUBE_MASTER="--master=http://127.0.0.1:8080"
        line: KUBE_MASTER="--master=http://192.168.1.10:8080"
    - lineinfile:
        path: /etc/kubernetes/apiserver
        regexp: KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
        line: KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
    - lineinfile:
        path: /etc/kubernetes/apiserver
        regexp: KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
        line: KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.1.10:2379"
    - lineinfile:
        path: /etc/kubernetes/apiserver
        regexp: KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
        line:   KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
    - lineinfile:
        path: /etc/kubernetes/apiserver
        regexp: KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
        line:   KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
    - service:
        name: kube-apiserver
        state: restarted
        enabled: yes
    - service:
        name: kube-controller-manager
        state: restarted
        enabled: yes
    - service:
        name: kube-scheduler
        state: restarted
        enabled: yes
    - shell: kubectl get cs
      register: out
    - debug: msg="{{out}}"

[liruilong@master ansible]$           

5. Installing and deploying kube-node

Once the management node is installed, we deploy the corresponding compute nodes: installing kube-node (on all node servers).

Install the k8s node packages
Edit the kube-node global config file /etc/kubernetes/config
Edit the node config file /etc/kubernetes/kubelet; note the base pod-infrastructure image setting here: if you have your own registry, it is best to point it there
Generate the kubelet.kubeconfig file
Set up the cluster: write the generated info into the kubelet.kubeconfig file
Pull the Pod base image
Start the services and verify
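The kubelet.kubeconfig file is simply the output of `kubectl config view` after the three set-cluster/set-context commands. Roughly, its minimal shape is the following (sketched here as a Python dict dumped as JSON, which YAML parsers also accept; the real `kubectl config view` output has a few more empty fields, and the values below are this cluster's):

```python
import json

# minimal shape of the kubelet.kubeconfig generated by the playbook's
# `kubectl config set-cluster local` / `set-context` steps
kubeconfig = {
    "apiVersion": "v1",
    "kind": "Config",
    "clusters": [
        {"name": "local",
         "cluster": {"server": "http://192.168.1.10:8080"}},
    ],
    "contexts": [
        {"name": "local", "context": {"cluster": "local"}},
    ],
    "current-context": "local",
}
print(json.dumps(kubeconfig, indent=2))
```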

Writing the playbook:

install_kube-node_playbook.yml

[liruilong@master ansible]$ cat 
- name: install kube-node or nodes
  hosts: nodes
  tasks:
    - shell: swapoff -a
    - replace:
        path: /etc/fstab
        regexp: "/dev/mapper/centos-swap"
        replace: "#/dev/mapper/centos-swap"
    - shell: cat /etc/fstab
      register: out
    - debug: msg="{{out}}"
    - shell: getenforce
      register: out
    - debug: msg="{{out}}"
    - shell: setenforce 0
      when: out.stdout != "Disabled"
    - replace:
        path: /etc/selinux/config
        regexp: "SELINUX=enforcing"
        replace: "SELINUX=disabled"
    - shell: cat /etc/selinux/config
      register: out
    - debug: msg="{{out}}"
    - yum_repository:
        name: Kubernetes
        description: K8s aliyun yum
        baseurl: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
        gpgcheck: yes
        gpgkey: https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
        repo_gpgcheck: yes
        enabled: yes
    - yum:
        name: kubernetes-node
        state: absent
    - yum:
        name: kubernetes-node
        state: present
    - lineinfile:
        path: /etc/kubernetes/config
        regexp: KUBE_MASTER="--master=http://127.0.0.1:8080"
        line:   KUBE_MASTER="--master=http://192.168.1.10:8080"
    - lineinfile:
        path: /etc/kubernetes/kubelet
        regexp: KUBELET_ADDRESS="--address=127.0.0.1"
        line:   KUBELET_ADDRESS="--address=0.0.0.0"
    - lineinfile:
        path: /etc/kubernetes/kubelet
        regexp: KUBELET_HOSTNAME="--hostname-override=127.0.0.1"
        line:   KUBELET_HOSTNAME="--hostname-override={{inventory_hostname}}"
    - lineinfile:
        path: /etc/kubernetes/kubelet
        regexp: KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"
        line:   KUBELET_API_SERVER="--api-servers=http://192.168.1.10:8080"
    - lineinfile:
        path: /etc/kubernetes/kubelet
        regexp: KUBELET_ARGS=""
        line:   KUBELET_ARGS="--cgroup-driver=systemd   --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
    - shell: kubectl config set-cluster local --server="http://192.168.1.10:8080"
    - shell: kubectl config set-context --cluster="local" local
    - shell: kubectl config set current-context local
    - shell: kubectl config view
      register: out
    - debug: msg="{{out}}"
    - copy:
        dest: /etc/kubernetes/kubelet.kubeconfig
        content: "{{out.stdout}}"
        force: yes
    - shell: docker pull tianyebj/pod-infrastructure:latest
    - service:
        name: kubelet
        state: restarted
        enabled: yes
    - service:
        name: kube-proxy
        state: restarted
        enabled: yes

- name: service check
  hosts: master
  tasks:
    - shell: sleep 10
      async: 11
    - shell: kubectl get node
      register: out
    - debug: msg="{{out}}"           

install_kube-node_playbook.yml

[liruilong@master ansible]$ ansible-playbook install_kube-node_playbook.yml
........
...
TASK [debug] **************************************************************************************************************************************************************************************************
ok: [master] => {
    "msg": {
        "changed": true,
        "cmd": "kubectl get node",
        "delta": "0:00:00.579772",
        "end": "2021-09-15 02:00:34.829752",
        "failed": false,
        "rc": 0,
        "start": "2021-09-15 02:00:34.249980",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "NAME      STATUS    AGE\nnode1     Ready     1d\nnode2     Ready     1d",
        "stdout_lines": [
            "NAME      STATUS    AGE",
            "node1     Ready     1d",
            "node2     Ready     1d"
        ]
    }
}

PLAY RECAP ****************************************************************************************************************************************************************************************************
master                     : ok=4    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
node1                      : ok=27   changed=19   unreachable=0    failed=0    skipped=1    rescued=0    ignored=0
node2                      : ok=27   changed=19   unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

[liruilong@master ansible]$           

6. Installing and deploying kube-dashboard

Install the dashboard image: kubernetes-dashboard is the web management panel for kubernetes. Its version must match the K8s version, and so must the config file.

[liruilong@master ansible]$ ansible node1 -m shell -a 'docker search kubernetes-dashboard'
[liruilong@master ansible]$ ansible node1 -m shell -a 'docker pull  docker.io/rainf/kubernetes-dashboard-amd64'           

The kube-dashboard.yaml file: edit the dashboard yaml file, operating on kube-master

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
      # Comment the following annotation if Dashboard must not be deployed on master
      annotations:
        scheduler.alpha.kubernetes.io/tolerations: |
          [
            {
              "key": "dedicated",
              "operator": "Equal",
              "value": "master",
              "effect": "NoSchedule"
            }
          ]
    spec:
      containers:
      - name: kubernetes-dashboard
        image:  docker.io/rainf/kubernetes-dashboard-amd64      # the default image is Google's; here we switch to one from the Docker Hub registry
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          - --apiserver-host=http://192.168.1.10:8080    # Note: this is the API server address
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
    nodePort: 30090
  selector:
    app: kubernetes-dashboard           
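
The Service above chains three ports together: clients hit `nodePort` 30090 on a node's IP, the Service forwards that to its cluster-internal `port` 80, which maps to the container's `targetPort` 9090. A small sketch of the resulting URL (the node IP is an assumption; use whichever node ends up running the pod):

```shell
# nodePort chain: client -> <node-ip>:30090 -> Service :80 -> container :9090
NODE_IP=192.168.1.11   # assumption: IP of the node running the dashboard pod
NODE_PORT=30090        # nodePort from the Service spec above
echo "http://${NODE_IP}:${NODE_PORT}/"
```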

Create the dashboard container from the YAML file; perform this step on kube-master.

[liruilong@master ansible]$ vim kube-dashboard.yaml
[liruilong@master ansible]$ kubectl create -f kube-dashboard.yaml
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
[liruilong@master ansible]$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   kubernetes-dashboard-1953799730-jjdfj   1/1       Running   0          6s
[liruilong@master ansible]$           
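
The `Running` status in the output above is what you wait for before going further. A minimal sketch of scripting that check (the here-doc stands in for live `kubectl get pods --all-namespaces` output, copied from above):

```shell
# Extract the STATUS column for the dashboard pod.
# In practice, replace the here-doc with:  kubectl get pods --all-namespaces
status=$(awk '/kubernetes-dashboard/ {print $4}' <<'EOF'
kube-system   kubernetes-dashboard-1953799730-jjdfj   1/1       Running   0          6s
EOF
)
echo "$status"
```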

Check which node it landed on, then try accessing it.

[liruilong@master ansible]$ ansible nodes -a  "docker ps"
node2 | CHANGED | rc=0 >>
CONTAINER ID        IMAGE                                                        COMMAND                  CREATED             STATUS              PORTS               NAMES
14433d421746        docker.io/rainf/kubernetes-dashboard-amd64                   "/dashboard --port..."   10 minutes ago      Up 10 minutes                           k8s_kubernetes-dashboard.c82dac6b_kubernetes-dashboard-1953799730-jjdfj_kube-system_ea2ec370-1594-11ec-bbb1-000c294efe34_9c65bb2a
afc4d4a56eab        registry.access.redhat.com/rhel7/pod-infrastructure:latest   "/usr/bin/pod"           10 minutes ago      Up 10 minutes                           k8s_POD.28c50bab_kubernetes-dashboard-1953799730-jjdfj_kube-system_ea2ec370-1594-11ec-bbb1-000c294efe34_6851b7ee
node1 | CHANGED | rc=0 >>
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[liruilong@master ansible]$           

Since the container is running on node2, the dashboard can be reached at

http://192.168.1.11:30090/

Let's give it a try:

(screenshot: kubernetes-dashboard web UI)

Postscript

And with that, we have completed the entire

Linux+Docker+Ansible+K8S

learning environment setup. The K8s deployment method used here is somewhat dated, but when you are just starting out, take it one step at a time. Now, on to some happy K8s learning.