
Building a TiDB v5.4 Lab Environment with Vagrant + VirtualBox VMs

# Lab Environment Configuration

Hardware: Intel i7 (8 cores) + 16 GB RAM + 1 TB SSD

Software: Oracle VM VirtualBox 6.1.26 + Vagrant 2.2.16

ISO: CentOS-7.9-x86_64-DVD-2009

TiDB version: v5.4

Number of VMs: 5

Per-VM configuration: 1 vCPU, 2 GB memory, 50 GB disk

VM node details:

| Component | VM name | Hostname | IP address | Count |
| ------------ | ------------ | ------------ | -------------- | -- |
| pd | tidb-pd | tidb-pd | 192.168.56.160 | 1 |
| alertmanager | tidb-pd | tidb-pd | 192.168.56.160 | |
| prometheus | tidb-pd | tidb-pd | 192.168.56.160 | |
| grafana | tidb-pd | tidb-pd | 192.168.56.160 | |
| tidb-server | tidb-server | tidb-tidb | 192.168.56.161 | 1 |
| tikv1 | tidb-tikv1 | tidb-tikv1 | 192.168.56.162 | 1 |
| tikv2 | tidb-tikv2 | tidb-tikv2 | 192.168.56.163 | 1 |
| tiflash | tidb-tiflash | tidb-tiflash | 192.168.56.164 | 1 |

Network port requirements for each component

| Component | Default port | Description |
| ------------ | ------------ | ------------ |
| TiDB | 4000 | Communication port for applications and DBA tools |
| TiDB | 10080 | TiDB status reporting port |
| TiKV | 20160 | TiKV communication port |
| TiKV | 20180 | TiKV status reporting port |
| PD | 2379 | Communication port between TiDB and PD |
| PD | 2380 | Inter-node communication port within the PD cluster |
| TiFlash | 9000 | TiFlash TCP service port |
| TiFlash | 8123 | TiFlash HTTP service port |
| TiFlash | 3930 | TiFlash Raft and Coprocessor service port |
| TiFlash | 20170 | TiFlash Proxy service port |
| TiFlash | 20292 | Port for Prometheus to pull TiFlash Proxy metrics |
| TiFlash | 8234 | Port for Prometheus to pull TiFlash metrics |
| Pump | 8250 | Pump communication port |
| Drainer | 8249 | Drainer communication port |
| CDC | 8300 | CDC communication port |
| Prometheus | 9090 | Prometheus service communication port |
| Node_exporter | 9100 | System information reporting port on every TiDB cluster node |
| Blackbox_exporter | 9115 | Blackbox_exporter communication port, used for port monitoring across the TiDB cluster |
| Grafana | 3000 | Web monitoring service port for external/browser access |
| Alertmanager | 9093 | Alerting web service port |
| Alertmanager | 9094 | Alerting communication port |
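
Once the cluster is running, a quick way to confirm the key ports above are reachable is a minimal sketch like the one below. It uses bash's built-in /dev/tcp so no extra tools are needed; the IPs and ports follow the node and port tables above, and only the main service ports are checked.

#!/bin/bash
# Reachability check for the main service ports (run after the cluster is deployed).
check() {
    if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
        echo "$3 ($1:$2) reachable"
    else
        echo "$3 ($1:$2) NOT reachable"
    fi
}
check 192.168.56.160 2379  "PD client"
check 192.168.56.161 4000  "TiDB SQL"
check 192.168.56.161 10080 "TiDB status"
check 192.168.56.162 20160 "TiKV 1"
check 192.168.56.163 20160 "TiKV 2"
check 192.168.56.164 9000  "TiFlash TCP"
check 192.168.56.160 3000  "Grafana"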

Installing VirtualBox and Vagrant on Windows 10

Download links

Oracle VM VirtualBox download: https://www.virtualbox.org/wiki/Downloads

Vagrant download: https://www.vagrantup.com/downloads

Vagrant box search: https://app.vagrantup.com/boxes/search?utf8=%E2%9C%93&sort=downloads&provider=&q=centos7

Installing Oracle VM VirtualBox

  • Download the installation files

VirtualBox is an open-source virtualization product in the same category as VMware, used to create virtual machines on your current computer.

VirtualBox 6.1.34 download: https://download.virtualbox.org/virtualbox/6.1.34/VirtualBox-6.1.34a-150636-Win.exe

Oracle VM VirtualBox Extension Pack 6.1.34 download: https://download.virtualbox.org/virtualbox/6.1.34/Oracle_VM_VirtualBox_Extension_Pack-6.1.34.vbox-extpack

  • Install VirtualBox
  • Double-click the downloaded VirtualBox-6.1.34a-150636-Win.exe file to start the installation.
  • Click "Next".
  • Set the installation location and click "Next".

Click "Next".


Click "Yes".


Click "Install".


Click "Finish".

The installation is straightforward: just follow the prompts to complete the VirtualBox setup.

  • Install the VirtualBox Extension Pack
  • Double-click the downloaded "Oracle_VM_VirtualBox_Extension_Pack-6.1.34.vbox-extpack" file and follow the prompts to install it.

Adjusting VirtualBox settings

  • Change the default VM storage path

In the menu, go to "File" --> "Preferences" and change "Default Machine Folder" to g:\ovm_machine.

  • Add a host-only network adapter
  • In the menu, go to "File" --> "Host Network Manager" and click "Create".

Installing Vagrant

Vagrant 2.2.19 for Windows download: https://releases.hashicorp.com/vagrant/2.2.19/vagrant_2.2.19_x86_64.msi

Double-click "vagrant_2.2.19_x86_64.msi" to start the installation.


Click "Next".


Select the checkbox, then click "Next".


Set the installation path and click "Next".


Click "Install".


Click "Finish" to complete the installation.

Adding Vagrant to the Path environment variable

Right-click "This PC", choose "Properties", then click "Advanced system settings".


In the dialog that opens, select the "Advanced" tab and click "Environment Variables".


Under system variables, select "Path", click "Edit", and add "G:\HashiCorp\Vagrant\bin" as a new entry.

Checking the installed Vagrant version

Open a cmd window and run vagrant -v.


Using Vagrant

Creating a VM with Vagrant

Finding a box image

Search online for the box you need at the official site https://app.vagrantup.com/boxes/search, e.g. search for a centos7 box.


Installing online

PS G:\HashiCorp\vagrant_vbox_data\centos7_test> pwd

Path
----
G:\HashiCorp\vagrant_vbox_data\centos7_test

--- Initialize the Vagrantfile
PS G:\HashiCorp\vagrant_vbox_data\centos7_test> vagrant init generic/centos7
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.
PS G:\HashiCorp\vagrant_vbox_data\centos7_test> dir


    Directory: G:\HashiCorp\vagrant_vbox_data\centos7_test


Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
-a----        2022/06/04     15:16           3091 Vagrantfile


PS G:\HashiCorp\vagrant_vbox_data\centos7_test>vagrant up

      

Note: after Vagrant creates a VM, a vagrant user is created by default with password vagrant. The root user's password is also vagrant.
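
For example, to log in and switch to root with those defaults (standard Vagrant usage):

# Log in to the VM as the default vagrant user (key-based authentication)
vagrant ssh
# Inside the guest, switch to root (default password: vagrant)
su -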

Vagrant commands

| Command | Description |
| ------------ | ------------ |
| vagrant up | Start the VM in an initialized folder |
| vagrant status | Check the VM's running state |
| vagrant ssh | SSH into the running VM |
| vagrant suspend | Suspend the running VM |
| vagrant resume | Wake up a suspended VM |
| vagrant reload | Restart the VM (equivalent to halt followed by up) |
| vagrant halt | Shut down the VM |
| vagrant destroy | Delete the current VM |
| vagrant package | Package the development environment from the terminal |

Box management commands

| Command | Description |
| ------------ | ------------ |
| vagrant box list | List local boxes |
| vagrant box add name url | Add a box to the list |
| vagrant box remove name | Remove a box from the list |
| vagrant ssh-config | Print the information used for SSH connections |
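
For instance, the CentOS 7 box used throughout this guide can be fetched ahead of time and verified (standard vagrant box usage; the box name matches the Vagrantfile below):

# Download the box once so later "vagrant up" runs don't have to
vagrant box add generic/centos7 --provider virtualbox
# Confirm it is now in the local list
vagrant box list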

Shell files used during the TiDB installation

## File locations

Where the Vagrantfile and shell scripts are stored:

Directory: G:\HashiCorp\vagrant-master\TiDB-5.4


Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
d-----        2022/06/16     17:24                .vagrant
d-----        2022/06/16     17:12                shared_scripts
-a----        2022/06/16     17:29           1938 Vagrantfile

PS G:\HashiCorp\vagrant-master\TiDB-5.4> tree /F
Folder PATH listing for volume SSD
Volume serial number is E22C-4CB0
G:.
│ Vagrantfile
│
└─shared_scripts
        root_setup.sh
        setup.sh
        shell_init_os.sh
        tiup_deploy.sh
       

Notes:

  • The shared_scripts directory holds the system-configuration scripts used to initialize the VMs.
  1. setup.sh: called by the Vagrantfile for system configuration; it simply runs root_setup.sh as root.
  2. root_setup.sh: sets the hostname and sshd configuration, then calls the shell_init_os.sh script.
  3. shell_init_os.sh: configures the operating system before TiDB is installed.
  4. tiup_deploy.sh: installs the TiUP tooling.
  • The Vagrantfile is Vagrant's VM configuration file.

Contents of setup.sh

#!/bin/bash
sudo bash -c 'sh /vagrant_scripts/root_setup.sh'      

Contents of root_setup.sh

#!/bin/bash
if [ -f /vagrant_config/install.env ]; then
    . /vagrant_config/install.env
fi

# Configure the HTTP proxy
echo "******************************************************************************"
echo "set http proxy." `date`
echo "******************************************************************************"
if [ "$HTTP_PROXY" != "" ]; then
    echo "http_proxy=http://${HTTP_PROXY}" >> /etc/profile
    echo "https_proxy=http://${HTTP_PROXY}" >> /etc/profile
    echo "export http_proxy https_proxy" >> /etc/profile
    source /etc/profile
fi

# Install packages
yum install -y wget net-tools sshpass

# Set the PS1 prompt and aliases
export LS_COLORS='no=00:fi=00:di=01;33;40:ln=01;36;40:'
export PS1="\[\033[01;35m\][\[\033[00m\]\[\033[01;32m\]\u@\h\[\033[00m\] \[\033[01;34m\]\w\[\033[00m\]\[\033[01;35m\]]\[\033[00m\]\$ "
echo "alias l='ls -lrtha'" >>/root/.bashrc
#echo "alias vi=vim" >>/root/.bashrc
source /root/.bashrc

# Change the root password
if [ "$ROOT_PASSWORD" == "" ]; then
    ROOT_PASSWORD="rootpasswd"
fi

echo "******************************************************************************"
echo "Set root password and change ownership." `date`
echo "******************************************************************************"
echo -e "${ROOT_PASSWORD}\n${ROOT_PASSWORD}" | passwd

# Set the time zone
timedatectl set-timezone Asia/Shanghai

# Stop and disable firewalld
systemctl stop firewalld.service
systemctl disable firewalld.service

# Disable SELinux
sed -i "s?SELINUX=enforcing?SELINUX=disabled?" /etc/selinux/config
setenforce  0

# Configure sshd_config
echo "******************************************************************************"
echo "Set sshd service and disable firewalld service." `date`
echo "******************************************************************************"
sed -i "s?^#PermitRootLogin yes?PermitRootLogin yes?" /etc/ssh/sshd_config
sed -i "s?^#PasswordAuthentication yes?PasswordAuthentication yes?" /etc/ssh/sshd_config
sed -i "s?^PasswordAuthentication no?#PasswordAuthentication no?" /etc/ssh/sshd_config
sed -i '/StrictHostKeyChecking/s/^#//; /StrictHostKeyChecking/s/ask/no/' /etc/ssh/ssh_config
systemctl restart sshd.service

# Set the hostname
if [ "$PUBLIC_SUBNET" != "" ]; then
    IP_NET=`echo $PUBLIC_SUBNET |cut -d"." -f1,2,3`
    IPADDR=`ip addr |grep $IP_NET |awk -F"/" '{print $1}'|awk -F" " '{print $2}'`
    PRIF=`grep $IPADDR /vagrant_config/install.env |awk -F"_" '{print $1}'`
    if [ "$PRIF" != "" ]; then
        HOSTNAME=`grep $PRIF"_HOSTNAME" /vagrant_config/install.env |awk -F"=" '{print $2}'`
        hostnamectl set-hostname $HOSTNAME

        # Update /etc/hosts
        CNT=`grep $IPADDR /etc/hosts|wc -l `
        if [ "$CNT" == "0" ]; then
            echo "$IPADDR   $HOSTNAME">> /etc/hosts
        fi
    fi
fi

# Initialize the OS configuration
if [ -f /vagrant_scripts/shell_init_os.sh ]; then
    sh /vagrant_scripts/shell_init_os.sh
fi      
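
root_setup.sh and shell_init_os.sh read their variables from /vagrant_config/install.env, which is not shown here (the matching synced_folder line in the Vagrantfile below is commented out, so the scripts fall back to their built-in defaults). The following is a hypothetical sketch of such a file, inferred from the variables the scripts reference:

# /vagrant_config/install.env -- hypothetical example, inferred from the scripts
# HTTP_PROXY=proxy.example.com:3128   # optional; leave unset for direct access
ROOT_PASSWORD=rootpasswd
TIDB_PASSWORD=tidbpasswd
PUBLIC_SUBNET=192.168.56.0
# Per-node entries: the prefix before "_" on the line matching a node's IP
# is used to look up the corresponding *_HOSTNAME entry.
PD_IPADDR=192.168.56.160
PD_HOSTNAME=tidb-pd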

Contents of shell_init_os.sh

#!/bin/bash
# 1. Check and disable system swap
echo "vm.swappiness = 0">> /etc/sysctl.conf
swapoff -a && swapon -a
sysctl -p

# 2. Check and disable the firewall on the target machines
# Stop and disable firewalld
systemctl stop firewalld.service
systemctl disable firewalld.service

# Disable SELinux
sed -i "s?SELINUX=enforcing?SELINUX=disabled?" /etc/selinux/config
setenforce  0

# 3. Check and install the NTP service
yum -y install numactl 
yum -y install ntp ntpdate 

# Enable and start NTP
systemctl status ntpd.service
systemctl start ntpd.service 
systemctl enable ntpd.service
ntpstat

# 4. Check and apply OS tuning parameters
# Disable THP and NUMA at boot
RESULT=`grep "GRUB_CMDLINE_LINUX" /etc/default/grub |grep "transparent_hugepage"`
if [ "$RESULT" == "" ]; then
    \cp /etc/default/grub /etc/default/grub.bak
    sed -i 's#quiet#quiet transparent_hugepage=never numa=off#g' /etc/default/grub
    grub2-mkconfig -o /boot/grub2/grub.cfg
    if [ -f /boot/efi/EFI/redhat/grub.cfg ]; then
        grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
    fi
fi 

# Disable transparent huge pages at runtime
if [ -d /sys/kernel/mm/transparent_hugepage ]; then
    thp_path=/sys/kernel/mm/transparent_hugepage
elif [ -d /sys/kernel/mm/redhat_transparent_hugepage ]; then
    thp_path=/sys/kernel/mm/redhat_transparent_hugepage
fi
echo "echo 'never' > ${thp_path}/enabled" >>  /etc/rc.d/rc.local
echo "echo 'never' > ${thp_path}/defrag"  >>  /etc/rc.d/rc.local    
echo 'never' > ${thp_path}/enabled
echo 'never' > ${thp_path}/defrag   
chmod +x /etc/rc.d/rc.local

# Create the CPU power-policy configuration service.
 
# Start the irqbalance service
systemctl start irqbalance 
systemctl enable irqbalance

# Adjust sysctl parameters.
echo "fs.file-max = 1000000">> /etc/sysctl.conf
echo "net.core.somaxconn = 32768">> /etc/sysctl.conf
echo "net.ipv4.tcp_tw_recycle = 0">> /etc/sysctl.conf
echo "net.ipv4.tcp_syncookies = 0">> /etc/sysctl.conf
echo "vm.overcommit_memory = 1">> /etc/sysctl.conf
sysctl -p

# Configure the limits.conf file for the tidb user
cat << EOF >>/etc/security/limits.conf
tidb           soft    nofile          1000000
tidb           hard    nofile          1000000
tidb           soft    stack          32768
tidb           hard    stack          32768
EOF

# Create the tidb user
if [ "$TIDB_PASSWORD" == "" ]; then
    TIDB_PASSWORD="tidbpasswd"
fi
TIDB_PWD=`echo "$TIDB_PASSWORD" |openssl passwd -stdin`
useradd tidb -p "$TIDB_PWD" -m

#将tidb加入sudo
echo "tidb ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers      

Contents of tiup_deploy.sh

#!/bin/bash

if [ -f /home/vagrant/Vagrantfile ]; then
  for siteip in  `cat /home/vagrant/Vagrantfile |grep ":eth1 =>" |awk -F"\"" '{print $2}'`; do     
    ping -c1 -W1 ${siteip} &> /dev/null
    if [ "$?" == "0" ]; then
      echo "$siteip is UP"
    else
      echo "$siteip is DOWN"
      exit -1
    fi
    
    if [ -f /root/.ssh/known_hosts ]; then
      sed -i "/${siteip}/d" /root/.ssh/known_hosts
    fi    
  done
fi

# Set up passwordless SSH
if [ "$ROOT_PASSWORD" == "" ]; then
  ROOT_PASSWORD="rootpasswd"
fi

rm -f ~/.ssh/id_rsa && ssh-keygen -q -t rsa -N '' -f ~/.ssh/id_rsa <<<y >/dev/null 2>&1
 
for ipaddr in `cat /home/vagrant/Vagrantfile |grep ":eth1 =>" |awk -F"\"" '{print $2}'`; do
    sshpass -p $ROOT_PASSWORD ssh-copy-id $ipaddr
done

# Download the TiUP installer
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh 

# Load the tiup environment variables
source ~/.bash_profile

# Install the TiUP cluster component
tiup cluster

# Update TiUP and the cluster component to the latest version
tiup update --self && tiup update cluster 

# Show the installed TiUP cluster binary
echo "view tiup cluster version"
tiup --binary cluster


# Generate the TiDB topology file
cat > ~/topology.yaml<<EOF
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"
  arch: "amd64"

monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115


pd_servers:
  - host: 192.168.56.160
 

tidb_servers:
  - host: 192.168.56.161

tikv_servers:
  - host: 192.168.56.162
  - host: 192.168.56.163
  
tiflash_servers:
  - host: 192.168.56.164

monitoring_servers:
  - host: 192.168.56.160

grafana_servers:
  - host: 192.168.56.160

alertmanager_servers:
  - host: 192.168.56.160
EOF      
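
After the script finishes, a quick sanity check that TiUP landed correctly (the last command is the same one the script itself runs):

source /root/.bash_profile   # pick up the PATH change made by the installer
tiup --version               # TiUP itself
tiup --binary cluster        # path of the installed cluster component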

Creating the Vagrantfile

Create the Vagrantfile; the boxes entries configure each VM's IP address, hostname, memory, and CPU:

boxes = [
    {
        :name => "tidb-pd",
        :eth1 => "192.168.56.160",
        :mem => "2048",
        :cpu => "1",
        :sshport => 22230
    },
    {
        :name => "tidb-server",
        :eth1 => "192.168.56.161",
        :mem => "2048",
        :cpu => "1",
        :sshport => 22231
    },
    {
        :name => "tidb-tikv1",
        :eth1 => "192.168.56.162",
        :mem => "2048",
        :cpu => "1",
        :sshport => 22232
    },
    {
        :name => "tidb-tikv2",
        :eth1 => "192.168.56.163",
        :mem => "2048",
        :cpu => "1",
        :sshport => 22233
    },
    {
        :name => "tidb-tiflash",
        :eth1 => "192.168.56.164",
        :mem => "2048",
        :cpu => "1",
        :sshport => 22234
    }
]
Vagrant.configure(2) do |config|
    config.vm.box = "generic/centos7"
    Encoding.default_external = 'UTF-8'
    config.vm.synced_folder ".", "/home/vagrant"
    #config.vm.synced_folder "./config", "/vagrant_config"
    config.vm.synced_folder "./shared_scripts", "/vagrant_scripts"
   
    
    boxes.each do |opts|
        config.vm.define opts[:name] do |config|
            config.vm.hostname = opts[:name]
            config.vm.network "private_network", ip: opts[:eth1]
            config.vm.network "forwarded_port", guest: 22, host: 2222, id: "ssh", disabled: "true"
            config.vm.network "forwarded_port", guest: 22, host: opts[:sshport]
            #config.ssh.username = "root"
            #config.ssh.password = "root"
            #config.ssh.port=opts[:sshport]
            #config.ssh.insert_key = false
            #config.vm.synced_folder ".", "/vagrant", type: "rsync" 
            config.vm.provider "vmware_fusion" do |v|
                v.vmx["memsize"] = opts[:mem]
                v.vmx["numvcpus"] = opts[:cpu]
            end
            config.vm.provider "virtualbox" do |v|
                v.memory = opts[:mem];
                v.cpus = opts[:cpu];
                v.name = opts[:name];
                v.customize ['storageattach', :id, '--storagectl', "IDE Controller", '--port', '1', '--device', '0','--type', 'dvddrive', '--medium', 'G:\HashiCorp\repo_vbox\CentOS7\CentOS-7.9-x86_64-DVD-2009.iso']
            end
        end
    end
    
    
    config.vm.provision "shell", inline: <<-SHELL
        sh /vagrant_scripts/setup.sh
    SHELL
  
end
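
With this Vagrantfile in place, the whole cluster can be brought up in one shot or node by node (standard Vagrant usage):

# Bring up all five VMs defined in the boxes array
vagrant up
# Or bring up / re-provision a single node by name
vagrant up tidb-pd
vagrant provision tidb-pd
# Check the state of all nodes
vagrant status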
      

Running vagrant to create the VMs

Run vagrant up in a PowerShell or cmd window to create the VMs. The following is the output from creating one of them:

G:\HashiCorp\vagrant_vbox_data\TiDB-5.4>vagrant up
==> tidb-tiflash: Importing base box 'generic/centos7'...
==> tidb-tiflash: Matching MAC address for NAT networking...
==> tidb-tiflash: Checking if box 'generic/centos7' version '3.6.10' is up to date...
==> tidb-tiflash: Setting the name of the VM: tidb-tiflash
==> tidb-tiflash: Clearing any previously set network interfaces...
==> tidb-tiflash: Preparing network interfaces based on configuration...
    tidb-tiflash: Adapter 1: nat
    tidb-tiflash: Adapter 2: hostonly
==> tidb-tiflash: Forwarding ports...
    tidb-tiflash: 22 (guest) => 22234 (host) (adapter 1)
==> tidb-tiflash: Running 'pre-boot' VM customizations...
==> tidb-tiflash: Booting VM...
==> tidb-tiflash: Waiting for machine to boot. This may take a few minutes...
    tidb-tiflash: SSH address: 127.0.0.1:22234
    tidb-tiflash: SSH username: vagrant
    tidb-tiflash: SSH auth method: private key
    tidb-tiflash:
    tidb-tiflash: Vagrant insecure key detected. Vagrant will automatically replace
    tidb-tiflash: this with a newly generated keypair for better security.
    tidb-tiflash:
    tidb-tiflash: Inserting generated public key within guest...
    tidb-tiflash: Removing insecure key from the guest if it's present...
    tidb-tiflash: Key inserted! Disconnecting and reconnecting using new SSH key...
==> tidb-tiflash: Machine booted and ready!
==> tidb-tiflash: Checking for guest additions in VM...
    tidb-tiflash: The guest additions on this VM do not match the installed version of
    tidb-tiflash: VirtualBox! In most cases this is fine, but in rare cases it can
    tidb-tiflash: prevent things such as shared folders from working properly. If you see
    tidb-tiflash: shared folder errors, please make sure the guest additions within the
    tidb-tiflash: virtual machine match the version of VirtualBox you have installed on
    tidb-tiflash: your host and reload your VM.
    tidb-tiflash:
    tidb-tiflash: Guest Additions Version: 5.2.44
    tidb-tiflash: VirtualBox Version: 6.1
==> tidb-tiflash: Setting hostname...
==> tidb-tiflash: Configuring and enabling network interfaces...
==> tidb-tiflash: Mounting shared folders...
    tidb-tiflash: /home/vagrant => G:/HashiCorp/vagrant_vbox_data/TiDB-5.4
    tidb-tiflash: /vagrant_scripts => G:/HashiCorp/vagrant_vbox_data/TiDB-5.4/shared_scripts
==> tidb-tiflash: Running provisioner: shell...
    tidb-tiflash: Running: inline script
    tidb-tiflash: ******************************************************************************
    tidb-tiflash: set http proxy. Thu Jun 16 09:48:05 UTC 2022
    tidb-tiflash: ******************************************************************************
    tidb-tiflash: Loaded plugins: fastestmirror
    tidb-tiflash: Determining fastest mirrors
    tidb-tiflash:  * base: mirrors.ustc.edu.cn
    tidb-tiflash:  * epel: mirrors.bfsu.edu.cn
    tidb-tiflash:  * extras: mirrors.ustc.edu.cn
    tidb-tiflash:  * updates: mirrors.ustc.edu.cn
    tidb-tiflash: Package wget-1.14-18.el7_6.1.x86_64 already installed and latest version
    tidb-tiflash: Package net-tools-2.0-0.25.20131004git.el7.x86_64 already installed and latest version
    tidb-tiflash: Resolving Dependencies
    tidb-tiflash: --> Running transaction check
    tidb-tiflash: ---> Package sshpass.x86_64 0:1.06-2.el7 will be installed
    tidb-tiflash: --> Finished Dependency Resolution
    tidb-tiflash:
    tidb-tiflash: Dependencies Resolved
    tidb-tiflash:
    tidb-tiflash: ================================================================================
    tidb-tiflash:  Package           Arch             Version              Repository        Size
    tidb-tiflash: ================================================================================
    tidb-tiflash: Installing:
    tidb-tiflash:  sshpass           x86_64           1.06-2.el7           extras            21 k
    tidb-tiflash:
    tidb-tiflash: Transaction Summary
    tidb-tiflash: ================================================================================
    tidb-tiflash: Install  1 Package
    tidb-tiflash:
    tidb-tiflash: Total download size: 21 k
    tidb-tiflash: Installed size: 38 k
    tidb-tiflash: Downloading packages:
    tidb-tiflash: Running transaction check
    tidb-tiflash: Running transaction test
    tidb-tiflash: Transaction test succeeded
    tidb-tiflash: Running transaction
    tidb-tiflash:   Installing : sshpass-1.06-2.el7.x86_64                                    1/1
    tidb-tiflash:   Verifying  : sshpass-1.06-2.el7.x86_64                                    1/1
    tidb-tiflash:
    tidb-tiflash: Installed:
    tidb-tiflash:   sshpass.x86_64 0:1.06-2.el7
    tidb-tiflash:
    tidb-tiflash: Complete!
    tidb-tiflash: ******************************************************************************
    tidb-tiflash: Set root password and change ownership. Thu Jun 16 09:49:49 UTC 2022
    tidb-tiflash: ******************************************************************************
    tidb-tiflash: New password: BAD PASSWORD: The password contains the user name in some form
    tidb-tiflash: Changing password for user root.
    tidb-tiflash: passwd: all authentication tokens updated successfully.
    tidb-tiflash: Retype new password: Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
    tidb-tiflash: Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
    tidb-tiflash: ******************************************************************************
    tidb-tiflash: Set sshd service and disable firewalld service. Thu Jun 16 17:49:50 CST 2022
    tidb-tiflash: ******************************************************************************
    tidb-tiflash: net.ipv6.conf.all.disable_ipv6 = 1
    tidb-tiflash: vm.swappiness = 0
    tidb-tiflash: Loaded plugins: fastestmirror
    tidb-tiflash: Loading mirror speeds from cached hostfile
    tidb-tiflash:  * base: mirrors.ustc.edu.cn
    tidb-tiflash:  * epel: mirrors.bfsu.edu.cn
    tidb-tiflash:  * extras: mirrors.ustc.edu.cn
    tidb-tiflash:  * updates: mirrors.ustc.edu.cn
    tidb-tiflash: Resolving Dependencies
    tidb-tiflash: --> Running transaction check
    tidb-tiflash: ---> Package numactl.x86_64 0:2.0.12-5.el7 will be installed
    tidb-tiflash: --> Finished Dependency Resolution
    tidb-tiflash:
    tidb-tiflash: Dependencies Resolved
    tidb-tiflash:
    tidb-tiflash: ================================================================================
    tidb-tiflash:  Package           Arch             Version                Repository      Size
    tidb-tiflash: ================================================================================
    tidb-tiflash: Installing:
    tidb-tiflash:  numactl           x86_64           2.0.12-5.el7           base            66 k
    tidb-tiflash:
    tidb-tiflash: Transaction Summary
    tidb-tiflash: ================================================================================
    tidb-tiflash: Install  1 Package
    tidb-tiflash:
    tidb-tiflash: Total download size: 66 k
    tidb-tiflash: Installed size: 141 k
    tidb-tiflash: Downloading packages:
    tidb-tiflash: Running transaction check
    tidb-tiflash: Running transaction test
    tidb-tiflash: Transaction test succeeded
    tidb-tiflash: Running transaction
    tidb-tiflash:   Installing : numactl-2.0.12-5.el7.x86_64                                  1/1
    tidb-tiflash:   Verifying  : numactl-2.0.12-5.el7.x86_64                                  1/1
    tidb-tiflash:
    tidb-tiflash: Installed:
    tidb-tiflash:   numactl.x86_64 0:2.0.12-5.el7
    tidb-tiflash:
    tidb-tiflash: Complete!
    tidb-tiflash: Loaded plugins: fastestmirror
    tidb-tiflash: Loading mirror speeds from cached hostfile
    tidb-tiflash:  * base: mirrors.ustc.edu.cn
    tidb-tiflash:  * epel: mirrors.bfsu.edu.cn
    tidb-tiflash:  * extras: mirrors.ustc.edu.cn
    tidb-tiflash:  * updates: mirrors.ustc.edu.cn
    tidb-tiflash: Resolving Dependencies
    tidb-tiflash: --> Running transaction check
    tidb-tiflash: ---> Package ntp.x86_64 0:4.2.6p5-29.el7.centos.2 will be installed
    tidb-tiflash: --> Processing Dependency: libopts.so.25()(64bit) for package: ntp-4.2.6p5-29.el7.centos.2.x86_64
    tidb-tiflash: ---> Package ntpdate.x86_64 0:4.2.6p5-29.el7.centos.2 will be installed
    tidb-tiflash: --> Running transaction check
    tidb-tiflash: ---> Package autogen-libopts.x86_64 0:5.18-5.el7 will be installed
    tidb-tiflash: --> Finished Dependency Resolution
    tidb-tiflash:
    tidb-tiflash: Dependencies Resolved
    tidb-tiflash:
    tidb-tiflash: ================================================================================
    tidb-tiflash:  Package              Arch        Version                       Repository
    tidb-tiflash:                                                                            Size
    tidb-tiflash: ================================================================================
    tidb-tiflash: Installing:
    tidb-tiflash:  ntp                  x86_64      4.2.6p5-29.el7.centos.2       base      549 k
    tidb-tiflash:  ntpdate              x86_64      4.2.6p5-29.el7.centos.2       base       87 k
    tidb-tiflash: Installing for dependencies:
    tidb-tiflash:  autogen-libopts      x86_64      5.18-5.el7                    base       66 k
    tidb-tiflash:
    tidb-tiflash: Transaction Summary
    tidb-tiflash: ================================================================================
    tidb-tiflash: Install  2 Packages (+1 Dependent package)
    tidb-tiflash:
    tidb-tiflash: Total download size: 701 k
    tidb-tiflash: Installed size: 1.6 M
    tidb-tiflash: Downloading packages:
    tidb-tiflash: --------------------------------------------------------------------------------
    tidb-tiflash: Total                                              309 kB/s | 701 kB  00:02
    tidb-tiflash: Running transaction check
    tidb-tiflash: Running transaction test
    tidb-tiflash: Transaction test succeeded
    tidb-tiflash: Running transaction
    tidb-tiflash:   Installing : autogen-libopts-5.18-5.el7.x86_64                            1/3
    tidb-tiflash:   Installing : ntpdate-4.2.6p5-29.el7.centos.2.x86_64                       2/3
    tidb-tiflash:   Installing : ntp-4.2.6p5-29.el7.centos.2.x86_64                           3/3
    tidb-tiflash:   Verifying  : ntpdate-4.2.6p5-29.el7.centos.2.x86_64                       1/3
    tidb-tiflash:   Verifying  : ntp-4.2.6p5-29.el7.centos.2.x86_64                           2/3
    tidb-tiflash:   Verifying  : autogen-libopts-5.18-5.el7.x86_64                            3/3
    tidb-tiflash:
    tidb-tiflash: Installed:
    tidb-tiflash:   ntp.x86_64 0:4.2.6p5-29.el7.centos.2 ntpdate.x86_64 0:4.2.6p5-29.el7.centos.2
    tidb-tiflash:
    tidb-tiflash: Dependency Installed:
    tidb-tiflash:   autogen-libopts.x86_64 0:5.18-5.el7
    tidb-tiflash:
    tidb-tiflash: Complete!
    tidb-tiflash: ● ntpd.service - Network Time Service
    tidb-tiflash:    Loaded: loaded (/usr/lib/systemd/system/ntpd.service; disabled; vendor preset: disabled)
    tidb-tiflash:    Active: inactive (dead)
    tidb-tiflash: Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
    tidb-tiflash: unsynchronised
    tidb-tiflash:   time server re-starting
    tidb-tiflash:    polling server every 8 s
    tidb-tiflash: Generating grub configuration file ...
    tidb-tiflash: Found linux image: /boot/vmlinuz-3.10.0-1160.59.1.el7.x86_64
    tidb-tiflash: Found initrd image: /boot/initramfs-3.10.0-1160.59.1.el7.x86_64.img
    tidb-tiflash: Found linux image: /boot/vmlinuz-0-rescue-319af63f75e64c3395b38885010692bf
    tidb-tiflash: Found initrd image: /boot/initramfs-0-rescue-319af63f75e64c3395b38885010692bf.img
    tidb-tiflash: done
    tidb-tiflash: net.ipv6.conf.all.disable_ipv6 = 1
    tidb-tiflash: vm.swappiness = 0
    tidb-tiflash: fs.file-max = 1000000
    tidb-tiflash: net.core.somaxconn = 32768
    tidb-tiflash: net.ipv4.tcp_tw_recycle = 0
    tidb-tiflash: net.ipv4.tcp_syncookies = 0
    tidb-tiflash: vm.overcommit_memory = 1      

Logging in to the tidb-pd VM and installing TiUP

Log in as the root user and run the tiup_deploy.sh script to install TiUP:

[root@tidb-pd shared_scripts]$ sh tiup_deploy.sh
192.168.56.160 is UP
192.168.56.161 is UP
192.168.56.162 is UP
192.168.56.163 is UP
192.168.56.164 is UP
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.56.160'"
and check to make sure that only the key(s) you wanted were added.

/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.56.161'"
and check to make sure that only the key(s) you wanted were added.

/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.56.162'"
and check to make sure that only the key(s) you wanted were added.

/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.56.163'"
and check to make sure that only the key(s) you wanted were added.

/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.56.164'"
and check to make sure that only the key(s) you wanted were added.

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 6968k  100 6968k    0     0  1514k      0  0:00:04  0:00:04 --:--:-- 1514k
WARN: adding root certificate via internet: https://tiup-mirrors.pingcap.com/root.json
You can revoke this by remove /root/.tiup/bin/7b8e153f2e2d0928.root.json
Successfully set mirror to https://tiup-mirrors.pingcap.com
Detected shell: bash
Shell profile:  /root/.bash_profile
/root/.bash_profile has been modified to add tiup to PATH
open a new terminal or source /root/.bash_profile to use it
Installed path: /root/.tiup/bin/tiup
===============================================
Have a try:     tiup playground
===============================================
tiup is checking updates for component cluster ...timeout!
The component `cluster` version  is not installed; downloading from repository.
download https://tiup-mirrors.pingcap.com/cluster-v1.10.2-linux-amd64.tar.gz 8.28 MiB / 8.28 MiB 100.00% 2.48 MiB/s
Starting component `cluster`: /root/.tiup/components/cluster/v1.10.2/tiup-cluster
Deploy a TiDB cluster for production

Usage:
  tiup cluster [command]

Available Commands:
  check       Perform preflight checks for the cluster.
  deploy      Deploy a cluster for production
  start       Start a TiDB cluster
  stop        Stop a TiDB cluster
  restart     Restart a TiDB cluster
  scale-in    Scale in a TiDB cluster
  scale-out   Scale out a TiDB cluster
  destroy     Destroy a specified cluster
  clean       (EXPERIMENTAL) Cleanup a specified cluster
  upgrade     Upgrade a specified TiDB cluster
  display     Display information of a TiDB cluster
  prune       Destroy and remove instances that is in tombstone state
  list        List all clusters
  audit       Show audit log of cluster operation
  import      Import an exist TiDB cluster from TiDB-Ansible
  edit-config Edit TiDB cluster config
  show-config Show TiDB cluster config
  reload      Reload a TiDB cluster's config and restart if needed
  patch       Replace the remote package with a specified package and restart the service
  rename      Rename the cluster
  enable      Enable a TiDB cluster automatically at boot
  disable     Disable automatic enabling of TiDB clusters at boot
  replay      Replay previous operation and skip successed steps
  template    Print topology template
  tls         Enable/Disable TLS between TiDB components
  meta        backup/restore meta information
  help        Help about any command
  completion  Generate the autocompletion script for the specified shell

Flags:
  -c, --concurrency int     max number of parallel tasks allowed (default 5)
      --format string       (EXPERIMENTAL) The format of output, available values are [default, json] (default "default")
  -h, --help                help for tiup
      --ssh string          (EXPERIMENTAL) The executor type: 'builtin', 'system', 'none'.
      --ssh-timeout uint    Timeout in seconds to connect host via SSH, ignored for operations that don't need an SSH connection. (default 5)
  -v, --version             version for tiup
      --wait-timeout uint   Timeout in seconds to wait for an operation to complete, ignored for operations that don't fit. (default 120)
  -y, --yes                 Skip all confirmations and assumes 'yes'

Use "tiup cluster help [command]" for more information about a command.
download https://tiup-mirrors.pingcap.com/tiup-v1.10.2-linux-amd64.tar.gz 6.81 MiB / 6.81 MiB 100.00% 3.53 MiB/s
Updated successfully!
component cluster version v1.10.2 is already installed
Updated successfully!
/root/.tiup/components/cluster/v1.10.2/tiup-cluster      

Initializing the cluster topology file

After running the tiup_deploy.sh script, the cluster topology file /home/tidb/topology.yaml has been generated:

[tidb@tidb-pd ~]$ cat topology.yaml
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"
  arch: "amd64"

monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115


pd_servers:
  - host: 192.168.56.160


tidb_servers:
  - host: 192.168.56.161

tikv_servers:
  - host: 192.168.56.162
  - host: 192.168.56.163

tiflash_servers:
  - host: 192.168.56.164

monitoring_servers:
  - host: 192.168.56.160

grafana_servers:
  - host: 192.168.56.160

alertmanager_servers:
  - host: 192.168.56.160      

Running the deployment commands

  • Check the cluster for potential risks:
# tiup cluster check ./topology.yaml
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.10.2/tiup-cluster check ./topology.yaml





+ Detect CPU Arch Name
  - Detecting node 192.168.56.160 Arch info ... Done
  - Detecting node 192.168.56.162 Arch info ... Done
  - Detecting node 192.168.56.163 Arch info ... Done
  - Detecting node 192.168.56.161 Arch info ... Done
  - Detecting node 192.168.56.164 Arch info ... Done





+ Detect CPU OS Name
  - Detecting node 192.168.56.160 OS info ... Done
  - Detecting node 192.168.56.162 OS info ... Done
  - Detecting node 192.168.56.163 OS info ... Done
  - Detecting node 192.168.56.161 OS info ... Done
  - Detecting node 192.168.56.164 OS info ... Done
+ Download necessary tools
  - Downloading check tools for linux/amd64 ... Done
+ Collect basic system information
+ Collect basic system information
+ Collect basic system information
+ Collect basic system information
+ Collect basic system information
+ Collect basic system information
  - Getting system info of 192.168.56.164:22 ... Done
  - Getting system info of 192.168.56.160:22 ... Done
  - Getting system info of 192.168.56.162:22 ... Done
  - Getting system info of 192.168.56.163:22 ... Done
  - Getting system info of 192.168.56.161:22 ... Done
+ Check time zone
  - Checking node 192.168.56.164 ... Done
  - Checking node 192.168.56.160 ... Done
  - Checking node 192.168.56.162 ... Done
  - Checking node 192.168.56.163 ... Done
  - Checking node 192.168.56.161 ... Done
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements
  - Checking node 192.168.56.164 ... Done
  - Checking node 192.168.56.160 ... Done
  - Checking node 192.168.56.162 ... Done
  - Checking node 192.168.56.163 ... Done
  - Checking node 192.168.56.161 ... Done
  - Checking node 192.168.56.160 ... Done
  - Checking node 192.168.56.160 ... Done
  - Checking node 192.168.56.160 ... Done
+ Cleanup check files
  - Cleanup check files on 192.168.56.164:22 ... Done
  - Cleanup check files on 192.168.56.160:22 ... Done
  - Cleanup check files on 192.168.56.162:22 ... Done
  - Cleanup check files on 192.168.56.163:22 ... Done
  - Cleanup check files on 192.168.56.161:22 ... Done
  - Cleanup check files on 192.168.56.160:22 ... Done
  - Cleanup check files on 192.168.56.160:22 ... Done
  - Cleanup check files on 192.168.56.160:22 ... Done
Node            Check         Result  Message
----            -----         ------  -------
192.168.56.161  timezone      Pass    time zone is the same as the first PD machine: Asia/Shanghai
192.168.56.161  os-version    Pass    OS is CentOS Linux 7 (Core) 7.9.2009
192.168.56.161  cpu-cores     Pass    number of CPU cores / threads: 1
192.168.56.161  cpu-governor  Warn    Unable to determine current CPU frequency governor policy
192.168.56.161  swap          Warn    swap is enabled, please disable it for best performance
192.168.56.161  network       Pass    network speed of eth0 is 1000MB
192.168.56.161  network       Pass    network speed of eth1 is 1000MB
192.168.56.161  thp           Pass    THP is disabled
192.168.56.161  command       Pass    numactl: policy: default
192.168.56.161  memory        Pass    memory size is 0MB
192.168.56.161  selinux       Pass    SELinux is disabled
192.168.56.161  service       Fail    service irqbalance is not running
192.168.56.164  memory        Pass    memory size is 0MB
192.168.56.164  network       Pass    network speed of eth0 is 1000MB
192.168.56.164  network       Pass    network speed of eth1 is 1000MB
192.168.56.164  disk          Warn    mount point / does not have 'noatime' option set
192.168.56.164  service       Fail    service irqbalance is not running
192.168.56.164  command       Pass    numactl: policy: default
192.168.56.164  timezone      Pass    time zone is the same as the first PD machine: Asia/Shanghai
192.168.56.164  cpu-governor  Warn    Unable to determine current CPU frequency governor policy
192.168.56.164  swap          Warn    swap is enabled, please disable it for best performance
192.168.56.164  selinux       Pass    SELinux is disabled
192.168.56.164  thp           Pass    THP is disabled
192.168.56.164  os-version    Pass    OS is CentOS Linux 7 (Core) 7.9.2009
192.168.56.164  cpu-cores     Pass    number of CPU cores / threads: 1
192.168.56.160  cpu-governor  Warn    Unable to determine current CPU frequency governor policy
192.168.56.160  memory        Pass    memory size is 0MB
192.168.56.160  network       Pass    network speed of eth0 is 1000MB
192.168.56.160  network       Pass    network speed of eth1 is 1000MB
192.168.56.160  selinux       Pass    SELinux is disabled
192.168.56.160  thp           Pass    THP is disabled
192.168.56.160  os-version    Pass    OS is CentOS Linux 7 (Core) 7.9.2009
192.168.56.160  cpu-cores     Pass    number of CPU cores / threads: 1
192.168.56.160  service       Fail    service irqbalance is not running
192.168.56.160  command       Pass    numactl: policy: default
192.168.56.160  swap          Warn    swap is enabled, please disable it for best performance
192.168.56.160  disk          Warn    mount point / does not have 'noatime' option set
192.168.56.162  os-version    Pass    OS is CentOS Linux 7 (Core) 7.9.2009
192.168.56.162  swap          Warn    swap is enabled, please disable it for best performance
192.168.56.162  network       Pass    network speed of eth0 is 1000MB
192.168.56.162  network       Pass    network speed of eth1 is 1000MB
192.168.56.162  disk          Warn    mount point / does not have 'noatime' option set
192.168.56.162  service       Fail    service irqbalance is not running
192.168.56.162  command       Pass    numactl: policy: default
192.168.56.162  timezone      Pass    time zone is the same as the first PD machine: Asia/Shanghai
192.168.56.162  cpu-cores     Pass    number of CPU cores / threads: 1
192.168.56.162  cpu-governor  Warn    Unable to determine current CPU frequency governor policy
192.168.56.162  memory        Pass    memory size is 0MB
192.168.56.162  selinux       Pass    SELinux is disabled
192.168.56.162  thp           Pass    THP is disabled
192.168.56.163  selinux       Pass    SELinux is disabled
192.168.56.163  thp           Pass    THP is disabled
192.168.56.163  cpu-governor  Warn    Unable to determine current CPU frequency governor policy
192.168.56.163  disk          Warn    mount point / does not have 'noatime' option set
192.168.56.163  cpu-cores     Pass    number of CPU cores / threads: 1
192.168.56.163  swap          Warn    swap is enabled, please disable it for best performance
192.168.56.163  memory        Pass    memory size is 0MB
192.168.56.163  network       Pass    network speed of eth0 is 1000MB
192.168.56.163  network       Pass    network speed of eth1 is 1000MB
192.168.56.163  service       Fail    service irqbalance is not running
192.168.56.163  command       Pass    numactl: policy: default
192.168.56.163  timezone      Pass    time zone is the same as the first PD machine: Asia/Shanghai
192.168.56.163  os-version    Pass    OS is CentOS Linux 7 (Core) 7.9.2009      


  • 自動修複叢集存在的潛在風險:
# tiup cluster check ~/topology.yaml --apply --user root      
  • 部署 TiDB 叢集:
# tiup cluster deploy tidb-test v5.4.1 ./topology.yaml --user root      

In the deployment example above:

  • tidb-test is the name of the cluster being deployed.
  • v5.4.1 is the version of the cluster being deployed; run tiup list tidb to see the latest versions supported by TiUP.
  • topology.yaml is the initialization configuration file.
  • --user root logs in to the target machines as root to complete the deployment; that user needs SSH access to the targets and sudo privileges on them. Any other user with SSH and sudo privileges works as well.
  • [-i] and [-p] are optional. If passwordless login to the targets is already configured, neither is needed; otherwise choose one of them: [-i] points to the private key of root (or the user given by --user) on the target machines, while [-p] prompts interactively for that user's password (see the example below).
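
For example, to deploy using an interactive password prompt instead of pre-configured keys (the flags as documented above):

# tiup cluster deploy tidb-test v5.4.1 ./topology.yaml --user root -p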

The log is expected to end with the keywords Deployed cluster `tidb-test` successfully, which indicates a successful deployment.

Viewing the clusters managed by TiUP

# tiup cluster list      

TiUP supports managing multiple TiDB clusters. This command lists every cluster currently managed via TiUP cluster, including each cluster's name, deployment user, version, and key information.

Checking the deployed TiDB cluster

# tiup cluster display tidb-test      

Starting the cluster

Secure start is a startup mode introduced in TiUP cluster v1.9.0. Starting the database this way improves its security, and it is the recommended mode.

After a secure start, TiUP automatically generates a password for the TiDB root user and prints it on the command line.

Note:

  • After a secure start, you can no longer log in to the database as a passwordless root user; record the password printed on the command line for later operations.
  • The generated password is shown only once. If you did not record it or have forgotten it, follow the "Forget the root password" procedure to change it.

Option 1: secure start

# tiup cluster start tidb-test --init
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.10.2/tiup-cluster start tidb-test --init
Starting cluster tidb-test...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.56.164
+ [Parallel] - UserSSH: user=tidb, host=192.168.56.160
+ [Parallel] - UserSSH: user=tidb, host=192.168.56.160
+ [Parallel] - UserSSH: user=tidb, host=192.168.56.160
+ [Parallel] - UserSSH: user=tidb, host=192.168.56.160
+ [Parallel] - UserSSH: user=tidb, host=192.168.56.162
+ [Parallel] - UserSSH: user=tidb, host=192.168.56.163
+ [Parallel] - UserSSH: user=tidb, host=192.168.56.161
+ [ Serial ] - StartCluster
Starting component pd
        Starting instance 192.168.56.160:2379
        Start instance 192.168.56.160:2379 success
Starting component tikv
        Starting instance 192.168.56.163:20160
        Starting instance 192.168.56.162:20160
        Start instance 192.168.56.163:20160 success
        Start instance 192.168.56.162:20160 success
Starting component tidb
        Starting instance 192.168.56.161:4000
        Start instance 192.168.56.161:4000 success
Starting component tiflash
        Starting instance 192.168.56.164:9000
        Start instance 192.168.56.164:9000 success
Starting component prometheus
        Starting instance 192.168.56.160:9090
        Start instance 192.168.56.160:9090 success
Starting component grafana
        Starting instance 192.168.56.160:3000
        Start instance 192.168.56.160:3000 success
Starting component alertmanager
        Starting instance 192.168.56.160:9093
        Start instance 192.168.56.160:9093 success
Starting component node_exporter
        Starting instance 192.168.56.163
        Starting instance 192.168.56.161
        Starting instance 192.168.56.164
        Starting instance 192.168.56.160
        Starting instance 192.168.56.162
        Start 192.168.56.161 success
        Start 192.168.56.162 success
        Start 192.168.56.163 success
        Start 192.168.56.160 success
        Start 192.168.56.164 success
Starting component blackbox_exporter
        Starting instance 192.168.56.163
        Starting instance 192.168.56.161
        Starting instance 192.168.56.164
        Starting instance 192.168.56.160
        Starting instance 192.168.56.162
        Start 192.168.56.163 success
        Start 192.168.56.162 success
        Start 192.168.56.161 success
        Start 192.168.56.164 success
        Start 192.168.56.160 success
+ [ Serial ] - UpdateTopology: cluster=tidb-test
Started cluster `tidb-test` successfully
The root password of TiDB database has been changed.
The new password is: '45s6W&_w9!1KcB^aH8'.
Copy and record it to somewhere safe, it is only displayed once, and will not be stored.
The generated password can NOT be get and shown again.      

Output like the following indicates a successful start:

Started cluster `tidb-test` successfully.
The root password of TiDB database has been changed.
The new password is: 'y_+3Hwp=*AWz8971s6'.
Copy and record it to somewhere safe, it is only displayed once, and will not be stored.
The generated password can NOT be got again in future.      

Option 2: standard start

# tiup cluster start tidb-test      

The expected output contains Started cluster `tidb-test` successfully, indicating a successful start. After a standard start, you can log in to the database as the passwordless root user.

驗證叢集運作狀态

# tiup cluster display tidb-test
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.10.2/tiup-cluster display tidb-test
Cluster type:       tidb
Cluster name:       tidb-test
Cluster version:    v5.4.1
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://192.168.56.160:2379/dashboard
Grafana URL:        http://192.168.56.160:3000
ID                    Role          Host            Ports                            OS/Arch       Status   Data Dir                      Deploy Dir
--                    ----          ----            -----                            -------       ------   --------                      ----------
192.168.56.160:9093   alertmanager  192.168.56.160  9093/9094                        linux/x86_64  Up       /tidb-data/alertmanager-9093  /tidb-deploy/alertmanager-9093
192.168.56.160:3000   grafana       192.168.56.160  3000                             linux/x86_64  Up       -                             /tidb-deploy/grafana-3000
192.168.56.160:2379   pd            192.168.56.160  2379/2380                        linux/x86_64  Up|L|UI  /tidb-data/pd-2379            /tidb-deploy/pd-2379
192.168.56.160:9090   prometheus    192.168.56.160  9090/12020                       linux/x86_64  Up       /tidb-data/prometheus-9090    /tidb-deploy/prometheus-9090
192.168.56.161:4000   tidb          192.168.56.161  4000/10080                       linux/x86_64  Up       -                             /tidb-deploy/tidb-4000
192.168.56.164:9000   tiflash       192.168.56.164  9000/8123/3930/20170/20292/8234  linux/x86_64  Up       /tidb-data/tiflash-9000       /tidb-deploy/tiflash-9000
192.168.56.162:20160  tikv          192.168.56.162  20160/20180                      linux/x86_64  Up       /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
192.168.56.163:20160  tikv          192.168.56.163  20160/20180                      linux/x86_64  Up       /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
Total nodes: 8      
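
Finally, with every component reporting Up, connect through the TiDB server and run a quick smoke test (a hedged sketch; it assumes a MySQL client is installed, and the password is the one printed by the secure start above):

# Connect to TiDB over the MySQL protocol (port 4000 on the tidb-server node)
mysql -h 192.168.56.161 -P 4000 -u root -p
# Once connected, for example:
#   SELECT tidb_version();
#   SELECT * FROM information_schema.cluster_info;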
