
MinIO notes 02 -- Building a MinIO cluster with Swarm


  • 1 Introduction
  • 2 Initialize the Swarm cluster
  • 3 Build the MinIO cluster
  • 4 Notes

1 Introduction

MinIO can be deployed in several ways. The previous post, MinIO notes 01 -- deploying and testing MinIO, already covered setting up a test deployment on a single machine and the common ways of using it; this post goes a step further and shows how to build a multi-node cluster on top of Swarm.

2 Initialize the Swarm cluster

  1. Initialize the master node

    Run init on the master node to initialize the cluster:

# docker swarm init      
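If the master has more than one network interface, the address that workers should connect to can be set explicitly. A minimal sketch, assuming the master's address is 192.168.2.131 (the address used in the join command below):
# docker swarm init --advertise-addr 192.168.2.131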
  1. Add the worker nodes:

    Run join on the worker nodes to join the cluster.

Run the following command on each worker node in turn:
# docker swarm join --token SWMTKN-1-61sdyxdby3ofx0s3p4x44hmyahfrcvijcxf7v2fgbhf3yd007c-5d5lt1vhkr28aw4koith8ya8j 192.168.2.131:2377
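If the join command printed by docker swarm init was not saved, it can be printed again at any time on the master with a standard Docker command:
# docker swarm join-token worker    # prints the complete "docker swarm join --token ..." command for workers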
  1. Initial cluster node status: (screenshot)
  2. Create a test service
# docker service create --name nginx --network host nginx:1.19.6 
Scale the service to 4 replicas (one per node):
# docker service scale nginx=4 -d
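Where the replicas were actually scheduled can be checked with the service task list (a standard Docker command):
# docker service ps nginx    # shows the node each nginx task is running on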
By default the master node is also a worker, so services get scheduled on it as well; setting its availability to drain makes the master act purely as a management node that runs no service tasks:
# docker node update --availability drain kmaster
kmaster

# docker node ls # the master is now in Drain state, and its nginx task has been rescheduled onto other nodes
ID                            HOSTNAME                          STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
x6krw762e277z3cg9jcty17rh *   kmaster    Ready     Drain          Leader           20.10.1
cv2qe94em477zz5kx92a9qt1m     knode01    Ready     Active                          20.10.1

After testing is done, delete the nginx service:
# docker service rm nginx 

There are exactly 4 nodes in this setup, so the master has to be made schedulable again; in a production environment you can dedicate 3 low-spec machines as managers and run no other services on them.
# docker node update --availability active kmaster      

3 Build the MinIO cluster

  1. Create the secret_key and access_key secrets
echo "minio" | docker secret create access_key -
echo "minio123" | docker secret create secret_key -
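The two secrets can be verified with a standard Docker command; their names must match the external secrets referenced in the compose file below:
# docker secret ls    # both access_key and secret_key should be listed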
  1. Label the nodes so that the MinIO instances are scheduled onto different nodes
docker node update --label-add minio1=true x6krw762e277z3cg9jcty17rh
docker node update --label-add minio2=true cv2qe94em477zz5kx92a9qt1m
docker node update --label-add minio3=true utp02b0firo331snlz3jvgexy
docker node update --label-add minio4=true jl0wocdk31hik8xlgkovaoz44
Note: the labels must be applied using the node ID to identify each node, otherwise the update fails; a quick way to look up the IDs and verify the labels is shown below.
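A small check using standard docker node commands; <NODE-ID> is a placeholder for one of the IDs printed by docker node ls:
# docker node ls -q                                            # list only the node IDs
# docker node inspect --format '{{ .Spec.Labels }}' <NODE-ID>  # should print e.g. map[minio1:true]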
  1. Create the /home/minio/data directory on each of the 4 nodes in turn
# adduser --home /home/minio minio
# su - minio
$ mkdir data    # the minio user's home is /home/minio, so this creates /home/minio/data
  1. Download and edit the configuration file
# wget https://raw.githubusercontent.com/minio/minio/master/docs/orchestration/docker-swarm/docker-compose-secrets.yaml
# vim docker-compose-secrets.yaml
version: '3.7'

services:
  minio1:
    image: minio/minio:RELEASE.2021-01-16T02-19-44Z
    hostname: minio1
    volumes:
      - /home/minio/data:/export
    ports:
      - "9001:9000"
    networks:
      - minio_distributed
    deploy:
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
      placement:
        constraints:
          - node.labels.minio1==true
    command: server http://minio{1...4}/export
    secrets:
      - secret_key
      - access_key
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  minio2:
    image: minio/minio:RELEASE.2021-01-16T02-19-44Z
    hostname: minio2
    volumes:
      - /home/minio/data:/export
    ports:
      - "9002:9000"
    networks:
      - minio_distributed
    deploy:
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
      placement:
        constraints:
          - node.labels.minio2==true
    command: server http://minio{1...4}/export
    secrets:
      - secret_key
      - access_key
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  minio3:
    image: minio/minio:RELEASE.2021-01-16T02-19-44Z
    hostname: minio3
    volumes:
      - /home/minio/data:/export
    ports:
      - "9003:9000"
    networks:
      - minio_distributed
    deploy:
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
      placement:
        constraints:
          - node.labels.minio3==true
    command: server http://minio{1...4}/export
    secrets:
      - secret_key
      - access_key
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  minio4:
    image: minio/minio:RELEASE.2021-01-16T02-19-44Z
    hostname: minio4
    volumes:
      - /home/minio/data:/export
    ports:
      - "9004:9000"
    networks:
      - minio_distributed
    deploy:
      restart_policy:
        delay: 10s
        max_attempts: 10
        window: 60s
      placement:
        constraints:
          - node.labels.minio4==true
    command: server http://minio{1...4}/export
    secrets:
      - secret_key
      - access_key
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

networks:
  minio_distributed:
    driver: overlay

secrets:
  secret_key:
    external: true
  access_key:
    external: true      
  1. Start the cluster
# docker stack deploy --compose-file=docker-compose-secrets.yaml minio_stack
Creating network minio_stack_minio_distributed
Creating service minio_stack_minio2
Creating service minio_stack_minio3
Creating service minio_stack_minio4
Creating service minio_stack_minio1
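Whether all four instances came up and where they were placed can be checked with the standard stack commands:
# docker stack services minio_stack    # REPLICAS should show 1/1 for each of the four services
# docker stack ps minio_stack          # shows which node each instance landed on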
  1. After the services have been created successfully they look as follows: (screenshot)
  2. Add the cluster alias on the master node
# mc alias set minio http://localhost:9001 minio minio123
Added `minio` successfully
# mc mb minio/test01
Bucket created successfully `minio/test01`.
# mc ls minio 
[2021-01-22 17:00:33 CST]     0B test01/
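A quick upload/download round trip with mc confirms that the cluster stores objects correctly; /etc/hostname is just an arbitrary local file used here for illustration:
# mc cp /etc/hostname minio/test01/hostname.txt   # upload a file into the bucket
# mc ls minio/test01                              # list the bucket contents
# mc cat minio/test01/hostname.txt                # read the object back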
  1. Delete the cluster
Remove minio_stack on the master:
# docker stack rm minio_stack
Run the volume cleanup on each worker node:
# docker volume prune      
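Note that this stack bind-mounts the host directory /home/minio/data, so docker volume prune does not remove the object data itself; for a complete teardown the directory contents would also have to be removed on every node (destructive, only when the cluster is no longer needed):
$ rm -rf /home/minio/data/*    # run as the minio user on each node; deletes all stored objects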
  1. MinIO cluster info:
  2. Access the MinIO cluster:

    http://ip:9001/minio/

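The health endpoint defined in the compose healthcheck can also be queried directly to verify each instance; replace ip with the address of any swarm node:
# curl -f http://ip:9001/minio/health/live    # instance 1; ports 9002-9004 address the other instances
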
4 Notes

  1. Software environment

    The system used is Ubuntu 18.04

    Docker 20.10.1

    MinIO image: minio/minio:RELEASE.2021-01-16T02-19-44Z
