I. Introduction to the GlusterFS base environment
1. About the GlusterFS file system and its architecture
http://jingyan.baidu.com/article/046a7b3ef65250f9c27fa9d9.html
2. Goals of this exercise
a. Use several older, lower-spec servers to provide a cloud-drive service for the company
b. Deploy and configure the GlusterFS server and client sides
c. Distribute load across multiple GlusterFS nodes
3. Test environment
OS: CentOS 6.7 x86_64
Kernel: 2.6.32-573.el6.x86_64
Software: glusterfs 3.7.10
Four servers form a GlusterFS volume in DHT (distribute) mode; a Windows 10 client connects to it through Samba.
II. GlusterFS server configuration (server01)
1. GlusterFS has no central metadata server, so much of the configuration only needs to be done on a single host
2. Sync the clock against an NTP server (this could also be a scheduled crontab job)
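If you prefer the crontab route mentioned above, an entry along these lines would resync every hour (a sketch; the NTP server address is the one used in this article, and paths may differ on your system):

```shell
# /etc/crontab entry: resync the clock hourly against 10.203.10.20,
# then push the result to the hardware clock.
0 * * * * root /usr/sbin/ntpdate -u 10.203.10.20 && /sbin/hwclock -w
```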
<code>[root@server01 ~]</code><code># ntpdate -u 10.203.10.20</code>
<code>18 Apr 14:16:15 ntpdate[2700]: adjust </code><code>time</code> <code>server 10.203.10.20 offset 0.008930 sec</code>
<code>[root@server01 ~]</code><code># hwclock -w</code>
3. Check the hosts table entries
<code>[root@server01 ~]</code><code># cat /etc/hosts</code>
<code>127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4</code>
<code>::1 localhost localhost.localdomain localhost6 localhost6.localdomain6</code>
<code>192.168.1.11 server01</code>
<code>192.168.1.12 server02</code>
<code>192.168.1.13 server03</code>
<code>192.168.1.14 server04</code>
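Since every later step refers to the nodes by name, it is worth confirming that each name actually resolves before going further. A small check script (the hostnames are the ones from this article's environment):

```shell
# Report whether each GlusterFS node name resolves (via /etc/hosts or DNS).
check_host() {
    if getent hosts "$1" > /dev/null; then
        echo "$1 OK"
    else
        echo "$1 MISSING"
    fi
}

for h in server01 server02 server03 server04; do
    check_host "$h"
done
```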
4. Add a dedicated disk for the shared volume (LVM could also be used here)
<code>[root@server01 ~]</code><code># fdisk /dev/sdb</code>
<code>Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel</code>
<code>Building a new DOS disklabel with disk identifier 0x0ef88f22.</code>
<code>Changes will remain </code><code>in</code> <code>memory only, </code><code>until</code> <code>you decide to write them.</code>
<code>After that, of course, the previous content won't be recoverable.</code>
<code>Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)</code>
<code>WARNING: DOS-compatible mode is deprecated. It's strongly recommended to</code>
<code> </code><code>switch off the mode (</code><code>command</code> <code>'c'</code><code>) and change display </code><code>units</code> <code>to</code>
<code> </code><code>sectors (</code><code>command</code> <code>'u'</code><code>).</code>
<code>Command (m </code><code>for</code> <code>help): p</code>
<code>Disk </code><code>/dev/sdb</code><code>: 21.5 GB, 21474836480 bytes</code>
<code>255 heads, 63 sectors</code><code>/track</code><code>, 2610 cylinders</code>
<code>Units = cylinders of 16065 * 512 = 8225280 bytes</code>
<code>Sector size (logical</code><code>/physical</code><code>): 512 bytes / 512 bytes</code>
<code>I</code><code>/O</code> <code>size (minimum</code><code>/optimal</code><code>): 512 bytes / 512 bytes</code>
<code>Disk identifier: 0x0ef88f22</code>
<code> </code><code>Device Boot Start End Blocks Id System</code>
<code>Command (m </code><code>for</code> <code>help): n</code>
<code>Command action</code>
<code> </code><code>e extended</code>
<code> </code><code>p primary partition (1-4)</code>
<code>p</code>
<code>Partition number (1-4): 1</code>
<code>First cylinder (1-2610, default 1): </code>
<code>Using default value 1</code>
<code>Last cylinder, +cylinders or +size{K,M,G} (1-2610, default 2610): </code>
<code>Using default value 2610</code>
<code>Command (m </code><code>for</code> <code>help): w</code>
<code>The partition table has been altered!</code>
<code>Calling ioctl() to re-</code><code>read</code> <code>partition table.</code>
<code>Syncing disks.</code>
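The interactive session above can also be scripted, which helps when preparing several servers. A sketch of the same answer sequence (n, p, 1, two defaults, w); this is destructive, so double-check the target device first:

```shell
# Non-interactive equivalent of the fdisk dialogue above:
# n (new), p (primary), 1 (partition number), two empty lines
# (default first/last cylinder), w (write). Destroys data on /dev/sdb.
printf 'n\np\n1\n\n\nw\n' | fdisk /dev/sdb
```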
<code>[root@server01 ~]</code><code># partx /dev/sdb</code>
<code># 1: 63- 41929649 ( 41929587 sectors, 21467 MB)</code>
<code># 2: 0- -1 ( 0 sectors, 0 MB)</code>
<code># 3: 0- -1 ( 0 sectors, 0 MB)</code>
<code># 4: 0- -1 ( 0 sectors, 0 MB)</code>
<code>[root@server01 ~]</code><code># fdisk -l |grep /dev/sdb</code>
<code>/dev/sdb1</code> <code>1 2610 20964793+ 83 Linux</code>
<code>[root@server01 ~]</code><code># mkfs.ext4 /dev/sdb1</code>
<code>mke2fs 1.41.12 (17-May-2010)</code>
<code>Filesystem label=</code>
<code>OS </code><code>type</code><code>: Linux</code>
<code>Block size=4096 (log=2)</code>
<code>Fragment size=4096 (log=2)</code>
<code>Stride=0 blocks, Stripe width=0 blocks</code>
<code>1310720 inodes, 5241198 blocks</code>
<code>262059 blocks (5.00%) reserved </code><code>for</code> <code>the super user</code>
<code>First data block=0</code>
<code>Maximum filesystem blocks=4294967296</code>
<code>160 block </code><code>groups</code>
<code>32768 blocks per group, 32768 fragments per group</code>
<code>8192 inodes per group</code>
<code>Superblock backups stored on blocks: </code>
<code>32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, </code>
<code>4096000</code>
<code>Writing inode tables: </code><code>done</code>
<code>Creating journal (32768 blocks): </code><code>done</code>
<code>Writing superblocks and filesystem accounting information: </code><code>done</code>
<code>This filesystem will be automatically checked every 36 mounts or</code>
<code>180 days, whichever comes first. Use tune2fs -c or -i to override.</code>
5. Create the mount directory and mount the filesystem
<code>[root@server01 ~]</code><code># mkdir -p /glusterfs-xfs-mount</code>
<code>[root@server01 ~]</code><code># mount /dev/sdb1 /glusterfs-xfs-mount/</code>
<code>[root@server01 ~]</code><code># df -h</code>
<code>Filesystem Size Used Avail Use% Mounted on</code>
<code>/dev/sda3</code> <code>193G 7.2G 176G 4% /</code>
<code>tmpfs 932M 0 932M 0% </code><code>/dev/shm</code>
<code>/dev/sda1</code> <code>190M 41M 139M 23% </code><code>/boot</code>
<code>/dev/sdb1</code> <code>20G 44M 19G 1% </code><code>/glusterfs-xfs-mount</code>
6. Add the automatic mount entry (the filesystem was formatted as ext4 above, so the fstab type must be ext4 even though the directory name contains "xfs")
<code>[root@server01 ~]</code><code># echo '/dev/sdb1 /glusterfs-xfs-mount ext4 defaults 0 0' >> /etc/fstab</code>
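A detail worth double-checking here: the disk was formatted with mkfs.ext4, so the fstab type field must say ext4 (the "xfs" in the directory name is only a name). A tiny helper makes the entry's shape explicit; the arguments are the ones from this setup:

```shell
# Build an fstab line; the fstype must match what mkfs actually created.
build_fstab_line() {
    device=$1; mountpoint=$2; fstype=$3
    printf '%s %s %s defaults 0 0\n' "$device" "$mountpoint" "$fstype"
}

build_fstab_line /dev/sdb1 /glusterfs-xfs-mount ext4
# In practice a UUID is more robust than a device path:
#   build_fstab_line "UUID=$(blkid -s UUID -o value /dev/sdb1)" /glusterfs-xfs-mount ext4
```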
7. Add the external yum repository
<code>[root@server01 ~]</code><code># cd /etc/yum.repos.d/</code>
<code>[root@server01 yum.repos.d]</code><code># wget http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo</code>
8. Install the GlusterFS server package and start the service
<code>[root@server01 yum.repos.d]</code><code># yum -y install glusterfs-server</code>
<code>[root@server01 yum.repos.d]</code><code># /etc/init.d/glusterd start</code>
<code>Starting glusterd: [ OK ]</code>
<code>[root@server01 yum.repos.d]</code><code># chkconfig glusterd on</code>
<code>[root@server01 yum.repos.d]</code><code># chkconfig --list glusterd</code>
<code>glusterd 0:off 1:off 2:on 3:on 4:on 5:on 6:off</code>
<code>[root@server01 yum.repos.d]</code><code># df -h</code>
9. Add the peer nodes (server02 and server03)
<code>[root@server01 yum.repos.d]</code><code># gluster peer status</code>
<code>Number of Peers: 0</code>
<code>[root@server01 yum.repos.d]</code><code># gluster peer probe server02</code>
<code>peer probe: success. </code>
<code>[root@server01 yum.repos.d]</code><code># gluster peer status</code>
<code>Number of Peers: 1</code>
<code>Hostname: server02</code>
<code>Uuid: c58d0715-32ff-4962-90d9-4275fa65793a</code>
<code>State: Peer </code><code>in</code> <code>Cluster (Connected)</code>
<code>[root@server01 yum.repos.d]</code><code># gluster peer probe server03</code>
<code>[root@server01 yum.repos.d]</code><code># gluster peer status</code>
<code>Number of Peers: 2</code>
<code>Hostname: server03</code>
<code>Uuid: 5110d0af-fdd9-4c82-b716-991cf0601b53</code>
10. Create the Gluster volume
<code>[root@server01 yum.repos.d]</code><code># gluster volume create dht-volume01 server01:/glusterfs-xfs-mount server02:/glusterfs-xfs-mount server03:/glusterfs-xfs-mount</code>
<code>volume create: dht-volume01: failed: The brick server01:/glusterfs-xfs-mount is a mount point. Please create a sub-directory under the mount point and use that as the brick directory. Or use 'force' at the end of the command if you want to override this behavior.</code>
<code>[root@server01 yum.repos.d]</code><code># echo $?</code>
<code>1</code>
<code>[root@server01 yum.repos.d]</code><code># gluster volume create dht-volume01 server01:/glusterfs-xfs-mount server02:/glusterfs-xfs-mount server03:/glusterfs-xfs-mount force</code>
<code>volume create: dht-volume01: success: please start the volume to access data</code>
<code>[root@server01 yum.repos.d]</code><code># gluster volume start dht-volume01</code>
<code>volume start: dht-volume01: success</code>
<code>[root@server01 yum.repos.d]</code><code># gluster volume status</code>
<code>Status of volume: dht-volume01</code>
<code>Gluster process TCP Port RDMA Port Online Pid</code>
<code>------------------------------------------------------------------------------</code>
<code>Brick server01:</code><code>/glusterfs-xfs-mount</code> <code>49152 0 Y 2948 </code>
<code>Brick server02:</code><code>/glusterfs-xfs-mount</code> <code>49152 0 Y 2910 </code>
<code>Brick server03:</code><code>/glusterfs-xfs-mount</code> <code>49152 0 Y 11966</code>
<code>NFS Server on localhost N</code><code>/A</code> <code>N</code><code>/A</code> <code>N N</code><code>/A</code>
<code>NFS Server on server02 N</code><code>/A</code> <code>N</code><code>/A</code> <code>N N</code><code>/A</code>
<code>NFS Server on server03 N</code><code>/A</code> <code>N</code><code>/A</code> <code>N N</code><code>/A</code>
<code> </code>
<code>Task Status of Volume dht-volume01</code>
<code>There are no active volume tasks</code>
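The failed attempt above also points at the safer alternative to `force`: make each brick a subdirectory of the mount point, so Gluster can tell a mounted brick from an empty directory if the disk ever fails to mount. A sketch (the `brick01` subdirectory name is an arbitrary choice of mine, not from the original article):

```shell
# Create a sub-directory under each mount point and use it as the brick.
# The mkdir must run on every server; shown via ssh here for brevity.
for h in server01 server02 server03; do
    ssh "$h" mkdir -p /glusterfs-xfs-mount/brick01
done

gluster volume create dht-volume01 \
    server01:/glusterfs-xfs-mount/brick01 \
    server02:/glusterfs-xfs-mount/brick01 \
    server03:/glusterfs-xfs-mount/brick01
gluster volume start dht-volume01
```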
11. Test by writing a 512 MB file
<code>[root@server01 yum.repos.d]</code><code># cd /glusterfs-xfs-mount/</code>
<code>[root@server01 glusterfs-xfs-</code><code>mount</code><code>]</code><code># dd if=/dev/zero of=test.img bs=1M count=512</code>
<code>512+0 records </code><code>in</code>
<code>512+0 records out</code>
<code>536870912 bytes (537 MB) copied, 5.20376 s, 103 MB</code><code>/s</code>
<code>[root@server01 glusterfs-xfs-</code><code>mount</code><code>]</code><code># ls</code>
<code>lost+found </code><code>test</code><code>.img</code>
III. GlusterFS server configuration (server02; server03 is configured the same way)
1. Configure time sync; the NTP server address here is 10.203.10.20
<code>[root@server02 ~]</code><code># ntpdate -u 10.203.10.20</code>
<code>18 Apr 14:27:58 ntpdate[2712]: adjust </code><code>time</code> <code>server 10.203.10.20 offset -0.085282 sec</code>
<code>[root@server02 ~]</code><code># hwclock -w</code>
2. Check the hosts table
<code>[root@server02 ~]</code><code># cat /etc/hosts</code>
3. Set up a dedicated local disk to serve as part of the GlusterFS volume
<code>[root@server02 ~]</code><code># fdisk /dev/sdb</code>
<code>Building a new DOS disklabel with disk identifier 0x927b5e72.</code>
<code>Disk identifier: 0x927b5e72</code>
4. Refresh the partition table, then partition and format the disk (output abbreviated)
<code>[root@server02 ~]</code><code># partx /dev/sdb</code>
<code>[root@server02 ~]</code><code># fdisk -l|grep /dev/sdb</code>
<code>[root@server02 ~]</code><code># mkfs.ext4 /dev/sdb1</code>
<code>This filesystem will be automatically checked every 21 mounts or</code>
<code>[root@server02 ~]</code><code># df -h</code>
<code>/dev/sda3</code> <code>193G 7.3G 176G 4% /</code>
5. Create the mount directory and configure the mount (the filesystem is ext4, so the fstab type must be ext4)
<code>[root@server02 ~]</code><code># mkdir -p /glusterfs-xfs-mount</code>
<code>[root@server02 ~]</code><code># mount /dev/sdb1 /glusterfs-xfs-mount/</code>
<code>[root@server02 ~]</code><code># echo '/dev/sdb1 /glusterfs-xfs-mount ext4 defaults 0 0' >> /etc/fstab</code>
6. Configure the yum repository and install the GlusterFS server package
<code>[root@server02 ~]</code><code># cd /etc/yum.repos.d/</code>
<code>[root@server02 yum.repos.d]</code><code># wget http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo</code>
<code>--2016-04-18 14:32:22-- http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo</code>
<code>Resolving download.gluster.org... 23.253.208.221, 2001:4801:7824:104:be76:4eff:fe10:23d8</code>
<code>Connecting to download.gluster.org|23.253.208.221|:80... connected.</code>
<code>HTTP request sent, awaiting response... 200 OK</code>
<code>Length: 1049 (1.0K)</code>
<code>Saving to: “glusterfs-epel.repo”</code>
<code>100%[==============================================================>] 1,049 --.-K</code><code>/s</code> <code>in</code> <code>0s </code>
<code>2016-04-18 14:32:23 (36.4 MB</code><code>/s</code><code>) - “glusterfs-epel.repo” saved [1049</code><code>/1049</code><code>]</code>
<code>[root@server02 yum.repos.d]</code><code># rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm</code>
<code>Retrieving http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm</code>
<code>warning: </code><code>/var/tmp/rpm-tmp</code><code>.gaJCKd: Header V3 RSA</code><code>/SHA256</code> <code>Signature, key ID 0608b895: NOKEY</code>
<code>Preparing... </code><code>########################################### [100%]</code>
<code> </code><code>1:epel-release </code><code>########################################### [100%]</code>
<code>[root@server02 yum.repos.d]</code><code># yum -y install glusterfs-server</code>
7. Start the glusterd service
<code>[root@server02 yum.repos.d]</code><code># /etc/init.d/glusterd start</code>
<code>[root@server02 yum.repos.d]</code><code># chkconfig glusterd on</code>
<code>[root@server02 yum.repos.d]</code><code># chkconfig --list glusterd</code>
8. Check the cluster peers and volume status
<code>[root@server02 yum.repos.d]</code><code># gluster peer status</code>
<code>Hostname: server01</code>
<code>Uuid: e90a3b54-5a9d-4e57-b502-86f9aad8b576</code>
<code>[root@server02 yum.repos.d]</code><code># gluster volume status</code>
<code>NFS Server on localhost 2049 0 Y 2932 </code>
<code>NFS Server on server01 2049 0 Y 2968 </code>
<code>NFS Server on server03 2049 0 Y 11986</code>
<code>[root@server02 yum.repos.d]</code><code># ll /glusterfs-xfs-mount/</code>
<code>total 16</code>
<code>drwx------ 2 root root 16384 Apr 18 14:29 lost+found</code>
<code>[root@server02 yum.repos.d]</code><code># cd /glusterfs-xfs-mount/</code>
<code>[root@server02 glusterfs-xfs-</code><code>mount</code><code>]</code><code># dd if=/dev/zero of=server02.img bs=1M count=512</code>
<code>536870912 bytes (537 MB) copied, 5.85478 s, 91.7 MB</code><code>/s</code>
<code>[root@server02 glusterfs-xfs-</code><code>mount</code><code>]</code><code># ls</code>
<code>lost+found server02.img</code>
Because this volume uses DHT (distribute) mode, server01 and server02 hold different sets of files; the same file only appears on more than one node if it is written there deliberately.
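The placement behaviour can be illustrated with a toy stand-in: each filename hashes to exactly one brick, which is why the two servers end up with different subsets. (cksum here is only an illustration; GlusterFS actually maps a 32-bit filename hash onto per-directory ranges assigned to each brick.)

```shell
# Toy DHT-style placement: a filename deterministically picks one brick.
pick_brick() {
    name=$1; bricks=$2
    h=$(printf '%s' "$name" | cksum | cut -d' ' -f1)
    echo "brick$(( h % bricks ))"
}

for f in test.img server02.img 1.txt 2.txt; do
    echo "$f -> $(pick_brick "$f" 3)"
done
```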
IV. Manually adding and removing brick nodes (the commands can be run on any gluster server node)
1. Add the server04 node
<code>[root@server01 glusterfs-xfs-</code><code>mount</code><code>]</code><code># gluster peer probe server04</code>
<code>[root@server01 glusterfs-xfs-</code><code>mount</code><code>]</code><code># gluster peer status</code>
<code>Number of Peers: 3</code>
<code>Hostname: server04</code>
<code>Uuid: d653b5c2-dac4-428c-bf6f-eea393adbb16</code>
2. Add the new node's directory as a brick of dht-volume01
<code>[root@server01 glusterfs-xfs-</code><code>mount</code><code>]</code><code># gluster volume add-brick</code>
<code>Usage: volume add-brick <VOLNAME> [<stripe|replica> <COUNT>] <NEW-BRICK> ... [force]</code>
<code>[root@server01 glusterfs-xfs-</code><code>mount</code><code>]</code><code># gluster volume add-brick dht-volume01 server04:/glusterfs-xfs-mount</code>
<code>volume add-brick: failed: Pre Validation failed on server04. The brick server04:/glusterfs-xfs-mount is a mount point. Please create a sub-directory under the mount point and use that as the brick directory. Or use 'force' at the end of the command if you want to override this behavior.</code>
<code>[root@server01 glusterfs-xfs-</code><code>mount</code><code>]</code><code># gluster volume add-brick dht-volume01 server04:/glusterfs-xfs-mount force</code>
<code>volume add-brick: success</code>
<code>[root@server01 glusterfs-xfs-</code><code>mount</code><code>]</code><code># gluster volume status</code>
<code>Brick server04:</code><code>/glusterfs-xfs-mount</code> <code>49152 0 Y 2925 </code>
<code>NFS Server on localhost 2049 0 Y 3258 </code>
<code>NFS Server on server02 2049 0 Y 3107 </code>
<code>NFS Server on server03 2049 0 Y 12284</code>
<code>NFS Server on server04 2049 0 Y 2945 </code>
3. Remove a node's brick from the volume
<code>[root@server01 ~]</code><code># gluster volume remove-brick dht-volume01 server04:/glusterfs-xfs-mount/ </code>
<code>Usage: volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ... <start|stop|status|commit|force></code>
<code>[root@server01 ~]</code><code># gluster volume remove-brick dht-volume01 server04:/glusterfs-xfs-mount/ commit</code>
<code>Removing brick(s) can result </code><code>in</code> <code>data loss. Do you want to Continue? (y</code><code>/n</code><code>) y</code>
<code>volume remove-brick commit: success</code>
<code>Check the removed bricks to ensure all files are migrated.</code>
<code>If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.</code>
<code>[root@server01 ~]</code><code># gluster volume status</code>
<code>NFS Server on localhost 2049 0 Y 3336 </code>
<code>NFS Server on server02 2049 0 Y 3146 </code>
<code>NFS Server on server04 2049 0 Y 2991 </code>
<code>NFS Server on server03 2049 0 Y 12323</code>
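A bare `commit` removes the brick without migrating its files first, which is exactly what the data-loss warning above is about. The gentler sequence on this version is `start`, poll `status`, then `commit`; a sketch:

```shell
# Drain the brick first, then commit once migration reports completed:
gluster volume remove-brick dht-volume01 server04:/glusterfs-xfs-mount start
gluster volume remove-brick dht-volume01 server04:/glusterfs-xfs-mount status
# ...once status shows 'completed':
gluster volume remove-brick dht-volume01 server04:/glusterfs-xfs-mount commit
```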
V. Configure rebalancing (rebalance)
<code>[root@server01 ~]</code><code># gluster volume rebalance dht-volume01</code>
<code>Usage: volume rebalance <VOLNAME> {{fix-layout start} | {start [force]|stop|status}}</code>
<code>[root@server01 ~]</code><code># gluster volume rebalance dht-volume01 fix-layout start</code>
<code>volume rebalance: dht-volume01: success: Rebalance on dht-volume01 has been started successfully. Use rebalance status command to check status of the rebalance process.</code>
<code>ID: 6ce8fd86-dd1e-4ce3-bb44-82532b5055dd</code>
<code>[root@server01 ~]</code><code># gluster volume rebalance dht-volume01 fix-layout status</code>
<code>[root@server01 ~]</code><code># gluster</code>
<code>gluster> volume rebalance dht-volume01 status</code>
<code>     Node  Rebalanced-files    size  scanned  failures  skipped                status  run time in h:m:s</code>
<code>---------  ----------------  ------  -------  --------  -------  --------------------  -----------------</code>
<code>localhost                 0  0Bytes        0         0        0  fix-layout completed              0:0:0</code>
<code> server02                 0  0Bytes        0         0</code>
<code> server03                 0  0Bytes        0         0</code>
<code>volume rebalance: dht-volume01: success</code>
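Note that `fix-layout` only recomputes the directory hash ranges so that newly created files can land on all bricks; files that already exist stay where they are. To also migrate existing data onto the new layout, a full rebalance can be run (sketch):

```shell
# Migrate existing files in addition to fixing the layout:
gluster volume rebalance dht-volume01 start
gluster volume rebalance dht-volume01 status    # repeat until 'completed'
```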
<code>[root@server01 glusterfs-xfs-</code><code>mount</code><code>]</code><code># cd /glusterfs-xfs-mount/</code>
<code>[root@server01 glusterfs-xfs-</code><code>mount</code><code>]</code><code># gluster volume set dht-volume01 nfs.disable on</code>
<code>volume </code><code>set</code><code>: success</code>
<code>Task : Rebalance </code>
<code>ID : 6ce8fd86-dd1e-4ce3-bb44-82532b5055dd</code>
<code>Status : completed </code>
<code>[root@server01 glusterfs-xfs-</code><code>mount</code><code>]</code><code># touch {1..100}.txt</code>
<code>[root@server01 glusterfs-xfs-</code><code>mount</code><code>]</code><code># ls</code>
<code>100.txt 12.txt 16.txt 20.txt 25.txt 3.txt 55.txt 65.txt 70.txt 75.txt 92.txt lost+found</code>
<code>10.txt 14.txt 18.txt 21.txt 2.txt 43.txt 57.txt 67.txt 71.txt 77.txt client2.iso </code><code>test</code><code>.img</code>
<code>11.txt 15.txt 1.txt 22.txt 30.txt 47.txt 61.txt 6.txt 72.txt 88.txt client.iso</code>
<code>13.txt 23.txt 28.txt 34.txt 39.txt 46.txt 52.txt 66.txt 76.txt 81.txt 8.txt 93.txt</code>
<code>17.txt 26.txt 29.txt 35.txt 41.txt 4.txt 58.txt 68.txt 79.txt 83.txt 90.txt 94.txt</code>
<code>19.txt 27.txt 33.txt 37.txt 42.txt 51.txt 62.txt 73.txt 80.txt 86.txt 91.txt 97.txt</code>
<code>[root@server03 ~]</code><code># cd /glusterfs-xfs-mount/</code>
<code>[root@server03 glusterfs-xfs-</code><code>mount</code><code>]</code><code># ls</code>
<code>24.txt 36.txt 44.txt 49.txt 54.txt 5.txt 64.txt 78.txt 84.txt 89.txt 98.txt lost+found</code>
<code>31.txt 38.txt 45.txt 50.txt 56.txt 60.txt 69.txt 7.txt 85.txt 95.txt 99.txt server03.img</code>
<code>32.txt 40.txt 48.txt 53.txt 59.txt 63.txt 74.txt 82.txt 87.txt 96.txt 9.txt server04.iso</code>
<code>[root@client01 ~]</code><code># cd /glusterFS-mount/</code>
<code>[root@client01 glusterFS-</code><code>mount</code><code>]</code><code># ls</code>
<code>100.txt 18.txt 26.txt 34.txt 42.txt 50.txt 59.txt 67.txt 75.txt 83.txt 91.txt 9.txt</code>
<code>10.txt 19.txt 27.txt 35.txt 43.txt 51.txt 5.txt 68.txt 76.txt 84.txt 92.txt client2.iso</code>
<code>11.txt 1.txt 28.txt 36.txt 44.txt 52.txt 60.txt 69.txt 77.txt 85.txt 93.txt client.iso</code>
<code>12.txt 20.txt 29.txt 37.txt 45.txt 53.txt 61.txt 6.txt 78.txt 86.txt 94.txt lost+found</code>
<code>13.txt 21.txt 2.txt 38.txt 46.txt 54.txt 62.txt 70.txt 79.txt 87.txt 95.txt server03.img</code>
<code>14.txt 22.txt 30.txt 39.txt 47.txt 55.txt 63.txt 71.txt 7.txt 88.txt 96.txt server04.iso</code>
<code>15.txt 23.txt 31.txt 3.txt 48.txt 56.txt 64.txt 72.txt 80.txt 89.txt 97.txt </code><code>test</code><code>.img</code>
<code>16.txt 24.txt 32.txt 40.txt 49.txt 57.txt 65.txt 73.txt 81.txt 8.txt 98.txt</code>
<code>17.txt 25.txt 33.txt 41.txt 4.txt 58.txt 66.txt 74.txt 82.txt 90.txt 99.txt</code>
File distribution across the bricks is working as intended.
VI. Configure the gluster client
<code>[root@client01 ~]</code><code># cat /etc/hosts</code>
<code>[root@client01 ~]</code><code># mkdir -p /glusterFS-mount</code>
<code>[root@client01 ~]</code><code># mount -t glusterfs server01:/dht-volume01 /glusterFS-mount/</code>
<code>[root@client01 ~]</code><code># df -h</code>
<code>Filesystem Size Used Avail Use% Mounted on</code>
<code>/dev/sda3</code> <code>193G 7.6G 176G 5% /</code>
<code>tmpfs 932M 76K 932M 1% </code><code>/dev/shm</code>
<code>/dev/sda1</code> <code>190M 41M 139M 23% </code><code>/boot</code>
<code>server01:</code><code>/dht-volume01</code>
<code> </code><code>59G 1.7G 55G 3% </code><code>/glusterFS-mount</code>
<code>[root@client01 glusterFS-</code><code>mount</code><code>]</code><code># LS</code>
<code>-</code><code>bash</code><code>: LS: </code><code>command</code> <code>not found</code>
<code>[root@client01 glusterFS-</code><code>mount</code><code>]</code><code># ls</code>
<code>lost+found server02.img server03.img </code><code>test</code><code>.img</code>
<code>[root@client01 glusterFS-</code><code>mount</code><code>]</code><code># dd if=/dev/zero of=client.iso bs=1M count=123</code>
<code>123+0 records </code><code>in</code>
<code>123+0 records out</code>
<code>128974848 bytes (129 MB) copied, 1.52512 s, 84.6 MB</code><code>/s</code>
<code>[root@client01 glusterFS-</code><code>mount</code><code>]</code><code># ls</code>
<code>client.iso lost+found server02.img server03.img </code><code>test</code><code>.img</code>
<code>[root@client01 glusterFS-</code><code>mount</code><code>]</code><code># dd if=/dev/zero of=client2.iso bs=1M count=456</code>
<code>456+0 records </code><code>in</code>
<code>456+0 records out</code>
<code>478150656 bytes (478 MB) copied, 8.76784 s, 54.5 MB</code><code>/s</code>
<code>[root@client01 glusterFS-</code><code>mount</code><code>]</code><code># ls</code>
<code>client2.iso client.iso lost+found server02.img server03.img </code><code>test</code><code>.img</code>
<code>[root@client01 glusterFS-</code><code>mount</code><code>]</code><code># df -h</code>
<code>/dev/sda3</code> <code>193G 7.2G 176G 4% /</code>
<code> </code><code>40G 1.7G 36G 5% </code><code>/glusterFS-mount</code>
<code>tmpfs 932M 80K 932M 1% </code><code>/dev/shm</code>
<code> </code><code>59G 2.2G 54G 4% </code><code>/glusterFS-mount</code>
<code>[root@client01 glusterFS-</code><code>mount</code><code>]</code><code># cd ~</code>
<code>[root@client01 ~]</code><code># mount -a</code>
<code>Mount failed. Please check the log </code><code>file</code> <code>for</code> <code>more</code> <code>details.</code>
<code>[root@client01 ~]</code><code># ls</code>
<code>anaconda-ks.cfg Documents </code><code>install</code><code>.log Music Public Videos</code>
<code>Desktop Downloads </code><code>install</code><code>.log.syslog Pictures Templates</code>
<code>[root@client01 ~]</code><code># df -h | grep glusterFS-mount</code>
<code> </code><code>79G 2.3G 72G 4% </code><code>/glusterFS-mount</code>
The change in reported capacity shows that the new brick was added successfully.
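To make the client mount persist across reboots, an fstab entry along these lines is common (a sketch: `_netdev` delays the mount until networking is up, and `backup-volfile-servers` is an optional fallback whose exact option name varies between glusterfs versions):

```
server01:/dht-volume01  /glusterFS-mount  glusterfs  defaults,_netdev,backup-volfile-servers=server02  0 0
```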
VII. Share the GlusterFS mount on the client to Windows machines via Samba
1. Install the Samba service
<code>[root@client01 ~]</code><code># yum -y install samba</code>
<code>[root@client01 ~]</code><code># /etc/init.d/smb restart</code>
<code>Shutting down SMB services: [ OK ]</code>
<code>Starting SMB services: [ OK ]</code>
<code>[root@client01 ~]</code><code># /etc/init.d/nmb restart</code>
<code>Shutting down NMB services: [ OK ]</code>
<code>Starting NMB services: [ OK ]</code>
<code>[root@client01 ~]</code><code># chkconfig smb on</code>
<code>[root@client01 ~]</code><code># chkconfig nmb on</code>
<code>[root@client01 ~]</code><code># vim /etc/samba/smb.conf </code>
The configuration is as follows:
<code> </code><code>workgroup = WORKGROUP (workgroup name)</code>
<code> </code><code>server string = Samba Server Version %v (version string shown to clients)</code>
<code> </code><code>hosts allow = 127. 192.168.1. 10.10.10. (hosts allowed to connect)</code>
<code> </code><code>log file = /var/log/samba/log.%m (per-client log location)</code>
<code> </code><code>max log size = 50 (maximum log size, in KB)</code>
<code> </code><code>security = user (Samba authentication level)</code>
<code> </code><code>passdb backend = tdbsam</code>
<code>[云盘测试平台]</code> (the share name, "cloud drive test platform")
<code> </code><code>comment = yunpan</code>
<code> </code><code>browseable = yes</code>
<code> </code><code>writable = yes</code>
<code> </code><code>public = yes</code>
<code> </code><code>path = /glusterFS-mount</code>
<code> </code><code>valid users = wanlong</code>
Note that `valid users` must name the Samba account that actually exists; the account created in the next step is jifang01, so either change this line to jifang01 or create a wanlong account.
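After editing smb.conf it is worth syntax-checking it before restarting the daemons; `testparm` ships with the samba package:

```shell
# Parse smb.conf and print the effective settings (non-zero exit on errors),
# then restart only if the config is valid:
testparm -s && /etc/init.d/smb restart
```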
2. Configure the Samba user
<code>[root@client01 ~]</code><code># adduser jifang01 -s /sbin/nologin</code>
<code>[root@client01 ~]</code><code># id jifang01</code>
<code>uid=501(jifang01) gid=501(jifang01) groups=501(jifang01)</code>
<code>[root@client01 ~]</code><code># smbpasswd -a jifang01</code>
<code>New SMB password:</code>
<code>Retype new SMB password:</code>
<code>Added user jifang01.</code>
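Before mapping the drive from Windows, the share can be verified locally (this prompts for the SMB password set above; `smbclient` comes from the samba-client package):

```shell
# List the shares visible to the new Samba account:
smbclient -L //localhost -U jifang01
```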
3. Set local directory permissions
<code>[root@client01 ~]</code><code># chmod 777 /glusterFS-mount/ -R</code>
4. Map the network drive from Windows and verify
<a href="http://s5.51cto.com/wyfs02/M00/7F/2D/wKiom1cV0eTw86jWAAAoKjgBVv8736.png" target="_blank"></a>
<a href="http://s5.51cto.com/wyfs02/M01/7F/2D/wKiom1cV0eSgN_mnAABQjzCaHW8235.png" target="_blank"></a>
<a href="http://s5.51cto.com/wyfs02/M01/7F/2B/wKioL1cV0qPyOrowAABAHp3dml8375.png" target="_blank"></a>
<a href="http://s5.51cto.com/wyfs02/M02/7F/2D/wKiom1cV0eWgloo-AAAyL27ME3E212.png" target="_blank"></a>
This article was originally published by 冰冻vs西瓜 on the 51CTO blog; original link: http://blog.51cto.com/molewan/1765181. Please contact the original author for reprint permission.