Steps to Build a GlusterFS Cluster from RPM Packages
I. Environment Preparation
1. Prepare the RPM packages required for GlusterFS
centos-release-gluster6-1.0-1.el7.centos.noarch.rpm
centos-release-storage-common-2-2.el7.centos.noarch.rpm
epel-release-6-8.noarch.rpm
epel-release-7-11.noarch.rpm
glusterfs-6.9-1.el7.x86_64.rpm
glusterfs-api-6.9-1.el7.x86_64.rpm
glusterfs-cli-6.9-1.el7.x86_64.rpm
glusterfs-client-xlators-6.9-1.el7.x86_64.rpm
glusterfs-fuse-6.9-1.el7.x86_64.rpm
glusterfs-libs-6.9-1.el7.x86_64.rpm
glusterfs-server-6.9-1.el7.x86_64.rpm
userspace-rcu-0.10.0-3.el7.x86_64.rpm
2. Upload the packages to each of the following hosts (an scp sketch follows the list):
192.168.105.71
192.168.105.72
192.168.105.73
192.168.105.74
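A small loop can push the packages to all four hosts. The target directory /home/xgz/gluster below is an assumption (any directory works, as long as the later rpm commands are run from it):
for h in 192.168.105.71 192.168.105.72 192.168.105.73 192.168.105.74; do
    scp *.rpm xgz@$h:/home/xgz/gluster/
done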
3. Edit the /etc/hosts file
# Keep the hosts entries identical on all nodes; node1 is shown as the example
# Hostname bindings are not mandatory; the trusted storage pool can also be formed later using IP addresses
[xgz@node1 ~]$ sudo vim /etc/hosts
192.168.105.71 node1
192.168.105.72 node2
192.168.105.73 node3
192.168.105.74 node4
Repeat the same edit on the other three nodes.
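A quick way to verify name resolution from node1 (the other nodes can be checked the same way):
[xgz@node1 ~]$ for n in node2 node3 node4; do ping -c 1 $n; done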
4. Check the firewall status; if it is active, stop and disable it (commands follow the status output)
[xgz@node1 ~]$ systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
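If the service is active instead, stop and disable it. Alternatively, if the firewall must stay on, the Gluster ports can be opened instead: 24007 is the glusterd management port (24008 is used for RDMA), and bricks are assigned TCP ports from 49152 upward, so widen the range below to match your brick count:
[xgz@node1 ~]$ sudo systemctl stop firewalld.service
[xgz@node1 ~]$ sudo systemctl disable firewalld.service
# alternative: keep firewalld and open the Gluster ports
[xgz@node1 ~]$ sudo firewall-cmd --permanent --add-port=24007-24008/tcp --add-port=49152-49251/tcp
[xgz@node1 ~]$ sudo firewall-cmd --reload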
5. Time synchronization
node1 acts as the chrony time server; make the following changes on node1:
[xgz@node1 ~]$ sudo vim /etc/chrony.conf
allow 0.0.0.0/0 # add: networks allowed to sync from this server
# Listen for commands only on localhost.
bindcmdaddress 127.0.0.1
bindcmdaddress ::1
# Serve time even if not synchronized to any NTP server.
local stratum 10 # uncomment this line
[xgz@node1 ~]$ sudo systemctl restart chronyd.service
[xgz@node1 ~]$ sudo systemctl enable chronyd.service
On node2, node3, and node4, apply the following configuration (node2 shown as the example):
[xgz@node2 ~]$ sudo vim /etc/chrony.conf
#server 1.rhel.pool.ntp.org iburst
#server 2.rhel.pool.ntp.org iburst
#server 3.rhel.pool.ntp.org iburst
server 192.168.105.71 iburst
Restart the chrony service:
[xgz@node2 ~]$ sudo systemctl restart chronyd.service
[xgz@node2 ~]$ sudo systemctl enable chronyd.service
Check the time synchronization status:
[xgz@node2 ~]$ sudo chronyc sources -v
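Once the client has locked on, the 192.168.105.71 line in the sources output is marked with ^*; chronyc tracking shows the current offset against node1:
[xgz@node2 ~]$ chronyc tracking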
II. Install and Configure the Cluster
1. Install the packages prepared in step I.1 on all four nodes (node1 shown as the example)
[xgz@node1 gluster]$ sudo rpm -ivh glusterfs-*.rpm --force --nodeps
[xgz@node1 gluster]$ sudo rpm -ivh centos-release-gluster6-1.0-1.el7.centos.noarch.rpm --nodeps --force
[xgz@node1 gluster]$ sudo rpm -ivh centos-release-storage-common-2-2.el7.centos.noarch.rpm --nodeps --force
[xgz@node1 gluster]$ sudo rpm -ivh epel-release-7-11.noarch.rpm --nodeps --force
[xgz@node1 gluster]$ sudo rpm -ivh userspace-rcu-0.10.0-3.el7.x86_64.rpm --nodeps --force
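Since --nodeps --force bypasses RPM's dependency checks, it is worth confirming afterwards that the packages landed and the binary runs:
[xgz@node1 gluster]$ rpm -qa | grep glusterfs
[xgz@node1 gluster]$ glusterfs --version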
2. Create a symlink
[xgz@node1 gluster]$ cd /usr/lib64/security; sudo ln -s pam_tally2.so pam_tally.so
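The purpose of this link is environment-specific (something in this setup still references the legacy pam_tally.so name, which newer pam packages no longer ship); confirm it was created:
[xgz@node1 gluster]$ ls -l /usr/lib64/security/pam_tally.so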
3. Start the service
[xgz@node1 ~]$ sudo systemctl start glusterd.service
[xgz@node1 ~]$ sudo systemctl enable glusterd.service
[xgz@node1 ~]$ sudo systemctl status glusterd.service
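glusterd listens for management traffic on TCP 24007; a quick check that it is up and listening:
[xgz@node1 ~]$ sudo ss -tlnp | grep 24007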
4. Form the trusted storage pool
[xgz@node1 ~]$ sudo gluster peer probe node2
peer probe: success.
[xgz@node1 ~]$ sudo gluster peer probe node3
peer probe: success.
[xgz@node1 ~]$ sudo gluster peer probe node4
peer probe: success.
[xgz@node1 ~]$ sudo gluster peer status # check peer status
Number of Peers: 3

Hostname: node2
Uuid: 8a73defa-2e09-4cf5-bee4-ea50ff7e8793
State: Peer in Cluster (Connected)

Hostname: node3
Uuid: 8f59a7fc-620e-46a7-8e01-229b32e6cbd3
State: Peer in Cluster (Connected)

Hostname: node4
Uuid: a32f7c7b-c3d1-47ec-8649-7616a8d089b7
State: Peer in Cluster (Connected)
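The probes only need to run from one node; every member should now see the whole pool, which can be confirmed from any other node:
[xgz@node2 ~]$ sudo gluster pool list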
III. Create the Distributed Storage Volume
1. Create the distributed replicated volume
With replica 2 and four bricks, the bricks are grouped in listed order into two replica pairs (node1/node2 and node3/node4), giving a 2 x 2 distributed-replicated layout. force is needed here because the bricks sit on the root filesystem, which Gluster would otherwise refuse.
[xgz@node1 ~]$ sudo gluster volume create gfsvolume replica 2 transport tcp node1:/data10 node2:/data10 node3:/data10 node4:/data10 force
volume create: gfsvolume: success: please start the volume to access data
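gluster volume info confirms the layout; for this create command it should report Type: Distributed-Replicate with Number of Bricks: 2 x 2 = 4:
[xgz@node1 ~]$ sudo gluster volume info gfsvolume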
2. Start the volume
[xgz@node1 ~]$ sudo gluster volume list
gfsvolume
[xgz@node1 ~]$ sudo gluster volume start gfsvolume
volume start: gfsvolume: success
Check the volume status:
[xgz@node1 ~]$ sudo gluster volume status
Status of volume: gfsvolume
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick node1:/data10 49152 0 Y 22652
Brick node2:/data10 49152 0 Y 2321
Brick node3:/data10 49152 0 Y 16370
Brick node4:/data10 49152 0 Y 3999
Self-heal Daemon on localhost N/A N/A Y 22673
Self-heal Daemon on node4 N/A N/A Y 4026
Self-heal Daemon on node2 N/A N/A Y 2347
Self-heal Daemon on node3 N/A N/A Y 16397
Task Status of Volume gfsvolume
------------------------------------------------------------------------------
There are no active volume tasks
3. Mount and use the volume
[xgz@node1 ~]$ sudo mkdir /GFS1
[xgz@node1 ~]$ sudo mount -t glusterfs node1:/gfsvolume /GFS1/
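To make the mount persist across reboots, an /etc/fstab entry of the following form can be used (_netdev defers the mount until the network is up):
node1:/gfsvolume  /GFS1  glusterfs  defaults,_netdev  0 0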
4. Test
[xgz@node1 ~]$ cd /GFS1/; sudo touch test{1..100}
Check /data10/ on node1, node2, node3, and node4 to see how the files were distributed.
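Because this is a 2 x 2 distributed-replicated volume, each file lands on exactly one replica pair: node1 and node2 should hold identical subsets, node3 and node4 the complementary subset, and the two subsets together should total 100 files. A quick count on one node from each pair:
[xgz@node1 ~]$ sudo ls /data10 | wc -l
[xgz@node3 ~]$ sudo ls /data10 | wc -l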