heartbeat2: configuring an HA httpd service with CRM via the GUI
1. Overall resource architecture:
1) Two httpd servers plus one NFS shared-storage server.
2) Address assignment:
node1.playground.com (httpd)       192.168.253.133
node2.playground.com (httpd)       192.168.253.134
node3.playground.com (NFS server)  192.168.253.135
VIP on node1/node2: 192.168.253.100
2. Install the httpd service on node1 and node2; use the default DocumentRoot = /var/www/html.
3. Configure the NFS service on node3:

# yum install nfs-utils
# vim /etc/exports
/var/www/share    192.168.253.133(rw) 192.168.253.134(rw)
# echo "web from NFS" > /var/www/share/index.html
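Before handing the export to the cluster, it is worth verifying it by hand. A minimal sketch (assuming the nfs service is running on node3; these commands act on live hosts, so no output is shown):

```shell
# on node3: create the directory if needed and apply the export table
mkdir -p /var/www/share
exportfs -ra                      # re-export everything listed in /etc/exports

# on node1 or node2: confirm the export is visible and mountable
showmount -e 192.168.253.135
mount -t nfs 192.168.253.135:/var/www/share /mnt
umount /mnt
```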
4. Install heartbeat2 and the GUI components on node1 and node2

# yum install perl-TimeDate net-snmp-libs libnet PyXML
# rpm -ivh heartbeat-2.1.4-12.el6.x86_64.rpm heartbeat-pils-2.1.4-12.el6.x86_64.rpm heartbeat-stonith-2.1.4-12.el6.x86_64.rpm
Install heartbeat-gui:
# yum install pygtk2-libglade
# rpm -ivh heartbeat-gui-2.1.4-12.el6.x86_64.rpm
5. Edit the configuration files on node1 and sync them to the other node
# vim /etc/ha.d/authkeys
auth 2
#1 crc
2 sha1 8499636794b07630af98
#3 md5 Hello!

# vim /etc/ha.d/ha.cf
logfile /var/log/ha-log
keepalive 2
deadtime 15
warntime 10
udpport 694
mcast eth0 225.0.130.1 694 1 0
auto_failback on
node node1.playground.com
node node2.playground.com
ping 192.168.253.2
crm on    ## enable the CRM manager; resources are no longer managed through haresources

Use this script to propagate the configuration files to the other nodes; it also sets the required mode 600 on authkeys automatically:
# /usr/lib64/heartbeat/ha_propagate
Propagating HA configuration files to node node2.playground.com.
ha.cf       100%   10KB  10.4KB/s   00:00
authkeys    100%  660     0.6KB/s   00:00
6. Set up the GUI management user and start the heartbeat interface

# tail -1 /etc/passwd
hacluster:x:495:495:heartbeat user:/var/lib/heartbeat/cores/hacluster:/sbin/nologin
# passwd hacluster    ## set a password for hacluster; it is used later to manage heartbeat from the GUI
# service heartbeat start ; ssh node2.playground.com 'service heartbeat start'    ## heartbeat must be running before the GUI can connect
# hb_gui &
In the GUI, click the Connect button and enter the password just set to begin configuring.
7. Configure the resource group

Right-click Resources and add a resource group named webservice.
Add the following resources to the webservice group, in this order:
1) webip: choose the IPaddr2 type and add these parameters:
    ip 192.168.253.100
    nic eth0
    cidr_netmask 24
    iflabel 0
2) storage: choose the Filesystem type and add these parameters:
    device 192.168.253.135:/var/www/share
    directory /var/www/html
    fstype nfs
3) webserver: simply choose the httpd type.
Then right-click the resource group and start it; the three resources start in order and end up running on node2. Web access from a client also works normally.
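A quick way to confirm that failover actually works is to stop heartbeat on the node currently holding the resources and watch them migrate. A sketch (live cluster commands, shown without captured output):

```shell
# on the node currently running the webservice group
service heartbeat stop

# on the surviving node: the group should move over after deadtime (15s)
crm_mon -1                      # one-shot cluster status
ip addr show eth0               # the VIP 192.168.253.100 should now be here

# from a client: the page is still served from the NFS export
curl http://192.168.253.100/
```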
heartbeat2: configuring an HA mysql service with CRM via the GUI
1. Overall resource architecture:
1) Two mysqld servers plus one NFS shared-storage server.
2) Address assignment:
node1.playground.com (mysqld)      192.168.253.133
node2.playground.com (mysqld)      192.168.253.134
node3.playground.com (NFS server)  192.168.253.135
VIP on node1/node2: 192.168.253.100
2. Install the MySQL server on node1 and node2 from the binary tarball. The installation procedure has been covered many times before and is not repeated here. The key point: the group ID and user ID must be specified explicitly.

## node1, node2 and node3 must all create the mysql run-as user with the same UID and GID
# groupadd -g 600 mysql
# useradd -u 600 -g 600 mysql
# grep mysql /etc/passwd
mysql:x:600:600::/home/mysql:/bin/bash
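NFS identifies users purely by numeric UID/GID, which is why the IDs must match on all three nodes: files written by mysql on node1 would otherwise map to a different owner on node2. A small self-contained check of the passwd fields (the entry string is the one from the grep output above):

```shell
# passwd fields are colon-separated: name:passwd:UID:GID:gecos:home:shell
entry='mysql:x:600:600::/home/mysql:/bin/bash'
uid=$(echo "$entry" | cut -d: -f3)
gid=$(echo "$entry" | cut -d: -f4)
echo "uid=$uid gid=$gid"    # prints: uid=600 gid=600
```

On the live nodes the equivalent check is simply `id -u mysql` and `id -g mysql` on each host.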
3. Configure NFS on node3

# yum install nfs-utils
# vim /etc/exports
/var/www/share    192.168.253.133(rw,no_root_squash) 192.168.253.134(rw,no_root_squash)
# setfacl -m u:mysql:rwx /var/www/share
# su - mysql
$ cd /var/www/share
$ mkdir binlog mysqldata relaylog
4. On node1, mount the shared directory from the NFS server at /data and initialize the database.

# mount -t nfs 192.168.253.135:/var/www/share /data
Then run the initialization script. The configuration file already sets the target locations, so the script can be run as-is:
log-bin=/data/binlog/master2-bin
datadir=/data/mysqldata
user=mysql
After both nodes have mounted and tested the share successfully, continue with the steps below.
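For clarity, the relevant my.cnf fragment with all state directories pointing at the NFS mount (the section header is added here; the three option lines are the ones quoted above):

```
[mysqld]
datadir=/data/mysqldata
log-bin=/data/binlog/master2-bin
user=mysql
```

Because both nodes mount the same /data, whichever node holds the resources sees the same databases and binlogs.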
5. Install heartbeat2 and the graphical interface on node1 and node2

# yum install perl-TimeDate net-snmp-libs libnet PyXML
# rpm -ivh heartbeat-2.1.4-12.el6.x86_64.rpm heartbeat-pils-2.1.4-12.el6.x86_64.rpm heartbeat-stonith-2.1.4-12.el6.x86_64.rpm
Install heartbeat-gui:
# yum install pygtk2-libglade
# rpm -ivh heartbeat-gui-2.1.4-12.el6.x86_64.rpm
6. Edit the configuration files on node1 and sync them

# vim /etc/ha.d/authkeys
auth 2
#1 crc
2 sha1 8499636794b07630af98
#3 md5 Hello!

# vim /etc/ha.d/ha.cf
logfile /var/log/ha-log
keepalive 2
deadtime 15
warntime 10
udpport 694
mcast eth0 225.0.130.1 694 1 0
auto_failback on
node node1.playground.com
node node2.playground.com
ping 192.168.253.2
crm on    ## enable the CRM manager; resources are no longer managed through haresources

# /usr/lib64/heartbeat/ha_propagate
Propagating HA configuration files to node node2.playground.com.
ha.cf       100%   10KB  10.4KB/s   00:00
authkeys
7. Start the heartbeat service on node1 and node2

# service heartbeat start
# ssh node2.playground.com 'service heartbeat start'
8. Set a password for the hb_gui user and start hb_gui to configure the HA

# tail -1 /etc/passwd
hacluster:x:495:495:heartbeat user:/var/lib/heartbeat/cores/hacluster:/sbin/nologin
# passwd hacluster    ## set a password for hacluster; it is used later to manage heartbeat from the GUI
# hb_gui &
In the GUI, click the Connect button and enter the password just set to begin configuring.
9. Configure resources

Right-click Resources and add a resource group named mysqlservice. Add the following resources to the group, in this order:
mysql_ip: type IPaddr2, parameters:
    ip 192.168.253.100
    nic eth0
    cidr_netmask 24
    iflabel 0
mysql_storage: type Filesystem, parameters:
    device 192.168.253.135:/var/www/share
    directory /data
    fstype nfs
mysql_service: type mysqld
Then start the resource group.
10. Grant a user remote login privileges and log in remotely to the MySQL service served on the VIP

MariaDB [(none)]> GRANT ALL ON *.* TO 'tester'@'192.168.%.%' IDENTIFIED BY 'test';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

Test a remote login from node3:
# mysql -utester -h192.168.253.100 -ptest
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 4
Server version: 10.1.9-MariaDB-log MariaDB Server

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>
heartbeat2: configuring an LVS-DR web service with CRM via the GUI, with a highly available director
1. Server address assignment:

node1.playground.com  192.168.253.133  director1
node2.playground.com  192.168.253.134  director2
node3.playground.com  192.168.253.135  RealServer1
node4.playground.com  192.168.253.136  RealServer2
VIP                   192.168.253.100
Sync the first four address/hostname pairs to every host, and set each machine's hostname accordingly.
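Name resolution must agree on every node, so the four host/address pairs go into /etc/hosts on each machine. A sketch (the short aliases are an assumption, not part of the original setup):

```
192.168.253.133  node1.playground.com  node1
192.168.253.134  node2.playground.com  node2
192.168.253.135  node3.playground.com  node3
192.168.253.136  node4.playground.com  node4
```

On RHEL6 the hostname itself is set with `hostname node1.playground.com` and made persistent via HOSTNAME= in /etc/sysconfig/network.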
2. First configure the two RealServers

RS1 and RS2 are configured identically:
# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
# echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
# echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
# ifconfig lo:0 192.168.253.100 netmask 255.255.255.255 broadcast 192.168.253.100 up
# route add -host 192.168.253.100 dev lo:0
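The echo and ifconfig settings above are lost on reboot. A hedged sketch of making them persistent on RHEL6 (the file locations are standard; using rc.local for the lo:0 address is one common choice, not the only one):

```shell
# /etc/sysctl.conf additions — applied at boot, or immediately with `sysctl -p`
cat >> /etc/sysctl.conf <<'EOF'
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_ignore = 1
EOF
sysctl -p

# /etc/rc.d/rc.local — restore the VIP on lo:0 after boot
cat >> /etc/rc.d/rc.local <<'EOF'
ifconfig lo:0 192.168.253.100 netmask 255.255.255.255 broadcast 192.168.253.100 up
route add -host 192.168.253.100 dev lo:0
EOF
```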
3. Configure the web service and test pages

RS1 (node3):
# echo "web from RS1" > /var/www/html/index.html
# echo "OK" > /var/www/html/.test.html
RS2 (node4):
# echo "web from RS2" > /var/www/html/index.html
# echo "OK" > /var/www/html/.test.html
The .test.html page serves as the health-check marker for the RealServers.
4. Install heartbeat and the graphical interface on node1 and node2 exactly as above, with the same configuration files and the same GUI user password; the details are not repeated here.
5. Also install heartbeat-ldirectord on node1 and node2; it builds the ipvs rules and monitors the health of the RealServers.

# yum install heartbeat-ldirectord-2.1.4-12.el6.x86_64.rpm
6. Configure ldirectord

# cp /usr/share/doc/heartbeat-ldirectord-2.1.4/ldirectord.cf /etc/ha.d/
# vim /etc/ha.d/ldirectord.cf
# Global Directives
checktimeout=3
checkinterval=1
fallback=127.0.0.1:80
autoreload=yes
logfile="/var/log/ldirectord.log"
quiescent=yes
# Sample for an http virtual service
virtual=192.168.253.100:80
        real=192.168.253.135:80 gate
        real=192.168.253.136:80 gate
        fallback=127.0.0.1:80 gate
        service=http
        request=".test.html"
        receive="OK"
        scheduler=rr

Here service=http selects the protocol used to health-check the backend RealServers; request is the test page to fetch and receive is the expected response body. The test pages provided earlier return "OK", so that is the expected value. After editing, sync this file to both directors.
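ldirectord's http check boils down to "GET the request= page and compare the body with receive=". The comparison logic can be simulated locally without a live RealServer (the /tmp path is a stand-in for the page a RealServer would serve):

```shell
# stand-in for the .test.html page a RealServer would return over HTTP
echo "OK" > /tmp/.test.html

# the comparison ldirectord performs on every checkinterval
body=$(cat /tmp/.test.html)
expected="OK"
if [ "$body" = "$expected" ]; then
    echo "real server healthy"          # prints: real server healthy
else
    echo "real server failed"
fi
```

With quiescent=yes, a RealServer that fails this check is kept in the ipvs table with its weight set to 0 rather than being removed outright.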
7. On both directors, configure httpd and add a failover page that shows "maintenance ongoing" when none of the RealServers respond.
8. Start the graphical interface and configure.

# service heartbeat start ; ssh node2.playground.com 'service heartbeat start'
# hb_gui &
Add a group: lvs_httpd, with Ordered: true and Collocated: true.
Add resource ipvsvip:
    ip 192.168.253.100
    nic eth0
    cidr_netmask 32
    iflabel 0
    broadcast 192.168.253.100
Add resource ipvsldirector:
    ldirectord /usr/sbin/ldirectord
    configfile /etc/ha.d/ldirectord.cf
## Once configured, start the resource group.
9. Some tests.

With both RealServers up, requests rotate between them according to the scheduling algorithm. With both RealServers down, requests fall back to the "maintenance ongoing" page.
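The behaviour can be confirmed from the director and from a client. A sketch (live commands, shown without output):

```shell
# on the active director: expect one virtual service 192.168.253.100:80 (rr)
# with two gate (direct-routing) real servers
ipvsadm -L -n

# from a client: with rr scheduling, successive requests alternate RS1/RS2
for i in 1 2 3 4; do curl -s http://192.168.253.100/; done

# stop httpd on both RealServers, then the fallback page should appear
curl -s http://192.168.253.100/
```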
Original article by 以马内利. If you reproduce it, please credit the source: http://www.178linux.com/10990
Comments (1)
Learning a lot from you. One question: running hb_gui & gives me an error; how should I fix it? Xmanager is installed.
[root@node2 ~]# hb_gui &
[1] 2384
[root@node2 ~]# Traceback (most recent call last):
  File "/usr/bin/hb_gui", line 41, in <module>
    import gtk, gtk.glade, gobject
  File "/usr/lib64/python2.6/site-packages/gtk-2.0/gtk/__init__.py", line 64, in <module>
    _init()
  File "/usr/lib64/python2.6/site-packages/gtk-2.0/gtk/__init__.py", line 52, in _init
    _gtk.init_check()
RuntimeError: could not open display