I. Lab topology
II. Preparing the lab environment:
1) Make sure the SQL servers can reach each other by hostname
[root@SQL1 ~]# vim /etc/hosts
172.16.2.13 SQL1.linux.com SQL1
172.16.2.14 SQL2.linux.com SQL2
[root@SQL2 ~]# vim /etc/hosts
172.16.2.13 SQL1.linux.com SQL1
172.16.2.14 SQL2.linux.com SQL2
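The host entries above can be sanity-checked with a short script. This is only a sketch: it writes the two expected lines to a temporary file so it can run anywhere; on the real nodes, point it at /etc/hosts instead.

```shell
# Check that both cluster hostnames appear in the hosts file.
# Assumption: the file contains the two lines added above.
hosts_file=$(mktemp)
cat > "$hosts_file" <<'EOF'
172.16.2.13 SQL1.linux.com SQL1
172.16.2.14 SQL2.linux.com SQL2
EOF
ok=0
for h in SQL1 SQL2; do
    if grep -qw "$h" "$hosts_file"; then
        echo "$h: ok"
        ok=$((ok + 1))
    fi
done
rm -f "$hosts_file"
```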
2) Make sure time is synchronized between the nodes
[root@SQL1 ~]# crontab -e
*/2 * * * * /usr/sbin/ntpdate 172.16.2.15
[root@SQL2 ~]# crontab -e
*/2 * * * * /usr/sbin/ntpdate 172.16.2.15
3) Make sure the nodes can communicate over SSH using key-based authentication
[root@SQL1 ~]# ssh-keygen -P ''
[root@SQL1 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@172.16.2.14
root@172.16.2.14's password:
[root@SQL2 ~]# ssh-keygen -P ''
[root@SQL2 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@172.16.2.13
root@172.16.2.13's password:
4) Test
[root@SQL1 ~]# date; ssh SQL2 'date'
Wed Jul 1 10:59:40 CST 2015
Wed Jul 1 10:59:40 CST 2015
[root@SQL2 ~]# date; ssh SQL1 'date'
Wed Jul 1 11:00:32 CST 2015
Wed Jul 1 11:00:33 CST 2015
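The manual date comparison can be turned into a small skew check. A sketch only: both timestamps are taken locally here so it runs standalone; on the cluster, the second timestamp would come from `ssh SQL2 'date +%s'`.

```shell
# Compare two epoch timestamps and warn if they differ by more than 2 seconds.
t1=$(date +%s)       # local clock
t2=$(date +%s)       # on a real node: t2=$(ssh SQL2 'date +%s')
skew=$(( t1 > t2 ? t1 - t2 : t2 - t1 ))
if [ "$skew" -le 2 ]; then
    echo "clocks in sync (skew ${skew}s)"
else
    echo "WARNING: clock skew is ${skew}s"
fi
```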
III. Installing corosync and pacemaker
Install corosync:
[root@SQL1 ~]# yum -y install corosync
[root@SQL2 ~]# yum -y install corosync
Install pacemaker:
[root@SQL1 ~]# yum -y install pacemaker
[root@SQL2 ~]# yum -y install pacemaker
Configure corosync:
[root@SQL1 ~]# cp /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf
[root@SQL1 ~]# vim /etc/corosync/corosync.conf
compatibility: whitetank            \\ stay compatible with older corosync versions
totem {                             \\ defines how heartbeat messages are exchanged
    version: 2                      \\ totem protocol version 2
    secauth: on                     \\ enable key-based authentication; off by default
    threads: 0                      \\ number of threads used to send heartbeat messages
    interface {
        ringnumber: 0               \\ ring number, starting at 0
        bindnetaddr: 172.16.2.0     \\ network address to bind to; note: a network address, not an IP address
        mcastaddr: 235.250.10.10    \\ multicast address
        mcastport: 5405             \\ multicast port
        ttl: 1
    }
}
logging {
    fileline: off                   \\ default is fine
    to_stderr: no                   \\ whether to send errors to the terminal; default no
    to_logfile: yes                 \\ enable the log file
    logfile: /var/log/cluster/corosync.log   \\ log file location
    debug: off                      \\ whether to log debug messages
    timestamp: on                   \\ whether to timestamp log entries; on by default, costs some I/O
    logger_subsys {
        subsys: AMF
        debug: off
    }
}
service {                           \\ define the pacemaker service
    ver: 0
    name: pacemaker
}
aisexec {                           \\ user and group to run as
    user: root
    group: root
}
Generate the authentication key:
[root@SQL1 ~]# corosync-keygen   \\ this waits for entropy; generating input or disk activity (e.g. downloading a file) makes the random data accumulate faster
Copy the configuration file and key to SQL2:
[root@SQL1 ~]# scp -p /etc/corosync/{authkey,corosync.conf} SQL2:/etc/corosync/
Start corosync:
[root@SQL1 ~]# service corosync start
[root@SQL2 ~]# service corosync start
Check the logs to make sure corosync started correctly:
[root@SQL1 ~]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
Jul 01 11:04:26 corosync [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'
[root@SQL1 ~]# grep TOTEM /var/log/cluster/corosync.log
Jul 01 11:04:26 corosync [TOTEM ] Initializing transport (UDP/IP Multicast).
Jul 01 11:04:26 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Jul 01 11:04:26 corosync [TOTEM ] The network interface [172.16.2.13] is now up.
Jul 01 11:04:26 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jul 01 11:04:42 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
These ERROR entries can be ignored:
[root@SQL1 ~]# grep ERROR /var/log/cluster/corosync.log
Jul 01 11:04:26 corosync [pcmk ] ERROR: process_ais_conf: You have configured a cluster using the Pacemaker plugin for Corosync. The plugin is not supported in this environment and will be removed very soon.
Jul 01 11:04:26 corosync [pcmk ] ERROR: process_ais_conf: Please see Chapter 8 of 'Clusters from Scratch' (http://www.clusterlabs.org/doc) for details on using Pacemaker with CMAN
Jul 01 11:04:50 [3996] SQL1.linux.com pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
Jul 01 11:04:50 [3996] SQL1.linux.com pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
Jul 01 11:04:50 [3996] SQL1.linux.com pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
[root@SQL1 ~]# grep pcmk_startup /var/log/cluster/corosync.log
Jul 01 11:04:26 corosync [pcmk ] info: pcmk_startup: CRM: Initialized
Jul 01 11:04:26 corosync [pcmk ] Logging: Initialized pcmk_startup
Jul 01 11:04:26 corosync [pcmk ] info: pcmk_startup: Maximum core file size is: 18446744073709551615
Jul 01 11:04:26 corosync [pcmk ] info: pcmk_startup: Service: 9
Jul 01 11:04:26 corosync [pcmk ] info: pcmk_startup: Local hostname: SQL1.linux.com
Install crmsh (configure the yum repository: http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/)
[root@SQL1 ~]# vim /etc/yum.repos.d/crmsh.repo
[crmsh]
name=crmsh
baseurl=http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/
enabled=1
gpgcheck=0
[root@SQL1 ~]# yum -y install crmsh
IV. Installing and configuring iSCSI: prepare a spare disk on the iSCSI server ahead of time for iSCSI to export
Install the server side (172.16.2.12):
[root@iscsi ~]# yum -y install scsi-target-utils
[root@iscsi ~]# service tgtd start   \\ start the service
Starting SCSI target daemon: [ OK ]
Install the client side:
[root@SQL1 ~]# yum -y install iscsi-initiator-utils
[root@SQL1 ~]# service iscsi start    \\ script used to discover iscsi devices
[root@SQL1 ~]# service iscsid start   \\ iscsi service startup script
[root@SQL2 ~]# yum -y install iscsi-initiator-utils
[root@SQL2 ~]# service iscsi start
[root@SQL2 ~]# service iscsid start
Server-side configuration:
There are two approaches:
First: edit /etc/tgt/targets.conf; iSCSI targets defined in the configuration file survive a system reboot.
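For reference, the equivalent persistent definition in /etc/tgt/targets.conf would look roughly like this (a sketch using the target name, backing device, and network from this lab; restart tgtd after editing):

```
<target iqn.2015-07.com.mylinux:t1>
    backing-store /dev/sdb1
    initiator-address 172.16.2.0/24
</target>
```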
Second: create targets with the tgtadm command-line tool; targets configured this way are lost after a reboot (the running configuration can be saved to the config file with `tgt-admin --dump` if needed).
Here we use the tgtadm command-line tool:
[root@iscsi ~]# tgtadm -L iscsi -m target -o new -t 1 -T iqn.2015-07.com.mylinux:t1   \\ create the target; see tgtadm -h for help
[root@iscsi ~]# tgtadm -L iscsi -m target -o show   \\ show the target just created
Target 1: iqn.2015-07.com.mylinux:t1    \\ target name and ID 1
    System information:                 \\ system information
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0                          \\ logical unit number; 0 is reserved by default
            Type: controller
            SCSI ID: IET 00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null    \\ backing device type
            Backing store path: None    \\ location of the backing block device
            Backing store flags:        \\ backing device flags
    Account information:                \\ user accounts allowed to access the target
    ACL information:                    \\ IP ranges allowed to access the target
[root@iscsi ~]# tgtadm -L iscsi -m logicalunit -o new -t 1 -l 1 -b /dev/sdb1   \\ add a disk device to the target
[root@iscsi ~]# tgtadm -L iscsi -m target -o bind -t 1 -I 172.16.2.0/24   \\ authorize access to the target; by default no one is allowed
[root@iscsi ~]# tgtadm -L iscsi -m target -o show   \\ show the target again
Target 1: iqn.2015-07.com.mylinux:t1
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET 00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1                          \\ logical unit number
            Type: disk
            SCSI ID: IET 00010001
            SCSI SN: beaf11
            Size: 10742 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr    \\ type
            Backing store path: /dev/sdb1   \\ the device just added
            Backing store flags:
    Account information:
    ACL information:
        172.16.2.0/24                   \\ authorized network
iSCSI client configuration:
[root@SQL1 ~]# echo "InitiatorName=`iscsi-iname -p iqn.2015-07.com.sql1`" > /etc/iscsi/initiatorname.iscsi   \\ set a new initiator name
[root@SQL1 ~]# cat /etc/iscsi/initiatorname.iscsi   \\ check the iscsi name
InitiatorName=iqn.2015-07.com.sql1:97b0de58129   \\ the generated name
[root@SQL2 ~]# echo "InitiatorName=`iscsi-iname -p iqn.2015-07.com.sql2`" > /etc/iscsi/initiatorname.iscsi
[root@SQL2 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2015-07.com.sql2:313bbc508b59
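iscsi-iname generates the random suffix seen above. The sketch below imitates the shape of its output by hand (the prefix is this lab's; this is a fallback illustration, not the tool itself):

```shell
# Build an initiator-name string with a random 12-hex-digit suffix,
# similar in shape to iscsi-iname output.
suffix=$(od -An -tx1 -N6 /dev/urandom | tr -d ' \n')
name="InitiatorName=iqn.2015-07.com.sql1:${suffix}"
echo "$name"
```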
Discover the iSCSI devices from the clients:
[root@SQL1 ~]# iscsiadm -m discovery -t st -p 172.16.2.12   \\ see the iscsiadm man page for command details
Starting iscsid: [ OK ]
172.16.2.12:3260,1 iqn.2015-07.com.mylinux:t1   \\ the device exported by the iscsi server has been found
[root@SQL2 ~]# iscsiadm -m discovery -t st -p 172.16.2.12
Starting iscsid: [ OK ]
172.16.2.12:3260,1 iqn.2015-07.com.mylinux:t1
Log in to the discovered target:
[root@SQL1 ~]# iscsiadm -m node -T iqn.2015-07.com.mylinux:t1 -p 172.16.2.12 -l
Logging in to [iface: default, target: iqn.2015-07.com.mylinux:t1, portal: 172.16.2.12,3260] (multiple)
Login to [iface: default, target: iqn.2015-07.com.mylinux:t1, portal: 172.16.2.12,3260] successful.
fdisk -l now shows an additional local disk:
[root@SQL1 ~]# fdisk -l | grep "/dev/sd[a-z]"
Disk /dev/sda: 42.9 GB, 42949672960 bytes
/dev/sda1 * 1 64 512000 83 Linux
/dev/sda2 64 5222 41430016 8e Linux LVM
Disk /dev/sdb: 10.7 GB, 10742183424 bytes   \\ this is the block device exported by the iscsi server
After partitioning and formatting the device:
[root@SQL1 ~]# fdisk -l | grep "/dev/sd[a-z][0-9]"
/dev/sda1 * 1 64 512000 83 Linux
/dev/sda2 64 5222 41430016 8e Linux LVM
/dev/sdb1 1 10244 10489840 83 Linux   \\ the partition after formatting
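The partition-and-format step itself is not shown above; roughly, it is `fdisk /dev/sdb` to create /dev/sdb1, followed by a mkfs. The sketch below demonstrates the formatting on a loopback image file instead of the real disk so it can run anywhere; the size, path, and choice of ext4 are placeholders for illustration.

```shell
# Format a stand-in image file the way /dev/sdb1 would be formatted.
# Assumption: mkfs.ext4 (e2fsprogs) is installed.
PATH="$PATH:/sbin:/usr/sbin"
img=$(mktemp)
truncate -s 64M "$img"        # stand-in for the ~10 GB iSCSI disk
mkfs.ext4 -q -F "$img"        # -F permits formatting a regular file
fstype=$(file -b "$img")
echo "$fstype"
rm -f "$img"
```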
That's all for this installment; the rest of the story continues in the next part.
Original article by 马行空. If reposting, please credit the source: http://www.178linux.com/5883