105-MHA

1. MHA Overview

1.1 Introduction

        MHA (Master HA) is an open-source MySQL high-availability program that provides automated master failover for MySQL master/slave replication architectures. When MHA detects that the master node has failed, it automatically promotes the slave whose data is closest to the master's to become the new master, and during this process it gathers additional information from the other nodes to avoid data inconsistency. MHA also provides online master switchover, i.e. switching the master/slave roles on demand.

1.2 MHA Service Roles

        MHA Manager (management node): usually deployed on a dedicated machine to manage one or more master/slave clusters; each master/slave cluster is called an application.
        MHA Node (data node): runs on every MySQL server (master, slave, or the manager host); through scripts that can parse and purge logs it implements and speeds up failover.

        MHA architecture diagram: (image: 105-MHA)

1.3 MHA Failover Steps

         When the master node of a MySQL cluster fails, MHA promotes one of the slaves to master according to the steps below, and automatically transfers and fills in the missing data.
          Master switchover / failover diagram: (image: 105-MHA)

             1. When the master fails, MHA looks among the remaining slaves for the latest slave, i.e. the one whose data is closest to the master's, and promotes it to the new master.
             2. It then identifies the data missing from the latest slave, obtains it from the other slaves, and restores the complete data set originally held by the dead master.

1.4 MHA Components

 1.4.1 Manager node utilities:

            – masterha_check_ssh: checks the SSH connectivity between nodes that MHA depends on;
            – masterha_check_repl: checks the MySQL replication environment;
            – masterha_manager: the main MHA service program;
            – masterha_check_status: checks the running status of MHA;
            – masterha_master_monitor: checks the availability of the MySQL master node;
            – masterha_master_switch: performs a master switchover (see the sketch after this list);
            – masterha_conf_host: adds or removes configured hosts;
            – masterha_stop: stops the MHA service;
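
            The utilities above are invoked from the manager node. Besides the automatic failover driven by masterha_manager (demonstrated in section 3.3), a planned online switchover is done with masterha_master_switch. A minimal sketch, with flag names taken from the upstream MHA documentation and the app1.cnf file created in section 3.2.5; verify the options against your installed version:

                # planned (online) switchover: promote 10.1.252.218 and turn the current master into a slave
                masterha_master_switch --conf=/etc/masterha/app1.cnf \
                    --master_state=alive \
                    --new_master_host=10.1.252.218 \
                    --orig_master_is_new_slave \
                    --running_updates_limit=10000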

1.4.2 Node utilities:

            – save_binary_logs: saves and copies the master's binary logs;
            – apply_diff_relay_logs: identifies differential relay log events and applies them to the other slaves;
            – filter_mysqlbinlog: strips unnecessary ROLLBACK events (no longer used by MHA);
            – purge_relay_logs: purges relay logs without blocking the SQL thread (see the example after this list);
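
            Because the slaves run with relay_log_purge=0 (section 2.1.1), relay logs are not removed automatically, so purge_relay_logs is usually run periodically on each slave. A hedged sketch, using the mhaadmin account created later in this article and the flag names from the tool's documentation:

                # run on each slave, e.g. from cron, ideally not on all slaves at the same moment
                purge_relay_logs --user=mhaadmin --password=000000 --disable_relay_log_purge --workdir=/var/tmp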

1.4.3 Custom extensions (a configuration sketch follows the list):

            – secondary_check_script: checks the availability of the master over multiple network routes;
            – shutdown_script: forcibly shuts down the master node;
            – master_ip_failover_script: updates the master IP used by the application;
            – report_script: sends reports;
            – init_conf_load_script: loads initial configuration parameters;
            – master_ip_online_change_script: updates the master IP address during an online switchover;
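
            These extension points are not standalone daemons; they are hook scripts referenced from the [server default] section of the application configuration file. A sketch of how they might be wired in; the script paths are purely hypothetical placeholders, while masterha_secondary_check ships with the manager package:

                [server default]
                # optional hooks; the parameter names are the ones listed above
                master_ip_failover_script=/usr/local/bin/master_ip_failover
                shutdown_script=/usr/local/bin/power_manager
                report_script=/usr/local/bin/send_report
                secondary_check_script=masterha_secondary_check -s node2 -s node4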

2. MHA Environment Requirements

2.1 MySQL replication environment preparation

2.1.1 MHA MySQL replication requirements:

            1. Binary logging and relay logging must be enabled on every node:
                    log-bin=master-bin
                    relay-log=relay-bin
            2. All slaves must run in read-only mode and must disable automatic relay log purging:
                    read-only=1
                    relay_log_purge=0
            3. Every node must have a unique, non-conflicting server ID within the cluster:
                    server_id=#
            4. MHA works on top of MySQL master/slave replication, so before enabling MHA make sure that every replication node is working correctly.
            5. Make sure the slave I/O and SQL threads are running normally on every slave (a quick verification example follows this list).
            6. Every MySQL node must grant a user with administrative privileges so that any node can access any other node:
                mysql> GRANT ALL ON *.* TO 'mhaadmin'@'IP_ADDR' IDENTIFIED BY 'mhapass';
            7. On the manager node, generate an SSH key pair and copy it to all the other nodes, so that the nodes can communicate with each other using the same key.
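
            A minimal verification pass on a slave once the settings above are in place (plain MariaDB statements; the expected values follow the requirements listed above):

                MariaDB [(none)]> SHOW VARIABLES LIKE 'read_only';        -- should be ON on every slave
                MariaDB [(none)]> SHOW VARIABLES LIKE 'relay_log_purge';  -- should be OFF on every slave
                MariaDB [(none)]> SHOW VARIABLES LIKE 'server_id';        -- must be unique per node
                MariaDB [(none)]> SHOW SLAVE STATUS\G
                -- Slave_IO_Running and Slave_SQL_Running should both be Yes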

2.1.2 MHA initialization and configuration files

            The manager node needs a dedicated configuration file for every monitored master/slave cluster, and all master/slave
            clusters may additionally share one global configuration file. The global configuration file defaults to /etc/masterha_default.cnf (optional).
            If only one master/slave cluster is monitored, the default settings for each server can be provided directly in the
            application configuration file, and the path of each application's configuration file can be chosen freely.
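
            A sketch of this layout, using the values that appear later in this article; whether the shared settings live in the global file or in the application file is only an illustration:

                # /etc/masterha_default.cnf  (optional, shared by every application)
                [server default]
                user=mhaadmin
                password=000000
                ssh_user=root

                # /etc/masterha/app1.cnf  (one file per application; the full version is in section 3.2.5)
                [server1]
                hostname=10.1.249.184
                [server2]
                hostname=10.1.252.218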
            

3. MHA Example

3.1 Environment:

                 master-node1 : 10.1.249.184
                 slave-node2  : 10.1.252.218
                 slave-node4  : 10.1.249.70
                 manager-node : 10.1.249.83
 

3.2 Setting up MySQL master/slave replication

3.2.1 master-node1 configuration:

        vim /etc/my.cnf

  [mysqld]
  datadir=/var/lib/mysql
  socket=/var/lib/mysql/mysql.sock
  # Disabling symbolic-links is recommended to prevent assorted security risks
  symbolic-links=0
  # Settings user and group are ignored when systemd is used.
  # If you need to run mysqld under a different user or group,
  # customize your systemd unit file for mariadb according to the
  # instructions in http://fedoraproject.org/wiki/Systemd
  skip_name_resolve=ON
  innodb_file_per_table=ON
  server_id=1
  relay-log=relay-bin
  log-bin=master-bin
             Start mariadb and open a mysql session; create the accounts and
             query the current binlog file name and binlog position:
  MariaDB [(none)]> GRANT ALL ON *.* TO 'mhaadmin'@'10.1.%.%' IDENTIFIED BY '000000';
  MariaDB [(none)]> GRANT REPLICATION CLIENT,REPLICATION SLAVE ON *.* TO 'repluser'@'10.1.%.%' IDENTIFIED BY '000000';
  MariaDB [(none)]> SHOW MASTER STATUS;
  +-------------------+----------+--------------+------------------+
  | File              | Position | Binlog_Do_DB | Binlog_Ignore_DB |
  +-------------------+----------+--------------+------------------+
  | master-bin.000003 |      465 |              |                  |
  +-------------------+----------+--------------+------------------+
  1 row in set (0.00 sec)

3.2.2 slave-node configuration:

        vim /etc/my.cnf

  [mysqld]
  datadir=/var/lib/mysql
  socket=/var/lib/mysql/mysql.sock
  # Disabling symbolic-links is recommended to prevent assorted security risks
  symbolic-links=0
  # Settings user and group are ignored when systemd is used.
  # If you need to run mysqld under a different user or group,
  # customize your systemd unit file for mariadb according to the
  # instructions in http://fedoraproject.org/wiki/Systemd
  skip_name_resolve=ON
  innodb_file_per_table=ON
  server_id=2
  relay-log=relay-bin
  log-bin=master-bin
  relay_log_purge=0
  read_only=1
            Start mariadb and open a mysql session:
  MariaDB [(none)]> CHANGE MASTER TO MASTER_HOST='10.1.249.184',MASTER_USER='repluser',MASTER_PASSWORD='000000',MASTER_LOG_FILE='master-bin.000003',MASTER_LOG_POS=465;
  MariaDB [(none)]> GRANT ALL ON *.* TO 'mhaadmin'@'10.1.%.%' IDENTIFIED BY '000000';
  MariaDB [(none)]> START SLAVE;
  MariaDB [(none)]> SHOW SLAVE STATUS\G
    *************************** 1. row ***************************
    Slave_IO_State: Waiting for master to send event
    Master_Host: 10.1.249.184
    Master_User: mhaadmin
    Master_Port: 3306
    Connect_Retry: 60
    Master_Log_File: master-bin.000003
    Read_Master_Log_Pos: 633
    Relay_Log_File: relay-bin.000002
    Relay_Log_Pos: 698
    Relay_Master_Log_File: master-bin.000003
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
    Replicate_Do_DB:
    Replicate_Ignore_DB:
    Replicate_Do_Table:
    Replicate_Ignore_Table:
    Replicate_Wild_Do_Table:
    Replicate_Wild_Ignore_Table:
    Last_Errno: 0
    Last_Error:
    Skip_Counter: 0
    Exec_Master_Log_Pos: 633
    Relay_Log_Space: 986
    Until_Condition: None
    Until_Log_File:
    Until_Log_Pos: 0
    Master_SSL_Allowed: No
    Master_SSL_CA_File:
    Master_SSL_CA_Path:
    Master_SSL_Cert:
    Master_SSL_Cipher:
    Master_SSL_Key:
    Seconds_Behind_Master: 0
    Master_SSL_Verify_Server_Cert: No
    Last_IO_Errno: 0
    Last_IO_Error:
    Last_SQL_Errno: 0
    Last_SQL_Error:
    Replicate_Ignore_Server_Ids:
    Master_Server_Id: 1
    1 row in set (0.00 sec)


3.2.3 Replication test:

            Create an arbitrary database on master-node1 and test that it is replicated:
  MariaDB [(none)]> SHOW DATABASES;
  +--------------------+
  | Database           |
  +--------------------+
  | information_schema |
  | mysql              |
  | performance_schema |
  | test               |
  | test2              |
  +--------------------+
  5 rows in set (0.00 sec)
  MariaDB [(none)]>
            Check on any slave that the new database has been replicated successfully (see the example below).
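
            A minimal version of that check, assuming test2 is the database that was just created on the master for this test:

                -- on master-node1
                CREATE DATABASE test2;
                -- on slave-node2 (or slave-node4); the database should appear shortly afterwards
                SHOW DATABASES LIKE 'test2';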

3.2.4 manager-node configuration:

            1. Create the SSH keys:
  # generate the key pair
  [root@node5 .ssh]# ssh-keygen -t rsa -P ''
  Generating public/private rsa key pair.
  Enter file in which to save the key (/root/.ssh/id_rsa):
  Your identification has been saved in /root/.ssh/id_rsa.
  Your public key has been saved in /root/.ssh/id_rsa.pub.
  The key fingerprint is:
  7f:d0:79:0b:ad:30:fa:0e:61:0d:4c:13:dc:60:1f:20 root@node5
  The key's randomart image is:
  +--[ RSA 2048]----+
  | E.B=.           |
  | =.o..           |
  | o .             |
  | o . o           |
  | S = + o         |
  | . + + + .       |
  | o . o .         |
  | o .             |
  | .o              |
  +-----------------+

  # append the public key to the authorized keys file
  [root@node5 .ssh]# cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys

  # restrict the permissions on the authorized keys file
  [root@node5 .ssh]# chmod go= /root/.ssh/authorized_keys

  # copy the authorized keys file and the private key to every other node
  [root@node5 ~]# scp /root/.ssh/authorized_keys /root/.ssh/id_rsa 10.1.249.70:/root/.ssh/
  [root@node5 ~]# scp /root/.ssh/authorized_keys /root/.ssh/id_rsa 10.1.252.218:/root/.ssh/
  [root@node5 ~]# scp /root/.ssh/authorized_keys /root/.ssh/id_rsa 10.1.249.184:/root/.ssh/
  # after copying, log in to each node once to verify that manager-node can ssh into every other node (see the loop below)
  # note: it is best to clear the files under /root/.ssh/ on all nodes before copying the key files
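
            A small loop like the following can be used for that verification from manager-node (a sketch using the node IPs from this article; BatchMode makes a missing or rejected key fail immediately instead of prompting for a password):

                for ip in 10.1.249.184 10.1.252.218 10.1.249.70; do
                    ssh -o BatchMode=yes root@$ip hostname || echo "ssh to $ip failed"
                done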


                2. Install MHA

                    MHA provides rpm packages, which can be downloaded from the official site; on CentOS 7 the el6 packages can be used directly.
                    The mha4mysql-manager and mha4mysql-node package versions do not have to be identical (a quick check of the installed packages follows the install commands).

                    # the rpm packages are not in the official yum repositories and have to be downloaded manually

                   manager-node :
                                    yum install ./mha4mysql-manager-0.56-0.el6.noarch.rpm ./mha4mysql-node-0.56-0.el6.noarch.rpm -y

                                Note: mha4mysql-manager and mha4mysql-node must be installed together on the manager node, otherwise the installation fails.

                    all other nodes:

                                    yum install ./mha4mysql-node-0.56-0.el6.noarch.rpm
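
                    A quick way to confirm what was installed on each machine (when installing from the local rpm files, yum should pull in Perl dependencies such as perl-DBD-MySQL from the configured repositories):

                                    rpm -qa | grep mha4mysql
                                    # expected on manager-node: mha4mysql-manager and mha4mysql-node
                                    # expected on the data nodes: mha4mysql-node only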
    

3.2.5 Initialize MHA

                1. Create the configuration file (here a single application provides the default configuration for the cluster; the path is /etc/masterha/app1.cnf).

                   # the /etc/masterha/ directory and /data/masterha/app1/ have to be created manually
  vim /etc/masterha/app1.cnf
  [server default]
  user=mhaadmin
  password=000000
  manager_workdir=/data/masterha/app1
  manager_log=/data/masterha/app1/manager.log
  remote_workdir=/data/masterha/app1
  ssh_user=root
  repl_user=repluser
  # must match the password granted to repluser in section 3.2.1
  repl_password=000000
  ping_interval=1
  [server1]
  hostname=10.1.249.184
  candidate_master=1
  #ssh_port=22022
  [server2]
  hostname=10.1.252.218
  candidate_master=1
  #ssh_port=22022
  [server3]
  hostname=10.1.249.70
  #ssh_port=22022
  #no_master=1

               2. Edit /etc/hosts on all nodes:

  127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
  ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
  10.1.249.83 node5.com node5
  10.1.249.184 node1.com node1
  10.1.252.218 node2.com node2
  10.1.249.70 node4.com node4
             

               3. Test the SSH connectivity between the nodes:

  [root@node5 .ssh]# masterha_check_ssh --conf=/etc/masterha/app1.cnf
  Sun Nov 27 17:24:20 2016 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
  Sun Nov 27 17:24:20 2016 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
  Sun Nov 27 17:24:20 2016 - [info] Reading server configuration from /etc/masterha/app1.cnf..
  Sun Nov 27 17:24:20 2016 - [info] Starting SSH connection tests..
  Sun Nov 27 17:24:21 2016 - [debug]
  Sun Nov 27 17:24:20 2016 - [debug] Connecting via SSH from root@10.1.249.184(10.1.249.184:22) to root@10.1.252.218(10.1.252.218:22)..
  Sun Nov 27 17:24:21 2016 - [debug] ok.
  Sun Nov 27 17:24:21 2016 - [debug] Connecting via SSH from root@10.1.249.184(10.1.249.184:22) to root@10.1.249.70(10.1.249.70:22)..
  Warning: Permanently added '10.1.249.70' (ECDSA) to the list of known hosts.
  Sun Nov 27 17:24:21 2016 - [debug] ok.
  Sun Nov 27 17:24:22 2016 - [debug]
  Sun Nov 27 17:24:21 2016 - [debug] Connecting via SSH from root@10.1.252.218(10.1.252.218:22) to root@10.1.249.184(10.1.249.184:22)..
  Sun Nov 27 17:24:21 2016 - [debug] ok.
  Sun Nov 27 17:24:21 2016 - [debug] Connecting via SSH from root@10.1.252.218(10.1.252.218:22) to root@10.1.249.70(10.1.249.70:22)..
  Sun Nov 27 17:24:22 2016 - [debug] ok.
  Sun Nov 27 17:24:22 2016 - [debug]
  Sun Nov 27 17:24:21 2016 - [debug] Connecting via SSH from root@10.1.249.70(10.1.249.70:22) to root@10.1.249.184(10.1.249.184:22)..
  Sun Nov 27 17:24:22 2016 - [debug] ok.
  Sun Nov 27 17:24:22 2016 - [debug] Connecting via SSH from root@10.1.249.70(10.1.249.70:22) to root@10.1.252.218(10.1.252.218:22)..
  Sun Nov 27 17:24:22 2016 - [debug] ok.
  Sun Nov 27 17:24:22 2016 - [info] All SSH connection tests passed successfully.

               4. Check that replication and the connection settings of the MySQL cluster managed by MHA are healthy:

  [root@node5 ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
  ........
  ......
  ......
  Sun Nov 27 19:46:05 2016 - [info] Checking replication health on 10.1.252.218..
  Sun Nov 27 19:46:05 2016 - [info]  ok.
  Sun Nov 27 19:46:05 2016 - [info] Checking replication health on 10.1.249.70..
  Sun Nov 27 19:46:06 2016 - [info]  ok.
  Sun Nov 27 19:46:06 2016 - [warning] master_ip_failover_script is not defined.
  Sun Nov 27 19:46:06 2016 - [warning] shutdown_script is not defined.
  Sun Nov 27 19:46:06 2016 - [info] Got exit code 0 (Not master dead).
  MySQL Replication Health is OK.

                5. Start MHA and check the status of the master node

  [root@node5 .ssh]# nohup masterha_manager --conf=/etc/masterha/app1.cnf > /data/masterha/app1/manager.log 2>&1 &
  [1] 7852
  # nohup: run masterha_manager in the background, otherwise it is terminated when the terminal is closed
  # the redirection sends the startup messages to the specified log file

  # check the status of the master node:
  [root@node5 .ssh]# masterha_check_status --conf=/etc/masterha/app1.cnf
  app1 (pid:7852) is running(0:PING_OK), master:10.1.249.184
  # if the master or the cluster is not working properly, a message such as "app1 is stopped, ..." is shown instead
                  6. To stop MHA, use the masterha_stop command
  [root@node5 .ssh]# masterha_stop --conf=/etc/masterha/app1.cnf
  [root@node5 .ssh]# Stopped app1 successfully.

3.3 Failover Test

3.3.1 Master failover

           1. Stop the mariadb service on the master node

  [root@node1 ~]# ps aux | grep mysql
  mysql 23655 0.0 0.0 113252 456 ? Ss 12:53 0:00 /bin/sh /usr/bin/mysqld_safe --basedir=/usr
  mysql 23872 0.0 12.4 1102560 80756 ? Sl 12:53 0:11 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --log-error=/var/log/mariadb/mariadb.log --pid-file=/var/run/mariadb/mariadb.pid --socket=/var/lib/mysql/mysql.sock
  root 30281 0.0 0.1 112644 952 pts/0 R+ 20:16 0:00 grep --color=auto mysql
  [root@node1 ~]#
  [root@node1 ~]# killall mysqld mysqld_safe
            2. On manager-node, inspect /data/masterha/app1/manager.log
  vim /data/masterha/app1/manager.log
  ......................
  Sun Nov 27 20:19:06 2016 - [info] Master is down!
  Sun Nov 27 20:19:06 2016 - [info] Terminating monitoring script.
  Sun Nov 27 20:19:06 2016 - [info] Got exit code 20 (Master dead).
  Sun Nov 27 20:19:06 2016 - [info] MHA::MasterFailover version 0.56.
  Sun Nov 27 20:19:06 2016 - [info] Starting master failover.
  Sun Nov 27 20:19:06 2016 - [info]
  Sun Nov 27 20:19:06 2016 - [info] * Phase 1: Configuration Check Phase..
  Sun Nov 27 20:19:06 2016 - [info]
  Sun Nov 27 20:19:06 2016 - [info] GTID failover mode = 0
  Sun Nov 27 20:19:06 2016 - [info] Dead Servers:
  Sun Nov 27 20:19:06 2016 - [info] 10.1.249.184(10.1.249.184:3306)
  Sun Nov 27 20:19:06 2016 - [info] Checking master reachability via MySQL(double check)...
  Sun Nov 27 20:19:06 2016 - [info] ok.
  Sun Nov 27 20:19:06 2016 - [info] Alive Servers:
  Sun Nov 27 20:19:06 2016 - [info] 10.1.252.218(10.1.252.218:3306)
  Sun Nov 27 20:19:06 2016 - [info] 10.1.249.70(10.1.249.70:3306)
  Sun Nov 27 20:19:06 2016 - [info] The latest binary log file/position on all slaves is master-bin.000003:1353
  Sun Nov 27 20:19:08 2016 - [info] * Phase 3.3: Determining New Master Phase..
  Sun Nov 27 20:19:08 2016 - [info]
  Sun Nov 27 20:19:08 2016 - [info] Finding the latest slave that has all relay logs for recovering other slaves..
  Sun Nov 27 20:19:08 2016 - [info] All slaves received relay logs to the same position. No need to resync each other.
  Sun Nov 27 20:19:08 2016 - [info] Searching new master from slaves..
  Sun Nov 27 20:19:08 2016 - [info] Candidate masters from the configuration file:
  Sun Nov 27 20:19:08 2016 - [info] 10.1.252.218(10.1.252.218:3306) Version=5.5.44-MariaDB-log (oldest major version between slaves) log-bin:enabled
  Sun Nov 27 20:19:08 2016 - [info] Replicating from 10.1.249.184(10.1.249.184:3306)
  Sun Nov 27 20:19:08 2016 - [info] Primary candidate for the new Master (candidate_master is set)
  Sun Nov 27 20:19:08 2016 - [info] Non-candidate masters:
  Sun Nov 27 20:19:08 2016 - [info] Searching from candidate_master slaves which have received the latest relay log events..
  Sun Nov 27 20:19:08 2016 - [info] New master is 10.1.252.218(10.1.252.218:3306)
  Sun Nov 27 20:19:08 2016 - [info] Starting master failover..
  ..........................
  Applying log files succeeded.
  Sun Nov 27 20:19:09 2016 - [info] All relay logs were successfully applied.
  Sun Nov 27 20:19:09 2016 - [info] Getting new master's binlog name and position..
  Sun Nov 27 20:19:09 2016 - [info] master-bin.000003:629
  Sun Nov 27 20:19:09 2016 - [info] All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='10.1.252.218', MASTER_PORT=3306, MASTER_LOG_FILE='master-bin.000003', MASTER_LOG_POS=629, MASTER_USER='repluser', MASTER_PASSWORD='xxx';
  Sun Nov 27 20:19:09 2016 - [warning] master_ip_failover_script is not set. Skipping taking over new master IP address.
  Sun Nov 27 20:19:09 2016 - [info] Setting read_only=0 on 10.1.252.218(10.1.252.218:3306)..
  Sun Nov 27 20:19:09 2016 - [info] ok.
  Sun Nov 27 20:19:09 2016 - [info] ** Finished master recovery successfully.
  Sun Nov 27 20:19:09 2016 - [info] * Phase 3: Master Recovery Phase completed.
  Sun Nov 27 20:19:09 2016 - [info]
  Sun Nov 27 20:19:09 2016 - [info] * Phase 4: Slaves Recovery Phase..
  Sun Nov 27 20:19:09 2016 - [info]
  Sun Nov 27 20:19:09 2016 - [info] * Phase 4.1: Starting Parallel Slave Diff Log Generation Phase..
  .....................

                        # the log above shows that the failover completed successfully
                Note:
                        After every successful failover, MHA manager stops itself automatically; running masterha_check_status at that point
                        returns an error:
                        (screenshot: 105-MHA)

                        For the sake of data integrity and availability, MHA stops working after each failover. The data must then be checked
                        manually for errors, and only after confirming that everything is correct should the MHA manager service be restarted (see the sketch below).
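
                        A sketch of that restart step. The marker-file behaviour described here (an <app>.failover.complete file left in manager_workdir, and the --ignore_last_failover option) follows the upstream MHA documentation; verify it against your installed version:

                            # after a failover the manager leaves a marker file in manager_workdir
                            ls /data/masterha/app1/
                            # app1.failover.complete  manager.log
                            # once the data has been checked, remove the marker (or start with --ignore_last_failover) and restart the manager
                            rm -f /data/masterha/app1/app1.failover.complete
                            nohup masterha_manager --conf=/etc/masterha/app1.cnf > /data/masterha/app1/manager.log 2>&1 &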

                        

3.4 Bringing the repaired master back online

        After the failed original master has been repaired, restore its data and add it back to the cluster as a new slave node
       (its replication configuration has to be changed accordingly; see the CHANGE MASTER example below). Its IP address must be the
       same as the original master's, otherwise MHA will not recognize it.
        After starting MHA manager again, check the state of every node once more by running masterha_check_status and
       masterha_check_repl.
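
        The replication change on the repaired node can be taken almost verbatim from the failover log in section 3.3.1, which prints the exact coordinates on the new master (master-bin.000003:629); only the replication password has to be filled in, shown here with the value used elsewhere in this article:

            MariaDB [(none)]> CHANGE MASTER TO MASTER_HOST='10.1.252.218', MASTER_PORT=3306, MASTER_LOG_FILE='master-bin.000003', MASTER_LOG_POS=629, MASTER_USER='repluser', MASTER_PASSWORD='000000';
            MariaDB [(none)]> START SLAVE;
            MariaDB [(none)]> SET GLOBAL read_only=1;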
       
            

  [root@node5 .ssh]# nohup masterha_manager --conf=/etc/masterha/app1.cnf > /data/masterha/app1/manager.log 2>&1 &
  [1] 10188
  [root@node5 .ssh]#
  [root@node5 .ssh]# masterha_check_status --conf=/etc/masterha/app1.cnf
  app1 (pid:10188) is running(0:PING_OK), master:10.1.252.218
  [root@node5 .ssh]#
  [root@node5 .ssh]# masterha_check_status --conf=/etc/masterha/app1.cnf
  app1 (pid:10188) is running(0:PING_OK), master:10.1.252.218
  [root@node5 .ssh]# masterha_check_repl --conf=/etc/masterha/app1.cnf
  Sun Nov 27 21:03:04 2016 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
  Sun Nov 27 21:03:04 2016 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
  Sun Nov 27 21:03:04 2016 - [info] Reading server configuration from /etc/masterha/app1.cnf..
  Sun Nov 27 21:03:04 2016 - [info] MHA::MasterMonitor version 0.56.
  Sun Nov 27 21:03:04 2016 - [warning] SQL Thread is stopped(no error) on 10.1.252.218(10.1.252.218:3306)
  Sun Nov 27 21:03:04 2016 - [info] Multi-master configuration is detected. Current primary(writable) master is 10.1.252.218(10.1.252.218:3306)
  Sun Nov 27 21:03:04 2016 - [info] Master configurations are as below:
  Master 10.1.252.218(10.1.252.218:3306), replicating from 10.1.249.184(10.1.249.184:3306)
  Master 10.1.249.184(10.1.249.184:3306), replicating from 10.1.252.218(10.1.252.218:3306), read-only
  Sun Nov 27 21:03:04 2016 - [info] GTID failover mode = 0
  Sun Nov 27 21:03:04 2016 - [info] Dead Servers:
  Sun Nov 27 21:03:04 2016 - [info] Alive Servers:
  Sun Nov 27 21:03:04 2016 - [info] 10.1.249.184(10.1.249.184:3306)
  Sun Nov 27 21:03:04 2016 - [info] 10.1.252.218(10.1.252.218:3306)
  Sun Nov 27 21:03:04 2016 - [info] 10.1.249.70(10.1.249.70:3306)
  Sun Nov 27 21:03:04 2016 - [info] Alive Slaves:
  Sun Nov 27 21:03:04 2016 - [info] 10.1.249.184(10.1.249.184:3306) Version=5.5.44-MariaDB-log (oldest major version between slaves) log-bin:enabled
  Sun Nov 27 21:03:04 2016 - [info] Replicating from 10.1.252.218(10.1.252.218:3306)
  Sun Nov 27 21:03:04 2016 - [info] Primary candidate for the new Master (candidate_master is set)
  Sun Nov 27 21:03:04 2016 - [info] 10.1.249.70(10.1.249.70:3306) Version=5.5.44-MariaDB-log (oldest major version between slaves) log-bin:enabled
  Sun Nov 27 21:03:04 2016 - [info] Replicating from 10.1.252.218(10.1.252.218:3306)
  .....................
  Sun Nov 27 21:03:08 2016 - [info] Checking replication health on 10.1.249.184..
  Sun Nov 27 21:03:08 2016 - [info]  ok.
  Sun Nov 27 21:03:08 2016 - [info] Checking replication health on 10.1.249.70..
  Sun Nov 27 21:03:08 2016 - [info]  ok.
  Sun Nov 27 21:03:08 2016 - [warning] master_ip_failover_script is not defined.
  Sun Nov 27 21:03:08 2016 - [warning] shutdown_script is not defined.
  Sun Nov 27 21:03:08 2016 - [info] Got exit code 0 (Not master dead).
  MySQL Replication Health is OK.
 



 

Original article by ldt195175108. If you republish it, please credit the source: http://www.178linux.com/61094
