An introduction to keepalived and related experiments

Keepalived is a high-availability tool built on the VRRP protocol. Its job is to monitor server state: if a web server goes down or stops working correctly, Keepalived detects the failure, removes the faulty server from the pool, and lets the remaining servers take over its work.

Contents of this section:

I. Introduction to keepalived

II. Keepalived service configuration

III. Experiment: a keepalived master/backup setup

IV. Experiment: a keepalived dual-master setup

V. Experiment: a highly available LVS-DR cluster with keepalived

VI. Experiment: a dual-master LVS-DR high-availability cluster

VII. Experiment: Nginx high availability with keepalived

VIII. Experiment: HAProxy high availability with keepalived

 

I. Introduction to keepalived

Keepalived is a high-availability tool built on the VRRP protocol. It monitors server state: if a web server goes down or stops working correctly, Keepalived detects the failure, removes the faulty server from the pool, and lets other servers take over its work. Once the failed server is healthy again, Keepalived automatically adds it back to the pool. All of this happens automatically, without manual intervention; the only manual task left is repairing the failed server.

 

VRRP: Virtual Router Redundancy Protocol

Related terms:

Virtual router: Virtual Router
Virtual router identifier: VRID (0-255)
Physical routers:
    master: the active device
    backup: the standby device
    priority: election priority
VIP: Virtual IP
VMAC: Virtual MAC (00-00-5e-00-01-VRID)
Gratuitous ARP: sent by the new master so that hosts and switches refresh their ARP caches for the VIP

 

Advertisements: carry the heartbeat, priority and other state; sent periodically.
Preemptive vs. non-preemptive operation.

 

Working modes:

active/backup: a single virtual router;
active/active: active/backup (virtual router 1) plus backup/active (virtual router 2)

 

 

Common software for building high-availability clusters:

keepalived
corosync

failover: when a node in the cluster stops sending heartbeats, the cluster leader decides to move its work to another node
failback: when the failed node comes back online, the work is switched back to it

 

keepalived

A software implementation of the VRRP protocol, originally designed to make the ipvs service highly available:

based on VRRP, it moves the VIP between nodes;
it generates ipvs rules on the node that currently holds the VIP (the rules are predefined in the configuration file);
it performs health checks on each RS of the ipvs cluster;
through its script-call interface it runs user-defined scripts and uses their results to influence cluster behavior;

 

Components:

Core components:
    vrrp stack
    ipvs wrapper
    checkers
Control component: configuration file parser
I/O multiplexer
Memory management component

 

keepalived architecture diagram: (figure not reproduced here)

 

Prerequisites for configuring an HA cluster:

(1) Time must be synchronized across all nodes (ntp, chrony);
(2) Make sure iptables and SELinux do not get in the way;
(3) Nodes should be able to reach one another by hostname (not strictly required by keepalived); using /etc/hosts is recommended;
(4) The interfaces used for the cluster service on each node must support MULTICAST communication;
    Class D addresses: 224.0.0.0-239.255.255.255

 

II. Keepalived service configuration

Keepalived: shipped in the base repository since CentOS 6.4.

Program environment:

Main configuration file: /etc/keepalived/keepalived.conf
Main program file: /usr/sbin/keepalived
Unit file: keepalived.service
Environment file for the unit file: /etc/sysconfig/keepalived

 

Structure of the configuration file (a skeleton sketch follows this list):

TOP HIERARCHY

GLOBAL CONFIGURATION
    Global definitions
    Static routes/addresses

VRRPD CONFIGURATION
    VRRP synchronization group(s): VRRP sync groups;
    VRRP instance(s): each vrrp instance is one VRRP router;

LVS CONFIGURATION
    Virtual server group(s)
    Virtual server(s): the VS and RSs of the ipvs cluster;
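As a rough orientation, the skeleton below shows how these sections map onto /etc/keepalived/keepalived.conf. It is only a sketch: the router_id, interface, VRID, priority, password and addresses are placeholder values, not taken from the original screenshots.

    # /etc/keepalived/keepalived.conf -- skeleton sketch of the hierarchy above
    global_defs {                         # GLOBAL CONFIGURATION
        router_id node1                   # unique identifier for this node
    }

    vrrp_instance VI_1 {                  # VRRPD CONFIGURATION: one VRRP router
        state MASTER                      # MASTER on one node, BACKUP on the other
        interface ens33                   # interface that carries VRRP advertisements
        virtual_router_id 51              # VRID (0-255), identical on all members
        priority 100                      # highest priority wins the election
        advert_int 1                      # advertisement interval, seconds
        authentication {
            auth_type PASS
            auth_pass pass1234
        }
        virtual_ipaddress {
            192.168.30.111/24 dev ens33   # the floating VIP
        }
    }

    virtual_server 192.168.30.111 80 {    # LVS CONFIGURATION: ipvs VS and its RSs
        delay_loop 6
        lb_algo rr
        lb_kind DR
        protocol TCP
        real_server 192.168.30.27 80 {    # one block per RS; health check omitted here,
            weight 1                      # see the LVS-DR experiment below for a full example
        }
    }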

 

 

III. Experiment: a keepalived master/backup setup

Preparation:

5 virtual machines:

keepalived1: 192.168.30.10      OS: CentOS 7.4
keepalived2: 192.168.30.18      OS: CentOS 7.4
RS1: 192.168.30.27      OS: CentOS 7.4
RS2: 192.168.30.17      OS: CentOS 7.4
Client: 192.168.30.16      OS: CentOS 7.4

 

Steps:

On both keepalived1 and keepalived2:

Install the keepalived service

yum install keepalived

 

On keepalived1:

Edit the main configuration file

vim /etc/keepalived/keepalived.conf

(the original configuration screenshot is not reproduced; a sketch of an equivalent configuration follows)
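The block below is only a sketch of what the MASTER side of such a master/backup configuration typically looks like. The VIP 192.168.30.111 and the interface ens33 are taken from later steps of this article; the VRID, priority and password are assumed values.

    # /etc/keepalived/keepalived.conf on keepalived1 (MASTER) -- illustrative sketch
    global_defs {
        router_id keepalived1
    }

    vrrp_instance VI_1 {
        state MASTER                      # this node starts as MASTER
        interface ens33                   # interface that carries VRRP traffic
        virtual_router_id 51              # assumed VRID; must match keepalived2
        priority 100                      # higher than the BACKUP's priority
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass pass1234            # assumed; must be identical on both nodes
        }
        virtual_ipaddress {
            192.168.30.111/24 dev ens33   # the floating VIP
        }
    }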

Restart the keepalived service

systemctl restart keepalived

 

On keepalived2:

Edit the main configuration file:

vim /etc/keepalived/keepalived.conf

(the original configuration screenshot is not reproduced; a sketch of an equivalent configuration follows)
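Again as a sketch rather than the article's exact file, the BACKUP side mirrors the MASTER and differs only in state and priority (values assumed):

    # /etc/keepalived/keepalived.conf on keepalived2 (BACKUP) -- illustrative sketch
    global_defs {
        router_id keepalived2
    }

    vrrp_instance VI_1 {
        state BACKUP                      # standby role
        interface ens33
        virtual_router_id 51              # same VRID as on keepalived1
        priority 90                       # assumed; lower than the MASTER's 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass pass1234            # must match keepalived1
        }
        virtual_ipaddress {
            192.168.30.111/24 dev ens33
        }
    }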

Restart the keepalived service

systemctl restart keepalived

 

Check the IP addresses on keepalived1:

The MASTER has acquired the virtual IP address. (screenshot omitted)

Simulate a NIC failure on the MASTER

ifconfig ens33 down

Now check the IP addresses on keepalived2: the virtual IP has moved over to the BACKUP. (screenshot omitted)

When the MASTER's NIC is brought back up, the virtual address moves back to the MASTER.

 

 

IV. Experiment: a keepalived dual-master setup

Environment: continues from the master/backup experiment

Steps:

On keepalived1:

On top of the single-master configuration, add a second virtual router definition:

vim /etc/keepalived/keepalived.conf

(the original screenshot is not reproduced; a sketch of the added instance follows)
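A sketch of what the added instance on keepalived1 might look like. The second VIP 192.168.30.222 is borrowed from the dual-master LVS-DR experiment later in this article; the VRID, priorities and password are assumptions. For this second virtual router keepalived1 acts as BACKUP while keepalived2 is its MASTER:

    # appended to /etc/keepalived/keepalived.conf on keepalived1 -- illustrative sketch
    vrrp_instance VI_2 {
        state BACKUP                      # keepalived1 is the standby for the second VIP
        interface ens33
        virtual_router_id 61              # assumed VRID; different from VI_1
        priority 90                       # lower than keepalived2's priority for VI_2
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass pass1234
        }
        virtual_ipaddress {
            192.168.30.222/24 dev ens33   # second floating VIP
        }
    }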

Restart the keepalived service

systemctl restart keepalived

Start the httpd service

echo keepalived1 > /var/www/html/index.html

systemctl restart httpd

 

On keepalived2:

Likewise add the corresponding second virtual router definition on top of the single-master configuration:

vim /etc/keepalived/keepalived.conf

(the original screenshot is not reproduced; a sketch of the added instance follows)
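The mirror image on keepalived2, which owns the second VIP in normal operation (same assumptions as above):

    # appended to /etc/keepalived/keepalived.conf on keepalived2 -- illustrative sketch
    vrrp_instance VI_2 {
        state MASTER                      # keepalived2 owns the second VIP
        interface ens33
        virtual_router_id 61              # must match VI_2 on keepalived1
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass pass1234
        }
        virtual_ipaddress {
            192.168.30.222/24 dev ens33
        }
    }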

Restart the keepalived service

systemctl restart keepalived

Start the httpd service

echo keepalived2 > /var/www/html/index.html

systemctl restart httpd

 

Now keepalived1 and keepalived2 each hold one virtual IP address:

keepalived1: (screenshot of ip address output omitted)

keepalived2: (screenshot of ip address output omitted)

Client test: (screenshot omitted)

Simulate keepalived2 going offline

systemctl stop keepalived

Checking the IP addresses on keepalived1 now shows that both virtual IPs have moved to keepalived1. (screenshot omitted)

The client runs the test again: (screenshot omitted)

 

 

V. Experiment: a highly available LVS-DR cluster with keepalived

Preparation:

5 virtual machines:

keepalived1: 192.168.30.10      OS: CentOS 7.4
keepalived2: 192.168.30.18      OS: CentOS 7.4
RS1: 192.168.30.27      OS: CentOS 7.4
RS2: 192.168.30.17      OS: CentOS 7.4
Client: 192.168.30.16      OS: CentOS 7.4

The keepalived configuration continues from the dual-master experiment.

 

Steps:

Add the following configuration on both keepalived1 and keepalived2

vim /etc/keepalived/keepalived.conf

(the original screenshot is not reproduced; a sketch of an equivalent virtual_server configuration follows)
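A sketch of what the added LVS section might look like, using the VIP 192.168.30.111 and the RS addresses listed above; the scheduler, health-check timeouts and the local sorry_server (used in the failure test below) are assumptions:

    # appended to /etc/keepalived/keepalived.conf on both directors -- illustrative sketch
    virtual_server 192.168.30.111 80 {
        delay_loop 6                      # seconds between health-check runs
        lb_algo rr                        # assumed scheduler
        lb_kind DR                        # LVS direct-routing mode
        protocol TCP
        sorry_server 127.0.0.1 80         # local page served when all RSs are down

        real_server 192.168.30.27 80 {    # RS1
            weight 1
            HTTP_GET {
                url {
                    path /
                    status_code 200
                }
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
            }
        }

        real_server 192.168.30.17 80 {    # RS2
            weight 1
            HTTP_GET {
                url {
                    path /
                    status_code 200
                }
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
            }
        }
    }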

Restart the keepalived service

systemctl restart keepalived

 

On RS1:

Bind the VIP

[root@RS1 ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
[root@RS1 ~]# echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
[root@RS1 ~]# echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
[root@RS1 ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
[root@RS1 ~]# ip addr add 192.168.30.111/32 dev lo

Start the web service

echo R1 > /var/www/html/index.html

systemctl restart httpd

 

On RS2:

[root@RS2 ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
[root@RS2 ~]# echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
[root@RS2 ~]# echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
[root@RS2 ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
[root@RS2 ~]# ip addr add 192.168.30.111/32 dev lo

Start the web service

echo R2 > /var/www/html/index.html

systemctl restart httpd

 

At this point the LVS rules can be seen to be in effect on keepalived1 and keepalived2. (screenshot omitted)

Client access test: (screenshot omitted)

Simulate keepalived2 stopping its service

systemctl stop keepalived

Packet captures on keepalived1 show that keepalived keeps the VIP working normally. (screenshot omitted)

Simulate RS1 stopping its web service

systemctl stop httpd

The LVS rules now show that RS1 has been removed automatically. (screenshots omitted)

When both real servers are offline, traffic is automatically redirected to the local SORRY SERVER.

Simulate RS2 stopping its web service

systemctl stop httpd

Client requests now land on the SORRY SERVER. (screenshots omitted)

 

 

VI. Experiment: a dual-master LVS-DR high-availability cluster

The environment and preparation continue from the highly available LVS-DR experiment.

Add a second virtual_server block to the main configuration file on both keepalived1 and keepalived2

vim /etc/keepalived/keepalived.conf

(the original screenshot is not reproduced; a sketch of an equivalent configuration follows)
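A sketch of the second virtual_server, for the VIP 192.168.30.222 that is bound on the RSs below; it mirrors the first virtual_server, and the health-check details are again assumptions:

    # appended to /etc/keepalived/keepalived.conf on both directors -- illustrative sketch
    virtual_server 192.168.30.222 80 {
        delay_loop 6
        lb_algo rr
        lb_kind DR
        protocol TCP
        sorry_server 127.0.0.1 80

        real_server 192.168.30.27 80 {    # RS1
            weight 1
            HTTP_GET {
                url {
                    path /
                    status_code 200
                }
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
            }
        }

        real_server 192.168.30.17 80 {    # RS2
            weight 1
            HTTP_GET {
                url {
                    path /
                    status_code 200
                }
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
            }
        }
    }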

Restart the keepalived service

systemctl restart keepalived.service

Check the LVS rules: (screenshot omitted)

Bind the 192.168.30.222 address to the lo interface on both RS1 and RS2

ip addr add 192.168.30.222/32 dev lo

 

Client access test: (screenshot omitted)

 

VII. Experiment: Nginx high availability with keepalived

Preparation:

5 virtual machines:

keepalived1: 192.168.30.10      OS: CentOS 7.4
keepalived2: 192.168.30.18      OS: CentOS 7.4
RS1: 192.168.30.27      OS: CentOS 7.4
RS2: 192.168.30.17      OS: CentOS 7.4
Client: 192.168.30.16      OS: CentOS 7.4

 

Steps:

Set up the reverse-proxy function on both keepalived1 and keepalived2:

vim /etc/nginx/conf.d/proxy.conf

(the original screenshot is not reproduced; a sketch of an equivalent proxy configuration follows)
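A minimal sketch of what such a reverse-proxy configuration might look like, proxying to the two RSs listed above; the upstream name is an assumption, and it assumes the stock default server block in nginx.conf does not also claim default_server on port 80:

    # /etc/nginx/conf.d/proxy.conf -- illustrative sketch
    upstream websrvs {                     # assumed name for the RS pool
        server 192.168.30.27:80;           # RS1
        server 192.168.30.17:80;           # RS2
    }

    server {
        listen 80 default_server;          # answer requests on any local address, including the VIP
        location / {
            proxy_pass http://websrvs;     # forward requests to the RS pool
        }
    }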

Start the nginx service

systemctl start nginx.service

Client test: (screenshot omitted)

Configure keepalived

Configure the main configuration file on both keepalived1 and keepalived2 as follows:

(the original screenshot is not reproduced; a sketch of an equivalent configuration follows)
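A sketch of the kind of configuration used here: a vrrp_script that checks nginx, plus a vrrp_instance that tracks it and lowers this node's priority when nginx dies, so the VIP fails over. The script command, weight and other values are assumptions; on keepalived2 the state would be BACKUP and the priority lower:

    # /etc/keepalived/keepalived.conf on keepalived1 -- illustrative sketch
    global_defs {
        router_id keepalived1
    }

    vrrp_script chk_nginx {
        script "killall -0 nginx"         # exits 0 while an nginx process exists (needs psmisc)
        interval 2                        # run the check every 2 seconds
        weight -20                        # subtract 20 from priority while the check fails
        fall 3
        rise 3
    }

    vrrp_instance VI_1 {
        state MASTER                      # BACKUP on keepalived2
        interface ens33
        virtual_router_id 51              # assumed VRID
        priority 100                      # e.g. 90 on keepalived2
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass pass1234
        }
        virtual_ipaddress {
            192.168.30.111/24 dev ens33
        }
        track_script {
            chk_nginx                     # demote this node if nginx stops
        }
    }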

Restart the keepalived service

systemctl restart keepalived.service

The NIC addresses on keepalived1 now look like this: (screenshot omitted)

Since keepalived2 is the BACKUP, its NIC carries no virtual IP address: (screenshot omitted)

Simulate stopping the service on keepalived1, i.e. on the MASTER:

systemctl stop keepalived.service

 

The virtual IP address 192.168.30.111 now moves to the keepalived2 NIC. (screenshot omitted)

keepalived2 still reverse-proxies client requests to the RSs normally. (screenshot omitted)

 

VIII. Experiment: HAProxy high availability with keepalived

In the same way, we first configure an HAProxy reverse proxy on both keepalived servers.

Steps:

yum install haproxy

vim /etc/haproxy/haproxy.cfg

(the original screenshot is not reproduced; a sketch of an equivalent configuration follows)
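A minimal sketch of the proxy part of haproxy.cfg for the two RSs; the listen name and options are assumptions, and the global and defaults sections are left as shipped with the package:

    # /etc/haproxy/haproxy.cfg -- proxy section only, illustrative sketch
    listen websrvs
        bind *:80                          # answer on any local address, including the VIP
        balance roundrobin
        server rs1 192.168.30.27:80 check  # RS1, with health checking
        server rs2 192.168.30.17:80 check  # RS2, with health checking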

Verify that the HAProxy reverse proxy works on both keepalived servers. (screenshot omitted)

For the keepalived side, simply replace nginx with haproxy in the health-check script used for the Nginx high-availability setup. (a sketch follows)
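Under the same assumptions as the nginx sketch above, the changed part of keepalived.conf on keepalived1 would look roughly like this:

    # /etc/keepalived/keepalived.conf on keepalived1 -- illustrative sketch
    vrrp_script chk_haproxy {
        script "killall -0 haproxy"       # exits 0 while an haproxy process exists
        interval 2
        weight -20
        fall 3
        rise 3
    }

    vrrp_instance VI_1 {
        state MASTER                      # BACKUP on keepalived2
        interface ens33
        virtual_router_id 51
        priority 100                      # e.g. 90 on keepalived2
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass pass1234
        }
        virtual_ipaddress {
            192.168.30.111/24 dev ens33
        }
        track_script {
            chk_haproxy                   # demote this node if haproxy stops
        }
    }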

This article was contributed by a reader and does not represent the position of Linux运维部落; if you reprint it, please credit the source: http://www.178linux.com/102861
