A keepalived + LVS-DR Dual-Master (Mutual Backup) Lab
Lab Environment
OS: DR: two CentOS 7.2 nodes, both with keepalived installed
Real Server: two CentOS 6.7 nodes, both with httpd installed
Lab topology
DR1 IP 192.168.36.131/24
DR2 IP 192.168.36.132/24
VIP1 IP 192.168.36.15/32
VIP2 IP 192.168.36.16/32
RIP1 IP 192.168.36.133/24
RIP2 IP 192.168.36.134/24
DR1 hostname:node1.centos7.cn
DR2 hostname:node2.centos7.cn
Lab Steps
1. Configure the hosts file on both DR nodes so they can reach each other by hostname. DNS resolution would also work, but it is less efficient and costs more to run; when there are only a few HA nodes, a hosts file is the better choice.
Both nodes get the same entries:
192.168.36.132 node2.centos7.cn node2
192.168.36.131 node1.centos7.cn node1
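A quick sanity check after editing the hosts file (hostnames taken from the topology above):

```shell
# run on either DR node: both peers should resolve to the expected IPs
getent hosts node1.centos7.cn
getent hosts node2.centos7.cn
# and the peer should answer by name
ping -c 1 node2.centos7.cn
```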
2. Set up key-based SSH authentication between the two nodes. This step is not required; it just makes operations more convenient.
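A minimal sketch of the key exchange, run on each node in turn (peer hostname assumed from the topology):

```shell
# generate a passphrase-less key pair and push the public key to the peer
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id root@node2.centos7.cn      # on node2, target node1.centos7.cn instead
ssh root@node2.centos7.cn 'hostname'   # should log in without a password prompt
```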
3. Keep the two nodes' clocks synchronized. CentOS 7 uses chrony for time synchronization: just install the package and start the service. Both hosts need internet access for this; if they have none, set up an NTP server on the internal network instead. CentOS 7 also supports plain ntpd, of course.
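The chrony setup in full, assuming internet access (for an internal NTP server, point the `server` lines in /etc/chrony.conf at it instead):

```shell
# install chrony, enable it at boot, start it, then check the sources
yum -y install chrony
systemctl enable chronyd
systemctl start chronyd
chronyc sources -v      # a '*' marks the source currently synced against
```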
4. Turn off the firewall on both DR nodes; the Real Servers' firewalls go off as well. If the firewalls must stay on, the DRs need to permit traffic to the VRRP multicast address 224.0.0.18, and the Real Servers need to permit TCP port 80 from any source address. Leaving SELinux enabled does not seem to cause problems.
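If the firewalls have to stay on, rules along these lines should do it (a sketch: firewalld on the CentOS 7 DRs, iptables on the CentOS 6 Real Servers):

```shell
# On the DRs (firewalld): VRRP advertisements go to 224.0.0.18 as IP
# protocol 112 (vrrp); with auth_type AH, protocol 51 (ah) is also needed
firewall-cmd --permanent --add-rich-rule='rule protocol value="vrrp" accept'
firewall-cmd --permanent --add-rich-rule='rule protocol value="ah" accept'
firewall-cmd --reload

# On the Real Servers (iptables): allow tcp/80 from any source
iptables -I INPUT -p tcp --dport 80 -j ACCEPT
service iptables save
```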
5. Install keepalived and ipvsadm on both DR nodes (ipvsadm is not required, but it is handy for inspecting the ipvs state).
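On both DR nodes:

```shell
yum -y install keepalived ipvsadm
ipvsadm -Ln     # rule set is empty for now; keepalived will populate it later
```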
6. Install httpd on both Real Servers and set up their ARP behavior. The Real Server configuration script is below; run it on both nodes.
#!/bin/bash
# Real Server setup for LVS-DR: bind both VIPs to loopback aliases and
# tune the kernel ARP parameters so this host never answers ARP
# requests for the VIPs (the director must own them on the LAN).
VIP1=192.168.36.15
VIP2=192.168.36.16

case $1 in
start_dr)
    ifconfig lo:0 $VIP1 netmask 255.255.255.255 broadcast $VIP1
    ifconfig lo:1 $VIP2 netmask 255.255.255.255 broadcast $VIP2
    echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
    echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
    ;;
stop_dr)
    # cycling lo removes the lo:0/lo:1 aliases, then ARP defaults are restored
    ifdown lo
    ifup lo
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
    ;;
*)
    echo "please input parameter: start_dr or stop_dr"
    ;;
esac
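Running the script (the path /root/rs.sh is just a name chosen for this write-up):

```shell
chmod +x /root/rs.sh
/root/rs.sh start_dr
ip addr show lo     # lo:0 and lo:1 should now carry 192.168.36.15 and .16
```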
Both Real Servers also need their httpd configuration adjusted. On CentOS 6.7, a freshly installed httpd prints warnings (or errors out) at startup until the config file is fixed:
vim /etc/httpd/conf/httpd.conf
Uncomment the ServerName line and change it to something like "ServerName 192.168.36.134:80". Even just removing the leading "#" without changing the value lets httpd start cleanly.
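The edit can also be scripted; the sed expression below assumes the stock CentOS 6.7 httpd.conf with its commented-out ServerName line (use each node's own IP for the value):

```shell
sed -i 's/^#ServerName .*/ServerName 192.168.36.134:80/' /etc/httpd/conf/httpd.conf
service httpd restart    # should start without the ServerName warning
```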
Give the web service a test home page. To make the load-balancing effect visible, the two pages are deliberately different:
vim /var/www/html/index.html
<h1> This is webserver1 192.168.36.134 </h1>
On the other node:
vim /var/www/html/index.html
<h1> This is webserver2 192.168.36.133 </h1>
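Before involving the director at all, each Real Server can be checked directly over its RIP:

```shell
# each request should return that node's distinctive test page
curl http://192.168.36.133/
curl http://192.168.36.134/
```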
7. Configure keepalived on the two nodes as follows.
keepalived configuration file on node1:
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost.cn
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_MASTER
}

vrrp_instance VI_1 {
    state MASTER
    interface eno16777736
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type AH
        auth_pass c87a5ba3176f
    }
    virtual_ipaddress {
        192.168.36.15 dev eno16777736 label eno16777736:0
    }
}

vrrp_instance VI_2 {
    state BACKUP
    interface eno16777736
    virtual_router_id 52
    priority 99
    advert_int 1
    authentication {
        auth_type AH
        auth_pass c87a5ba3176f
    }
    virtual_ipaddress {
        192.168.36.16 dev eno16777736 label eno16777736:1
    }
}
virtual_server 192.168.36.15 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    nat_mask 255.255.255.255
    # persistence_timeout 50
    protocol TCP

    real_server 192.168.36.133 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.36.134 80 {
        weight 3
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 192.168.36.16 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    nat_mask 255.255.255.255
    # persistence_timeout 50
    protocol TCP

    real_server 192.168.36.133 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.36.134 80 {
        weight 3
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
keepalived configuration file on node2:
vim /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost.cn
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_BACKUP
}

vrrp_instance VI_1 {
    state BACKUP
    interface eno16777736
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type AH
        auth_pass c87a5ba3176f
    }
    virtual_ipaddress {
        192.168.36.15 dev eno16777736 label eno16777736:0
    }
}

vrrp_instance VI_2 {
    state MASTER
    interface eno16777736
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type AH
        auth_pass c87a5ba3176f
    }
    virtual_ipaddress {
        192.168.36.16 dev eno16777736 label eno16777736:1
    }
}
virtual_server 192.168.36.15 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    nat_mask 255.255.255.255
    # persistence_timeout 50
    protocol TCP

    real_server 192.168.36.133 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.36.134 80 {
        weight 3
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 192.168.36.16 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    nat_mask 255.255.255.255
    # persistence_timeout 50
    protocol TCP

    real_server 192.168.36.133 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.36.134 80 {
        weight 3
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
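With both configuration files in place, start keepalived on both nodes and confirm that the ipvs rule set was created (the exact output shape varies with the keepalived version):

```shell
systemctl start keepalived
systemctl enable keepalived
ipvsadm -Ln                  # both VIPs should be listed, each with the two real servers behind it
ip addr show eno16777736     # each node should hold the VIP it is MASTER for
```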
To test, point a client browser at http://192.168.36.15 and http://192.168.36.16.
After a few refreshes the results below appear. Because the two Real Servers have different weights, their traffic splits roughly 3:1: in theory 192.168.36.134 serves 75% of the requests and 192.168.36.133 serves 25%.
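The ratio is easier to see from a shell loop than from browser refreshes (a sketch, run from any client on the LAN):

```shell
# 20 requests against one VIP; the per-page counts should come out near 3:1
for i in $(seq 1 20); do curl -s http://192.168.36.15/; done | sort | uniq -c
```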
Manually stop the keepalived service on node1 (systemctl stop keepalived), then look at node2:
[root@node2 keepalived]# ifconfig
eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.36.132 netmask 255.255.255.0 broadcast 192.168.36.255
inet6 fe80::20c:29ff:fe2a:96f7 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:2a:96:f7 txqueuelen 1000 (Ethernet)
RX packets 1628624 bytes 140367911 (133.8 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 29468 bytes 2585699 (2.4 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eno16777736:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.36.15 netmask 255.255.255.255 broadcast 0.0.0.0
ether 00:0c:29:2a:96:f7 txqueuelen 1000 (Ethernet)
eno16777736:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.36.16 netmask 255.255.255.255 broadcast 0.0.0.0
ether 00:0c:29:2a:96:f7 txqueuelen 1000 (Ethernet)
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 0 (Local Loopback)
RX packets 52 bytes 3805 (3.7 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 52 bytes 3805 (3.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Node2's IP information shows that it has taken over node1's address. Client tests behave exactly as before: both VIPs remain reachable.
In production, access normally goes through a domain name; this lab uses IP addresses to reach the web servers only because it is an experiment.
Lessons learned:
1. On CentOS 7, with keepalived installed via yum, the log messages are not very detailed, which makes troubleshooting unfriendly.
2. For some reason keepalived is slow to pick changes up on restart: with the configuration file untouched in between, the first restart sometimes has no effect while a second one does. Whether this is slow convergence or a bug is unclear. Very strange!
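On the logging point, keepalived's output can be made more verbose and routed to its own file. The sketch below uses keepalived's -D (detailed log) and -S (syslog facility) options plus an rsyslog rule; facility local0 is an arbitrary choice here:

```shell
# /etc/sysconfig/keepalived: turn on detailed logging, log to facility local0
sed -i 's/^KEEPALIVED_OPTIONS=.*/KEEPALIVED_OPTIONS="-D -S 0"/' /etc/sysconfig/keepalived

# route local0 messages to a dedicated file
echo 'local0.* /var/log/keepalived.log' >> /etc/rsyslog.conf
systemctl restart rsyslog
systemctl restart keepalived
```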
Original article by jslijb. Please credit the source when republishing: http://www.178linux.com/12246