Preface:
Xen offers a migration capability comparable in spirit to a heartbeat-style high-availability solution: a running Domain can be moved to another host while the services it provides stay available. Migrating a live guest without interrupting its workload is what gives Xen virtual machines a measure of high availability.
1. Lab preparation:
(1) The clocks on all test machines must be synchronized
(2) node3 provides iSCSI shared storage over the network
(3) node4 and node5 act as the Xen hosts
(4) All machines run CentOS 6.6
(5) iptables and SELinux are disabled on every node (a sketch of this and of the time sync follows this list)
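A minimal sketch of items (1) and (5) on each node; pool.ntp.org is only an assumed time source here, any reachable NTP server will do:
[root@node4 ~]# ntpdate pool.ntp.org               \\ one-shot clock sync (assumed time source); repeat on node3 and node5, or schedule it from cron
[root@node4 ~]# service iptables stop; chkconfig iptables off
[root@node4 ~]# setenforce 0                       \\ turn SELinux off for the running system
[root@node4 ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config   \\ ...and keep it off across reboots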
2. Installing Xen:
(1) Prepare a yum repository. When installing Xen on a physical machine you can simply point a repository at http://mirrors.aliyun.com/centos/6/xen4/x86_64/ (a sample repo file is sketched after the install command below). This experiment runs inside virtual machines, so the packages were provided by 马哥; there are too many of them to upload here. The installation goes as follows.
node4 and node5 are installed the same way:
yum -y localinstall xen-4.2.2-22.el6.centos.alt.x86_64.rpm xen-libs-4.2.2-22.el6.centos.alt.x86_64.rpm xen-licenses-4.2.2-22.el6.centos.alt.x86_64.rpm xen-runtime-4.2.2-22.el6.centos.alt.x86_64.rpm xen-hypervisor-4.2.2-22.el6.centos.alt.x86_64.rpm kernel-xen-3.7.4-1.el6xen.x86_64.rpm kernel-xen-firmware-3.7.4-1.el6xen.x86_64.rpm kernel-xen-release-6-4.noarch.rpm
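For a physical host using the mirror mentioned above, a repo file along these lines should work; the file name, the repo id and the single "yum install xen" call are a sketch rather than the exact package set used in this walkthrough:
[root@node4 ~]# vim /etc/yum.repos.d/xen4.repo
[xen4]
name=CentOS 6 Xen4
baseurl=http://mirrors.aliyun.com/centos/6/xen4/x86_64/
gpgcheck=0
enabled=1
[root@node4 ~]# yum -y install xen       \\ pulls in the hypervisor, libraries and runtime; a Xen-capable Dom0 kernel is still needed, as with the RPMs above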
(2) Edit grub.conf as shown below; node4 and node5 are modified the same way:
[root@node4 ~]# vim /boot/grub/grub.conf
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS (3.7.4-1.el6xen.x86_64)
        root (hd0,0)
        kernel /xen.gz dom0_mem=1024M,max:1024M dom0_max_vcpus=1 dom0_vcpus_pin cpufreq=xen   \\ the kernel line now boots xen.gz and passes the Dom0 parameters to the hypervisor
        module /vmlinuz-3.7.4-1.el6xen.x86_64 ro root=/dev/mapper/vg_node4-lv_root rd_NO_LUKS.UTF-8 rd_NO_MD rd_LVM_LV=vg_node4/lv_swap SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=vg_node4/lv_root KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet   \\ the original kernel and initramfs entries become module lines
        module /initramfs-3.7.4-1.el6xen.x86_64.img
title CentOS 6 (2.6.32-504.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-504.el6.x86_64 ro root=/dev/mapper/vg_node4-lv_root rd_NO_LUKS.UTF-8 rd_NO_MD rd_LVM_LV=vg_node4/lv_swap SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=vg_node4/lv_root KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
        initrd /initramfs-2.6.32-504.el6.x86_64.img
(3) Reboot both hosts:
[root@node4 ~]# reboot
[root@node5 ~]# reboot
(4) Verify that Xen is installed and running
[root@node4 ~]# uname -r; ssh node5 'uname -r'
3.7.4-1.el6xen.x86_64
3.7.4-1.el6xen.x86_64
[root@node4 ~]# service xend status; ssh node5 'service xend status'
xend (pid 3042) is running...                              [  OK  ]
xend (pid 3537) is running...                              [  OK  ]
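As an extra sanity check (not part of the original steps), xm info should answer on both nodes; if it does not, the machine most likely booted the regular kernel instead of the xen.gz entry:
[root@node4 ~]# xm info; ssh node5 'xm info'       \\ prints hypervisor and Dom0 details on both nodes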
3. Installing and configuring iSCSI
(1) Prepare an unformatted disk
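A quick check before exporting the disk (not in the original text; /dev/sdb is assumed to be the new, unused disk on node3):
[root@node3 ~]# fdisk -l /dev/sdb      \\ the disk should be visible and must not hold any data you still need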
(2) Install scsi-target-utils
[root@node3 ~]# yum -y install scsi-target-utils
(3) Edit the tgt configuration file and add the following:
[root@node3 ~]# vim /etc/tgt/targets.conf
<target iqn.2015-08.com.linux:xen.t1>
    direct-store /dev/sdb
    initiator-address 172.16.2.0/24
</target>
(4) Start the tgtd service and inspect the target:
[root@node3 ~]# /etc/init.d/tgtd start
[root@node3 ~]# tgtadm --lld iscsi --mode target --op show
Target 1: iqn.2015-08.com.linux:xen.t1
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET 00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET 00010001
            SCSI SN: beaf11
            Size: 21468 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/sdb1
            Backing store flags:
    Account information:
    ACL information:
        172.16.2.0/24
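To keep the target available after a reboot of node3, the service can also be enabled at boot (an extra step, not shown in the original):
[root@node3 ~]# chkconfig tgtd on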
(5) Configure the iSCSI initiators; node4 and node5 are set up the same way
[root@node4 ~]# yum -y install iscsi-initiator-utils
(6) Discover the target on node4 and format the block device
[root@node4 ~]# iscsiadm -m discovery -t st -p 172.16.2.14:3260 -l    \\ discover the iSCSI target and log in; node5 runs the same command
172.16.2.14:3260,1 iqn.2015-08.com.linux:xen.t1
Logging in to [iface: default, target: iqn.2015-08.com.linux:xen.t1, portal: 172.16.2.14,3260] (multiple)
Login to [iface: default, target: iqn.2015-08.com.linux:xen.t1, portal: 172.16.2.14,3260] successful.
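Before formatting, it is worth confirming which device name the new LUN received on the initiator; /dev/sdb is an assumption and may differ on other setups:
[root@node4 ~]# fdisk -l /dev/sdb      \\ the roughly 21 GB iSCSI LUN should show up as a new local disk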
[root@node4 ~]# mke2fs -t ext4 /dev/sdb       \\ format the block device exported over iSCSI; node5 must not repeat this step
[root@node4 ~]# mkdir -pv /xen/image          \\ create the mount point
[root@node4 ~]# mount /dev/sdb /xen/image     \\ mount the iSCSI device on the new directory; node5 needs the same directory as well
4. Building a small Linux system with BusyBox
(1) Create a disk image
[root@node4 ~]# cd /xen/image/
[root@node4 image]# qemu-img-xen create -f qcow2 -o preallocation=metadata busybox.img 100G
Formatting 'busybox.img', fmt=qcow2 size=107374182400 encryption=off cluster_size=65536 preallocation='metadata'
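Optionally, the freshly created image can be inspected with the info subcommand of the same tool (an extra check, not in the original steps):
[root@node4 image]# qemu-img-xen info busybox.img   \\ should report a virtual size of 100G and a much smaller file size, since only metadata was preallocated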
(2) Format the disk image
[root@node4 image]# mke2fs -t ext4 busybox.img              \\ create a filesystem inside the disk image
[root@node4 ~]# mount -o loop /xen/image/busybox.img /mnt   \\ loop-mount the image
(3) Compile and install BusyBox
[root@node4 ~]# yum install ncurses-devel glibc-static      \\ build dependencies
[root@node4 ~]# yum -y groupinstall "Development Tools" "Server Platform Development"   \\ development environment
[root@node4 ~]# tar xf busybox-1.22.1.tar.bz2
[root@node4 ~]# cd busybox-1.22.1
[root@node4 busybox-1.22.1]# make menuconfig
    Busybox Settings --->
        Build Options --->
            [*] Build BusyBox as a static binary (no shared libs)
[root@node4 busybox-1.22.1]# make; make install
[root@node4 busybox-1.22.1]# cp -a _install/* /mnt
(4) Create the directories the system needs at boot
[root@node4 ~]# cd /mnt
[root@node4 mnt]# mkdir -pv proc sys dev home root tmp usr lib/module mnt
[root@node4 mnt]# cp /lib/modules/3.7.4-1.el6xen.x86_64/kernel/drivers/net/xen-netfront.ko lib/module/
[root@node4 mnt]# sync
(5) Unmount /mnt
[root@node4 ~]# umount /mnt
(6) Configure the bridge device; node5 creates its bridge the same way
[root@node4 ~]# cd /etc/sysconfig/network-scripts/
[root@node4 network-scripts]# cp ifcfg-eth0 ifcfg-br0
[root@node4 network-scripts]# vim ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static
IPADDR=172.16.2.15
NETMASK=255.255.255.0
GATEWAY=172.16.2.1
[root@node4 network-scripts]# vim ifcfg-eth0
DEVICE=eth0
HWADDR=00:0C:29:10:84:38
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
BRIDGE=br0
[root@node4 ~]# service network restart
[root@node4 ~]# ifconfig
br0       Link encap:Ethernet  HWaddr 00:0C:29:10:84:38
          inet addr:172.16.2.15  Bcast:172.16.2.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe10:8438/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:38 errors:0 dropped:0 overruns:0 frame:0
          TX packets:42 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2844 (2.7 KiB)  TX bytes:4552 (4.4 KiB)
eth0      Link encap:Ethernet  HWaddr 00:0C:29:10:84:38
          inet6 addr: fe80::20c:29ff:fe10:8438/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:167781 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1750795 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:151936658 (144.8 MiB)  TX bytes:2498647650 (2.3 GiB)
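As an optional check (not in the original text), brctl show from the bridge-utils package should now list br0 with eth0 attached to it:
[root@node4 ~]# brctl show     \\ br0 should appear with eth0 as its enslaved interface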
(7) Write the Xen configuration file for the busybox guest
[root@node4 ~]# vim /etc/xen/busybox
kernel = "/boot/vmlinuz-2.6.32-504.el6.x86_64"
ramdisk = "/boot/initramfs-2.6.32-504.el6.x86_64.img"
name = "busybox"
vcpus = 2
memory = 512
disk = [ 'file:/xen/image/busybox.img,xvda,w', ]
root = "/dev/xvda ro"
extra = "selinux=0 init=/bin/sh"
vif = [ 'bridge=br0' ]
on_reboot = "destroy"
on_crash = "destroy"
(8) Create the virtual machine:
[root@node4 xen]# xm create -c busybox
/bin/sh: can't access tty; job control turned off
/ #                               \\ final result: the BusyBox mini Linux boots and drops to a shell
[root@node4 xen]# xm list         \\ list the running domains
Name                               ID   Mem VCPUs      State   Time(s)
Domain-0                            0  1024     1     r-----    362.3
busybox                             2   512     2     -b----     10.7
(9) Stop the busybox system, mount the iSCSI device on node5, copy the busybox configuration file over, and test whether busybox starts on node5
[root@node4 ~]# xm destroy busybox                    \\ stop the busybox domain
[root@node4 ~]# umount /xen/image/                    \\ unmount the shared storage on node4
[root@node5 ~]# mount /dev/sdb /xen/image             \\ mount the iSCSI device on node5
[root@node4 ~]# scp /etc/xen/busybox node5:/etc/xen/  \\ copy node4's busybox configuration file to node5
[root@node5 ~]# xm create -c busybox                  \\ create the busybox domain on node5
dracut: Switching root
/bin/sh: can't access tty; job control turned off
/ #                                                   \\ busybox starts successfully on node5
5. Live-migrating the BusyBox mini system
(1) On node4, mount the iSCSI device on /xen/image again
[root@node4 ~]# mount /dev/sdb /xen/image/
(2) Check both nodes before migrating busybox
[root@node4 ~]# xm list        \\ make sure busybox is not running on node4
Name                               ID   Mem VCPUs      State   Time(s)
Domain-0                            0  1024     1     r-----    393.7
[root@node5 ~]# xm list        \\ make sure busybox is running on node5 before the migration
Name                               ID   Mem VCPUs      State   Time(s)
Domain-0                            0  1024     1     r-----    286.1
busybox                             2   512     2     -b----      3.5
(3) Edit the xend configuration on node4 and node5 and enable the following options
[root@node4 ~]# vim /etc/xen/xend-config.sxp
(xend-relocation-server yes)
(xend-relocation-port 8002)
(xend-address '172.16.2.16')
(xend-relocation-hosts-allow '')
[root@node5 ~]# vim /etc/xen/xend-config.sxp
(xend-relocation-server yes)
(xend-relocation-port 8002)
(xend-address '172.16.2.15')
(xend-relocation-hosts-allow '')
[root@node4 ~]# service xend restart; ssh node5 'service xend restart'
Stopping xend daemon:                                      [  OK  ]
Starting xend daemon:                                      [  OK  ]
Stopping xend daemon:                                      [  OK  ]
Starting xend daemon:                                      [  OK  ]
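Optionally, confirm that xend is now listening on the relocation port on both nodes (this check is not part of the original walkthrough):
[root@node4 ~]# netstat -tnlp | grep 8002; ssh node5 'netstat -tnlp | grep 8002'   \\ the relocation server should be listening on TCP 8002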
(4) Migrate the busybox system running on node5 over to node4
[root@node5 ~]# xm migrate -l busybox 172.16.2.15     \\ live-migrate busybox to node4
[root@node4 ~]# xm list                               \\ on node4, busybox is now running
Name                               ID   Mem VCPUs      State   Time(s)
Domain-0                            0  1024     1     r-----    427.8
busybox                             3   512     2     -b----      0.1
[root@node5 ~]# xm list                               \\ node5 no longer has any DomU running
Name                               ID   Mem VCPUs      State   Time(s)
Domain-0                            0  1024     1     r-----     64.7
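For reference, the -l flag requests a live migration, so the guest keeps running while its memory is copied; without it, xm migrate pauses the domain, transfers it and resumes it on the target. A sketch of the non-live form, assuming the same hosts:
[root@node5 ~]# xm migrate busybox 172.16.2.15        \\ non-live migration: busybox is paused during the transfer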
That completes the experiment. I have only just started learning Xen, so please share your suggestions wherever things could be improved. O(∩_∩)O
Original article by 马行空. If you repost it, please credit the source: http://www.178linux.com/7330
Comments (6)
A preface, preparation, execution and a conclusion; it reads smoothly from start to finish. Nicely done!
@stanley: Thank you
Could you add a topology diagram for this experiment?
@zx5200: Sorry, I can no longer add a diagram to the post. Add me on QQ (249502221) and I will send it to you.
What is node4's bridge device used for?
@wwenyunkui: The bridge is the virtualized network piece: through TUN or TAP devices it emulates a layer-2 or layer-3 network just like a physical one.