Logical Volume Manager (LVM)
§·Introduction to the Logical Volume Manager (LVM)
※·A brief description of LVM logical volumes
LVM (Logical Volume Manager) lets you resize filesystem capacity flexibly. It supports any block device and relies on the kernel's dm (device mapper) module, which assembles one or more underlying devices into a single logical device.
The point of LVM is this elastic capacity adjustment, not storage efficiency or data safety; read/write performance and data reliability are the concerns that RAID addresses.
※·Pros and cons of LVM logical volumes
Advantages:
The key advantage of LVM is elastic resizing of filesystem capacity. With ordinary partitions, the size is essentially fixed once created. For example, if the Linux /home partition fills up because users store too much data, you have to copy /home's data elsewhere and mount a larger partition in its place to complete the expansion — a fairly tedious procedure.
Because LVM can resize partitions dynamically and online, we can grow /home directly through LVM. And if some other partition is oversized and wasting space, we can shrink it online as well.
LVM also supports snapshots, allowing a complete backup of the data without interrupting service.
Disadvantages:
LVM sits on top of the operating system. If data is lost through a software fault, or the LVM metadata itself is damaged, recovery is comparatively difficult.
※·Components of the LVM stack
A simple model:
PV: physical volume. LVM is built on top of physical disks (or partitions); adding physical disks lets the layers above grow.
VG: volume group. Contains one or more physical volumes.
LV: logical volume. Carved out of a VG, this is the partition-like device that actually stores data and carries the filesystem.
PE: physical extent. The smallest allocation unit inside a VG.
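Since PEs are the allocation unit, every LV occupies a whole number of extents. A minimal sketch of that arithmetic (the 4 MiB default PE size is an assumption; check your VG's `vgdisplay`):

```shell
# Sketch: how many physical extents (PEs) an LV of a given size needs.
# PE size defaults to 4 MiB here, matching the VGs created later in this page.
extents_needed() {
    local lv_mb=$1 pe_mb=${2:-4}
    # Round up: an LV always occupies whole extents.
    echo $(( (lv_mb + pe_mb - 1) / pe_mb ))
}
```

For a 2G LV with 4 MiB extents this gives 512 extents, which is exactly the "Current LE 512" that lvdisplay reports in the exercise below.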
§·LVM by example
※·LVM device names
/dev/mapper/VG_NAME-LV_NAME
For example: /dev/mapper/vol0-root <— /dev/vol0/root (symbolic link)
※·LVM partition type:
Type: 8e Linux LVM
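The /dev/mapper name can be derived mechanically: device-mapper joins the VG and LV names with a hyphen, and any hyphen inside either name is escaped by doubling it. A small bash sketch (hypothetical names used for illustration):

```shell
# Sketch: derive the /dev/mapper node from a VG and LV name.
# device-mapper joins them with '-' and doubles any '-' inside a name,
# e.g. VG "my-vg" + LV "root" -> /dev/mapper/my--vg-root
dm_name() {
    local vg=${1//-/--} lv=${2//-/--}
    echo "/dev/mapper/${vg}-${lv}"
}
```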
※·PV-related commands
pvs: brief PV summary;
pvdisplay: detailed PV information;
pvcreate: create a PV
Example: pvcreate /dev/sda3
※·VG-related commands
vgs: brief VG summary
vgdisplay: detailed VG information
vgcreate: create a VG
Example: vgcreate myvg /dev/sda3
vgextend: add a PV to a VG
Example: vgextend myvg /dev/sda5
Shrinking a VG:
1. Move the PV's data onto the VG's other PVs:
pvmove /dev/sda5
2. Remove the PV from the VG:
vgreduce myvg /dev/sda5
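Those two steps can be wrapped in one helper — a sketch with assumed names, to be run as root on a real system; the ordering is the point (pvmove must complete before vgreduce, or data on the departing PV is lost):

```shell
# Sketch: safely detach a PV from a VG. Usage: vg_remove_pv VG PV
vg_remove_pv() {
    [ $# -eq 2 ] || { echo "usage: vg_remove_pv VG PV" >&2; return 2; }
    pvmove "$2" &&         # 1. migrate this PV's extents onto the VG's other PVs
    vgreduce "$1" "$2"     # 2. only then detach the now-empty PV from the VG
}
```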
※·LV-related commands
Create an LV: lvcreate
-L #: size of the space to allocate;
-n: name of the LV, followed by the VG to allocate from
Example: lvcreate -L 2G -n mylv myvg (roughly equivalent to creating a partition)
# carves a 2G LV named mylv out of the (already created) VG myvg
mke2fs -t ext4 -b 1024 -L mylv /dev/myvg/mylv (format the LV as ext4)
# makes an ext4 filesystem on the LV device /dev/myvg/mylv
Extending an LV:
lvextend -L [+]#[MGT] /dev/vg_name/lv_name
Then have the filesystem pick up the new size: resize2fs /dev/myvg/mylv
Example: lvextend -L 5G /dev/myvg/mylv
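The grow is always those two commands in that order. A dry-run sketch that only prints the plan (device name is an example; drop the echos to actually run it as root):

```shell
# Sketch: the two-step online grow, emitted as a printable plan.
lv_grow_plan() {
    local dev=$1 size=$2
    echo "lvextend -L $size $dev"    # 1. grow the LV (physical boundary)
    echo "resize2fs $dev"            # 2. grow the ext4 filesystem to match
}
```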
Shrinking an LV:
First unmount mylv: umount /dev/myvg/mylv
Force a filesystem check on mylv: e2fsck -f /dev/myvg/mylv
Shrink the filesystem: resize2fs /dev/myvg/mylv 3000M
Shrink the logical volume itself: lvreduce -L 3000M /dev/myvg/mylv
Remount and use it: mount /dev/myvg/mylv /mylvm/
※·LVM snapshots:
Creating a snapshot:
lvcreate -s -L #[GT] -p r -n snapshot_lv_name /dev/myvg/mylv
# -s: create a snapshot; -L: snapshot space size; -p r: make the snapshot read-only; -n: snapshot name
Example:
lvcreate -s -L 512M -p r -n mylv_snap /dev/myvg/mylv
mount /dev/myvg/mylv_snap /mnt/snap
Once the files have been copied out of the snapshot, the snapshot volume mylv_snap can be deleted:
lvremove /dev/myvg/mylv_snap
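The whole snapshot-backup cycle can be sketched as a printed plan (hypothetical names and backup path; drop the echos and run as root to execute for real):

```shell
# Sketch: snapshot -> mount -> copy out -> remove, as a dry-run plan.
snap_backup_plan() {
    local vg=$1 lv=$2 size=$3
    echo "lvcreate -s -L $size -p r -n ${lv}_snap /dev/$vg/$lv"  # read-only snapshot
    echo "mount /dev/$vg/${lv}_snap /mnt/snap"                   # mount the frozen view
    echo "cp -a /mnt/snap/. /backup/"                            # copy the data out
    echo "umount /mnt/snap"
    echo "lvremove -f /dev/$vg/${lv}_snap"                       # drop the snapshot
}
```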
§·Practice exercise
1. Using three 20G disks, build an LVM setup: create two VGs (vg01, vg02), create two LVs on each (lv0101, lv0102, lv0201, lv0202) of 5G apiece, and practice extending and shrinking volumes, snapshots, and so on.
※·Step one: add the three disks, partition them, and tag the partitions as type 8e (Linux LVM)
[root@love721 ~]# fdisk -l /dev/sdb

Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xa49e4ef2

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    41943039    20970496   8e  Linux LVM

[root@love721 ~]# fdisk -l /dev/sdc

Disk /dev/sdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x05cfc514

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048    41943039    20970496   8e  Linux LVM

[root@love721 ~]# fdisk -l /dev/sdd

Disk /dev/sdd: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xc9626279

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048    41943039    20970496   8e  Linux LVM
※·Step two: turn the three partitions into PVs
[root@love721 ~]# pvs          # before creation there is no PV information at all
[root@love721 ~]# pvdisplay
[root@love721 ~]# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created
[root@love721 ~]# pvcreate /dev/sdc1
  Physical volume "/dev/sdc1" successfully created
[root@love721 ~]# pvcreate /dev/sdd1   # the PV creation command and its message
  Physical volume "/dev/sdd1" successfully created
[root@love721 ~]# pvs          # brief PV summary
  PV         VG   Fmt  Attr PSize  PFree
  /dev/sdb1       lvm2 ---  20.00g 20.00g
  /dev/sdc1       lvm2 ---  20.00g 20.00g
  /dev/sdd1       lvm2 ---  20.00g 20.00g
[root@love721 ~]# pvdisplay    # detailed PV information
  "/dev/sdd1" is a new physical volume of "20.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdd1
  VG Name
  PV Size               20.00 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               DiJsp3-PUu5-oFmp-min1-dfs8-q17e-E3dyb5

  "/dev/sdb1" is a new physical volume of "20.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb1
  VG Name
  PV Size               20.00 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               xVVOUU-aRPa-oF0U-wVVb-HF9g-xLcg-dQWJuI

  "/dev/sdc1" is a new physical volume of "20.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdc1
  VG Name
  PV Size               20.00 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               9BfbMg-Rvwt-0NwD-kYm3-Rld1-zngF-NxWymJ
※·Step three: with the PVs in place, create the VGs
[root@love721 ~]# vgcreate vg01 /dev/sdb1   # create vg01 on sdb1
  Volume group "vg01" successfully created
[root@love721 ~]# vgcreate vg02 /dev/sdb1   # test: a partition can belong to only one VG
  Physical volume '/dev/sdb1' is already in volume group 'vg01'
  Unable to add physical volume '/dev/sdb1' to volume group 'vg02'.
[root@love721 ~]# vgcreate vg02 /dev/sdc1   # create vg02 on sdc1
  Volume group "vg02" successfully created
[root@love721 ~]# vgdisplay                 # inspect the VGs
  --- Volume group ---
  VG Name               vg01                # VG name
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write          # readable and writable
  VG Status             resizable           # can be resized
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               20.00 GiB           # VG size
  PE Size               4.00 MiB            # PE size
  Total PE              5119
  Alloc PE / Size       0 / 0
  Free  PE / Size       5119 / 20.00 GiB
  VG UUID               1ImH19-1Y6G-mbnI-52c1-FB8C-jN9e-djU8rk

  --- Volume group ---
  VG Name               vg02
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               20.00 GiB
  PE Size               4.00 MiB
  Total PE              5119
  Alloc PE / Size       0 / 0
  Free  PE / Size       5119 / 20.00 GiB
  VG UUID               hXcukw-bsgg-iTJv-WTVx-paHt-6HEK-IdTDfO

[root@love721 ~]# pvdisplay   # the PV information naturally changes once PVs have joined VGs
  --- Physical volume ---
  PV Name               /dev/sdb1           # sdb1 joined vg01
  VG Name               vg01
  PV Size               20.00 GiB / not usable 3.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              5119
  Free PE               5119
  Allocated PE          0
  PV UUID               xVVOUU-aRPa-oF0U-wVVb-HF9g-xLcg-dQWJuI

  --- Physical volume ---
  PV Name               /dev/sdc1           # sdc1 joined vg02
  VG Name               vg02
  PV Size               20.00 GiB / not usable 3.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              5119
  Free PE               5119
  Allocated PE          0
  PV UUID               9BfbMg-Rvwt-0NwD-kYm3-Rld1-zngF-NxWymJ

  "/dev/sdd1" is a new physical volume of "20.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdd1           # sdd1 has not joined any VG yet
  VG Name
  PV Size               20.00 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               DiJsp3-PUu5-oFmp-min1-dfs8-q17e-E3dyb5
※·Test: add /dev/sdd1 to a VG and see how the PV information changes
[root@love721 ~]# vgextend vg01 /dev/sdd1   # add sdd1 to vg01
  Volume group "vg01" successfully extended
[root@love721 ~]# vgdisplay                 # VG information
  --- Volume group ---
  VG Name               vg01
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               39.99 GiB           # the VG has grown now that sdd1 has joined
  PE Size               4.00 MiB
  Total PE              10238
  Alloc PE / Size       0 / 0
  Free  PE / Size       10238 / 39.99 GiB
  VG UUID               1ImH19-1Y6G-mbnI-52c1-FB8C-jN9e-djU8rk

  --- Volume group ---
  VG Name               vg02
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               20.00 GiB
  PE Size               4.00 MiB
  Total PE              5119
  Alloc PE / Size       0 / 0
  Free  PE / Size       5119 / 20.00 GiB
  VG UUID               hXcukw-bsgg-iTJv-WTVx-paHt-6HEK-IdTDfO

[root@love721 ~]# pvdisplay   # detailed PV information
  --- Physical volume ---
  PV Name               /dev/sdb1
  VG Name               vg01
  PV Size               20.00 GiB / not usable 3.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              5119
  Free PE               5119
  Allocated PE          0
  PV UUID               xVVOUU-aRPa-oF0U-wVVb-HF9g-xLcg-dQWJuI

  --- Physical volume ---
  PV Name               /dev/sdd1
  VG Name               vg01
  PV Size               20.00 GiB / not usable 3.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              5119
  Free PE               5119
  Allocated PE          0
  PV UUID               DiJsp3-PUu5-oFmp-min1-dfs8-q17e-E3dyb5

  --- Physical volume ---
  PV Name               /dev/sdc1
  VG Name               vg02
  PV Size               20.00 GiB / not usable 3.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              5119
  Free PE               5119
  Allocated PE          0
  PV UUID               9BfbMg-Rvwt-0NwD-kYm3-Rld1-zngF-NxWymJ
※·Step four: allocate LVs from the VG
The LV is the space that is actually usable in practice — be sure to keep the distinction between PV, VG, and LV clear.

[root@love721 ~]# lvcreate -L 2G -n lv0101 vg01   # create two 2G LVs on vg01
  Logical volume "lv0101" created.
[root@love721 ~]# lvcreate -L 2G -n lv0102 vg01
  Logical volume "lv0102" created.
[root@love721 ~]# lvdisplay   # show LV information
  --- Logical volume ---
  LV Path                /dev/vg01/lv0101
  LV Name                lv0101
  VG Name                vg01
  LV UUID                Kv9y7w-cdLQ-T1hb-GLcb-he3E-Zca1-kffH0T
  LV Write Access        read/write
  LV Creation host, time love721.q.com, 2016-08-01 10:56:32 +0800
  LV Status              available
  # open                 0
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/vg01/lv0102
  LV Name                lv0102
  VG Name                vg01
  LV UUID                eIDTga-iY8A-2BXg-TpSY-XoMH-vbsA-h3pmGd
  LV Write Access        read/write
  LV Creation host, time love721.q.com, 2016-08-01 10:56:40 +0800
  LV Status              available
  # open                 0
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:1

[root@love721 ~]# vgdisplay   # show VG information
  --- Volume group ---
  VG Name               vg01
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               39.99 GiB
  PE Size               4.00 MiB
  Total PE              10238
  Alloc PE / Size       1024 / 4.00 GiB    # vg01 shows 4G in use: the two LVs just allocated
  Free  PE / Size       9214 / 35.99 GiB
  VG UUID               1ImH19-1Y6G-mbnI-52c1-FB8C-jN9e-djU8rk

  --- Volume group ---
  VG Name               vg02
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               20.00 GiB
  PE Size               4.00 MiB
  Total PE              5119
  Alloc PE / Size       0 / 0
  Free  PE / Size       5119 / 20.00 GiB
  VG UUID               hXcukw-bsgg-iTJv-WTVx-paHt-6HEK-IdTDfO
※·Step five: format and mount the LVs
Allocating LVs from the VG hands out usable space; to actually use the two LVs we just format and mount them.

Format the LV:

[root@love721 ~]# mke2fs -t ext4 -b 1024 -L mylv0101 /dev/mapper/vg01-lv0101   # make an ext4 filesystem labeled mylv0101 on /dev/mapper/vg01-lv0101, the device node generated automatically when the LV was created
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=mylv0101
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
131072 inodes, 2097152 blocks
104857 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=35651584
256 block groups
8192 blocks per group, 8192 fragments per group
512 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409, 663553, 1024001, 1990657

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

[root@love721 ~]# mkdir /mnt/mylv0101                            # create the mount point
[root@love721 ~]# mount /dev/mapper/vg01-lv0101 /mnt/mylv0101/   # mount mylv0101
[root@love721 ~]# cd /mnt/mylv0101/
[root@love721 mylv0101]# ls
lost+found
[root@love721 mylv0101]# cp -r /boot/* ./                        # copy some files onto the LV
[root@love721 mylv0101]# ll
total 107037
-rw-r--r-- 1 root root   126426 Aug  1 11:07 config-3.10.0-327.el7.x86_64
drwxr-xr-x 2 root root     1024 Aug  1 11:07 grub
drwx------ 6 root root     1024 Aug  1 11:07 grub2
-rw-r--r-- 1 root root 57644379 Aug  1 11:07 initramfs-0-rescue-7d0dd8f054af463ca2d6ac1f4b210fdc.img
-rw-r--r-- 1 root root 28097829 Aug  1 11:07 initramfs-3.10.0-327.el7.x86_64.img
-rw-r--r-- 1 root root 10190079 Aug  1 11:07 initrd-plymouth.img
drwx------ 2 root root    12288 Aug  1 11:05 lost+found
-rw-r--r-- 1 root root   252612 Aug  1 11:07 symvers-3.10.0-327.el7.x86_64.gz
-rw------- 1 root root  2963044 Aug  1 11:07 System.map-3.10.0-327.el7.x86_64
-rwxr-xr-x 1 root root  5156528 Aug  1 11:07 vmlinuz-0-rescue-7d0dd8f054af463ca2d6ac1f4b210fdc
-rwxr-xr-x 1 root root  5156528 Aug  1 11:07 vmlinuz-3.10.0-327.el7.x86_64
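A mount done by hand like this lasts only until reboot. To make it persistent you would add a line along these lines to /etc/fstab (a sketch using this walkthrough's paths; using UUID= instead of the mapper path is also common and more robust):

```
/dev/mapper/vg01-lv0101  /mnt/mylv0101  ext4  defaults  0 0
```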
※·Test: grow an LV online
Steps to extend an LV:
Since the existing VG holds 40G and the LV only 3G, the VG has room and we can extend with the LV commands directly:
lvextend -L 10G /dev/vg01/lv0101
Then have the filesystem pick up the new size: resize2fs /dev/vg01/lv0101
Check the current LV sizes first:
[root@love721 mylv0102]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/sda2                 40G  315M   40G   1% /
devtmpfs                 475M     0  475M   0% /dev
tmpfs                    489M     0  489M   0% /dev/shm
tmpfs                    489M  6.8M  483M   2% /run
tmpfs                    489M     0  489M   0% /sys/fs/cgroup
/dev/sda3                 20G  2.6G   18G  13% /usr
/dev/sda6               1003K   23K  909K   3% /mnt/tools
/dev/sda1                485M  138M  348M  29% /boot
tmpfs                     98M     0   98M   0% /run/user/0
/dev/mapper/vg01-lv0101  2.9G  3.1M  2.8G   1% /mnt/mylv0101    # lv0101 is currently about 3G
/dev/mapper/vg01-lv0102  1.9G  3.1M  1.8G   1% /mnt/mylv0102    # lv0102 is currently about 2G
Both LVs are mounted on the host. Now use the commands to grow them:
lv0101 from 3G to 5G; lv0102 from 2G to 6G
[root@love721 mylv0102]# lvextend -L 5G /dev/vg01/lv0101    # grow lv0101
  Size of logical volume vg01/lv0101 changed from 2.93 GiB (750 extents) to 5.00 GiB (1280 extents).
  Logical volume lv0101 successfully resized.
[root@love721 mylv0102]# lvextend -L 6G /dev/vg01/lv0102    # grow lv0102
  Size of logical volume vg01/lv0102 changed from 1.95 GiB (500 extents) to 6.00 GiB (1536 extents).
  Logical volume lv0102 successfully resized.
[root@love721 mylv0102]# fdisk -l    # check the device sizes (most of the output omitted; only lv0101 and lv0102 shown)
...
Disk /dev/mapper/vg01-lv0101: 5368 MB, 5368709120 bytes, 10485760 sectors    # lv0101 is now 5G
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/vg01-lv0102: 6442 MB, 6442450944 bytes, 12582912 sectors    # lv0102 is now 6G
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@love721 mylv0102]# df -h       # but the mounted filesystems still show the old sizes, not the enlarged space
Filesystem               Size  Used Avail Use% Mounted on
...
/dev/mapper/vg01-lv0101  2.9G  3.1M  2.8G   1% /mnt/mylv0101
/dev/mapper/vg01-lv0102  1.9G  3.1M  1.8G   1% /mnt/mylv0102
To fix this, run resize2fs so the filesystem actually grows into the newly added space:
[root@love721 mylv0102]# resize2fs /dev/vg01/lv0101    # let the filesystem pick up lv0101's new size
resize2fs 1.42.9 (28-Dec-2013)
Filesystem at /dev/vg01/lv0101 is mounted on /mnt/mylv0101; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/vg01/lv0101 is now 1310720 blocks long.

[root@love721 mylv0102]# resize2fs /dev/vg01/lv0102    # the same for lv0102
resize2fs 1.42.9 (28-Dec-2013)
Filesystem at /dev/vg01/lv0102 is mounted on /mnt/mylv0102; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/vg01/lv0102 is now 1572864 blocks long.

[root@love721 mylv0102]# df -h    # the enlarged capacities now show correctly
Filesystem               Size  Used Avail Use% Mounted on
/dev/sda2                 40G  315M   40G   1% /
devtmpfs                 475M     0  475M   0% /dev
/dev/mapper/vg01-lv0101  4.9G  4.0M  4.7G   1% /mnt/mylv0101
/dev/mapper/vg01-lv0102  5.9G  4.0M  5.7G   1% /mnt/mylv0102
Check whether the original files are still on the LVs: they all exist and open normally
[root@love721 mylv0102]# ll /mnt/mylv0101
total 20
-rw-r--r-- 1 root root   683 Aug  1 13:04 fstab
drwx------ 2 root root 16384 Aug  1 13:03 lost+found
[root@love721 mylv0102]# ll /mnt/mylv0102
total 20
-rw-r--r-- 1 root root   119 Aug  1 13:04 issue
drwx------ 2 root root 16384 Aug  1 13:03 lost+found
[root@love721 mylv0102]#
※·A summary of online LVM extension:
1. The commands themselves are simple, but understand the flow: in real work you would partition the new disk as type 8e, create a PV from it, add the PV to the VG with vgextend, and only then allocate VG space to the LV. Here the VG already had plenty of free space, so the LV commands alone sufficed.
2. After lvextend the LV is bigger but the system still reports the old filesystem size; resize2fs re-reads the device size so the filesystem matches the new capacity.
§·Test: shrink the LVs (lv0101 from 5G down to 2G, lv0102 from 6G down to 3G)
Steps to shrink an LV:
1. First unmount the mounted LV;
2. Force a filesystem check: e2fsck -f /dev/vg01/lv0101
3. resize2fs /dev/vg01/lv0101 2000M (adjust the logical boundary, i.e. the filesystem);
4. lvreduce -L 2000M /dev/vg01/lv0101 (adjust the physical boundary);
5. Remount the device and check the files are intact.
Note: whatever the reason for shrinking, first make sure the data already in the LV is smaller than the target size, otherwise the shrink cannot succeed. And since you are choosing to shrink the LV, it presumably holds no critical data (if it does, back it up first).
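The five steps above can be sketched as a dry-run plan — the order is the whole point, in particular that resize2fs (step 3) must run before lvreduce (step 4), never after (names are examples; drop the echos and run as root to execute):

```shell
# Sketch: the ext4 shrink sequence, emitted as a printable plan.
lv_shrink_plan() {
    local dev=$1 size=$2 mnt=$3
    echo "umount $mnt"               # 1. shrinking ext4 cannot be done online
    echo "e2fsck -f $dev"            # 2. forced check, required before resize2fs
    echo "resize2fs $dev $size"      # 3. shrink the filesystem (logical boundary)
    echo "lvreduce -L $size $dev"    # 4. then shrink the LV (physical boundary)
    echo "mount $dev $mnt"           # 5. remount and verify
}
```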
※·Unmount the two LVs, lv0101 and lv0102
[root@love721 mylv0102]# df -h    # both LVs are currently mounted on the system
Filesystem               Size  Used Avail Use% Mounted on
tmpfs                     98M     0   98M   0% /run/user/0
/dev/mapper/vg01-lv0101  4.9G  4.0M  4.7G   1% /mnt/mylv0101
/dev/mapper/vg01-lv0102  5.9G  4.0M  5.7G   1% /mnt/mylv0102
[root@love721 mylv0102]# umount /mnt/mylv0101
[root@love721 mylv0102]# cd ..    # mylv0102 cannot be unmounted while the shell's working directory is inside it, so step out first
[root@love721 mnt]# umount /mnt/mylv0102
[root@love721 mnt]# df -h
Filesystem               Size  Used Avail Use% Mounted on
tmpfs                     98M     0   98M   0% /run/user/0    # both LV volumes are now unmounted
※·Force-check the two LV filesystems
(if you skip this and go straight to shrinking with resize2fs, it will tell you to run the forced check anyway)
[root@love721 mnt]# e2fsck -f /dev/vg01/lv0101    # force-check lv0101
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
mylv0101: 12/327680 files (0.0% non-contiguous), 29791/1310720 blocks
[root@love721 mnt]# e2fsck -f /dev/vg01/lv0102    # force-check lv0102
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
mylv0102: 12/393216 files (0.0% non-contiguous), 33903/1572864 blocks
※·Use resize2fs to adjust the LV's logical boundary, then lvreduce for the physical boundary
[root@love721 mnt]# resize2fs /dev/vg01/lv0101 2000M    # shrink lv0101's filesystem to 2G
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/vg01/lv0101 to 512000 (4k) blocks.
The filesystem on /dev/vg01/lv0101 is now 512000 blocks long.

[root@love721 mnt]# resize2fs /dev/vg01/lv0102 3000M    # shrink lv0102's filesystem to 3G
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/vg01/lv0102 to 768000 (4k) blocks.
The filesystem on /dev/vg01/lv0102 is now 768000 blocks long.

[root@love721 mnt]# lvreduce -L 2000M /dev/vg01/lv0101    # shrink the lv0101 LV to 2G
  WARNING: Reducing active logical volume to 1.95 GiB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce lv0101? [y/n]: y
  Size of logical volume vg01/lv0101 changed from 5.00 GiB (1280 extents) to 1.95 GiB (500 extents).
  Logical volume lv0101 successfully resized.
[root@love721 mnt]# lvreduce -L 3000M /dev/vg01/lv0102    # shrink the lv0102 LV to 3G
  WARNING: Reducing active logical volume to 2.93 GiB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce lv0102? [y/n]: y
  Size of logical volume vg01/lv0102 changed from 6.00 GiB (1536 extents) to 2.93 GiB (750 extents).
  Logical volume lv0102 successfully resized.
※·Remount lv0101 and lv0102 and check whether the data is still there
[root@love721 mnt]# mount /dev/vg01/lv0101 /mnt/mylv0101    # remount lv0101
[root@love721 mnt]# mount /dev/vg01/lv0102 /mnt/mylv0102    # remount lv0102
[root@love721 mnt]# df -h
Filesystem               Size  Used Avail Use% Mounted on
tmpfs                     98M     0   98M   0% /run/user/0
/dev/mapper/vg01-lv0101  1.9G  3.1M  1.9G   1% /mnt/mylv0101    # the sizes are adjusted as intended
/dev/mapper/vg01-lv0102  2.9G  3.1M  2.8G   1% /mnt/mylv0102
[root@love721 mnt]# ll /mnt/mylv0101    # the original data files are all there and read back fine
total 20
-rw-r--r-- 1 root root   683 Aug  1 13:04 fstab
drwx------ 2 root root 16384 Aug  1 13:03 lost+found
[root@love721 mnt]# ll /mnt/mylv0102
total 20
-rw-r--r-- 1 root root   119 Aug  1 13:04 issue
drwx------ 2 root root 16384 Aug  1 13:03 lost+found
[root@love721 mnt]#
※·Summary of shrinking LVM space
That is the whole shrink procedure. Why does shrinking require unmounting while growing does not?
In practice the blocks holding an LV's data may be scattered; shrinking has to gather the data into the region that will remain, so the volume must be unmounted to prevent other writes from landing in blocks outside the shrunken area while data is being moved.
Also remember: after unmounting, always run the forced check, then resize2fs, and only then lvreduce — otherwise the LV can end up corrupted.
§·Testing the LVM snapshot mechanism (online backup)
A few backup-related concepts first:
Cold backup: the filesystem is unmounted; no reads, no writes
Warm backup: the filesystem stays mounted; reads are possible but not writes
Hot backup: the filesystem stays mounted; both reads and writes are possible
Two things to note:
1) a snapshot is itself a logical volume
2) snapshots can only back up LVs, and only LVs in the same volume group as the snapshot
lvcreate -s -L #[GT] -p r -n snapshot_lv_name /dev/myvg/mylv
# -s: create a snapshot; -L: snapshot space size; -p r: make the snapshot read-only; -n: name of the snapshot
※·Snapshot lv0102 and back up its data
1. Check lv0102's size and its contents;
2. Take the snapshot, then delete and modify data on the origin volume;
3. Verify that the data in the snapshot volume is still complete.
[root@love721 mylv0102]# ll -h    # the origin volume holds about 105M of data
total 105M
-rw-r--r-- 1 root root 124K Aug  1 16:10 config-3.10.0-327.el7.x86_64
drwxr-xr-x 2 root root 4.0K Aug  1 16:10 grub
drwx------ 6 root root 4.0K Aug  1 16:10 grub2
-rw-r--r-- 1 root root  55M Aug  1 16:10 initramfs-0-rescue-7d0dd8f054af463ca2d6ac1f4b210fdc.img
-rw-r--r-- 1 root root  27M Aug  1 16:10 initramfs-3.10.0-327.el7.x86_64.img
-rw-r--r-- 1 root root 9.8M Aug  1 16:10 initrd-plymouth.img
-rw-r--r-- 1 root root  119 Aug  1 13:04 issue
drwx------ 2 root root  16K Aug  1 13:03 lost+found
-rw-r--r-- 1 root root 247K Aug  1 16:10 symvers-3.10.0-327.el7.x86_64.gz
-rw------- 1 root root 2.9M Aug  1 16:10 System.map-3.10.0-327.el7.x86_64
-rwxr-xr-x 1 root root 5.0M Aug  1 16:10 vmlinuz-0-rescue-7d0dd8f054af463ca2d6ac1f4b210fdc
-rwxr-xr-x 1 root root 5.0M Aug  1 16:10 vmlinuz-3.10.0-327.el7.x86_64
[root@love721 mylv0102]# df -h    # roughly 2.7G still free on the volume
Filesystem               Size  Used Avail Use% Mounted on
tmpfs                     98M     0   98M   0% /run/user/0
/dev/mapper/vg01-lv0102  2.9G  116M  2.7G   5% /mnt/mylv0102
Create the snapshot volume lv0102_snap with a size of 500M (don't make the snapshot space too small: if the origin volume changes heavily, the snapshot will run out of room to preserve the original copies of all the data)
[root@love721 mylv0102]# lvcreate -s -L 500M -n lv0102_snap -p r /dev/mapper/vg01-lv0102    # create lv0102_snap, a 500M read-only snapshot of lv0102
  Logical volume "lv0102_snap" created.
[root@love721 mylv0102]# lvdisplay    # show the LV details (part of the output omitted); the newly created snapshot volume is visible
...
  --- Logical volume ---
  LV Path                /dev/vg01/lv0102_snap
  LV Name                lv0102_snap
  VG Name                vg01
  LV UUID                YAETxW-lPfi-af9a-RM41-t5Ok-5BBT-ourZfl
  LV Write Access        read only
  LV Creation host, time love721.q.com, 2016-08-01 16:16:34 +0800
  LV snapshot status     active destination for lv0102
  LV Status              available
  # open                 0
  LV Size                2.93 GiB
  Current LE             750
  COW-table size         500.00 MiB
  COW-table LE           125
  Allocated to snapshot  0.00%
  Snapshot chunk size    4.00 KiB
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:4
Mount the snapshot volume, then modify and delete some of lv0102's data to test the snapshot's integrity.
[root@love721 mylv0102]# mkdir /mnt/snap
[root@love721 mylv0102]# mount /dev/vg01/lv0102_snap /mnt/snap    # mount the snapshot volume
mount: /dev/mapper/vg01-lv0102_snap is write-protected, mounting read-only
[root@love721 snap]# ll -h
total 105M
-rw-r--r-- 1 root root 124K Aug  1 16:10 config-3.10.0-327.el7.x86_64
drwxr-xr-x 2 root root 4.0K Aug  1 16:10 grub
drwx------ 6 root root 4.0K Aug  1 16:10 grub2
-rw-r--r-- 1 root root  55M Aug  1 16:10 initramfs-0-rescue-7d0dd8f054af463ca2d6ac1f4b210fdc.img
-rw-r--r-- 1 root root  27M Aug  1 16:10 initramfs-3.10.0-327.el7.x86_64.img
-rw-r--r-- 1 root root 9.8M Aug  1 16:10 initrd-plymouth.img
-rw-r--r-- 1 root root  119 Aug  1 13:04 issue
drwx------ 2 root root  16K Aug  1 13:03 lost+found
-rw-r--r-- 1 root root 247K Aug  1 16:10 symvers-3.10.0-327.el7.x86_64.gz
-rw------- 1 root root 2.9M Aug  1 16:10 System.map-3.10.0-327.el7.x86_64
-rwxr-xr-x 1 root root 5.0M Aug  1 16:10 vmlinuz-0-rescue-7d0dd8f054af463ca2d6ac1f4b210fdc
-rwxr-xr-x 1 root root 5.0M Aug  1 16:10 vmlinuz-3.10.0-327.el7.x86_64
Delete some data from the origin volume and modify a file or two
[root@love721 mylv0102]# rm symvers-3.10.0-327.el7.x86_64.gz
rm: remove regular file ‘symvers-3.10.0-327.el7.x86_64.gz’? y
[root@love721 mylv0102]# rm vmlinuz-*
rm: remove regular file ‘vmlinuz-0-rescue-7d0dd8f054af463ca2d6ac1f4b210fdc’? y
rm: remove regular file ‘vmlinuz-3.10.0-327.el7.x86_64’? y
# three files deleted from the origin volume lv0102
[root@love721 mylv0102]# ll
total 96736
-rw-r--r-- 1 root root   126426 Aug  1 16:10 config-3.10.0-327.el7.x86_64
drwxr-xr-x 2 root root     4096 Aug  1 16:10 grub
drwx------ 6 root root     4096 Aug  1 16:10 grub2
-rw-r--r-- 1 root root 57644379 Aug  1 16:10 initramfs-0-rescue-7d0dd8f054af463ca2d6ac1f4b210fdc.img
-rw-r--r-- 1 root root 28097829 Aug  1 16:10 initramfs-3.10.0-327.el7.x86_64.img
-rw-r--r-- 1 root root 10190079 Aug  1 16:10 initrd-plymouth.img
-rw-r--r-- 1 root root      119 Aug  1 13:04 issue
drwx------ 2 root root    16384 Aug  1 13:03 lost+found
-rw------- 1 root root  2963044 Aug  1 16:10 System.map-3.10.0-327.el7.x86_64
[root@love721 mylv0102]# echo "1234567890" >> issue    # modify the contents of the issue file
[root@love721 mylv0102]# cat issue
\S
Kernel \r on an \m
Mage Education Learning Services
http://www.magedu.com

TTY is \l
HOSTNAME is \n
DATE is \t

1234567890
Check the snapshot's contents: the deleted files are still there, and issue still has its pre-snapshot content
[root@love721 mylv0102]# ll /mnt/snap/
total 107056
-rw-r--r-- 1 root root   126426 Aug  1 16:10 config-3.10.0-327.el7.x86_64
drwxr-xr-x 2 root root     4096 Aug  1 16:10 grub
drwx------ 6 root root     4096 Aug  1 16:10 grub2
-rw-r--r-- 1 root root 57644379 Aug  1 16:10 initramfs-0-rescue-7d0dd8f054af463ca2d6ac1f4b210fdc.img
-rw-r--r-- 1 root root 28097829 Aug  1 16:10 initramfs-3.10.0-327.el7.x86_64.img
-rw-r--r-- 1 root root 10190079 Aug  1 16:10 initrd-plymouth.img
-rw-r--r-- 1 root root      119 Aug  1 13:04 issue
drwx------ 2 root root    16384 Aug  1 13:03 lost+found
-rw-r--r-- 1 root root   252612 Aug  1 16:10 symvers-3.10.0-327.el7.x86_64.gz
-rw------- 1 root root  2963044 Aug  1 16:10 System.map-3.10.0-327.el7.x86_64
-rwxr-xr-x 1 root root  5156528 Aug  1 16:10 vmlinuz-0-rescue-7d0dd8f054af463ca2d6ac1f4b210fdc
-rwxr-xr-x 1 root root  5156528 Aug  1 16:10 vmlinuz-3.10.0-327.el7.x86_64
[root@love721 mylv0102]# cat /mnt/snap/issue
\S
Kernel \r on an \m
Mage Education Learning Services
http://www.magedu.com

TTY is \l
HOSTNAME is \n
DATE is \t
That completes the snapshot; copy the files out of it onto the backup server and the snapshot can go.
Deleting the snapshot volume:
lvremove /dev/vg01/lv0102_snap
[root@love721 mylv0102]# lvremove /dev/vg01/lv0102_snap    # remove the snapshot volume and free its VG space
Do you really want to remove active logical volume lv0102_snap? [y/n]: y
  Logical volume "lv0102_snap" successfully removed
※·Test: removing the entire LVM stack
1. Unmount every mounted LV;
2. Remove the LVs:
lvremove /dev/vg01/lv0101
lvremove /dev/vg01/lv0102
3. Remove the VGs:
vgremove vg01
vgremove vg02
4. Remove the PV labels:
pvremove /dev/sdd1
pvremove /dev/sdc1
pvremove /dev/sdb1
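The tear-down order (LVs first, then VGs, then PV labels) can be sketched as a printed plan using the names from this exercise (run the printed commands as root on a real system):

```shell
# Sketch: full LVM tear-down, bottom of the stack last.
lvm_teardown_plan() {
    for lv in /dev/vg01/lv0101 /dev/vg01/lv0102; do
        echo "lvremove $lv"        # LVs go first
    done
    for vg in vg01 vg02; do
        echo "vgremove $vg"        # then the VGs
    done
    for pv in /dev/sdb1 /dev/sdc1 /dev/sdd1; do
        echo "pvremove $pv"        # finally wipe the PV labels
    done
}
```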
§·Extra exercises
1. Create a 2G filesystem with a block size of 2048 bytes, 1% reserved space, type ext4, volume label TEST; the partition must be automatically mounted at boot on /test with the acl mount option enabled by default
Step one: partition and format
[root@centos68 ~]# lsblk    # a 2G partition (sdb1) has been created
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  200G  0 disk
├─sda1   8:1    0  200M  0 part /boot
├─sda2   8:2    0 48.8G  0 part /
├─sda3   8:3    0 19.5G  0 part /testdir
├─sda4   8:4    0    1K  0 part
├─sda5   8:5    0    2G  0 part [SWAP]
├─sda6   8:6    0   10G  0 part
└─sda7   8:7    0   10G  0 part /home
sdb      8:16   0   20G  0 disk
└─sdb1   8:17   0    2G  0 part
sdd      8:48   0   20G  0 disk
sdc      8:32   0   20G  0 disk
sde      8:64   0   20G  0 disk
sr0     11:0    1  3.7G  0 rom  /media/CentOS_6.8_Final
[root@centos68 ~]# mke2fs -t ext4 -b 2048 -m 1 -L "TEST" /dev/sdb1    # format: type ext4, block size 2048, 1% reserved, volume label TEST
mke2fs 1.41.12 (17-May-2010)
Filesystem label=TEST                                 # volume label
OS type: Linux
Block size=2048 (log=1)                               # block size
Fragment size=2048 (log=1)
Stride=0 blocks, Stripe width=0 blocks
131560 inodes, 1052240 blocks
10522 blocks (1.00%) reserved for the super user      # 1% reserved space
First data block=0
Maximum filesystem blocks=538968064
65 block groups
16384 blocks per group, 16384 fragments per group
2024 inodes per group
Superblock backups stored on blocks:
        16384, 49152, 81920, 114688, 147456, 409600, 442368, 802816

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 24 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@centos68 ~]# dumpe2fs -h /dev/sdb1    # inspect the formatted filesystem's details
dumpe2fs 1.41.12 (17-May-2010)
Filesystem volume name:   TEST
Last mounted on:          <not available>
Filesystem UUID:          e4e8efdb-9ae5-45b2-aac5-e447ca608626
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              131560
Block count:              1052240
Reserved block count:     10522
Free blocks:              998252
Free inodes:              131549
First block:              0
Block size:               2048
Fragment size:            2048
Reserved GDT blocks:      512
Blocks per group:         16384
Fragments per group:      16384
Inodes per group:         2024
Inode blocks per group:   253
Flex block group size:    16
Filesystem created:       Sat Aug 27 09:24:51 2016
Last mount time:          n/a
Last write time:          Sat Aug 27 09:24:53 2016
Mount count:              0
Maximum mount count:      24
Last checked:             Sat Aug 27 09:24:51 2016
Check interval:           15552000 (6 months)
Next check after:         Thu Feb 23 09:24:51 2017
Lifetime writes:          97 MB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      b0bc8c76-2b96-437e-98e3-ca2043607802
Journal backup:           inode blocks
Journal features:         (none)
Journal size:             64M
Journal length:           32768
Journal sequence:         0x00000001
Journal start:            0
Step two: set up the boot-time mount with ACL enabled
cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Jul 19 18:10:17 2016
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=ca4c44c8-1c65-4896-a295-d55e5d5e5c5e   /          ext4    defaults                    1 1
UUID=2c97fd2d-e455-493b-822c-25ce8c330e2b   /boot      ext4    defaults                    1 2
UUID=1c6d09df-f7a1-4a72-b842-2b94063f38c7   /testdir   ext4    defaults                    1 2
UUID=ebd1d743-af4a-465b-98a3-6c9d3945c1d7   swap       swap    defaults                    0 0
tmpfs                                       /dev/shm   tmpfs   defaults                    0 0
devpts                                      /dev/pts   devpts  gid=5,mode=620              0 0
sysfs                                       /sys       sysfs   defaults                    0 0
proc                                        /proc      proc    defaults                    0 0
UUID="466d9111-784b-4206-b212-35f91a8a56cc" /home      ext4    defaults,usrquota,grpquota  0 0
UUID="e4e8efdb-9ae5-45b2-aac5-e447ca608626" /test      ext4    defaults,acl                0 0
2. Write a script that:
(1) lists all disk devices the current system recognizes
(2) if there is exactly one disk, prints its space usage
otherwise, prints the space usage of the last disk
Approach: nums counts the disks; lastdisk extracts the last disk's name
[root@centos68 ~]# cat disk_num.sh
#!/bin/bash
nums=$(fdisk -l | grep -o "Disk /dev/sd." | cut -d" " -f2 | sort | wc -l)
lastdisk=$(fdisk -l | grep -o "Disk /dev/sd." | cut -d" " -f2 | sort | tail -1)
echo "disk nums is : $nums"
echo "lastdisk info : $(fdisk -l $lastdisk)"
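The parsing in that script can be factored into small functions that read the captured `fdisk -l` text on stdin, which makes the logic testable without running fdisk as root (the sample input below is hypothetical):

```shell
# Sketch: the same grep/cut/sort pipeline, over text piped in on stdin.
disk_count() {
    grep -c "Disk /dev/sd."
}
last_disk() {
    grep -o "Disk /dev/sd." | cut -d" " -f2 | sort | tail -1
}
```

Usage: `fdisk -l | last_disk` would print the last disk's device path, e.g. /dev/sdd on the machine above.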
3. Create a RAID1 device with 1G of usable space and a chunk size of 128k, filesystem ext4, with one spare disk, automatically mounted at boot on /backup
Solution, step one: prepare two 1G partitions (RAID1 mirrors them)
[root@centos68 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  200G  0 disk
├─sda1   8:1    0  200M  0 part /boot
├─sda2   8:2    0 48.8G  0 part /
├─sda3   8:3    0 19.5G  0 part /testdir
├─sda4   8:4    0    1K  0 part
├─sda5   8:5    0    2G  0 part [SWAP]
├─sda6   8:6    0   10G  0 part
└─sda7   8:7    0   10G  0 part /home
sdd      8:48   0   20G  0 disk
└─sdd1   8:49   0    1G  0 part
sdc      8:32   0   20G  0 disk
└─sdc1   8:33   0    1G  0 part
[root@centos68 ~]#
Step two: create the RAID device.
[root@centos68 ~]# mdadm -C -l 1 -a yes -n 2 -c 128 /dev/md1 /dev/sdd1 /dev/sdc1    # create md1: RAID level 1, auto-create the device file, 2 member disks, 128K chunk
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
[root@centos68 ~]# mdadm -D /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Sat Aug 27 10:15:59 2016
     Raid Level : raid1                # RAID level
     Array Size : 1059200 (1034.38 MiB 1084.62 MB)
  Used Dev Size : 1059200 (1034.38 MiB 1084.62 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat Aug 27 10:16:12 2016
          State : clean, resyncing
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

  Resync Status : 81% complete

           Name : centos68.qq.com:1  (local to host centos68.qq.com)
           UUID : d721a5d7:a7ee3b35:2f42a5ff:7945abfb
         Events : 13

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       33        1      active sync   /dev/sdc1
[root@centos68 ~]#
Step three: create the filesystem
[root@centos68 ~]# mke2fs -t ext4 -L raid1-disk /dev/md1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=raid1-disk
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
66240 inodes, 264800 blocks
13240 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=272629760
9 block groups
32768 blocks per group, 32768 fragments per group
7360 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 31 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@centos68 ~]# blkid
/dev/sda1: UUID="2c97fd2d-e455-493b-822c-25ce8c330e2b" TYPE="ext4"
/dev/sda2: UUID="ca4c44c8-1c65-4896-a295-d55e5d5e5c5e" TYPE="ext4" LABEL="mydate2"
/dev/sda3: UUID="1c6d09df-f7a1-4a72-b842-2b94063f38c7" TYPE="ext4"
/dev/sda5: UUID="ebd1d743-af4a-465b-98a3-6c9d3945c1d7" TYPE="swap"
/dev/sdb1: UUID="e4e8efdb-9ae5-45b2-aac5-e447ca608626" TYPE="ext4" LABEL="TEST"
/dev/sda7: LABEL="MYHOME" UUID="466d9111-784b-4206-b212-35f91a8a56cc" TYPE="ext4"
/dev/sdd1: UUID="d721a5d7-a7ee-3b35-2f42-a5ff7945abfb" UUID_SUB="f1be1939-6b90-6b6a-59aa-b07e20795a4e" LABEL="centos68.qq.com:1" TYPE="linux_raid_member"
/dev/sdc1: UUID="d721a5d7-a7ee-3b35-2f42-a5ff7945abfb" UUID_SUB="c4db6eb3-881b-7197-b043-b12ca769aa2d" LABEL="centos68.qq.com:1" TYPE="linux_raid_member"
/dev/md1: LABEL="raid1-disk" UUID="12510dfb-60d5-4bb3-9bd3-a819389b5708" TYPE="ext4"
Step 4: Configure automatic mounting
[root@centos68 ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Jul 19 18:10:17 2016
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=ca4c44c8-1c65-4896-a295-d55e5d5e5c5e /          ext4   defaults        1 1
UUID=2c97fd2d-e455-493b-822c-25ce8c330e2b /boot      ext4   defaults        1 2
UUID=1c6d09df-f7a1-4a72-b842-2b94063f38c7 /testdir   ext4   defaults        1 2
UUID=ebd1d743-af4a-465b-98a3-6c9d3945c1d7 swap       swap   defaults        0 0
tmpfs                                     /dev/shm   tmpfs  defaults        0 0
devpts                                    /dev/pts   devpts gid=5,mode=620  0 0
sysfs                                     /sys       sysfs  defaults        0 0
proc                                      /proc      proc   defaults        0 0
UUID="466d9111-784b-4206-b212-35f91a8a56cc" /home    ext4   defaults,usrquota,grpquota 0 0
UUID="e4e8efdb-9ae5-45b2-aac5-e447ca608626" /test    ext4   defaults,acl    0 0
UUID="12510dfb-60d5-4bb3-9bd3-a819389b5708" /backup  ext4   defaults        0 0
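The fstab entry above mounts the filesystem by UUID, which survives device renaming. Independently of that, on CentOS 6 it is good practice to also record the array in /etc/mdadm.conf, so that it is assembled under the stable name /dev/md1 at boot instead of a fallback name such as /dev/md127. A minimal sketch of the resulting file, using the UUID shown by mdadm -D above:

```
# /etc/mdadm.conf -- this line can be generated with: mdadm --detail --scan >> /etc/mdadm.conf
ARRAY /dev/md1 metadata=1.2 name=centos68.qq.com:1 UUID=d721a5d7:a7ee3b35:2f42a5ff:7945abfb
```

Note that the ARRAY line uses the md-level UUID (colon-separated, from mdadm -D), not the filesystem UUID reported by blkid.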
4. Create a RAID5 device from three disks with 2G of usable space, a chunk size of 256k, and an ext4 filesystem, mounted automatically at /mydata at boot.
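The sizing follows from how RAID5 stores parity: one member's worth of space is consumed by parity, so usable capacity is (N-1) times the member size. With three members, reaching 2G of usable space requires 1G per partition, not 2G. A quick sanity check of the arithmetic:

```shell
# RAID5 usable capacity: one member's worth of space goes to parity.
members=3
member_mib=1024                            # each RAID member partition is ~1G
usable=$(( (members - 1) * member_mib ))
echo "usable: ${usable} MiB"               # 2048 MiB, i.e. the required 2G
```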
Solution. Step 1: Create three partitions, 1G each (three 1G RAID5 members yield the required 2G of usable space)
[root@centos68 ~]# lsblk
NAME      MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda         8:0    0  200G  0 disk
├─sda1      8:1    0  200M  0 part  /boot
├─sda2      8:2    0 48.8G  0 part  /
├─sda3      8:3    0 19.5G  0 part  /testdir
├─sda4      8:4    0    1K  0 part
├─sda5      8:5    0    2G  0 part  [SWAP]
├─sda6      8:6    0   10G  0 part
└─sda7      8:7    0   10G  0 part  /home
sdb         8:16   0   20G  0 disk
├─sdb1      8:17   0    2G  0 part
│ └─md1     9:1    0    1G  0 raid1
└─sdb2      8:18   0    1G  0 part
sdd         8:48   0   20G  0 disk
├─sdd1      8:49   0    1G  0 part
│ └─md1     9:1    0    1G  0 raid1
└─sdd2      8:50   0    1G  0 part
sdc         8:32   0   20G  0 disk
├─sdc1      8:33   0    1G  0 part
│ └─md1     9:1    0    1G  0 raid1
└─sdc2      8:34   0    1G  0 part
sde         8:64   0   20G  0 disk
sr0        11:0    1  3.7G  0 rom   /media/CentOS_6.8_Final
Step 2: Create the RAID5 array
[root@centos68 ~]# mdadm -C /dev/md5 -l 5 -c 256 -n 3 /dev/sd{b,c,d}2
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.
[root@centos68 ~]# mdadm /dev/md5
/dev/md5: 2.02GiB raid5 3 devices, 0 spares. Use mdadm --detail for more detail.
[root@centos68 ~]# mdadm -D /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Sat Aug 27 11:00:03 2016
     Raid Level : raid5
     Array Size : 2118144 (2.02 GiB 2.17 GB)
  Used Dev Size : 1059072 (1034.25 MiB 1084.49 MB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Sat Aug 27 11:00:13 2016
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 256K

           Name : centos68.qq.com:5  (local to host centos68.qq.com)
           UUID : 4732a4aa:c4955360:c2e69a98:101c395c
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       1       8       34        1      active sync   /dev/sdc2
       3       8       50        2      active sync   /dev/sdd2
[root@centos68 ~]#
Step 3: Format the array
[root@centos68 ~]# mke2fs -t ext4 -L raid5_disk /dev/md5
mke2fs 1.41.12 (17-May-2010)
Filesystem label=raid5_disk
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=64 blocks, Stripe width=128 blocks
132464 inodes, 529536 blocks
26476 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=545259520
17 block groups
32768 blocks per group, 32768 fragments per group
7792 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912

Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 26 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@centos68 ~]# lsblk
NAME      MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda         8:0    0  200G  0 disk
├─sda1      8:1    0  200M  0 part  /boot
├─sda2      8:2    0 48.8G  0 part  /
├─sda3      8:3    0 19.5G  0 part  /testdir
├─sda4      8:4    0    1K  0 part
├─sda5      8:5    0    2G  0 part  [SWAP]
├─sda6      8:6    0   10G  0 part
└─sda7      8:7    0   10G  0 part  /home
sdb         8:16   0   20G  0 disk
├─sdb1      8:17   0    2G  0 part
│ └─md1     9:1    0    1G  0 raid1
└─sdb2      8:18   0    1G  0 part
  └─md5     9:5    0    2G  0 raid5
sdd         8:48   0   20G  0 disk
├─sdd1      8:49   0    1G  0 part
│ └─md1     9:1    0    1G  0 raid1
└─sdd2      8:50   0    1G  0 part
  └─md5     9:5    0    2G  0 raid5
sdc         8:32   0   20G  0 disk
├─sdc1      8:33   0    1G  0 part
│ └─md1     9:1    0    1G  0 raid1
└─sdc2      8:34   0    1G  0 part
  └─md5     9:5    0    2G  0 raid5
sde         8:64   0   20G  0 disk
sr0        11:0    1  3.7G  0 rom   /media/CentOS_6.8_Final
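Unlike the RAID1 case (Stride=0), mke2fs reported "Stride=64 blocks, Stripe width=128 blocks" here, derived automatically from the array geometry: stride is the chunk size divided by the filesystem block size, and stripe width is stride times the number of data-bearing disks (N-1 for RAID5). A sketch of that calculation, with a hypothetical manual invocation in case auto-detection ever fails:

```shell
chunk_kib=256                # from mdadm -c 256
block_kib=4                  # ext4 block size is 4096 bytes
data_disks=2                 # a 3-disk RAID5 has N-1 data disks per stripe
stride=$(( chunk_kib / block_kib ))        # chunk / block = 64
stripe_width=$(( stride * data_disks ))    # 64 * 2 = 128
echo "stride=${stride} stripe-width=${stripe_width}"
# If detection fails, the same values could be passed explicitly, e.g.:
# mke2fs -t ext4 -E stride=64,stripe-width=128 -L raid5_disk /dev/md5
```

Aligning the filesystem to the stripe this way lets ext4 issue full-stripe writes, avoiding RAID5's read-modify-write penalty for parity updates.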
Step 4: Configure automatic mounting
[root@centos68 ~]# blkid
/dev/sda1: UUID="2c97fd2d-e455-493b-822c-25ce8c330e2b" TYPE="ext4"
/dev/sda2: UUID="ca4c44c8-1c65-4896-a295-d55e5d5e5c5e" TYPE="ext4" LABEL="mydate2"
/dev/sda3: UUID="1c6d09df-f7a1-4a72-b842-2b94063f38c7" TYPE="ext4"
/dev/sda5: UUID="ebd1d743-af4a-465b-98a3-6c9d3945c1d7" TYPE="swap"
/dev/sdb1: UUID="e4e8efdb-9ae5-45b2-aac5-e447ca608626" TYPE="ext4" LABEL="TEST"
/dev/sda7: LABEL="MYHOME" UUID="466d9111-784b-4206-b212-35f91a8a56cc" TYPE="ext4"
/dev/sdd1: UUID="d721a5d7-a7ee-3b35-2f42-a5ff7945abfb" UUID_SUB="f1be1939-6b90-6b6a-59aa-b07e20795a4e" LABEL="centos68.qq.com:1" TYPE="linux_raid_member"
/dev/sdc1: UUID="d721a5d7-a7ee-3b35-2f42-a5ff7945abfb" UUID_SUB="c4db6eb3-881b-7197-b043-b12ca769aa2d" LABEL="centos68.qq.com:1" TYPE="linux_raid_member"
/dev/md1: LABEL="raid1-disk" UUID="12510dfb-60d5-4bb3-9bd3-a819389b5708" TYPE="ext4"
/dev/sdb2: UUID="4732a4aa-c495-5360-c2e6-9a98101c395c" UUID_SUB="edba2fa3-3ba9-4053-8a56-873f9598faae" LABEL="centos68.qq.com:5" TYPE="linux_raid_member"
/dev/sdd2: UUID="4732a4aa-c495-5360-c2e6-9a98101c395c" UUID_SUB="81b8750e-7dfb-09a8-48cd-20202aea41ad" LABEL="centos68.qq.com:5" TYPE="linux_raid_member"
/dev/sdc2: UUID="4732a4aa-c495-5360-c2e6-9a98101c395c" UUID_SUB="4ec508cf-50a0-2ce9-f4ff-8793b2ac4a7a" LABEL="centos68.qq.com:5" TYPE="linux_raid_member"
/dev/md5: LABEL="raid5_disk" UUID="7c615c12-26f2-4de4-8798-c388e4bb7d48" TYPE="ext4"
[root@centos68 ~]# vim /etc/fstab
[root@centos68 ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Jul 19 18:10:17 2016
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=ca4c44c8-1c65-4896-a295-d55e5d5e5c5e /          ext4   defaults        1 1
UUID=2c97fd2d-e455-493b-822c-25ce8c330e2b /boot      ext4   defaults        1 2
UUID=1c6d09df-f7a1-4a72-b842-2b94063f38c7 /testdir   ext4   defaults        1 2
UUID=ebd1d743-af4a-465b-98a3-6c9d3945c1d7 swap       swap   defaults        0 0
tmpfs                                     /dev/shm   tmpfs  defaults        0 0
devpts                                    /dev/pts   devpts gid=5,mode=620  0 0
sysfs                                     /sys       sysfs  defaults        0 0
proc                                      /proc      proc   defaults        0 0
UUID="466d9111-784b-4206-b212-35f91a8a56cc" /home    ext4   defaults,usrquota,grpquota 0 0
UUID="e4e8efdb-9ae5-45b2-aac5-e447ca608626" /test    ext4   defaults,acl    0 0
UUID="12510dfb-60d5-4bb3-9bd3-a819389b5708" /backup  ext4   defaults        0 0
UUID="7c615c12-26f2-4de4-8798-c388e4bb7d48" /mydate  ext4   defaults        0 0
[root@centos68 ~]#
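Two finishing touches are worth noting before rebooting (the transcript mounts at /mydate, although the exercise asked for /mydata). First, the mount point directory must exist and the new entry should be verified without a reboot; second, recording both arrays in /etc/mdadm.conf keeps the /dev/md1 and /dev/md5 names stable across boots. A sketch, to be run as root:

```
mkdir -p /mydate                          # the mount point must exist before mounting
mount -a                                  # mount every fstab entry not yet mounted; an error here means a bad line
df -h /mydate                             # confirm /dev/md5 is mounted with ~2G of space
mdadm --detail --scan >> /etc/mdadm.conf  # record ARRAY lines for md1 and md5
```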
Original article by linux_root. If reposting, please credit the source: http://www.178linux.com/40811