Creating and Managing a RAID Disk Array on a Server


1. Creating a RAID Array

Creating RAID 0
Prepare two 20 GB block devices (in the lsblk output below these are the whole disks /dev/sdb and /dev/sdc). RAID 0 stripes across the two 20 GB devices so that together they behave like a single 40 GB disk.
[root@localhost ~]# lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk 
├─sda1            8:1    0  500M  0 part /boot
└─sda2            8:2    0 19.5G  0 part 
  ├─centos-root 253:0    0 17.5G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sdb               8:16   0   20G  0 disk 
sdc               8:32   0   20G  0 disk 
sr0              11:0    1    4G  0 rom  

Install the mdadm tool from the configured YUM repository:
[root@localhost ~]# yum install -y mdadm 
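After installation you can optionally confirm that the tool is available (a quick sanity check; the exact version string depends on your distribution):
[root@localhost ~]# mdadm --version
[root@localhost ~]# rpm -q mdadm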
Create a RAID 0 device. Here /dev/sdb and /dev/sdc are used for the experiment.
Combine /dev/sdb and /dev/sdc into an array of RAID level 0 named md0 (the device name).
[root@localhost ~]# mdadm -C -v /dev/md0 -l 0 -n 2 /dev/sdb /dev/sdc 
mdadm: chunk size defaults to 512K
mdadm: Fail create md0 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
Explanation of the options (an equivalent long-option form is shown below):
 -C: create a new array; -v: display verbose output.
 -l 0: the RAID level is RAID 0.
 -n 2: the array is built from 2 devices.
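For readability, the same creation step can also be written with mdadm's long options; this is an equivalent form of the command above:
[root@localhost ~]# mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc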
Check the RAID arrays known to the system:
[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid0] 
md0 : active raid0 sdc[1] sdb[0]
      41908224 blocks super 1.2 512k chunks      
unused devices: <none>
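For RAID levels that need an initial sync (for example RAID 1 or RAID 5), /proc/mdstat can be polled continuously to watch the progress; RAID 0 has no resync phase, so this step is optional here:
[root@localhost ~]# watch -n 1 cat /proc/mdstat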
View the detailed information of the RAID array:
[root@localhost ~]# mdadm -Ds
ARRAY /dev/md0 metadata=1.2 name=localhost.localdomain:0 UUID=35792eb3:51f58189:44cef502:cdcee441
[root@localhost ~]# mdadm -D /dev/md0 
/dev/md0:
           Version : 1.2
     Creation Time : Sat Oct  5 10:21:41 2019
        Raid Level : raid0
        Array Size : 41908224 (39.97 GiB 42.91 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sat Oct  5 10:21:41 2019
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 512K

Consistency Policy : unknown

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 35792eb3:51f58189:44cef502:cdcee441
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
Generate the configuration file mdadm.conf:
[root@localhost ~]# mdadm -Ds > /etc/mdadm.conf 
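As an optional check that the configuration file is usable, the array can be stopped and re-assembled from it before any data is written (a hedged sketch; skip this if you prefer to continue directly):
[root@localhost ~]# mdadm -S /dev/md0        # stop the array
[root@localhost ~]# mdadm -A --scan          # re-assemble the arrays listed in /etc/mdadm.conf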
Create a file system on the new RAID device and mount it:
[root@localhost ~]# mkfs.xfs /dev/md0
meta-data=/dev/md0               isize=256    agcount=16, agsize=654720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=5120, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@localhost ~]# mkdir /raid0/
[root@localhost ~]# mount /dev/md0 /raid0/
[root@localhost ~]# df -Th /raid0/
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md0       xfs    40G   33M   40G   1% /raid0
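To confirm that the striped volume is writable, you can create a small test file and then remove it (the file name testfile is arbitrary and only for illustration):
[root@localhost ~]# dd if=/dev/zero of=/raid0/testfile bs=1M count=100
[root@localhost ~]# ls -lh /raid0/testfile
[root@localhost ~]# rm -f /raid0/testfile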
Configure the array to be mounted automatically at boot:
[root@localhost ~]# blkid /dev/md0 
/dev/md0: UUID="8eafdcb6-d46a-430a-8004-d58a68dc0751" TYPE="xfs" 
[root@localhost ~]# echo "UUID=8eafdcb6-d46a-430a-8004-d58a68dc0751 /raid0 xfs defaults 0 0" >> /etc/fstab
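Before rebooting, it is worth verifying the new /etc/fstab entry; mount -a will report an error if the line is malformed:
[root@localhost ~]# umount /raid0/
[root@localhost ~]# mount -a                 # re-mount everything listed in /etc/fstab
[root@localhost ~]# findmnt /raid0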
To delete the RAID array, run the following commands:
[root@localhost ~]# umount /raid0/
[root@localhost ~]# mdadm -S /dev/md0
[root@localhost ~]# rm -rf /etc/mdadm.conf
[root@localhost ~]# rm -rf /raid0/
[root@localhost ~]# mdadm --zero-superblock /dev/sdb
[root@localhost ~]# mdadm --zero-superblock /dev/sdc
[root@localhost ~]# vi /etc/fstab
UUID=8eafdcb6-d46a-430a-8004-d58a68dc0751 /raid0 xfs defaults 0 0  // delete this line
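If you prefer not to edit /etc/fstab by hand, the entry can also be removed non-interactively; a minimal sketch, assuming the mount point /raid0 appears in only that one line:
[root@localhost ~]# sed -i '/\/raid0/d' /etc/fstab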

 

2. Operations and Maintenance

(1) RAID 5 operations
Prepare four 20 GB block devices (here the whole disks /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde are used). Three of the 20 GB devices form the RAID 5 array, and the fourth is added as a hot spare.
[root@localhost ~]# mdadm -Cv /dev/md5 -l5 -n3 /dev/sdb /dev/sdc /dev/sdd --spare-devices=1 /dev/sde 
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 20954112K
mdadm: Fail create md5 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.
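The equivalent command with long options makes the spare disk explicit (an equivalent form of the creation step above; the first three listed devices become active members and the fourth becomes the spare):
[root@localhost ~]# mdadm --create --verbose /dev/md5 --level=5 --raid-devices=3 --spare-devices=1 /dev/sdb /dev/sdc /dev/sdd /dev/sde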

View the detailed information of the RAID array:
[root@localhost ~]# mdadm -D /dev/md5 
/dev/md5:
           Version : 1.2
     Creation Time : Sat Oct  5 13:17:41 2019
        Raid Level : raid5
        Array Size : 41908224 (39.97 GiB 42.91 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sat Oct  5 13:19:27 2019
             State : clean 
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : unknown

              Name : localhost.localdomain:5  (local to host localhost.localdomain)
              UUID : f51467bd:1199242b:bcb73c7c:160d523a
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd

       3       8       64        -      spare   /dev/sde
(2) Simulating a disk failure
[root@localhost ~]# mdadm -f /dev/md5 /dev/sdb 
mdadm: set /dev/sdb faulty in /dev/md5
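While the hot spare is being rebuilt into the array, the progress can be watched; a sketch (the Rebuild Status line in mdadm -D only appears while the resync is still in progress):
[root@localhost ~]# cat /proc/mdstat
[root@localhost ~]# mdadm -D /dev/md5 | grep -E 'State :|Rebuild Status'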
View the detailed information of the RAID array:
[root@localhost ~]# mdadm -D /dev/md5          
/dev/md5:
           Version : 1.2
     Creation Time : Sat Oct  5 13:17:41 2019
        Raid Level : raid5
        Array Size : 41908224 (39.97 GiB 42.91 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sat Oct  5 13:28:54 2019
             State : clean 
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : unknown

              Name : localhost.localdomain:5  (local to host localhost.localdomain)
              UUID : f51467bd:1199242b:bcb73c7c:160d523a
            Events : 37

    Number   Major   Minor   RaidDevice State
       3       8       64        0      active sync   /dev/sde
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd

       0       8       16        -      faulty   /dev/sdb
From the output above you can see that the former hot spare /dev/sde has been rebuilt into the RAID 5 array and now appears as an active member, while the original /dev/sdb is marked as a faulty disk.
Hot-remove the faulty disk:
[root@localhost ~]# mdadm -r /dev/md5 /dev/sdb 
mdadm: hot removed /dev/sdb from /dev/md5
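Once the physical disk has been replaced, the new disk can be added back to the array as a fresh hot spare. A hedged sketch, assuming the replacement shows up again as /dev/sdb:
[root@localhost ~]# mdadm --zero-superblock /dev/sdb   # clear any stale md metadata (skip if the disk is brand new)
[root@localhost ~]# mdadm -a /dev/md5 /dev/sdb         # add it to md5 as a spare
[root@localhost ~]# mdadm -D /dev/md5                  # /dev/sdb should now be listed as "spare"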
View the detailed information of the RAID array again:
[root@localhost ~]# mdadm -D /dev/md5          
/dev/md5:
           Version : 1.2
     Creation Time : Sat Oct  5 13:17:41 2019
        Raid Level : raid5
        Array Size : 41908224 (39.97 GiB 42.91 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Sat Oct  5 13:35:54 2019
             State : clean 
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : unknown

              Name : localhost.localdomain:5  (local to host localhost.localdomain)
              UUID : f51467bd:1199242b:bcb73c7c:160d523a
            Events : 38

    Number   Major   Minor   RaidDevice State
       3       8       64        0      active sync   /dev/sde
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd
Format the RAID device (the -f option forces mkfs.xfs to overwrite any existing file system) and mount it:
[root@localhost ~]# mkfs.xfs -f /dev/md5 
meta-data=/dev/md5               isize=256    agcount=16, agsize=654720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=5120, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@localhost ~]# mount /dev/md5 /mnt/
mount: /dev/md5: can't read superblock
[root@localhost ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   18G  906M   17G   6% /
devtmpfs                 903M     0  903M   0% /dev
tmpfs                    913M     0  913M   0% /dev/shm
tmpfs                    913M  8.6M  904M   1% /run
tmpfs                    913M     0  913M   0% /sys/fs/cgroup
/dev/sda1                497M  125M  373M  25% /boot
tmpfs                    183M     0  183M   0% /run/user/0
/dev/md5                  40G   33M   40G   1% /mnt
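As with the RAID 0 example in section 1, the RAID 5 array and its mount point can be made persistent across reboots. A minimal sketch; replace <UUID> with the value actually printed by blkid:
[root@localhost ~]# mdadm -Ds > /etc/mdadm.conf
[root@localhost ~]# blkid /dev/md5
[root@localhost ~]# echo "UUID=<UUID> /mnt xfs defaults 0 0" >> /etc/fstab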