Server/Linux II 2016. 2. 23. 11:32
Linux II - 07. RAID Management
- Take a VMware snapshot - 'before RAID configuration'
login as: root
root@192.168.1.7's password:
Last login: Mon Feb 22 14:11:40 2016 from 192.168.1.1
[root@CentOS2 /root]#
1. RAID (Redundant Array of Inexpensive Disks)
RAID lets multiple disks be used as if they were a single disk, improving reliability and performance.

| RAID type | Advantages | Disadvantages | Unit |
|---|---|---|---|
| Hardware RAID | Guaranteed reliability and stability | High cost | Disk |
| Software RAID | Low cost | Reliability and stability not guaranteed | Partition |
2. Configuring RAID
[root@CentOS2 /root]# cat /proc/mdstat
Personalities :
unused devices: <none>
RAID is configured with the 'mdadm' command. Its main options are:

| Option | Purpose | Example |
|---|---|---|
| --create | Create a RAID array | # mdadm --create /dev/md0 |
| --detail | Show detailed array status | # mdadm --detail /dev/md0 |
| --remove | Remove a disk from an array | # mdadm --manage /dev/md0 --remove /dev/sdc1 |
| --add | Add a disk to an array | # mdadm --manage /dev/md0 --add /dev/sdc1 |
| --fail | Force a disk into the failed state | # mdadm --manage /dev/md0 --fail /dev/sdc1 |
| -S (--stop) | Stop an array | # mdadm --stop /dev/md0 |
| -r (--remove) | Remove an array | # mdadm -r /dev/md0 |
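These options compose into full command lines. As a sanity check before touching real disks, a small helper can print the create command it would run. `build_mdadm_create` is a hypothetical convenience function for illustration, not part of mdadm:

```shell
# Hypothetical helper: prints (does not run) the mdadm create command,
# so the full command line can be reviewed before execution.
build_mdadm_create() {
    # $1 = md device, $2 = RAID level, remaining args = member partitions
    local md="$1" level="$2"
    shift 2
    printf 'mdadm --create %s --level=%s --raid-devices=%d %s\n' \
        "$md" "$level" "$#" "$*"
}

# Prints the RAID 0 command used later in this chapter
build_mdadm_create /dev/md0 stripe /dev/sdc1 /dev/sdd1 /dev/sde1
```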
1) RAID 0 Volume (Stripe Volume)
| Minimum disks | 2 or more |
|---|---|
| Advantages | Fast read/write speeds; 100% of disk capacity is usable |
| Disadvantages | No redundancy; if one disk fails, all data stored on the array is unusable |
| Notes | |
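The "stripe" in RAID 0 means consecutive chunks go to member disks in round-robin order, which is why reads and writes parallelize across all members. A minimal sketch of the placement rule (chunk i lands on member i mod N), using the three partitions from this example:

```shell
# RAID 0 chunk placement sketch: chunk i -> member (i mod N).
# The three members match this chapter's example array.
members="sdc1 sdd1 sde1"

chunk_disk() {
    # $1 = chunk index; prints the member partition that holds it
    i=$(( $1 % 3 + 1 ))
    echo "$members" | cut -d' ' -f"$i"
}

for c in 0 1 2 3 4 5; do
    echo "chunk $c -> $(chunk_disk $c)"
done
```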
Ex1) Configuring RAID 0
- Create RAID-type partitions /dev/sdc1, /dev/sdd1, /dev/sde1
[root@CentOS2 /root]# fdisk /dev/sdc
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-204, default 1): 1
Last cylinder, +cylinders or +size{K,M,G} (1-204, default 204): 204
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): p
Disk /dev/sdc: 213 MB, 213909504 bytes
64 heads, 32 sectors/track, 204 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00024910
Device Boot Start End Blocks Id System
/dev/sdc1 1 204 208880 fd Linux raid autodetect
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@CentOS2 /root]# fdisk /dev/sdd
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-204, default 1): 1
Last cylinder, +cylinders or +size{K,M,G} (1-204, default 204): 204
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@CentOS2 /root]# fdisk /dev/sde
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-204, default 1): 1
Last cylinder, +cylinders or +size{K,M,G} (1-204, default 204): 204
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@CentOS2 /root]#
[root@CentOS2 /root]# fdisk -l /dev/sdc /dev/sdd /dev/sde | grep -i raid
/dev/sdc1 1 204 208880 fd Linux raid autodetect
/dev/sdd1 1 204 208880 fd Linux raid autodetect
/dev/sde1 1 204 208880 fd Linux raid autodetect
- Create the RAID 0 array with the 'mdadm' command
[root@CentOS2 /root]# mdadm --create /dev/md0 --level=stripe --raid-devices=3 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@CentOS2 /root]# ls /dev/md0
/dev/md0
[root@CentOS2 /root]# fdisk -l /dev/md0
Disk /dev/md0: 637 MB, 637009920 bytes
2 heads, 4 sectors/track, 155520 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1572864 bytes
Disk identifier: 0x00000000
[root@CentOS2 /root]# mdadm --detail --scan -v
ARRAY /dev/md0 level=raid0 num-devices=3 metadata=1.2 name=CentOS2:0 UUID=d1e6afb4:770ed44d:bee91aad:46719986
devices=/dev/sdc1,/dev/sdd1,/dev/sde1
[root@CentOS2 /root]# cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 sde1[2] sdd1[1] sdc1[0]
622080 blocks super 1.2 512k chunks
unused devices: <none>
- Create a file system and mount it
[root@CentOS2 /root]# mkfs.ext4 /dev/md0 1> /dev/null
mke2fs 1.41.12 (17-May-2010)
[root@CentOS2 /root]# mkdir /raid0
[root@CentOS2 /root]# mount /dev/md0 /raid0
[root@CentOS2 /root]# df -h /raid0
Filesystem Size Used Avail Use% Mounted on
/dev/md0 582M 468K 552M 1% /raid0
- On CentOS 6.x, the RAID volume device number changes after a reboot. To prevent this:
[root@CentOS2 /root]# mdadm --detail --scan -v 1> /etc/mdadm.conf (save the RAID configuration to 'mdadm.conf')
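Saving the scan output to /etc/mdadm.conf keeps the array name stable. To also remount the file system automatically at boot, an /etc/fstab entry can be added. The following is a sketch; the UUID is a placeholder, so read the real one with `blkid /dev/md0`:

```shell
# /etc/fstab entry (sketch) -- mounting by filesystem UUID keeps the entry
# valid even if the md device number changes. Placeholder UUID shown;
# substitute the value reported by 'blkid /dev/md0'.
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /raid0  ext4  defaults  0 0
```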
Ex2) Removing RAID 0
[root@CentOS2 /root]# umount /dev/md0
[root@CentOS2 /root]# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
[root@CentOS2 /root]# mdadm --zero-superblock /dev/sdc1 /dev/sdd1 /dev/sde1
[root@CentOS2 /root]# mdadm --examine /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm: No md superblock detected on /dev/sdc1.
mdadm: No md superblock detected on /dev/sdd1.
mdadm: No md superblock detected on /dev/sde1.
[root@CentOS2 /root]# ls /dev/md0
ls: cannot access /dev/md0: No such file or directory
[root@CentOS2 /root]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 13G 4.4G 7.3G 38% /
tmpfs 495M 500K 494M 1% /dev/shm
/dev/sda3 477M 2.3M 449M 1% /home
2) RAID 1 Volume (Mirror Volume)
| Minimum disks | 2 or more |
|---|---|
| Advantages | Provides redundancy (mirroring) |
| Disadvantages | Only 50% of disk capacity is usable |
| Notes | Reads are fast; writes are average |
Ex1) Configuring RAID 1
- Create RAID-type partitions /dev/sdf1, /dev/sdg1
[root@CentOS2 /root]# fdisk /dev/sdf
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-204, default 1): 1
Last cylinder, +cylinders or +size{K,M,G} (1-204, default 204): 204
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@CentOS2 /root]#
[root@CentOS2 /root]# fdisk /dev/sdg
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-204, default 1): 1
Last cylinder, +cylinders or +size{K,M,G} (1-204, default 204): 204
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@CentOS2 /root]#
[root@CentOS2 /root]# fdisk -l /dev/sdf /dev/sdg | grep -i raid
/dev/sdf1 1 204 208880 fd Linux raid autodetect
/dev/sdg1 1 204 208880 fd Linux raid autodetect
- Create the RAID 1 array with the 'mdadm' command
[root@CentOS2 /root]# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdf1 /dev/sdg1
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
[root@CentOS2 /root]# ls /dev/md1
/dev/md1
[root@CentOS2 /root]# fdisk -l /dev/md1
Disk /dev/md1: 213 MB, 213712896 bytes
2 heads, 4 sectors/track, 52176 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
[root@CentOS2 /root]# mdadm --detail --scan -v
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=1.2 name=CentOS2:1 UUID=c957ee01:f985c8f8:8bfaad46:125b6bfb
devices=/dev/sdf1,/dev/sdg1
[root@CentOS2 /root]# cat /proc/mdstat
Personalities : [raid0] [raid1]
md1 : active raid1 sdg1[1] sdf1[0]
208704 blocks super 1.2 [2/2] [UU]
unused devices: <none>
- Create a file system and mount it
[root@CentOS2 /root]# mkfs.ext4 /dev/md1 1> /dev/null
mke2fs 1.41.12 (17-May-2010)
[root@CentOS2 /root]# mkdir /raid1
[root@CentOS2 /root]# mount /dev/md1 /raid1
[root@CentOS2 /root]# df -h /raid1
Filesystem Size Used Avail Use% Mounted on
/dev/md1 194M 1.8M 182M 1% /raid1
- On CentOS 6.x, the RAID volume device number changes after a reboot. To prevent this:
[root@CentOS2 /root]# mdadm --detail --scan -v 1> /etc/mdadm.conf (save the RAID configuration to 'mdadm.conf')
Ex2) Removing RAID 1
[root@CentOS2 /root]# umount /dev/md1
[root@CentOS2 /root]# mdadm --stop /dev/md1
mdadm: stopped /dev/md1
[root@CentOS2 /root]# mdadm --zero-superblock /dev/sdf1 /dev/sdg1
[root@CentOS2 /root]# mdadm --examine /dev/sdf1 /dev/sdg1
mdadm: No md superblock detected on /dev/sdf1.
mdadm: No md superblock detected on /dev/sdg1.
[root@CentOS2 /root]# ls /dev/md1
ls: cannot access /dev/md1: No such file or directory
[root@CentOS2 /root]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 13G 4.4G 7.3G 38% /
tmpfs 495M 500K 494M 1% /dev/shm
/dev/sda3 477M 2.3M 449M 1% /home
3) RAID 5
| Minimum disks | 3 or more |
|---|---|
| Advantages | Data can be recovered even if one disk fails |
| Disadvantages | Usable capacity is (number of disks - 1) disks' worth |
| Notes | Suited to multi-user systems where performance is not critical and writes are infrequent |
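The capacity trade-offs in the tables above reduce to one rule per level. A minimal sketch (metadata overhead is ignored, which is why the real arrays in this chapter report slightly fewer blocks than these figures):

```shell
# Usable capacity in KB for n equally sized members, per RAID level.
# A sketch matching the tables in this chapter; md superblock overhead
# is ignored, so live arrays report slightly less.
usable_kb() {
    level=$1; n=$2; size=$3
    case $level in
        0)  echo $(( n * size )) ;;        # stripe: every disk usable
        1)  echo "$size" ;;                # mirror: one disk's worth
        5)  echo $(( (n - 1) * size )) ;;  # one disk's worth of parity
        10) echo $(( n * size / 2 )) ;;    # mirrored pairs, then striped
    esac
}

# 3 x 208880 KB partitions in RAID 5 -> two disks' worth usable
usable_kb 5 3 208880
```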
Ex1) Configuring RAID 5
- Verify the RAID-type partitions /dev/sdc1, /dev/sdd1, /dev/sde1
[root@CentOS2 /root]# fdisk -l /dev/sdc /dev/sdd /dev/sde | grep -i raid
/dev/sdc1 1 204 208880 fd Linux raid autodetect
/dev/sdd1 1 204 208880 fd Linux raid autodetect
/dev/sde1 1 204 208880 fd Linux raid autodetect
- Create the RAID 5 array with the 'mdadm' command
[root@CentOS2 /root]# mdadm --create /dev/md5 --level=5 --raid-devices=3 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.
[root@CentOS2 /root]# ls /dev/md5
/dev/md5
[root@CentOS2 /root]# fdisk -l /dev/md5
Disk /dev/md5: 424 MB, 424673280 bytes
2 heads, 4 sectors/track, 103680 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
Disk identifier: 0x00000000
[root@CentOS2 /root]# mdadm --detail --scan -v
ARRAY /dev/md5 level=raid5 num-devices=3 metadata=1.2 name=CentOS2:5 UUID=bf8908c0:0cb128dc:f4e87b4b:b6ffb981
devices=/dev/sdc1,/dev/sdd1,/dev/sde1
[root@CentOS2 /root]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md5 : active raid5 sde1[3] sdd1[1] sdc1[0]
414720 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
- On CentOS 6.x, the RAID volume device number changes after a reboot. To prevent this:
[root@CentOS2 /root]# mdadm --detail --scan -v 1> /etc/mdadm.conf (save the RAID configuration to 'mdadm.conf')
ARRAY /dev/md5 level=raid5 num-devices=3 metadata=1.2 name=CentOS2:5 UUID=bf8908c0:0cb128dc:f4e87b4b:b6ffb981
devices=/dev/sdc1,/dev/sdd1,/dev/sde1
- Create a file system and mount it
[root@CentOS2 /root]# mkfs.ext4 /dev/md5 1> /dev/null
mke2fs 1.41.12 (17-May-2010)
[root@CentOS2 /root]# mkdir /raid5
[root@CentOS2 /root]# mount /dev/md5 /raid5
[root@CentOS2 /root]# df -h /raid5
Filesystem Size Used Avail Use% Mounted on
/dev/md5 385M 2.3M 362M 1% /raid5
- Simulate a failure of /dev/sdd1
[root@CentOS2 /root]# ls /home
lost+found user1 user2
[root@CentOS2 /root]# cp -rp /home/* /raid5
[root@CentOS2 /root]# echo "hello" > /raid5/testfile
[root@CentOS2 /root]# ls /raid5
lost+found testfile user1 user2
[root@CentOS2 /root]# dd if=/dev/zero of=/raid5/testfile bs=1024k count=500
dd: writing `/raid5/testfile': No space left on device
378+0 records in
377+0 records out
396242944 bytes (396 MB) copied, 30.5113 s, 13.0 MB/s
[root@CentOS2 /root]# mdadm --detail --scan -v >> /etc/mdadm.conf
[root@CentOS2 /root]# mdadm --manage /dev/md5 --fail /dev/sdd1
mdadm: set /dev/sdd1 faulty in /dev/md5
[root@CentOS2 /root]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md5 : active raid5 sde1[3] sdd1[1](F) sdc1[0]
414720 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [U_U]
unused devices: <none>
[root@CentOS2 /root]# ls /raid5
lost+found testfile user1 user2
- With '/dev/sdd1' failed, recover the array using '/dev/sdf1'
[root@CentOS2 /root]# fdisk -l /dev/sdf
Disk /dev/sdf: 213 MB, 213909504 bytes
64 heads, 32 sectors/track, 204 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000875f9
Device Boot Start End Blocks Id System
/dev/sdf1 1 204 208880 fd Linux raid autodetect
[root@CentOS2 /root]# mdadm --manage /dev/md5 --add /dev/sdf1
mdadm: added /dev/sdf1
[root@CentOS2 /root]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md5 : active raid5 sdf1[4] sde1[3] sdd1[1](F) sdc1[0]
414720 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [U_U]
[======>..............] recovery = 33.7% (70148/207360) finish=0.1min speed=14029K/sec
unused devices: <none>
[root@CentOS2 /root]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md5 : active raid5 sdf1[4] sde1[3] sdd1[1](F) sdc1[0]
414720 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [U_U]
[============>........] recovery = 60.8% (126720/207360) finish=0.0min speed=14080K/sec
unused devices: <none>
[root@CentOS2 /root]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md5 : active raid5 sdf1[4] sde1[3] sdd1[1](F) sdc1[0]
414720 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
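Watching /proc/mdstat by hand works, but the degraded state can also be detected mechanically: a `_` in the `[UU_]`-style status map marks a missing member, and `(F)` marks a failed one. A sketch that scans mdstat-format text; the sample file mimics the failed state shown above, and on a live system the function would be pointed at /proc/mdstat instead:

```shell
# Print the names of arrays whose [UU_]-style status map shows a
# missing member ('_'). Input is /proc/mdstat-format text.
mdstat_degraded() {
    awk '/^md/ { name = $1 }
         /\[[U_]+\]$/ { if (index($NF, "_")) print name }' "$1"
}

# Sample text mimicking the degraded RAID 5 state from this chapter
cat > /tmp/mdstat.sample <<'EOF'
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md5 : active raid5 sde1[3] sdd1[1](F) sdc1[0]
      414720 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [U_U]
unused devices: <none>
EOF

mdstat_degraded /tmp/mdstat.sample
```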
[root@CentOS2 /root]# fdisk -l /dev/md5
Disk /dev/md5: 424 MB, 424673280 bytes
2 heads, 4 sectors/track, 103680 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
Disk identifier: 0x00000000
[root@CentOS2 /root]# su - user1
[user1@CentOS2 /home/user1]$ exit
logout
Ex2) Removing RAID 5
[root@CentOS2 /root]# umount /dev/md5
[root@CentOS2 /root]# mdadm --stop /dev/md5
mdadm: stopped /dev/md5
[root@CentOS2 /root]# mdadm --zero-superblock /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
[root@CentOS2 /root]# mdadm --examine /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
mdadm: No md superblock detected on /dev/sdc1.
mdadm: No md superblock detected on /dev/sdd1.
mdadm: No md superblock detected on /dev/sde1.
mdadm: No md superblock detected on /dev/sdf1.
[root@CentOS2 /root]# ls /dev/md5
ls: cannot access /dev/md5: No such file or directory
[root@CentOS2 /root]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 13G 4.4G 7.3G 38% /
tmpfs 495M 500K 494M 1% /dev/shm
/dev/sda3 477M 2.3M 449M 1% /home
4) RAID 0+1
| Minimum disks | 4 or more |
|---|---|
| Advantages | Applies striping and mirroring at the same time |
| Disadvantages | High cost |
| Notes | CentOS does not support this level directly, so build the stripes first and then mirror them |
Ex1) Configuring RAID 0+1
- Verify the RAID-type partitions /dev/sdc1, /dev/sdd1, /dev/sde1, /dev/sdf1
[root@CentOS2 /root]# fdisk -l /dev/sdc /dev/sdd /dev/sde /dev/sdf | grep -i raid
/dev/sdc1 1 204 208880 fd Linux raid autodetect
/dev/sdd1 1 204 208880 fd Linux raid autodetect
/dev/sde1 1 204 208880 fd Linux raid autodetect
/dev/sdf1 1 204 208880 fd Linux raid autodetect
- Since CentOS does not support a native 'RAID 0+1' level, first build the stripe arrays
[root@CentOS2 /root]# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdc1 /dev/sdd1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@CentOS2 /root]# mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sde1 /dev/sdf1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
- Then build the mirror over them
[root@CentOS2 /root]# mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/md0 /dev/md1
mdadm: /dev/md0 appears to contain an ext2fs file system
size=414720K mtime=Tue Feb 23 13:49:04 2016
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: /dev/md1 appears to contain an ext2fs file system
size=414720K mtime=Tue Feb 23 13:49:04 2016
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md2 started.
[root@CentOS2 /root]# ls /dev/md0
/dev/md0
[root@CentOS2 /root]# ls /dev/md1
/dev/md1
[root@CentOS2 /root]# ls /dev/md2
/dev/md2
[root@CentOS2 /root]# mdadm --detail --scan -v
ARRAY /dev/md0 level=raid0 num-devices=2 metadata=1.2 name=CentOS2:0 UUID=20576727:16c13b0e:946c3812:25fab12d
devices=/dev/sdc1,/dev/sdd1
ARRAY /dev/md1 level=raid0 num-devices=2 metadata=1.2 name=CentOS2:1 UUID=1a11ffe9:4cbb5a2a:e3f645a8:220a4f4c
devices=/dev/sde1,/dev/sdf1
ARRAY /dev/md2 level=raid1 num-devices=2 metadata=1.2 name=CentOS2:2 UUID=e67dc48f:eb0410e4:d24d484e:74e329fb
devices=/dev/md0,/dev/md1
[root@CentOS2 /root]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md2 : active raid1 md1[1] md0[0]
414400 blocks super 1.2 [2/2] [UU]
md1 : active raid0 sdf1[1] sde1[0]
414720 blocks super 1.2 512k chunks
md0 : active raid0 sdd1[1] sdc1[0]
414720 blocks super 1.2 512k chunks
unused devices: <none>
- On CentOS 6.x, the RAID volume device number changes after a reboot. To prevent this:
[root@CentOS2 /root]# mdadm --detail --scan -v 1> /etc/mdadm.conf (save the RAID configuration to 'mdadm.conf')
[root@CentOS2 /root]# cat /etc/mdadm.conf
ARRAY /dev/md0 level=raid0 num-devices=2 metadata=1.2 name=CentOS2:0 UUID=20576727:16c13b0e:946c3812:25fab12d
devices=/dev/sdc1,/dev/sdd1
ARRAY /dev/md1 level=raid0 num-devices=2 metadata=1.2 name=CentOS2:1 UUID=1a11ffe9:4cbb5a2a:e3f645a8:220a4f4c
devices=/dev/sde1,/dev/sdf1
ARRAY /dev/md2 level=raid1 num-devices=2 metadata=1.2 name=CentOS2:2 UUID=e67dc48f:eb0410e4:d24d484e:74e329fb
devices=/dev/md0,/dev/md1
- Create a file system and mount it
[root@CentOS2 /root]# mkfs.ext4 /dev/md2 > /dev/null
mke2fs 1.41.12 (17-May-2010)
[root@CentOS2 /root]# mkdir /raid01
[root@CentOS2 /root]# mount /dev/md2 /raid01
[root@CentOS2 /root]# df -h /raid01
Filesystem Size Used Avail Use% Mounted on
/dev/md2 384M 2.3M 362M 1% /raid01
Ex2) Removing RAID 0+1
[root@CentOS2 /root]# umount /dev/md2
[root@CentOS2 /root]# mdadm --stop /dev/md2
mdadm: stopped /dev/md2
[root@CentOS2 /root]# mdadm --stop /dev/md1
mdadm: stopped /dev/md1
[root@CentOS2 /root]# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
[root@CentOS2 /root]# mdadm --zero-superblock /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
[root@CentOS2 /root]# mdadm --examine /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
mdadm: No md superblock detected on /dev/sdc1.
mdadm: No md superblock detected on /dev/sdd1.
mdadm: No md superblock detected on /dev/sde1.
mdadm: No md superblock detected on /dev/sdf1.
[root@CentOS2 /root]# ls /dev/md2
ls: cannot access /dev/md2: No such file or directory
[root@CentOS2 /root]# ls /dev/md1
ls: cannot access /dev/md1: No such file or directory
[root@CentOS2 /root]# ls /dev/md0
ls: cannot access /dev/md0: No such file or directory
[root@CentOS2 /root]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 13G 4.4G 7.3G 38% /
tmpfs 495M 500K 494M 1% /dev/shm
/dev/sda3 477M 2.3M 449M 1% /home
5) RAID 1+0
| Minimum disks | 4 or more |
|---|---|
| Advantages | Applies striping and mirroring at the same time |
| Disadvantages | High cost |
| Notes | More efficient than RAID 0+1 |
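The claim that RAID 1+0 beats RAID 0+1 is mostly about fault tolerance, and with four disks it can be checked by brute force: of the six possible two-disk failures, 0+1 is fatal in more of them. A sketch using this chapter's layouts (mirror pairs c+d and e+f for 1+0; stripe sets c+d and e+f for 0+1):

```shell
# Enumerate all two-disk failures among four disks (c d e f) and count
# how many are fatal for each nesting order.
fatal10=0; fatal01=0
for pair in "c d" "c e" "c f" "d e" "d f" "e f"; do
    set -- $pair
    a=$1; b=$2
    # RAID 1+0 fails only when both halves of one mirror pair are lost
    case "$a$b" in
        cd|ef) fatal10=$((fatal10 + 1)) ;;
    esac
    # RAID 0+1 fails when each stripe set loses at least one disk
    case "$a" in c|d)
        case "$b" in e|f) fatal01=$((fatal01 + 1)) ;; esac
    ;; esac
done
echo "fatal two-disk failures out of 6: RAID1+0=$fatal10 RAID0+1=$fatal01"
```

So 1+0 survives four of the six double failures while 0+1 survives only two, which is why 1+0 is the usual recommendation when both are available.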
Ex1) Configuring RAID 1+0
- Verify the RAID-type partitions /dev/sdc1, /dev/sdd1, /dev/sde1, /dev/sdf1
[root@CentOS2 /root]# fdisk -l /dev/sdc /dev/sdd /dev/sde /dev/sdf | grep -i raid
/dev/sdc1 1 204 208880 fd Linux raid autodetect
/dev/sdd1 1 204 208880 fd Linux raid autodetect
/dev/sde1 1 204 208880 fd Linux raid autodetect
/dev/sdf1 1 204 208880 fd Linux raid autodetect
- Create the RAID 1+0 array with the 'mdadm' command
[root@CentOS2 /root]# mdadm --create /dev/md10 --level=10 --raid-devices=4 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
mdadm: /dev/sdc1 appears to contain an ext2fs file system
size=103408K mtime=Fri Feb 19 12:01:57 2016
mdadm: /dev/sdd1 appears to contain an ext2fs file system
size=103408K mtime=Fri Feb 19 10:15:46 2016
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md10 started.
[root@CentOS2 /root]# ls /dev/md10
/dev/md10
[root@CentOS2 /root]# fdisk -l /dev/md10
Disk /dev/md10: 424 MB, 424673280 bytes
2 heads, 4 sectors/track, 103680 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
Disk identifier: 0x00000000
[root@CentOS2 /root]# mdadm --detail --scan -v
ARRAY /dev/md10 level=raid10 num-devices=4 metadata=1.2 name=CentOS2:10 UUID=4b9dd62e:d750e756:98da4eaa:1c841ea6
devices=/dev/sdc1,/dev/sdd1,/dev/sde1,/dev/sdf1
[root@CentOS2 /root]# cat /proc/mdstat
Personalities : [raid10]
md10 : active raid10 sdf1[3] sde1[2] sdd1[1] sdc1[0]
414720 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
unused devices: <none>
- On CentOS 6.x, the RAID volume device number changes after a reboot. To prevent this:
[root@CentOS2 /root]# mdadm --detail --scan -v 1> /etc/mdadm.conf (save the RAID configuration to 'mdadm.conf')
[root@CentOS2 /root]# cat /etc/mdadm.conf
ARRAY /dev/md10 level=raid10 num-devices=4 metadata=1.2 name=CentOS2:10 UUID=4b9dd62e:d750e756:98da4eaa:1c841ea6
devices=/dev/sdc1,/dev/sdd1,/dev/sde1,/dev/sdf1
- Create a file system and mount it
[root@CentOS2 /root]# mkfs.ext4 /dev/md10 1> /dev/null
mke2fs 1.41.12 (17-May-2010)
[root@CentOS2 /root]# mkdir /raid10
[root@CentOS2 /root]# mount /dev/md10 /raid10
[root@CentOS2 /root]# df -h /raid10
Filesystem Size Used Avail Use% Mounted on
/dev/md10 385M 2.3M 362M 1% /raid10
Ex2) Removing RAID 1+0
[root@CentOS2 /root]# umount /dev/md10
[root@CentOS2 /root]# mdadm --stop /dev/md10
mdadm: stopped /dev/md10
[root@CentOS2 /root]# mdadm --zero-superblock /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
[root@CentOS2 /root]# mdadm --examine /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
mdadm: No md superblock detected on /dev/sdc1.
mdadm: No md superblock detected on /dev/sdd1.
mdadm: No md superblock detected on /dev/sde1.
mdadm: No md superblock detected on /dev/sdf1.
[root@CentOS2 /root]# ls /dev/md10
/dev/md10
[root@CentOS2 /root]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 13G 4.4G 7.3G 38% /
tmpfs 495M 72K 495M 1% /dev/shm
/dev/sda3 477M 2.3M 449M 1% /home
- Reset the configuration (delete the test partitions)
[root@CentOS2 /root]# fdisk /dev/sdc
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): d
Selected partition 1
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@CentOS2 /root]# fdisk /dev/sdd
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): d
Selected partition 1
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@CentOS2 /root]# fdisk /dev/sde
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): d
Selected partition 1
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@CentOS2 /root]# fdisk /dev/sdf
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): d
Selected partition 1
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@CentOS2 /root]# fdisk /dev/sdg
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): d
Selected partition 1
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.