replacing failed raid1 disk (software raid)

Issues related to hardware problems
Post Reply
piloteight
Posts: 1
Joined: 2016/05/26 17:44:26

replacing failed raid1 disk (software raid)

Post by piloteight » 2016/05/26 17:57:07

Hello,

CentOS release 6.6 (Final) - I have a failed RAID1 disk that I would like to replace. I am not very familiar with software RAID; I inherited this server from someone who is no longer with the company.
I want to make sure that when I replace the failed RAID1 disk, the server will still boot up. It appears the OS is installed on this software RAID1.
I have not done anything to the server yet. I have a replacement disk ready and am prepared to power down the server.

I would appreciate it if someone could provide guidance and instructions on how to accomplish this task. I have included detailed info below.

Joe




[root@sunshine init.d]# fdisk -l

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x000d2fd4

Device Boot Start End Blocks Id System
/dev/sda1 * 1 26 204800 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 26 115516 927675392 83 Linux

WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/sdc: 22520.2 GB, 22520191254528 bytes
255 heads, 63 sectors/track, 2737923 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Device Boot Start End Blocks Id System
/dev/sdc1 1 267350 2147483647+ ee GPT

Disk /dev/md126: 950.2 GB, 950150365184 bytes
2 heads, 4 sectors/track, 231970304 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x000d2fd4

Device Boot Start End Blocks Id System
/dev/md126p1 * 257 51456 204800 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/md126p2 51457 231970304 927675392 83 Linux
Partition 2 does not end on cylinder boundary.
[root@sunshine init.d]#


[root@sunshine init.d]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md126p2 871G 12G 816G 2% /
tmpfs 95G 0 95G 0% /dev/shm
/dev/md126p1 194M 64M 121M 35% /boot
/dev/sdc1 21T 14T 7.2T 66% /data
[root@sunshine init.d]#




[root@sunshine log]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 931.5G 0 disk
└─md126 9:126 0 884.9G 0 raid1
├─md126p1 259:0 0 200M 0 md /boot
└─md126p2 259:1 0 884.7G 0 md /
sdb 8:16 0 931.5G 0 disk
sdc 8:32 0 20.5T 0 disk
└─sdc1 8:33 0 20.5T 0 part /data
[root@sunshine log]#




[root@sunshine etc]# cat /proc/mdstat
Personalities : [raid1]
md126 : active raid1 sda[1]
927881216 blocks super external:/md0/0 [2/1] [U_]

md0 : inactive sdb[1](S) sda[0](S)
6306 blocks super external:imsm

unused devices: <none>
[root@sunshine etc]#





[root@sunshine tmp]# mdadm --detail /dev/md126p1
/dev/md126p1:
Container : /dev/md0, member 0
Raid Level : raid1
Array Size : 204800 (200.03 MiB 209.72 MB)
Used Dev Size : 927881348 (884.90 GiB 950.15 GB)
Raid Devices : 2
Total Devices : 1

State : active, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0


UUID : 38cc32dc:60df7efa:6c7a24ff:6972f560
Number Major Minor RaidDevice State
1 8 0 0 active sync /dev/sda
2 0 0 2 removed
[root@sunshine tmp]#





[root@sunshine tmp]# mdadm --detail /dev/md126p2
/dev/md126p2:
Container : /dev/md0, member 0
Raid Level : raid1
Array Size : 927675392 (884.70 GiB 949.94 GB)
Used Dev Size : unknown
Raid Devices : 2
Total Devices : 1

State : active, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0



UUID : 38cc32dc:60df7efa:6c7a24ff:6972f560
Number Major Minor RaidDevice State
1 8 0 0 active sync /dev/sda
2 0 0 2 removed
[root@sunshine tmp]#



[root@sunshine ~]# mdadm --detail /dev/md0
/dev/md0:
Version : imsm
Raid Level : container
Total Devices : 2

Working Devices : 2


UUID : 15ea3ba0:f80a61e4:68da4d12:c607dde4
Member Arrays : /dev/md/Volume0

Number Major Minor RaidDevice

0 8 0 - /dev/sda
1 8 16 - /dev/sdb

aks
Posts: 2538
Joined: 2014/09/20 11:22:14

Re: replacing failed raid1 disk (software raid)

Post by aks » 2016/05/27 15:52:03

https://www.howtoforge.com/tutorial/lin ... -harddisk/
(for GPT-type disks, which I think you're using)
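For what it's worth, since your array uses Intel IMSM (external) metadata, the general shape of the replacement is to remove the dead disk from the container, swap the hardware, and add the new disk back to the container; mdmon then handles the rebuild of the member array. A minimal sketch, assuming the failed member is /dev/sdb (confirm against your own lsblk and /proc/mdstat first - device names can change after a reboot):

```shell
# Hedged sketch, not verbatim instructions. /dev/sdb is assumed to be the
# failed member here -- verify against your own output before running anything.

# 1. Confirm the degraded state (md126 shows [U_] with only sda active)
cat /proc/mdstat

# 2. Remove the failed disk from the IMSM container (not from md126 directly)
mdadm --manage /dev/md0 --remove /dev/sdb

# 3. Power off, physically swap the disk, boot back up, then add the new
#    disk to the container; with IMSM metadata there is no need to copy a
#    partition table first -- the container metadata drives the rebuild
mdadm --manage /dev/md0 --add /dev/sdb

# 4. Watch the rebuild progress until the array shows [UU] again
watch -n 5 cat /proc/mdstat

# 5. Since /boot lives on the array, reinstall the bootloader on the new
#    disk so the box can still boot if the *other* disk dies later
#    (CentOS 6 uses GRUB legacy)
grub-install /dev/sdb
```

The new disk should be the same size or larger than the surviving one, and it is worth double-checking the BIOS RAID Option ROM settings (this is firmware-assisted RAID), since the controller must be left in RAID mode for the IMSM metadata to be recognized at boot.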

Post Reply