General support questions
-
dustinduse
- Posts: 4
- Joined: 2018/06/15 03:45:00
Post
by dustinduse » 2018/06/15 03:50:21
I recently upgraded my server to a fresh install of CentOS7 from the latest version of 6. I reconfigured the raid just as it was previously and am now unable to mount it. Not sure what the issue is.
Code: Select all
[root@test ~]# fsck -N /dev/md0
fsck from util-linux 2.23.2
[/sbin/fsck.ext2 (1) -- /dev/md0] fsck.ext2 /dev/md0
Code: Select all
[root@test ~]# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Wed Jun 13 23:34:48 2018
Raid Level : raid1
Array Size : 2930133440 (2794.39 GiB 3000.46 GB)
Used Dev Size : 2930133440 (2794.39 GiB 3000.46 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Thu Jun 14 07:02:25 2018
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : bitmap
Name : test:0 (local to host test)
UUID : 97be13eb:b9ffc14f:181a5634:333c58ca
Events : 5375
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
Code: Select all
[root@test ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[0] sdc1[1]
2930133440 blocks super 1.2 [2/2] [UU]
bitmap: 0/22 pages [0KB], 65536KB chunk
unused devices: <none>
Code: Select all
[root@test ~]# mount /dev/md0 /mnt/Storage -t ext2
mount: wrong fs type, bad option, bad superblock on /dev/md0,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so.
Code: Select all
[root@test ~]# dmesg | tail
[97021.702768] EXT4-fs (md0): VFS: Can't find ext4 filesystem
Pulling out my hair here.
-
desertcat
- Posts: 843
- Joined: 2014/08/07 02:17:29
- Location: Tucson, AZ
Post
by desertcat » 2018/06/16 04:32:12
dustinduse wrote: ↑2018/06/15 03:50:21
I recently upgraded my server to a fresh install of CentOS7 from the latest version of 6. I reconfigured the raid just as it was previously and am now unable to mount it. Not sure what the issue is.
[...]
OK I'll bite: Why are you using ext2?? I think ext2 was used back in the days of kernel 2.2. Why are you not using ext4? I think your computer is asking the same question:
[root@test ~]# dmesg | tail
[97021.702768] EXT4-fs (md0): VFS: Can't find ext4 filesystem
I'm sure there is a reason you are still using a FS last seen in the Jurassic Era.
-
TrevorH
- Site Admin
- Posts: 33220
- Joined: 2009/09/24 10:40:56
- Location: Brighton, UK
Post
by TrevorH » 2018/06/16 11:30:44
Creation Time : Wed Jun 13 23:34:48 2018
That doesn't look great. That looks like you recreated your RAID array on the 13th at 23:34:48. If it had a filesystem on it before then I suspect it's gone. What is the output from
file -s /dev/md0 ?
The ext2 module was deprecated in CentOS 7, but the functionality of the ext2 filesystem is now provided by the ext4 module. In any case you shouldn't need to specify -t ext2 on the mount command, as it should auto-detect any filesystem that is present.
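For later readers, a sketch of what those checks look like on a device where the filesystem survived (device and mount point taken from the original post; output wording is approximate):

```shell
# Print the raw signature found on the array device. A surviving
# ext2/ext4 filesystem reports something like
#   "/dev/md0: Linux rev 1.0 ext2 filesystem data ..."
# A bare "/dev/md0: data" means no known signature was found.
file -s /dev/md0

# blkid reads the same on-disk signatures and prints TYPE= when
# one exists; no output at all is another sign the superblock is gone.
blkid /dev/md0

# With no -t flag, mount probes for any supported filesystem itself,
# so forcing -t ext2 only masks the real error message.
mount /dev/md0 /mnt/Storage
```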
-
dustinduse
- Posts: 4
- Joined: 2018/06/15 03:45:00
Post
by dustinduse » 2018/06/16 17:05:33
Says "/dev/md0: data"
God, I hope I didn't lose the data. I have moved this array a few times, and it's been a few years, but I thought that was how I did it... Shit, there was almost 2.5TB of data on those..
-
dustinduse
- Posts: 4
- Joined: 2018/06/15 03:45:00
Post
by dustinduse » 2018/06/16 17:06:27
desertcat wrote: ↑2018/06/16 04:32:12
OK I'll bite: Why are you using ext2?? I think ext2 was used back in the days of kernel 2.2. Why are you not using ext4? I think your computer is asking the same question:
[root@test ~]# dmesg | tail
[97021.702768] EXT4-fs (md0): VFS: Can't find ext4 filesystem
I'm sure there is a reason you are still using a FS last seen in the Jurassic Era.
This raid was originally setup like 7 years ago.
-
dustinduse
- Posts: 4
- Joined: 2018/06/15 03:45:00
Post
by dustinduse » 2018/06/16 17:11:33
TrevorH wrote: ↑2018/06/16 11:30:44
Creation Time : Wed Jun 13 23:34:48 2018
That doesn't look great. That looks like you recreated your RAID array on the 13th at 23:34:48. If it had a filesystem on it before then I suspect it's gone. What is the output from
file -s /dev/md0 ?
The ext2 module was deprecated in CentOS 7, but the functionality of the ext2 filesystem is now provided by the ext4 module. In any case you shouldn't need to specify -t ext2 on the mount command, as it should auto-detect any filesystem that is present.
You sure are correct. I fsck'd up.
"mdadm --assemble --scan --verbose /dev/md{number} /dev/{disk1} /dev/{disk2} /dev/{disk3} /dev/{disk4}"
is the correct command, and not the one I used.. Think I have a backup somewhere. Hopefully.
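For anyone who finds this thread later, the difference between the two mdadm modes is the whole story. A sketch (the device names here are placeholders, not the poster's actual disks):

```shell
# --assemble only READS the existing superblocks on the member devices
# and brings the array back up; it never writes new metadata, so the
# filesystem on top survives a reinstall or a move between machines.
mdadm --assemble --scan --verbose

# Or name the members explicitly:
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1

# --create WRITES fresh superblocks. Run against disks that already
# held an array, it stamps new metadata over the old, and the
# filesystem that lived there is effectively destroyed:
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
```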
-
desertcat
- Posts: 843
- Joined: 2014/08/07 02:17:29
- Location: Tucson, AZ
Post
by desertcat » 2018/06/18 20:31:52
dustinduse wrote: ↑2018/06/16 17:11:33
You sure are correct. I fsck'd up.
"mdadm --assemble --scan --verbose /dev/md{number} /dev/{disk1} /dev/{disk2} /dev/{disk3} /dev/{disk4}"
is the correct command, and not the one I used.. Think I have a backup somewhere. Hopefully.
Hopefully you have a backup somewhere. I thought, maybe wrongly, that that was the idea behind RAID: if a drive failed there was one that would pick up the slack with no loss of data. Just for fun, what does fdisk -l show? I doubt all 4 disks failed. The data may still be on the disks unless you somehow managed to wipe out or delete the information on them. Keep us posted.
-
MartinR
- Posts: 714
- Joined: 2015/05/11 07:53:27
- Location: UK
Post
by MartinR » 2018/06/21 14:04:56
@desertcat: The problem here (AIUI) is that a disk didn't fail; dustinduse created a new raidset on top of his disks. It's as if you issued mkfs to a partition that already had a filesystem - the old filesystem is gone forever. Like, I suspect, all of us here, I hope his backups are good; it's not a nice feeling to lose complete filesystems.
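A sketch of the kind of check that catches this before the damage is done (the device name is only an example). Both mke2fs and mdadm do normally warn interactively when they spot an existing signature, but a dry-run check first costs nothing:

```shell
# wipefs -n ("no-act") lists every filesystem or RAID signature it
# finds on a device WITHOUT modifying anything. Empty output means
# mkfs or mdadm --create has nothing left to destroy; any output at
# all is the moment to stop and double-check.
wipefs -n /dev/sdb1
```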
-
TrevorH
- Site Admin
- Posts: 33220
- Joined: 2009/09/24 10:40:56
- Location: Brighton, UK
Post
by TrevorH » 2018/06/21 15:17:37
And RAID is not a backup. It's protection against single disk failure (higher RAID levels allow protection against more than one failing). But it's not a backup - for example, if I have a RAID array with a filesystem mounted and I rm -rf /raid, then it's gone. Bye.
-
desertcat
- Posts: 843
- Joined: 2014/08/07 02:17:29
- Location: Tucson, AZ
Post
by desertcat » 2018/06/21 20:56:25
MartinR wrote: ↑2018/06/21 14:04:56
@desertcat: The problem here (AIUI) is that a disk didn't fail; dustinduse created a new raidset on top of his disks. It's as if you issued mkfs to a partition that already had a filesystem - the old filesystem is gone forever. Like, I suspect, all of us here, I hope his backups are good; it's not a nice feeling to lose complete filesystems.
OOOPS!! That is NOT good!! Learned something new. I don't use RAID (unless it is to kill bugs!!) But I've done some stupid things as well. I did have backups. I do hope he had backups. 2TB of data loss would be a disaster.