Corrupted drive after a power supply failure

Issues related to hardware problems
andrewju
Posts: 49
Joined: 2008/01/12 19:14:48

Corrupted drive after a power supply failure

Post by andrewju » 2019/07/09 23:19:14

Hi All,

I have a CentOS 7 system refusing to boot up after a PSU failure (yes, it does have a new PSU now...).

In brief: smartctl reports two pending sectors on /dev/sda, and that's my issue.
I wonder if there's a chance to recover the system, rather than having to reinstall from scratch...

Here are some more details:
After the new PSU was installed, the system still refuses to boot properly, dropping into emergency mode and complaining about an I/O error on /dev/sda:

Code: Select all

blk_update_request: I/O error, dev sda, sector 105129760
XFS (dm-1): metadata I/O error, block 0x62e2db8 ("xlog_bread_noalign") error 5 numblks 8200
The first thing that came to mind was to boot from a USB stick and try to fix it with:

Code: Select all

[root@mgmt ~]# xfs_repair /dev/sda1
Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!

attempting to find secondary superblock...
.........................................................................................................................................................................................Sorry, could not find valid secondary superblock
Exiting now.
Then I followed this article to repair an LVM volume:

Code: Select all

[root@mgmt ~]# lvscan
  ACTIVE            '/dev/centos/root' [<98.83 GiB] inherit
  ACTIVE            '/dev/centos/home' [<638.31 GiB] inherit
  ACTIVE            '/dev/centos/swap' [<7.52 GiB] inherit
[root@mgmt ~]#
OK, all of them are already 'ACTIVE', so I proceeded directly to xfs_repair again:

Code: Select all

[root@mgmt ~]# xfs_repair /dev/centos/root
Phase 1 - find and verify superblock...
superblock read failed, offset 53057945600, size 131072, ag 2, rval -1

fatal error -- Input/output error
Still no joy...

And here is the smartctl status of /dev/sda :

Code: Select all

[root@mgmt ~]# smartctl --all /dev/sda
smartctl 6.5 2016-05-07 r4318 [x86_64-linux-3.10.0-957.el7.x86_64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Red
Device Model:     WDC WD30EFRX-68AX9N0
Serial Number:    WD-WMC1T0921526
LU WWN Device Id: 5 0014ee 6ad8ba9a1
Firmware Version: 80.00A80
User Capacity:    3,000,592,982,016 bytes [3.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2 (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Tue Jul  9 22:44:07 2019 UTC
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
                                        was never started.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      ( 121) The previous self-test completed having
                                        the read element of the test failed.
Total time to complete Offline
data collection:                (39540) seconds.
Offline data collection
capabilities:                    (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   2) minutes.
Extended self-test routine
recommended polling time:        ( 397) minutes.
Conveyance self-test routine
recommended polling time:        (   5) minutes.
SCT capabilities:              (0x70bd) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       15
  3 Spin_Up_Time            0x0027   183   179   021    Pre-fail  Always       -       5808
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       173
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   023   023   000    Old_age   Always       -       56924
 10 Spin_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   100   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       173
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       111
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       61
194 Temperature_Celsius     0x0022   107   091   000    Old_age   Always       -       43
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       2
198 Offline_Uncorrectable   0x0030   100   253   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   100   253   000    Old_age   Offline      -       0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed: read failure       90%     56923         54439176

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
As I understand it, the drive has two pending sectors, and at least one of them is causing the problem...

So, two major questions here:
1. How do I check and make sure /dev/sda has only those two troublesome sectors, and no other major issues?
2. How can I try to recover this system? I would really like to make it boot properly again...

TrevorH
Forum Moderator
Posts: 26971
Joined: 2009/09/24 10:40:56
Location: Brighton, UK

Re: Corrupted drive after a power supply failure

Post by TrevorH » 2019/07/10 00:01:43

What is the output from file -s /dev/sda1, and the same command for each of /dev/centos/root and /dev/centos/home?
CentOS 5 died in March 2017 - migrate NOW!
CentOS 6 goes EOL sooner rather than later, get upgrading!
Full time Geek, part time moderator. Use the FAQ Luke

andrewju
Posts: 49
Joined: 2008/01/12 19:14:48

Re: Corrupted drive after a power supply failure

Post by andrewju » 2019/07/10 00:12:21

Here it is:

Code: Select all

[root@tftp ~]# file -s /dev/sda1
/dev/sda1: x86 boot sector, mkdosfs boot message display, code offset 0x3c, OEM-ID "mkfs.fat", sectors/cluster 8, root entries 512, Media descriptor 0xf8, sectors/FAT 200, heads 255, sectors 409600 (volumes > 32 MB) , reserved 0x1, serial number 0x45a427b7, label: "           ", FAT (16 bit)
[root@tftp ~]# file -s /dev/centos/root
/dev/centos/root: symbolic link to `../dm-3'
[root@tftp ~]# file -s /dev/centos/home
/dev/centos/home: symbolic link to `../dm-4'
[root@tftp ~]#

Code: Select all

[root@tftp dev]# file -s /dev/dm-3
/dev/dm-3: SGI XFS filesystem data (blksz 4096, inosz 256, v2 dirs)
[root@tftp dev]# file -s /dev/dm-4
/dev/dm-4: SGI XFS filesystem data (blksz 4096, inosz 256, v2 dirs)

andrewju
Posts: 49
Joined: 2008/01/12 19:14:48

Re: Corrupted drive after a power supply failure

Post by andrewju » 2019/07/10 07:07:48

Some more data:

Code: Select all

[root@tftp dev]# xfs_repair /dev/dm-3
Phase 1 - find and verify superblock...
superblock read failed, offset 53057945600, size 131072, ag 2, rval -1

fatal error -- Input/output error
[root@tftp dev]# xfs_repair /dev/dm-4
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.
[root@tftp dev]#

Hmm... As I understand it, the LVM PV is on /dev/sda5, not on /dev/sda1.

Code: Select all

[root@tftp dev]# lvmdiskscan
  /dev/mapper/live-rw        [       8.00 GiB]
  /dev/loop1                 [      <4.31 MiB]
  /dev/sda1                  [     200.00 MiB]
  /dev/mapper/live-base      [       8.00 GiB]
  /dev/loop2                 [       1.27 GiB]
  /dev/sda2                  [     931.37 GiB]
  /dev/mapper/live-osimg-min [       8.00 GiB]
  /dev/loop3                 [       8.00 GiB]
  /dev/sda3                  [       1.09 TiB]
  /dev/centos/root           [     <98.83 GiB]
  /dev/loop4                 [     512.00 MiB]
  /dev/sda4                  [     500.00 MiB]
  /dev/centos/home           [    <638.31 GiB]
  /dev/sda5                  [     744.65 GiB] LVM physical volume
  /dev/centos/swap           [      <7.52 GiB]
  /dev/sdb2                  [       1.80 TiB]
  /dev/sdb3                  [       1.80 TiB]
  /dev/sdc1                  [      <2.73 TiB]
  /dev/sde1                  [       1.37 GiB]
  /dev/sde2                  [      <4.89 MiB]
  /dev/sde3                  [      19.59 MiB]
  /dev/md127                 [       1.80 TiB]
  6 disks
  15 partitions
  0 LVM physical volume whole disks
  1 LVM physical volume

[root@tftp dev]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda5
  VG Name               centos
  PV Size               744.65 GiB / not usable 0
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              190631
  Free PE               0
  Allocated PE          190631
  PV UUID               nHxayv-3tIM-m32i-r8Gt-Sbr5-Eysd-bL3NsZ
[root@tftp dev]# 

Code: Select all

[root@tftp dev]# file -s /dev/sda5
/dev/sda5: LVM2 PV (Linux Logical Volume Manager), UUID: nHxayv-3tIM-m32i-r8Gt-Sbr5-Eysd-bL3NsZ, size: 799565414400

TrevorH
Forum Moderator
Posts: 26971
Joined: 2009/09/24 10:40:56
Location: Brighton, UK

Re: Corrupted drive after a power supply failure

Post by TrevorH » 2019/07/10 07:51:50

fatal error -- Input/output error
Do you also get errors in the output from dmesg? The fact that it's still reporting I/O errors makes me wonder if your disk is toast.

Pending sectors are ones where the disk has detected a problem and could not read the contents, so that data is lost. The pending status only goes away once the sector is written again with new data, at which point the drive either rewrites it in place or remaps it to a spare sector. Those shouldn't affect anything unless they happen to be occupied by filesystem-critical structures.
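To keep an eye on exactly those attributes, the relevant rows can be filtered out of the smartctl attribute table. A minimal sketch (run here against a captured sample of the table so it is self-contained; on a live system you would pipe 'smartctl -A /dev/sda' straight into the awk):

```shell
# Sample rows from the smartctl -A table earlier in this thread;
# on a real system replace the here-doc with:  smartctl -A /dev/sda
cat > /tmp/smart.sample <<'EOF'
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       2
198 Offline_Uncorrectable   0x0030   100   253   000    Old_age   Offline      -       0
EOF
# Print attribute name and raw value for the sector-health attributes
awk '/Reallocated_Sector_Ct|Current_Pending_Sector/ {print $2, $NF}' /tmp/smart.sample
```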

So for your /home filesystem on dm-4 you need to do what it says: start by mounting the filesystem to let it replay the log, then umount it and run xfs_repair again.
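That sequence can be sketched as a small guarded helper (the device and mount point are the ones from this thread - adjust to your setup; the block-device check makes it a harmless no-op on a machine where the volume doesn't exist):

```shell
# Replay the XFS log by mounting, then unmount and repair.
replay_and_repair() {
    dev="$1"; mnt="$2"
    if [ ! -b "$dev" ]; then
        echo "skip: $dev is not a block device here"
        return 0
    fi
    mkdir -p "$mnt"
    mount "$dev" "$mnt"       # mounting replays the journal
    umount "$mnt"
    xfs_repair "$dev"         # log is clean now, so repair can run
}
replay_and_repair /dev/centos/home /mnt/recover
```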

For your root filesystem, I am not sure what to suggest. The man page for xfs_repair says:
Corrupted Superblocks
XFS has both primary and secondary superblocks. xfs_repair uses information in the primary superblock to automatically find and validate the primary superblock against the secondary superblocks before proceeding. Should the primary be too corrupted to be useful in locating the secondary superblocks, the program scans the filesystem until it finds and validates some secondary superblocks. At that point, it generates a primary superblock.
Personally I would make a bit-for-bit copy of the entire disk to another one before you attempt any further repairs. You might find better help on the xfs_repair problem for your root filesystem on the xfs mailing list. If I read the man page correctly, I would have expected it to try to find the secondary superblocks, but maybe it only does that when the primary is readable and corrupt. I don't see any option in the man page to tell it to ignore the primary superblock error, nor anything else that might force it to look for a secondary.

MartinR
Posts: 457
Joined: 2015/05/11 07:53:27
Location: UK

Re: Corrupted drive after a power supply failure

Post by MartinR » 2019/07/10 11:31:10

You can use xfs_copy to make a copy of the filesystem, though it may report errors from the superblock. As a dangerous last resort there is always xfs_db which would allow you to copy the superblock from one of the allocation groups into the main superblock. Get this wrong and the filesystem WILL be unreadable, so use a copy if at all possible. I don't at the moment have the resources available to test this, and the last time I did it was 17 years ago on a genuine SGI machine, so I'm afraid you're on your own if you try it.
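For reference, inspecting a secondary superblock with xfs_db looks roughly like this. A sketch only: the commands are printed rather than executed, since xfs_db needs a real XFS device, and the device name is an example from this thread - per the warning above, work on a copy:

```shell
# DRY RUN: print the xfs_db steps for dumping allocation group 2's superblock.
DEV=/dev/centos/root     # example from this thread - work on a COPY of the data
echo "xfs_db -x $DEV"    # -x enables expert (write) mode; omit it for read-only
echo "  sb 2"            # select the superblock of allocation group 2
echo "  print"           # dump its fields
```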

andrewju
Posts: 49
Joined: 2008/01/12 19:14:48

Re: Corrupted drive after a power supply failure

Post by andrewju » 2019/07/10 12:08:49

Thanks a lot for your ideas!
The situation looks much more serious than I initially thought... :(

Right now I am copying the contents of the entire drive to a new place with ddrescue. I could have left the swap partition out, but it would take me more time to figure out how to exclude that specific piece - and it's a minor piece anyway... The failing disk is 3 TB, so the copying will take about 5 more hours. Of course, this estimate is only valid if all goes well and there aren't many unreadable blocks. Time will tell...
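For reference, a typical two-pass ddrescue invocation looks like the following. The commands are printed rather than executed here because the device names are examples; double-check them with lsblk first, since ddrescue will happily overwrite whatever the destination is:

```shell
# DRY RUN: print a two-pass ddrescue sketch. SRC/DST/MAP are EXAMPLES.
SRC=/dev/sda             # the failing drive
DST=/dev/sdX             # replacement drive, at least as large as SRC
MAP=/root/sda.mapfile    # mapfile lets ddrescue resume and retry later
echo "ddrescue -f -n $SRC $DST $MAP"    # pass 1: grab the easy areas, skip scraping
echo "ddrescue -f -r3 $SRC $DST $MAP"   # pass 2: retry the bad areas up to 3 times
```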

When copying is done, I'll give xfs_copy and xfs_db a try...
I'll post here if / when I get any progress...


P.S. I also sent a question to the linux-XFS mailing list.

andrewju
Posts: 49
Joined: 2008/01/12 19:14:48

Re: Corrupted drive after a power supply failure

Post by andrewju » 2019/07/11 11:03:32

With great help from the linux-xfs mailing list, I managed to resolve this issue!
I'll describe it here in case anyone gets into a similar situation.

It all started because of unreadable sectors on the HDD. When a read error prevents the root filesystem from being mounted, the system drops into emergency mode - just as I observed.

Now, the tricky part is that 'xfs_repair' will NOT repair a partition if there's an I/O error while trying to read a superblock. This is important: if a superblock is corrupted but readable, xfs_repair will try to recover it. But if there's an I/O error, xfs_repair just quits with a fatal error. This makes sense, as any attempt to work on a physically failing drive may cause even greater damage.

The solution is to make a copy of the failing drive first, and then work on that copy. The copy won't have the unreadable sectors, so xfs_repair will be able to proceed. This is the preferred and, in the vast majority of cases, the proper way to recover your system.

In some special situations, if you believe the drive is OK and the unreadable sectors are there by mistake, you can try WRITING to those sectors. If the data is unreadable, it is lost anyway, so overwriting it with something else doesn't make things worse. Writing will either refresh that particular sector so that it's readable again, or push the HDD to remap it - that is, substitute a spare sector from a reserved area.
(Note: remapping is generally a bad sign. If the number of remapped sectors grows over time, you should seriously consider replacing the HDD ASAP.)

In any case, a backup of a problematic drive is strongly recommended. ddrescue is a great tool for making that backup; there are many tutorials online describing how to use it. If your drive is in very bad condition, you may even have to seek help from professional data recovery experts.

Based on a recommendation from the linux-XFS mailing list, I updated the timeout values like this:

Code: Select all

 # smartctl -l scterc,900,100 /dev/sda
 # echo 180 > /sys/block/sda/device/timeout
With the above, the drive firmware will try longer to recover the data *if* the sectors are marginally bad. If the sectors are flat-out bad, the firmware will still give up (almost) immediately, and at that point nothing else can be done except zero the bad sectors and hope the filesystem repair can reconstruct what's missing.
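One caveat: the 'echo 180 > /sys/block/sda/device/timeout' setting does not survive a reboot. If the drive stays in service, a udev rule can reapply it automatically. A sketch - written to a temp file here just to show the contents; on a real system the target would be something like /etc/udev/rules.d/60-disk-timeout.rules (the file name is my own choosing):

```shell
# Write the rule to a temp file to show its contents; the real target would be
# /etc/udev/rules.d/60-disk-timeout.rules (hypothetical name).
RULE='ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd[a-z]", ATTR{device/timeout}="180"'
echo "$RULE" > /tmp/60-disk-timeout.rules
cat /tmp/60-disk-timeout.rules
```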

So, I made a backup with ddrescue (this can take hours and even days for larger drives!).
And then I decided to try and overwrite the unreadable sectors on my original drive.

You can find the problematic sector numbers in /var/log/messages.
Just search for messages like this:

Code: Select all

kernel: blk_update_request: I/O error, dev sda, sector 105066528 
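To collect every distinct failing sector in one go, a short pipeline helps. Shown here against an inline sample of such log lines so it is self-contained; on a real system point grep at /var/log/messages:

```shell
# Sample kernel log lines; on a real system read /var/log/messages instead
cat > /tmp/messages.sample <<'EOF'
Jul 10 01:02:03 mgmt kernel: blk_update_request: I/O error, dev sda, sector 105066528
Jul 10 01:02:04 mgmt kernel: blk_update_request: I/O error, dev sda, sector 105129760
Jul 10 01:02:05 mgmt kernel: blk_update_request: I/O error, dev sda, sector 105066528
EOF
# Extract the sector numbers, numerically sorted and de-duplicated
grep -o 'dev sda, sector [0-9]*' /tmp/messages.sample | awk '{print $4}' | sort -un
```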
Then use hdparm to try and READ that sector:

Code: Select all

# hdparm --read-sector 105066528 /dev/sda

/dev/sda:
reading sector 105066528: FAILED: Input/output error
#
If reading fails, we can try to WRITE to that sector:

Code: Select all

# hdparm --yes-i-know-what-i-am-doing --write-sector 105066528 /dev/sda


/dev/sda:
re-writing sector 105066528: succeeded
#
Then we read it again, just to make sure:

Code: Select all

# hdparm --read-sector 105066528 /dev/sda

/dev/sda:
reading sector 105066528: succeeded
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
# 
It's a good idea to monitor the SMART attributes of your HDD ('smartctl -a /dev/sda'). Specifically, watch the Current_Pending_Sector and Reallocated_Sector_Ct values.

In my case, I had over 170 sectors to overwrite.
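With that many sectors, doing it one at a time gets tedious, so a loop over the collected sector list helps. A sketch that deliberately only PRINTS the hdparm commands; review the output and pipe it to sh only once you are sure of the device name (the device and sector list below are examples):

```shell
# DRY RUN: print the hdparm write command for each bad sector.
DEV=/dev/sda                                              # example device
printf '%s\n' 105066528 105129760 > /tmp/bad_sectors.txt  # example sector list
while read -r sector; do
    echo "hdparm --yes-i-know-what-i-am-doing --write-sector $sector $DEV"
done < /tmp/bad_sectors.txt
```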

When done, I ran xfs_repair.
IMPORTANT: if you use LVM, you should run 'xfs_repair' on the LVM volume, not on /dev/sda1 (or whatever your drive letter and partition number is). You can find more details on using xfs_repair with LVM via the link in the 'useful reading' section below.

Code: Select all

# xfs_repair /dev/centos/root
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.
# 
# 
# mount /dev/centos/root /tmp/root/
mount: mount /dev/mapper/centos-root on /tmp/root failed: Structure needs cleaning
Note that xfs_repair found some changes that needed to be replayed and suggested trying to mount the partition first. In my case the mount failed, so I had to pass the '-L' option:

Code: Select all

# xfs_repair -L /dev/centos/root
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being
destroyed because the -L option was used.
        - scan filesystem freespace and inode maps...
Metadata corruption detected at xfs_allocbt block 0x62d4020/0x1000
btree block 2/4 is suspect, error -117
bad magic # 0 in btbno block 2/4
Metadata corruption detected at xfs_allocbt block 0x62d4028/0x1000
btree block 2/5 is suspect, error -117
bad magic # 0 in btcnt block 2/5
agf_freeblks 2605217, counted 2604934 in ag 2
agf_btreeblks 10, counted 8 in ag 2
agi unlinked bucket 37 is 3721957 in ag 1 (inode=137939685)
agi unlinked bucket 43 is 3023403 in ag 1 (inode=137241131)
agi unlinked bucket 42 is 1234602 in ag 2 (inode=269670058)
agi unlinked bucket 28 is 131612 in ag 3 (inode=402784796)
agi unlinked bucket 30 is 131614 in ag 3 (inode=402784798)
agi unlinked bucket 60 is 1104636 in ag 3 (inode=403757820)
agi unlinked bucket 61 is 1104637 in ag 3 (inode=403757821)
agi unlinked bucket 62 is 1104638 in ag 3 (inode=403757822)
sb_ifree 11171, counted 11170
sb_fdblocks 16021712, counted 16028442
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
disconnected inode 137241131, moving to lost+found
disconnected inode 137939685, moving to lost+found
disconnected inode 269670058, moving to lost+found
disconnected inode 402784796, moving to lost+found
disconnected inode 402784798, moving to lost+found
disconnected inode 403757820, moving to lost+found
disconnected inode 403757821, moving to lost+found
disconnected inode 403757822, moving to lost+found
Phase 7 - verify and correct link counts...
done
After this, the partition was finally mounted and I was able to access the data.



Some useful reading:

An important note on using xfs_repair on an LVM: http://www.acmedata.in/2016/09/23/xfs_r ... omment-391
ddrescue usage example: https://bitsanddragons.wordpress.com/20 ... -centos-7/
Another ddrescue example: viewtopic.php?t=48634
A nice hint to use hdparm to read / write specific sectors on a HDD: https://unix.stackexchange.com/question ... iling-disk
An example of using smartmontools to test your HDD: https://www.linuxquestions.org/question ... ng-920243/
Discussion of this issue on the linux-XFS mailing list: https://www.spinics.net/lists/linux-xfs/msg29263.html

TrevorH
Forum Moderator
Posts: 26971
Joined: 2009/09/24 10:40:56
Location: Brighton, UK

Re: Corrupted drive after a power supply failure

Post by TrevorH » 2019/07/11 11:48:43

Glad you got it back and running again.
In my case, I had over 170 sectors to be overwritten.
That would make me replace the drive...

andrewju
Posts: 49
Joined: 2008/01/12 19:14:48

Re: Corrupted drive after a power supply failure

Post by andrewju » 2019/07/11 12:07:29

While the drive's SMART data looks pretty good at the moment, I already have a new drive to put in its place.
So yes, it's going to be replaced within the next few days!

Thanks!!!
