Hi all,
We are using CentOS 7, but we keep hitting XFS internal errors. The log messages are below:
Aug 20 22:30:58 bbc800e0ed563678 kernel: XFS (sdd2): Internal error xfs_trans_cancel at line 1007 of file fs/xfs/xfs_trans.c. Caller xfs_create+0x40e/0x710 [xfs]
Aug 20 22:30:58 bbc800e0ed563678 kernel: XFS (sdd2): xfs_do_force_shutdown(0x8) called from line 1008 of file fs/xfs/xfs_trans.c. Return address = 0xffffffffa03549c2
Aug 20 22:30:58 bbc800e0ed563678 kernel: XFS (sdd2): Corruption of in-memory data detected. Shutting down filesystem
Aug 20 22:30:58 bbc800e0ed563678 kernel: XFS (sdd2): Please umount the filesystem and rectify the problem(s)
Aug 20 22:30:59 bbc800e0ed563678 systemd: Device dev-disk-by\x2dpartlabel-ceph.device appeared twice with different sysfs paths /sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0/host10/port-10:0/expander-10:0/port-10:0:5/end_device-10:0:5/target10:0:5/10:0:5:0/block/sde/sde1 and /sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0/host10/port-10:0/expander-10:0/port-10:0:4/end_device-10:0:4/target10:0:4/10:0:4:0/block/sdd/sdd1
Aug 20 22:31:12 bbc800e0ed563678 kernel: XFS (sdd2): xfs_log_force: error -5 returned.
Aug 20 22:31:42 bbc800e0ed563678 kernel: XFS (sdd2): xfs_log_force: error -5 returned.
Aug 20 22:32:12 bbc800e0ed563678 kernel: XFS (sdd2): xfs_log_force: error -5 returned.
Aug 20 22:32:42 bbc800e0ed563678 kernel: XFS (sdd2): xfs_log_force: error -5 returned.
Aug 20 22:33:12 bbc800e0ed563678 kernel: XFS (sdd2): xfs_log_force: error -5 returned.
I would like to know how to fix this.
Thanks a lot!
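A side note on the repeated xfs_log_force lines: the "-5" is a negated kernel errno, and errno 5 is EIO (I/O error), which is expected once XFS has shut the filesystem down. A quick way to decode such values (a sketch; assumes python3 is installed):

```shell
# Decode the negated errno from a kernel log line; -5 turns out to be EIO.
line="XFS (sdd2): xfs_log_force: error -5 returned."
code=$(echo "$line" | grep -oE 'error -[0-9]+' | grep -oE '[0-9]+')
python3 -c "import os, sys; print('errno', sys.argv[1], '=', os.strerror(int(sys.argv[1])))" "$code"
# → errno 5 = Input/output error
```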
centos 7 xfs issue
Re: centos 7 xfs issue
Probably this:
Device dev-disk-by\x2dpartlabel-ceph.device appeared twice with different sysfs paths /sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0/host10/port-10:0/expander-10:0/port-10:0:5/end_device-10:0:5/target10:0:5/10:0:5:0/block/sde/sde1 and /sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0/host10/port-10:0/expander-10:0/port-10:0:4/end_device-10:0:4/target10:0:4/10:0:4:0/block/sdd/sdd1
Device has appeared twice with different "hardware" paths and XFS is not a cluster "aware" filesystem so shutdown is the only logical thing to do.
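One way to test that theory: if sdd and sde really are two paths to the same physical disk, their WWN/serial will match, and the "ceph" partlabel symlink will point at whichever node was probed last. A sketch, with the device names taken from the log above:

```shell
# Do sdd and sde report the same WWN/serial? If so, this is one disk seen
# over two SAS paths, and dm-multipath should sit in front of the sdX nodes.
lsblk -d -o NAME,WWN,SERIAL,MODEL /dev/sdd /dev/sde
# Which node does the 'ceph' partlabel currently resolve to?
ls -l /dev/disk/by-partlabel/
```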
Re: centos 7 xfs issue
Are you using any form of software raid? There's a bug in some versions of systemd where it spits out the appeared twice message if you are.
I'd be more concerned about this:
Code: Select all
Corruption of in-memory data detected. Shutting down filesystem
Have you tried running a couple of passes of memtest86+?
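Whatever the root cause turns out to be, once XFS has shut itself down the filesystem has to be taken offline and checked, as the "Please umount the filesystem" message says. A minimal sketch of that step, assuming /dev/sdd2 can be taken out of the Ceph cluster first:

```shell
# Take the OSD out of service first, then:
umount /dev/sdd2
xfs_repair -n /dev/sdd2    # -n = dry run, report problems without changing anything
# If the dry run looks reasonable, run the real repair:
xfs_repair /dev/sdd2
# If xfs_repair complains about a dirty log, mount and cleanly umount once to
# replay it; only use xfs_repair -L (which zeroes the log) as a last resort.
```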
Re: centos 7 xfs issue
aks wrote: Probably this:
Device dev-disk-by\x2dpartlabel-ceph.device appeared twice with different sysfs paths /sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0/host10/port-10:0/expander-10:0/port-10:0:5/end_device-10:0:5/target10:0:5/10:0:5:0/block/sde/sde1 and /sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0/host10/port-10:0/expander-10:0/port-10:0:4/end_device-10:0:4/target10:0:4/10:0:4:0/block/sdd/sdd1
Device has appeared twice with different "hardware" paths and XFS is not a cluster "aware" filesystem so shutdown is the only logical thing to do.
Could this be caused by https://github.com/systemd/systemd/issues/2705?
Re: centos 7 xfs issue
Doesn't look likely. Is your /dev/sdd an nvme device by any chance?
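For the record, one way to check (a sketch; lsblk's TRAN column shows the transport, e.g. sata, sas or nvme, and the sysfs paths quoted above already suggest a SAS expander rather than nvme):

```shell
lsblk -d -o NAME,TRAN,MODEL /dev/sdd /dev/sde
```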
The future appears to be RHEL or Debian. I think I'm going Debian.
Info for USB installs on http://wiki.centos.org/HowTos/InstallFromUSBkey
CentOS 5 and 6 are deadest, do not use them.
Use the FAQ Luke