[issues] PCIe SSD with NVMe - Performance degradation

Issues related to hardware problems
dbfontes
Posts: 4
Joined: 2017/04/08 15:43:13

[issues] PCIe SSD with NVMe - Performance degradation

Post by dbfontes » 2017/04/08 16:01:28

Hi folks.

I have a curious problem that's been tormenting me for days. Can someone shed some light on this? :(

I installed the SSD in a PowerEdge 1950 server and I have performance issues, but with writes only; reads are fine. Some operations take about 2 s (see attachment)!

I use the disk as a cache partition for an nginx server.

CentOS release 6.8 (Final)
Server: PowerEdge 1950
Disk: Toshiba OCZ RD400A M.2 2280 + AIC 256GB PCI-Express 3.0 x 4 MLC Internal Solid State Drive (SSD) RVD400-22280-256G-A
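To separate raw device latency from nginx behaviour, a quick synchronous-write probe can help. This is just a sketch: it writes to /tmp as a stand-in, so point the path at the NVMe-backed filesystem (here /storage/0) to test the actual device; oflag=dsync flushes every 4k write so per-write latency shows up in dd's elapsed time:

```shell
# Sketch: probe synchronous 4k write latency.
# The path is a stand-in -- use a file on the NVMe mount (e.g. /storage/0)
# to exercise the actual device.
dd if=/dev/zero of=/tmp/nvme-write-test bs=4k count=1000 oflag=dsync
# dd prints elapsed time and throughput; on a healthy SSD these dsync
# writes should be fast, while a struggling device shows seconds here.
rm -f /tmp/nvme-write-test
```

Comparing the same command against a directory on the system disk gives a baseline for how much of the 2 s latency is the NVMe device itself.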

dmesg:
nvme 0000:0a:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
nvme 0000:0a:00.0: setting latency timer to 64
ata_piix 0000:00:1f.1: version 2.13
ata_piix 0000:00:1f.1: PCI INT A -> GSI 16 (level, low) -> IRQ 16
ata_piix 0000:00:1f.1: setting latency timer to 64
scsi1 : ata_piix
scsi2 : ata_piix
ata1: PATA max UDMA/100 cmd 0x1f0 ctl 0x3f6 bmdma 0xfc00 irq 14
ata2: PATA max UDMA/100 cmd 0x170 ctl 0x376 bmdma 0xfc08 irq 15
ata2: port disabled. ignoring.
IRQ 16/nvme0q0: IRQF_DISABLED is not guaranteed on shared IRQs
  alloc irq_desc for 31 on node -1
  alloc kstat_irqs on node -1
nvme 0000:0a:00.0: irq 31 for MSI/MSI-X
  alloc irq_desc for 32 on node -1
  alloc kstat_irqs on node -1
nvme 0000:0a:00.0: irq 32 for MSI/MSI-X
  alloc irq_desc for 33 on node -1
  alloc kstat_irqs on node -1
nvme 0000:0a:00.0: irq 33 for MSI/MSI-X
  alloc irq_desc for 34 on node -1
  alloc kstat_irqs on node -1
nvme 0000:0a:00.0: irq 34 for MSI/MSI-X
  alloc irq_desc for 35 on node -1
  alloc kstat_irqs on node -1
nvme 0000:0a:00.0: irq 35 for MSI/MSI-X
  alloc irq_desc for 36 on node -1
  alloc kstat_irqs on node -1
nvme 0000:0a:00.0: irq 36 for MSI/MSI-X
  alloc irq_desc for 37 on node -1
  alloc kstat_irqs on node -1
nvme 0000:0a:00.0: irq 37 for MSI/MSI-X
IRQ 31/nvme0q0: IRQF_DISABLED is not guaranteed on shared IRQs
IRQ 31/nvme0q1: IRQF_DISABLED is not guaranteed on shared IRQs
IRQ 32/nvme0q2: IRQF_DISABLED is not guaranteed on shared IRQs
IRQ 33/nvme0q3: IRQF_DISABLED is not guaranteed on shared IRQs
IRQ 34/nvme0q4: IRQF_DISABLED is not guaranteed on shared IRQs
IRQ 35/nvme0q5: IRQF_DISABLED is not guaranteed on shared IRQs
IRQ 36/nvme0q6: IRQF_DISABLED is not guaranteed on shared IRQs
IRQ 37/nvme0q7: IRQF_DISABLED is not guaranteed on shared IRQs
 nvme0n1: p1
Attachments
disk.jpeg (18.25 KiB)


Re: [issues] PowerEdge 1950 + SSD M.2 with PCI-Express

Post by dbfontes » 2017/04/09 14:22:00

ls -lsa /dev/nvme*
0 crw-rw---- 1 root root 246, 0 Feb 1 13:17 /dev/nvme0
0 brw-rw---- 1 root disk 259, 0 Feb 1 13:17 /dev/nvme0n1
0 brw-rw---- 1 root disk 259, 1 Feb 1 13:17 /dev/nvme0n1p1

fdisk
Disk /dev/nvme0n1: 256.1 GB, 256060514304 bytes
255 heads, 63 sectors/track, 31130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xaee37497

Device Boot Start End Blocks Id System
/dev/nvme0n1p1 1 31130 250051693+ 83 Linux
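One thing worth checking on an SSD partitioned with old DOS-style fdisk: the first partition typically starts at sector 63 (that "Start 1" cylinder in the legacy output above), which is not 4 KiB-aligned, and misalignment hurts write performance in particular. A quick arithmetic check, where the start sector of 63 is an assumption based on the cylinder layout; `fdisk -lu /dev/nvme0n1` would show the real value:

```shell
# Assumed start sector 63 (typical for a DOS table made by old fdisk in
# cylinder mode); verify with: fdisk -lu /dev/nvme0n1
start_sector=63
offset=$((start_sector * 512))          # byte offset of the partition
if [ $((offset % 4096)) -eq 0 ]; then
    echo "partition is 4KiB-aligned"
else
    echo "partition is misaligned (offset ${offset} bytes)"
fi
```

If it is misaligned, repartitioning with a start sector that is a multiple of 8 (e.g. 2048, the modern fdisk default) is the usual fix.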

iostat -dxk 5 (attachment)
Attachments
disk-iostat-1.jpeg (77.51 KiB)


Re: [issues] PowerEdge 1950 + SSD M.2 with PCI-Express

Post by dbfontes » 2017/04/11 21:57:29

I'm thinking of upgrading the kernel to see what happens. I don't know what else to do.

fstab (PCIe SSD)
UUID="c3725ef4-f1ab-495e-a7b7-409d002e0e3a" /storage/0 ext4 defaults 1 2
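Unrelated to the latency itself, but for an nginx cache partition the mount options may also matter. A hedged sketch of the same fstab line: noatime skips the access-time write that otherwise accompanies every cache read; I'm deliberately not suggesting discard, since TRIM support in this old 2.6.32-era nvme driver is uncertain:

```
UUID="c3725ef4-f1ab-495e-a7b7-409d002e0e3a" /storage/0 ext4 defaults,noatime 1 2
```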

uname -a
Linux x1-cache1 2.6.32-642.4.2.el6.x86_64 #1 SMP Tue Aug 23 19:58:13 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

modinfo nvme
filename: /lib/modules/2.6.32-642.4.2.el6.x86_64/kernel/drivers/block/nvme.ko
version: 0.10
license: GPL
author: Matthew Wilcox <willy@linux.intel.com>
srcversion: 38BF2C912186C6289DEF773
alias: pci:v*d*sv*sd*bc01sc08i02*
depends:
vermagic: 2.6.32-642.4.2.el6.x86_64 SMP mod_unload modversions
parm: admin_timeout:timeout in seconds for admin commands (byte)
parm: io_timeout:timeout in seconds for I/O (byte)
parm: retry_time:time in seconds to retry failed I/O (byte)
parm: shutdown_timeout:timeout in seconds for controller shutdown (byte)
parm: nvme_major:int
parm: nvme_char_major:int
parm: use_threaded_interrupts:int


Re: [issues] PCIe with NVMe - Performance degradation

Post by dbfontes » 2017/04/11 22:20:25

lspci | grep OCZ
0a:00.0 Non-Volatile memory controller: OCZ Technology Group, Inc. Device 6018 (rev 01)

lspci -s 0a:00 -v
0a:00.0 Non-Volatile memory controller: OCZ Technology Group, Inc. Device 6018 (rev 01) (prog-if 02 [NVM Express])
Subsystem: OCZ Technology Group, Inc. Device 6018
Flags: bus master, fast devsel, latency 0, IRQ 16
Memory at fc3fc000 (64-bit, non-prefetchable)
Capabilities: [40] Power Management version 3
Capabilities: [50] MSI: Enable- Count=1/8 Maskable- 64bit+
Capabilities: [70] Express Endpoint, MSI 00
Capabilities: [b0] MSI-X: Enable+ Count=8 Masked-
Capabilities: [100] Advanced Error Reporting
Capabilities: [178] #19
Capabilities: [198] Latency Tolerance Reporting
Capabilities: [1a0] #1e
Kernel driver in use: nvme
Kernel modules: nvme

TrevorH
Site Admin
Posts: 33191
Joined: 2009/09/24 10:40:56
Location: Brighton, UK

Re: [issues] PCIe SSD with NVMe - Performance degradation

Post by TrevorH » 2017/04/12 01:19:08

You could try the ELRepo kernel-lt for el6, but I saw OCZ in the name and immediately thought: oh, there we go...
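The kernel-lt route would look roughly like this. It's a sketch: the elrepo-release package filename/version below is an assumption, so check elrepo.org for the current one before running it:

```shell
# Sketch: install the ELRepo long-term kernel on EL6.
# The release RPM version here is an assumption -- check elrepo.org.
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum install https://www.elrepo.org/elrepo-release-6-8.el6.elrepo.noarch.rpm
# kernel-lt lives in the (disabled by default) elrepo-kernel repository
yum --enablerepo=elrepo-kernel install kernel-lt
```

After installing, the new kernel still has to be selected in /boot/grub/grub.conf (or made the default) before rebooting.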
