Poor passthrough disk performance, CentOS 7 VM on Hyper-V

Posts: 2
Joined: 2018/01/23 10:31:49

Poor passthrough disk performance, CentOS 7 VM on Hyper-V

Post by mihai_b » 2018/01/23 12:10:48


I'm running a CentOS 7 VM on Microsoft Hyper-V Server with the following configuration:

- Host: Intel Xeon E3-1226 v3 (4 cores), 8 GB RAM, Gigabit Ethernet, 1 TB RAID 1 (volume1) for the VM's VHDX and 4 TB RAID 1 (volume2) as a passthrough disk for a Samba share on the VM. The physical disks (ST1000DM010 & WD40EFRX) are connected to the SAS backplane of a Fujitsu RAID D2616 (LSISAS2108 chip) controller. Both volumes use the same write policy (write back), as the controller is protected by a BBU and the server is on a UPS.

- VM: 4 vCPUs, 3 GB RAM, Gigabit Ethernet, a 32 GB fixed VHDX (sda, for the OS) and the 4 TB passthrough disk (sdb), both attached via the virtual SCSI controller.

The main purpose of the VM is to serve a Samba share. I've formatted the passthrough disk as XFS, mounted it at /RAID and shared it via smb. On the same VM I have another smb share (for testing only) that resides on the VHDX.
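For completeness, the Samba side is nothing exotic; the share on /RAID is a minimal stanza along these lines (the share name is just what I chose):

```ini
[RAID]
    path = /RAID
    read only = no
```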

The problem: write performance on the passthrough disk (~40 MB/s) is far below that of the VHDX (~110-120 MB/s, which is nearly the maximum throughput of the gigabit adapter). The I/O scheduler on both VM disks is noop, since the hypervisor does the I/O scheduling. I know for certain that the hardware is capable of greater write speeds.
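If it helps diagnosis: a local write test inside the VM takes Samba and the network out of the picture, so the reported rate reflects only the guest disk path. A sketch of what I mean (the /RAID mount point is from my setup; the /tmp fallback is just so the commands run anywhere):

```shell
# Mount point of the passthrough disk; /RAID is from my setup.
TARGET="${TARGET:-/RAID}"
[ -d "$TARGET" ] || TARGET=/tmp    # fall back so the commands still run

# Write 256 MiB and force it out with fsync, then clean up. The rate dd
# reports measures the guest disk path with Samba and the network out of
# the picture. Adding oflag=direct would also bypass the guest page
# cache, but needs a filesystem that supports direct I/O.
dd if=/dev/zero of="$TARGET/ddtest.bin" bs=1M count=256 conv=fsync
rm -f "$TARGET/ddtest.bin"
```

Comparing that number with the ~40 MB/s seen over the share shows whether the bottleneck is the disk path itself or the Samba/network layer.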

So far I have changed the I/O scheduler to deadline and then to cfq, first only for the passthrough disk, then for both, but nothing changed. I also changed the write policy on the host to write through, but that brought no improvement either.
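For reference, this is roughly how I checked and changed the scheduler (the sysfs paths are the standard ones; the echo takes effect immediately but does not persist across reboots):

```shell
# Print the active I/O scheduler (the bracketed entry) for every disk.
for q in /sys/block/*/queue/scheduler; do
    [ -e "$q" ] || continue
    printf '%s: %s\n' "$q" "$(cat "$q")"
done

# Switching sdb to deadline at runtime (root required, not persistent
# across reboots) is a single write to the same sysfs file:
#   echo deadline > /sys/block/sdb/queue/scheduler
```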

I am no Linux expert, but I've noticed that the passthrough disk identifies itself as an LSI device, even though it is presented to the VM through the host's SCSI controller.

[0:0:0:0] disk Msft Virtual Disk 1.0 /dev/sda
dir: /sys/bus/scsi/devices/0:0:0:0 [/sys/devices/LNXSYSTM:00/device:00/ACPI0004:00/VMBUS:00/b92f53c8-2d73-4724-ac25-726d87345113/host0/target0:0:0/0:0:0:0]
[1:0:0:0] disk LSI RAID 5/6 SAS 6G 2.12 /dev/sdb
dir: /sys/bus/scsi/devices/1:0:0:0 [/sys/devices/LNXSYSTM:00/device:00/ACPI0004:00/VMBUS:00/7f8c96c8-a2be-476c-951f-bb5ccdac0596/host1/target1:0:0/1:0:0:0]

Could the poor performance be caused by the Linux driver handling the LSI RAID 5/6 SAS 6G device?
Could someone please shed some light on this?

Kind regards,


Re: Poor passthrough disk performance, CentOS 7 VM on Hyper-V

Post by mihai_b » 2018/01/25 12:19:55

More details about the passthrough disk below:

[ 2.382764] PTP clock support registered
[ 2.384142] hv_utils: Registering HyperV Utility Driver
[ 2.384144] hv_vmbus: registering driver hv_util
[ 2.388208] hv_vmbus: registering driver hv_netvsc
[ 2.392316] hv_vmbus: registering driver hv_storvsc
[ 2.393572] hv_utils: Heartbeat IC version 3.0
[ 2.396543] hv_utils: Shutdown IC version 3.0
[ 2.397059] hv_utils: TimeSync IC version 3.0
[ 2.397787] hv_utils: VSS IC version 5.0
[ 2.411708] scsi host0: storvsc_host_t
[ 2.412413] scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 4
[ 2.416264] scsi host1: storvsc_host_t
[ 2.416962] scsi 1:0:0:0: Direct-Access LSI RAID 5/6 SAS 6G 2.12 PQ: 0 ANSI: 5

[ 2.417746] scsi 0:0:0:1: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
[ 2.449089] sr 0:0:0:1: [sr0] scsi3-mmc drive: 0x/0x caddy
[ 2.449095] cdrom: Uniform CD-ROM driver Revision: 3.20
[ 2.449260] sr 0:0:0:1: Attached scsi CD-ROM sr0
[ 2.454126] sd 0:0:0:0: [sda] 62914560 512-byte logical blocks: (32.2 GB/30.0 GiB)
[ 2.454133] sd 0:0:0:0: [sda] 4096-byte physical blocks
[ 2.454444] sd 1:0:0:0: [sdb] 7812939776 512-byte logical blocks: (4.00 TB/3.63 TiB)
[ 2.454931] sd 1:0:0:0: [sdb] Write Protect is off
[ 2.454938] sd 1:0:0:0: [sdb] Mode Sense: 0f 00 00 00
[ 2.455303] sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA

[ 2.455375] sd 1:0:0:0: [storvsc] Sense Key : Illegal Request [current]
[ 2.455384] sd 1:0:0:0: [storvsc] Add. Sense: Invalid command operation code
[ 2.456672] sd 0:0:0:0: [sda] Write Protect is off
[ 2.456680] sd 0:0:0:0: [sda] Mode Sense: 0f 00 00 00
[ 2.456868] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 2.457007] sd 0:0:0:0: [storvsc] Sense Key : Illegal Request [current]
[ 2.457033] sd 0:0:0:0: [storvsc] Add. Sense: Invalid command operation code
[ 2.461721] sdb: sdb1 sdb2
[ 2.465939] sd 1:0:0:0: [sdb] Attached SCSI disk
[ 2.485428] sda: sda1 sda2 sda3
[ 2.488280] sd 0:0:0:0: [sda] Attached SCSI disk
[ 3.085767] SGI XFS with ACLs, security attributes, no debug enabled
[ 3.090777] XFS (dm-0): Mounting V5 Filesystem
[ 3.174537] XFS (dm-0): Ending clean mount
[ 5.910679] sd 0:0:0:0: Attached scsi generic sg0 type 0
[ 5.911728] sr 0:0:0:1: Attached scsi generic sg1 type 5
[ 5.915150] sd 1:0:0:0: Attached scsi generic sg2 type 0
[ 6.472442] XFS (sda2): Mounting V5 Filesystem
[ 6.473870] XFS (sdb2): Mounting V5 Filesystem
[ 6.566381] XFS (sda2): Ending clean mount

modinfo megaraid_sas
filename: /lib/modules/3.10.0-514.26.2.el7.x86_64/kernel/drivers/scsi/megaraid/megaraid_sas.ko
description: Avago MegaRAID SAS Driver
author: megaraidlinux.pdl@avagotech.com
version: 06.811.02.00-rh1
license: GPL
rhelversion: 7.3
srcversion: 221DC110F10B050D99A7998
alias: pci:v00001000d00000053sv*sd*bc*sc*i*
alias: pci:v00001000d00000052sv*sd*bc*sc*i*
alias: pci:v00001000d000000CFsv*sd*bc*sc*i*
alias: pci:v00001000d000000CEsv*sd*bc*sc*i*
alias: pci:v00001000d0000005Fsv*sd*bc*sc*i*
alias: pci:v00001000d0000005Dsv*sd*bc*sc*i*
alias: pci:v00001000d0000002Fsv*sd*bc*sc*i*
alias: pci:v00001000d0000005Bsv*sd*bc*sc*i*
alias: pci:v00001028d00000015sv*sd*bc*sc*i*
alias: pci:v00001000d00000413sv*sd*bc*sc*i*
alias: pci:v00001000d00000071sv*sd*bc*sc*i*
alias: pci:v00001000d00000073sv*sd*bc*sc*i*
alias: pci:v00001000d00000079sv*sd*bc*sc*i*
alias: pci:v00001000d00000078sv*sd*bc*sc*i*
alias: pci:v00001000d0000007Csv*sd*bc*sc*i*
alias: pci:v00001000d00000060sv*sd*bc*sc*i*
alias: pci:v00001000d00000411sv*sd*bc*sc*i*
intree: Y
vermagic: 3.10.0-514.26.2.el7.x86_64 SMP mod_unload modversions
signer: CentOS Linux kernel signing key
parm: lb_pending_cmds:Change raid-1 load balancing outstanding threshold. Valid Values are 1-128. Default: 4 (int)
parm: max_sectors:Maximum number of sectors per IO command (int)
parm: msix_disable:Disable MSI-X interrupt handling. Default: 0 (int)
parm: msix_vectors:MSI-X max vector count. Default: Set by FW (int)
parm: allow_vf_ioctls:Allow ioctls in SR-IOV VF mode. Default: 0 (int)
parm: throttlequeuedepth:Adapter queue depth when throttled due to I/O timeout. Default: 16 (int)
parm: resetwaittime:Wait time in seconds after I/O timeout before resetting adapter. Default: 180 (int)
parm: smp_affinity_enable:SMP affinity feature enable/disbale Default: enable(1) (int)
parm: rdpq_enable: Allocate reply queue in chunks for large queue depth enable/disable Default: disable(0) (int)
parm: dual_qdepth_disable:Disable dual queue depth feature. Default: 0 (int)
parm: scmd_timeout:scsi command timeout (10-90s), default 90s. See megasas_reset_timer. (int)
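Note that modinfo only shows that megaraid_sas ships with the kernel, not that it is in use. Whether it (or hv_storvsc) is actually loaded can be checked from /proc/modules; going by the "storvsc_host_t" lines in the dmesg above, both disks should be serviced by hv_storvsc even though sdb's inquiry string says "LSI":

```shell
# Show which of the two candidate storage modules is actually loaded.
# In a Hyper-V guest both disks sit behind hv_storvsc; the LSI name on
# sdb is just the passed-through SCSI inquiry data, not the driver.
grep -E 'hv_storvsc|megaraid_sas' /proc/modules || echo "neither module is loaded here"
```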
