Difficulty getting Fibre Channel storage working

Issues related to hardware problems
liveoaksf
Posts: 7
Joined: 2016/01/22 22:13:05

Difficulty getting Fibre Channel storage working

Post by liveoaksf » 2017/03/09 01:01:56

CentOS 7 is running on a Dell R710 server without issues. I inherited an old disk array (nStor "NexStor Wahoo") and successfully created two RAID arrays within the enclosure using NexStor software that only runs on Windows. Now I want to attach this storage to the CentOS server. So I bought an HBA card (Emulex LPe11000, listed as supported by both Dell and RHEL 7), installed it in the R710, and configured it in Point to Point mode. However, while CentOS detects the attached device, it won't let me configure the attached storage -- e.g., partition it with fdisk.
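
For reference, the topology and link state the HBA actually negotiated can be read from sysfs through the fc_host class that scsi_transport_fc provides. A minimal check, assuming the port ended up as host1 (verify the host number with ls /sys/class/fc_host/):

Code: Select all

# host1 is an assumption -- substitute the entry found under /sys/class/fc_host/
$ cat /sys/class/fc_host/host1/port_state
$ cat /sys/class/fc_host/host1/port_type
$ cat /sys/class/fc_host/host1/speed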

The module used for this HBA card is lpfc:

Code: Select all

$ lsmod |grep lpfc
lpfc                  713897  0 
crc_t10dif             12714  2 lpfc,sd_mod
scsi_transport_fc      64056  1 lpfc

Code: Select all

$ modinfo lpfc
filename:       /lib/modules/3.10.0-514.10.2.el7.x86_64/kernel/drivers/scsi/lpfc/lpfc.ko
version:        0:11.1.0.2
author:         Emulex Corporation - tech.support@emulex.com
description:    Emulex LightPulse Fibre Channel SCSI driver 11.1.0.2
license:        GPL
rhelversion:    7.3
srcversion:     09E2FDF703389415BEFDD75
..

Code: Select all

$ sudo lspci -v -s 07:00.0
07:00.0 Fibre Channel: Emulex Corporation Zephyr-X LightPulse Fibre Channel Host Adapter (rev 02)
	Subsystem: Emulex Corporation Zephyr-X LightPulse Fibre Channel Host Adapter
	Flags: bus master, fast devsel, latency 0, IRQ 38
	Memory at df2fc000 (64-bit, non-prefetchable) [size=4K]
	Memory at df2fd000 (64-bit, non-prefetchable) [size=256]
	I/O ports at e800 [size=256]
	Expansion ROM at df200000 [disabled] [size=256K]
	Capabilities: [58] Power Management version 2
	Capabilities: [60] MSI: Enable+ Count=1/16 Maskable- 64bit+
	Capabilities: [44] Express Endpoint, MSI 00
	Capabilities: [100] Advanced Error Reporting
	Capabilities: [12c] Power Budgeting <?>
	Kernel driver in use: lpfc
	Kernel modules: lpfc
	
When the FC storage is connected, the system sees and adds two new SCSI generic devices at /dev/sg2 and /dev/sg3, but fdisk -l does not show anything besides the installed HDDs:

Code: Select all

$ sudo fdisk -l

Disk /dev/sda: 249.5 GB, 249510756352 bytes, 487325696 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00081628

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1026047      512000   83  Linux
/dev/sda2         1026048   487325695   243149824   8e  Linux LVM

Disk /dev/mapper/centos-root: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/centos-swap: 16.9 GB, 16911433728 bytes, 33030144 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/centos-home: 178.3 GB, 178316640256 bytes, 348274688 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
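
A quick way to confirm whether the kernel created any block devices for the array at all, and to force the FC host to rescan its targets, is sketched below (host1 is assumed -- use the host number your HBA actually got):

Code: Select all

# lsblk only lists block devices, so the array will be absent here if
# no /dev/sd* node was created for it
$ lsblk

# Ask the SCSI host behind the HBA to rescan all channels/targets/LUNs
$ echo "- - -" | sudo tee /sys/class/scsi_host/host1/scan
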
I do get a couple of error messages when connecting storage, but I'm not convinced these are critical:

Code: Select all

$ dmesg
..
[   89.105845] lpfc 0000:07:00.0: 0:1303 Link Up Event x1 received Data: x1 x1 x8 x2 x0 x0 0
[   89.105915] lpfc 0000:07:00.0: 0:1309 Link Up Event npiv not supported in loop topology
[   89.106336] lpfc 0000:07:00.0: 0:(0):2858 FLOGI failure Status:x3/x18 TMO:x0 Data x0 x0
[   89.106676] lpfc 0000:07:00.0: 0:(0):2858 FLOGI failure Status:x3/x18 TMO:x0 Data x0 x0
[   89.106916] lpfc 0000:07:00.0: 0:(0):2858 FLOGI failure Status:x3/x18 TMO:x0 Data x0 x0
[   89.106952] lpfc 0000:07:00.0: 0:(0):0100 FLOGI failure Status:x3/x18 TMO:x0
[   89.134082] scsi 1:0:0:0: Processor         nStor    NexStor Wahoo         PQ: 0 ANSI: 3
[   89.134494] scsi 1:0:0:0: Attached scsi generic sg2 type 3
[   89.136783] scsi 1:0:0:4: Processor         nStor    NexStor Wahoo         PQ: 0 ANSI: 3
[   89.136972] scsi 1:0:0:4: Attached scsi generic sg3 type 3
..
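
One way to re-trigger the FLOGI/login sequence without reseating the cable is the issue_lip attribute that scsi_transport_fc exposes, then watching the kernel log for new lpfc messages (host1 again assumed):

Code: Select all

# Force the port to redo link initialization and login
$ echo 1 | sudo tee /sys/class/fc_host/host1/issue_lip
# Watch for the resulting lpfc messages
$ dmesg | tail -n 20
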
Here is output from lsscsi:

Code: Select all

$ lsscsi -l
[0:0:32:0]   enclosu DP       BACKPLANE        1.07  -        
  state=running queue_depth=256 scsi_level=6 type=13 device_blocked=0 timeout=90
[0:2:0:0]    disk    DELL     PERC 6/i         1.22  /dev/sda 
  state=running queue_depth=256 scsi_level=6 type=0 device_blocked=0 timeout=90
[1:0:0:0]    process nStor    NexStor Wahoo          -        
  state=running queue_depth=30 scsi_level=4 type=3 device_blocked=0 timeout=0
[1:0:0:4]    process nStor    NexStor Wahoo          -        
  state=running queue_depth=30 scsi_level=4 type=3 device_blocked=0 timeout=0

Trying $ fdisk /dev/sg2 or $ fdisk /dev/sg3 just hangs with no console output.
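
Note that /dev/sg2 and /dev/sg3 are SCSI generic (character) device nodes, so fdisk is not expected to work on them -- it needs a block device such as /dev/sdb. As a sketch, the pairing between sg nodes and any block nodes, and the device type the array reports, can be checked like this (sg_inq comes from the sg3_utils package):

Code: Select all

# Show block and sg names side by side; "-" in the device column means
# no /dev/sd* node was created for that target
$ lsscsi -g

# Ask the device what it claims to be (peripheral device type)
$ sudo sg_inq /dev/sg2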

tunk
Posts: 1205
Joined: 2017/02/22 15:08:17

Re: Difficulty getting Fibre Channel storage working

Post by tunk » 2017/03/11 16:09:44

Could you try parted instead of fdisk?
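
For example, assuming the array eventually shows up as a block device (/dev/sdb below is only a placeholder):

Code: Select all

# Print whatever partition table parted can read from the device
$ sudo parted /dev/sdb print

# Or create a fresh GPT label and one partition spanning the whole disk
$ sudo parted /dev/sdb mklabel gpt
$ sudo parted /dev/sdb mkpart primary 0% 100%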

aks
Posts: 3073
Joined: 2014/09/20 11:22:14

Re: Difficulty getting Fibre Channel storage working

Post by aks » 2017/03/13 17:42:50

Are you sure both endpoints are running in N_Port mode?
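
One way to check this from the Linux side: the negotiated port type is exposed through sysfs, and the lpfc driver's topology preference can be pinned with its lpfc_topology module parameter. The host number and the value 2 ("point-to-point only" on recent lpfc versions) below are assumptions -- confirm both against ls /sys/class/fc_host/ and modinfo -p lpfc before using them:

Code: Select all

# What did the port actually come up as (NPort, LPort, Point-To-Point, ...)?
$ cat /sys/class/fc_host/host1/port_type

# List the driver's tunables, including the documented lpfc_topology values
$ modinfo -p lpfc | grep -i topology

# Pin the topology at module load time, then reload lpfc (or rebuild the
# initramfs and reboot if the module is loaded from it)
$ echo "options lpfc lpfc_topology=2" | sudo tee /etc/modprobe.d/lpfc.conf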
