

As I described in a previous article, Fusion-IO cards are not natively supported by VMware ESXi.

After installing the card and restarting the server, you can see that ESXi claims there is no persistent storage available, not even a LUN or local disk to be formatted:


So, we need to install the Fusion-IO drivers. They are available for ESX(i) 4.0, 4.1 and 5.0. Since the driver comes in .vib format, you can use the usual method for any other third-party driver on an ESXi server: first copy the .vib file onto the ESXi host, then issue the install command from the command line (or via SSH):
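The install command was shown as a screenshot in the original; on ESXi 5.0 a typical invocation looks roughly like the following (the datastore path and file name are placeholders, adjust them to wherever you uploaded the .vib):

```shell
# Install the Fusion-io VSL driver from a .vib file previously copied to a datastore.
# Path and file name below are placeholders for your actual upload location.
esxcli software vib install -v /vmfs/volumes/datastore1/scsi-iomemory-vsl.vib
```

On ESX(i) 4.x the older `esxupdate`/`vihostupdate` tooling is used instead of `esxcli`.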

Once the installation is finished, you need to reboot ESXi. When it comes up again, you will have two new elements in ESXi: a new “IOMemory VSL” entry among the storage adapters, and a 600 GB local disk (identified by ESXi as SSD):

From here, you can format the disk with VMFS and use it as a “common” local datastore, but also as host cache.

CIM Providers

If you want to monitor the health and status of the card directly from ESXi, the only way to do it is to install the CIM providers. Fusion-IO, like many other hardware vendors, provides CIM providers for several ESXi versions.

To install them in ESXi 5.0, you first need to place your server in maintenance mode.

You will then upload the software to ESXi and install it using this command:

(the --no-sig-check option is needed since the software is not digitally signed). Finally, take ESXi out of maintenance mode.
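The original shows these steps as screenshots; on ESXi 5.0 the whole sequence might look like the following sketch (the bundle file name and datastore path are placeholders):

```shell
# Put the host into maintenance mode before installing the CIM providers
vim-cmd hostsvc/maintenance_mode_enter

# Install the unsigned CIM provider bundle; path and file name are placeholders
esxcli software vib install -d /vmfs/volumes/datastore1/fio-cim-bundle.zip --no-sig-check

# Leave maintenance mode afterwards
vim-cmd hostsvc/maintenance_mode_exit
```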

The Fusion-IO card is now ready to host virtual machines.

VMware Performance Comparison: SCSI Controller and NVMe Controller

Today, more and more workloads are running in virtual machines (VMs), including workloads that require significantly more I/O in the guest operating system. In a VM on VMware vSphere, all virtual disks (VMDKs) are attached to the LSI Logical SAS SCSI adapter in the default configuration. This adapter is recognized by all operating systems without installing additional drivers, but it does not always provide the best performance, especially when an SSD RAID or NVMe storage is used. In this article we compare the virtual storage controllers LSI Logical SAS, VMware Paravirtual and NVMe.


Controller models

The standard controller in almost every VM is the LSI Logical SAS SCSI controller. This controller is recognized and supported by every guest operating system without additional drivers. It is suitable for almost any workload that does not have large I/O requirements. It is also necessary for the configuration of Microsoft Server Cluster Service (MSCS).

Starting with ESXi 4.0 and virtual hardware version 7, the VMware Paravirtual controller is available. This controller was developed for high-performance storage systems: it can handle much higher I/O rates while reducing CPU load. In order for the controller to be usable by the guest operating system, the VMware Tools must be installed.

Starting with ESXi 6.5 and virtual hardware version 13, an NVMe controller can also be added to the VM. This controller further optimizes the performance of SSD RAIDs, NVMe and PMEM storage, and it is the default controller for Windows VMs in vSphere 7.0.

The choice of the right controller depends on the applications within the VM. For an office VM, for example, relatively little storage performance is required and the standard LSI Logical SAS SCSI controller is sufficient. If more storage performance is required within the VM and the storage system behind it can deliver it, the VMware Paravirtual controller is usually more suitable. For the highest performance requirements, when using an SSD RAID, NVMe or PMEM storage, the NVMe controller is the best choice.
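The controller type is normally selected in the vSphere Client when editing the VM's virtual hardware; under the hood it ends up in the VM's .vmx configuration file. As a sketch (device numbering is illustrative), the relevant entries look roughly like this:

```
# SCSI controller 0 as VMware Paravirtual; "lsisas1068" would select LSI Logical SAS instead
scsi0.present = "TRUE"
scsi0.virtualDev = "pvscsi"

# Alternatively, a virtual NVMe controller (requires virtual hardware version 13 / ESXi 6.5+)
nvme0.present = "TRUE"
```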

Performance test

We have conducted various performance tests for different scenarios. The test scenarios are only examples; the individual parameters should be adjusted to your own workload to achieve realistic results. Details of the test system used:

Hardware / Software:

  • Supermicro Mainboard X11DPi-NT
  • 2x Intel Xeon Gold 5222 (3.80GHz, 4-Core, 16.5MB)
  • 256GB ECC Registered (RDIMM) DDR4 2666 RAM 4 Rank
  • 3.2 TB Samsung SSD NVMe PCI-E 3.0 (PM1725b)
  • ESXi 6.7.0 Update 2 (Build 13981272)

Test VM

  • Windows 10 Pro (18362)
  • 2 CPU sockets
  • 8 vCPUs
  • 8GB RAM
  • Thick-provisioned eager-zeroed VMDK
  • Tested controllers: LSI Logical SAS, VMware Paravirtual, NVMe Controller


Performance Comparison

Database Server

Database Server (8K Random; 70% Read; 8 Threads; 16 Outstanding IO)
                     IOPS         MByte/s    Latency (ms)   CPU (%)
LSI Logical SAS      78210.16     611.02     1.633          24.81
VMware Paravirtual   153723.45    1200.96    0.832          31.27
NVMe Controller      203612.54    1590.72    0.628          48.03
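As a sanity check, the latency column follows from Little's Law: with 8 threads at 16 outstanding I/Os each, 128 I/Os are in flight, and mean latency ≈ in-flight I/Os / IOPS. A small sketch using the database-server numbers above:

```python
# Little's Law sanity check: mean latency ≈ in-flight I/Os / IOPS
THREADS, OUTSTANDING_IO = 8, 16
in_flight = THREADS * OUTSTANDING_IO  # 128 I/Os in flight

results = {  # measured IOPS and latency (ms) from the database-server test
    "LSI Logical SAS": (78210.16, 1.633),
    "VMware Paravirtual": (153723.45, 0.832),
    "NVMe Controller": (203612.54, 0.628),
}

for name, (iops, measured_ms) in results.items():
    predicted_ms = in_flight / iops * 1000.0
    print(f"{name}: measured {measured_ms:.3f} ms, predicted {predicted_ms:.3f} ms")
```

The predicted values agree with the measured latencies to within a few microseconds, which confirms the table was reconstructed consistently.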

E-Mail-Server

E-Mail-Server (4K Random; 60% Read; 8 Threads; 16 Outstanding IO)
                     IOPS         MByte/s    Latency (ms)   CPU (%)
LSI Logical SAS      83403.47     325.79     1.506          23.52
VMware Paravirtual   157624.97    615.72     0.811          31.46
NVMe Controller      236622.59    924.31     0.540          52.11

File-Server

File-Server (64K Sequential; 90% Read; 8 Threads; 16 Outstanding IO)
                     IOPS         MByte/s    Latency (ms)   CPU (%)
LSI Logical SAS      44739.43     2796.21    2.860          12.29
VMware Paravirtual   53717.26     3357.33    2.382          16.87
NVMe Controller      48929.05     3058.07    2.615          14.14


Streaming-Server

Streaming Server (5120K Random; 80% Read; 8 Threads; 16 Outstanding IO)
                     IOPS         MByte/s    Latency (ms)   CPU (%)
LSI Logical SAS      458.16       2290.81    279.607        2.18
VMware Paravirtual   504.22       2521.10    253.949        12.26
NVMe Controller      505.14       2525.68    253.659        1.56


VDI-Workload

VDI-Workload (4K Random; 20% Read; 8 Threads; 8 Outstanding IO)
                     IOPS         MByte/s    Latency (ms)   CPU (%)
LSI Logical SAS      140155.89    547.48     0.456          35.69
VMware Paravirtual   163073.26    637.00     0.392          37.98
NVMe Controller      203464.89    794.78     0.314          49.55

Author: Sebastian Köbke



Retrieved from 'https://www.thomas-krenn.com/en/wikiEN/index.php?title=VMware_Performance_Comparison_SCSI_Controller_and_NVMe_Controller&oldid=4496'