Emulex Blogs

Expresslane Demo Video Highlights “How to Double Your Throughput”

Posted April 17th, 2015 by Allison Hope

Beginning with the Emulex 10.2 software release, all current Emulex LightPulse® LPe16002 Fibre Channel (FC) Host Bus Adapters (HBAs) gained a new, FREE feature called Emulex ExpressLane™.

Just like a fast lane to the office, ExpressLane allows traffic to specific Logical Unit Numbers (LUNs) to be prioritized. For example, ExpressLane can be used to give flash workloads higher priority than less time-sensitive workloads and traffic, thereby lowering latency and reducing jitter.

The feature is really easy to set up as it is enabled by default on the ports of the adapter; the administrator only needs to use Emulex OneCommand® Manager to choose a LUN or LUNs to prioritize.

It takes just a simple right click to select the feature. The yellow lightning symbol in the screenshot below shows that it is selected for that LUN.


Once a LUN has been assigned ExpressLane status, the commands for that LUN are flagged as being given priority.

We’ve created this short video to demonstrate how easy it is to improve I/O performance with ExpressLane:

Click here to watch a two-minute ExpressLane video.

The video demo shows the throughput to five LUNs on an LPe16002 port before ExpressLane is enabled. After it is enabled on one of the LUNs, we clearly see nearly double the throughput in the Medusa Test Tools window.

  • In addition, ExpressLane can be configured with a quality of service (QoS) value, carried in the header of every I/O frame of a prioritized LUN, that can then be acted on by switch QoS policy.
  • The switch takes the prioritization bit, called the CS_CTL (Class Specific Control) bit, from the frame and processes it to ensure the frame is not deprioritized at the switch layer.
  • The CS_CTL bit can also be used by storage arrays to prioritize processing at the final layer.

If you are interested in using the Priority field of ExpressLane, please contact your switch and storage vendors for more information on the CS_CTL settings supported by these devices.


Best practices for adjusting the device queue depth of a 16GFC HBA in VMware Horizon View 6.0

Posted April 2nd, 2015 by Alex Amaya

There are many best practices with VMware solutions that cover networking, storage, deployment, virtual desktop infrastructure (VDI) and so on. Additionally, there are best practices on the same topics from each major OEM, such as HP, DELL, IBM, and Lenovo, to name a few. So of course, we had to jump on the bandwagon and create some of our own best practices with Emulex LightPulse® Gen 5 (16Gb) Fibre Channel (FC) Host Bus Adapters (HBAs) in a VMware Horizon View 6.0 environment. The result of this effort is in our new Implementer’s Lab Guide, Configure VMware Horizon View 6.0 with Emulex Gen 5 (16Gb) Fibre Channel Host Bus Adapters.

Starting with VMware vSphere 5.5, support for end-to-end Gen 5 FC, the increasing demand for monster virtual machines (VMs) and newer virtual hardware versions make it ideal for FC storage area networks (SANs) to step in and address scalability, cloud and VDI concerns.

At our Emulex tech marketing labs, we took a stab at trying to understand the workloads, block sizes, and I/O generated by a VDI environment with VMware Horizon View 6.0. We configured a single Xeon-based host with a Gen 5 FC HBA connected to an all-flash array. The two FC ports were configured in a zone. The VMs all resided on a 4TB Logical Unit Number (LUN) from the all-flash array. We took a snapshot of a Windows 7 VM golden image and provisioned 200 VMs with VMware View Composer. The VMs each ran Windows 7 with a single vCPU, a 4GB HDD and 2GB of RAM. To create a load and simulate a VDI environment, we used what I would call a static load with LoginVSI—an exceptional tool for measuring and testing ESXi in a VDI environment by running simulated workloads of different sizes. And of course, we followed VMware’s best practices.

The following figure shows the layout of our lab test setup:

During the setup of our SAN, we fiddled around with the device queue depth a bit. Because it was a single-host test with VMware View 6.0, we were expecting minimal change to throughput or bandwidth from the Gen 5 HBA. When you do your own setup, adding more ESXi hosts to the cluster and increasing the VM count into the thousands, you can anticipate increasing the queue depth parameter to 128, or even up to 254. Testing should be done prior to deploying a VDI environment into production to understand and obtain the best performance. In order to test the adapter’s queue depth, we updated the driver to the new native mode lpfc driver.

The default device queue depth in ESXi 5.0 changed from 32 to 64 for storage I/O control improvements. Emulex adapters still default to 30—as two commands are reserved—but there are other FC adapters whose queue depths have changed. Proceed with caution before making any changes to the adapter queue depth.

To adjust the queue depth for Emulex FC HBAs:

  1. Ensure the HBA module is loaded:
     # esxcli system module list | grep lpfc
  2. Change the LUN queue depth:
     # esxcli system module parameters set -p lpfc_lun_queue_depth=64 -m lpfc
  3. Reboot the host.
  4. Confirm the LUN queue depth has changed:
     # esxcli system module parameters list -m lpfc | grep lun_queue_depth

The last command should show the new value:

lpfc_lun_queue_depth  int  64  Max number of FCP commands we can queue to a specific LUN
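
To double-check that the new value has actually reached the devices, you can also look at the per-device maximum that ESXi reports. This is just a quick sanity check we would suggest (the grep pattern assumes SAN devices identified as naa.*, and the field name may differ slightly between ESXi builds):

  # esxcli storage core device list | grep -E "^naa|Device Max Queue Depth"

Each SAN device should report a Device Max Queue Depth that matches the lpfc_lun_queue_depth you set.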

Best practice:

  • If still installed, remove the legacy lpfc820 FC driver and update to the latest native mode lpfc driver (see the removal sketch after this list).
  • Update the LUN queue depth if recommended by the storage vendor array.
  • Install the latest Emulex Common Information Model (CIM) provider to use the Emulex OneCommand® Manager application.
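
For the first item, here is a minimal sketch of removing the legacy driver from an ESXi host. The VIB name used below (scsi-lpfc820) is an assumption, so confirm the actual name with the list command before removing anything:

  # esxcli software vib list | grep lpfc
  # esxcli software vib remove -n scsi-lpfc820
  # reboot

After the reboot, only the native mode lpfc module should be loaded for Emulex FC adapters.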

To view more best practices for VMware View 6.0, you can download our new whitepaper, Configure VMware Horizon View 6.0 with Emulex Gen 5 (16Gb) Fibre Channel Host Bus Adapters or visit the Implementer’s Lab where you’ll find a plethora of white papers teeming with best practices.


How to Build an OpenStack Cloud Computing Environment in High Performance 10GbE and 40GbE Networks

Posted March 9th, 2015 by Roy Hughes

OpenStack is a free set of software and tools for building and managing cloud computing environments for public and private clouds. It is considered a cloud operating system that can control large pools of compute, network and storage resources throughout a data center, and it provides the following capabilities:

  • Networks
  • Virtual machines (VMs) on demand
  • Storage for VMs and arbitrary files
  • Multi-tenancy

If you’re a regular follower of our Implementer’s Lab Blog, however, chances are you’re technically savvy and already understand the benefits that OpenStack brings to the table for building private clouds and Infrastructure as a Service (IaaS) offerings. Our guess is that many of you have found yourself at the next stage of analyzing how to build an OpenStack cloud computing environment in a high performance 10Gb Ethernet (10GbE) or 40GbE network.


In anticipation of this, our engineers set out to configure OpenStack (Icehouse release) on Red Hat Enterprise Linux 6.5 with Emulex OneConnect® OCe14100 10GbE adapters using Emulex Network Interface Card (NIC) partitioning technology. The Emulex OneConnect OCe14000 family of 10GbE and 40GbE network adapters is optimized for virtualized data centers that face increased demand for accommodating multiple tenants in cloud computing applications. And with Emulex Universal Multi-Channel™ (UMC) and Emulex OneCommand® Manager technology as the underlying networking essentials and tools, Emulex provides an ideal solution for building cloud computing environments.

OpenStack Cloud Convergence with Emulex OCe14000 Ethernet Adapters

After months of tests and validation, we created a solution design guide, “OpenStack Cloud Convergence with Emulex OCe14000 Ethernet Adapters”, to walk you through the steps to configure Emulex OneConnect OCe14000 adapters in a basic three-node OpenStack cloud configuration. It provides an easy-to-follow blueprint leveraging unique Emulex I/O connectivity capabilities for allocating bandwidth, converging multiple protocols, and safely isolating OpenStack core networks or applications.

The rest of this blog gives you an overview of the initial requirements for deploying OpenStack cloud convergence with UMC as the underlying technology, while the 47-page paper provides step-by-step instructions to configure the controller, compute and network nodes for OpenStack. You can download the complete design guide here.


Configure Emulex adapters with UMC

Universal Multi-Channel (UMC) is an adapter partitioning technology developed by Emulex that provides powerful traffic management and provisioning capabilities such as dynamic rate control, priorities, MAC configuration, and Virtual Local Area Network (VLAN) assignment. With UMC, the physical functions are presented to an operating system or hypervisor as independent adapters, and each UMC channel appears to the operating system as a physical port with its own MAC address and bandwidth assignment.
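
Nothing special is required on the host side to consume these channels; each one simply appears as another NIC. As a quick illustration (a sketch only; interface names and counts will differ with your UMC configuration), on a Red Hat Enterprise Linux 6.5 node you can list the Emulex functions and their MAC addresses with:

  # lspci | grep -i emulex
  # ip -o link show

Each UMC channel shows up with its own MAC address, while the bandwidth assignments are enforced by the adapter itself.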

Servers deployed with Emulex OCe14000 10GbE and 40GbE adapters can use Emulex UMC technology for an OpenStack network, providing both traffic separation and bandwidth optimization. UMC provides data centers with significant cost savings for cabling, adapters, switches, and power. UMC is switch-agnostic and works with any 10GbE or 40GbE switch.

UMC can provide up to 16 NIC functions per adapter, depending on the adapter model. In the design guide, however, a minimum set of NIC function assignments is defined for a basic three-node OpenStack solution, shown here:


Deploy a three-node OpenStack environment

Before configuring OpenStack, you will need to determine your physical network requirements, including switches, routers, subnets, IP addresses and VLAN assignments.

The following figure illustrates the basic three-node OpenStack network configuration we used in our own lab. Red Hat Enterprise Linux 6.5 is installed on all three servers. The nodes are accessed via Secure Shell (SSH) from a remote server, and they can also be reached through a KVM over IP (IPKVM) device, which provides an alternate management connection (not shown).
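
As a rough illustration of the per-node plumbing (the interface name and addresses below are invented for the example; the design guide lists the actual assignments), a UMC NIC function dedicated to, say, the OpenStack management network is configured on Red Hat Enterprise Linux 6.5 like any other interface:

  # cat /etc/sysconfig/network-scripts/ifcfg-eth4
  DEVICE=eth4
  BOOTPROTO=static
  IPADDR=192.168.10.11
  NETMASK=255.255.255.0
  ONBOOT=yes

Repeat this for the other OpenStack networks (tunnel/data, external and so on) with their own addresses on each node, and verify that the controller, compute and network nodes can reach one another before installing the OpenStack packages.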

Look to RoCE with OFED to Increase Data Center Throughput and Efficiency

Posted March 2nd, 2015 by Emulex

As the world of networking continues to advance at faster rates than ever before, more and more demand has been placed on data centers and compute clusters to keep up with the vast amount of data traveling on their networks. In high performance computing environments today, more traditional network protocols are not capable of providing the transfer speed required for the smooth operation of a large cluster. Enter the network protocol RDMA over Converged Ethernet (RoCE). RoCE, when used in conjunction with OpenFabrics Enterprise Distribution (OFED) software and the Emulex OneConnect® OCe14000 Ethernet network adapter, combines high throughput with low latency to provide the extreme transfer speeds coveted by these environments.

Enabling RoCE requires many separate components and technologies, both software and hardware, so it’s helpful to briefly review what some of these key parts are. The network technology RoCE uses is Remote Direct Memory Access (RDMA), which is direct memory access from the memory of one host into that of another without involvement from the operating system. This is the main principle behind how RoCE achieves faster speeds than traditional networking. By itself, however, RoCE doesn’t cover all the networking steps needed to complete a successful data transfer, because it only works with the data at the low network levels that are closest to the hardware (the adapter). This is where OFED software is needed. OFED is open source software for RDMA and kernel bypass applications from the OpenFabrics Alliance. It provides software drivers, core kernel code, middleware and user-level interfaces for multiple operating systems. In other words, RoCE can only get the data so far, and certain OFED software carries it the rest of the way.
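
Once the adapter driver and the OFED packages are installed, the RDMA device should be visible to the verbs layer. A quick sanity check we would suggest (a sketch only; package and device names vary by distribution and adapter) is to run the standard libibverbs utilities:

  # ibv_devices
  # ibv_devinfo

If no device is listed, the RoCE function of the adapter or the OFED stack is not set up correctly, and there is little point moving on to the higher layers.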

Another important aspect of RoCE is that it uses the Ethernet medium. Traditionally, Ethernet does not account for data loss, so it relies on upper layer protocols in standard networking models such as TCP/IP to do it. Since RoCE does not use these models, it relies on switches with Priority Flow Control (PFC) to take care of data reliability in configurations with many hosts. Since no switches currently available have a priority group for RoCE traffic, the configuration of switches must be performed manually.

In summary, this blog explains how to generally set up RoCE and OFED on Linux systems featuring the OneConnect OCe14000 Ethernet network adapter. Components such as OFED and the Emulex adapter and driver have to be specifically configured to work with each other, as well as various other pieces, including the switch, user interfaces and the host systems themselves. Since this blog is focused on Linux systems, it details a Network File System (NFS) over RDMA proof of concept. For thorough, step-by-step guidelines on the complete configuration process, download the tech note here.
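
To give a flavor of that proof of concept, here is a minimal client-side sketch of mounting an NFS export over RDMA. It assumes the server is already exporting /share over RDMA on the default port 20049; the tech note walks through both ends in full:

  # modprobe xprtrdma
  # mkdir -p /mnt/rdma
  # mount -o rdma,port=20049 192.168.20.1:/share /mnt/rdma

If the mount succeeds, traffic to that export bypasses the normal TCP path and runs over the RoCE fabric.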

Redesigned Implementer’s Lab helps you find technical content for easier I/O implementation

Posted January 30th, 2015 by Alex Amaya

Today, I’d like to call your attention to the redesigned Implementer’s Lab, with a new logo and new organization of technical content to help you successfully implement I/O on leading server and storage technologies.


Continue reading…

Why Do We Need Hadoop?

Posted January 19th, 2015 by Alpika Singh

Why do we need Hadoop, or what problem does Hadoop solve in today’s data centers? The simple answer is that the rapid growth of social media, cellular advances and requirements for data analytics have challenged the traditional methods of data storage and data processing for many large business and government entities. To solve the data storage and processing challenges, organizations are starting to deploy large clusters of Apache Hadoop—a solution that uses parallel processing of large data sets (commonly referred to as big data) and creates multiple replicas of the data to avoid data loss. This is done across inexpensive, industry-standard servers that are used for both storing and processing the data.

Continue reading…

Configuring SMB Direct with the Emulex OCe14000 Adapters

Posted November 6th, 2014 by Alpika Singh

Beginning with the Emulex 10.2 software release, all current Emulex OneConnect® OCe14000 adapters gained a new FREE feature called SMB Direct, which is a form of RDMA over Converged Ethernet (RoCE).

Remote Direct Memory Access, or RDMA, is direct memory access from the memory of one computer into that of another without involving either computer’s operating system. The figure below illustrates direct memory access from another computer in a very simplified way.

Continue reading…

NVGRE with the Emulex OCe14000 Adapters: A peek under the hood

Posted October 28th, 2014 by Alpika Singh

Large scale virtualization and cloud computing, along with the need to reduce the costs of deploying and managing new servers, are driving the popularity of overlay networks.

Network Virtualization using Generic Routing Encapsulation (NVGRE) is a virtualized overlay network architecture designed to support multi-tenant infrastructure in public, private and hybrid clouds. It uses encapsulation and tunneling to create large numbers of virtual LANs (VLANs) for subnets that can extend across dispersed data centers and across layer 2 (the data link layer) and layer 3 (the network layer) networks.

Continue reading…

What is a native mode driver in VMware vSphere ESXi 5.5?

Posted July 29th, 2014 by Alex Amaya

VMware recently introduced a new driver model, called native mode, in vSphere 5.5. VMware ESXi 5.5 has two driver models: “vmklinux,” the legacy driver model, and the new “native” mode driver model. Moving forward, Emulex supports the native mode driver model for ESXi 5.5. Emulex Fibre Channel (FC) adapters use the inbox native mode “lpfc” driver for the FC and Fibre Channel over Ethernet (FCoE) storage protocols. The Emulex Ethernet (or Network Interface Card (NIC)) functionality has an inbox native mode driver called “elxnet.” The only driver that is still vmklinux-based (legacy mode) as of this writing is the “be2iscsi” driver for iSCSI support.
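
As a quick way to see which of these modules an ESXi 5.5 host has loaded (just a sketch using the module names mentioned above):

  # esxcli system module list | grep -E "lpfc|elxnet|be2iscsi"

The Is Loaded and Is Enabled columns in the output confirm whether the native mode drivers are active.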

Continue reading…

Emulex OCe14000 family of Ethernet and Converged Network Adapters bring new levels of performance and efficiency

Posted July 8th, 2014 by Mark Jones

When we launched the OneConnect® OCe14000, our latest Ethernet and Converged Network Adapters (CNAs), we touched on a number of performance and data center efficiency claims that are significant enough to expand on. The design goals of the OCe14000 family of adapters were to take the next step beyond what we had already delivered with our three previous generations of adapters, by providing the performance, efficiency and scalability needed by the data center, Web-scale computing and evolving cloud networks. We believe we delivered on those goals and can claim some very innovative performance benchmarks.


Continue reading…
