Emulex Blogs

How to Build an OpenStack Cloud Computing Environment in High Performance 10GbE and 40GbE Networks

Posted March 9th, 2015 by Roy Hughes

OpenStack is a free set of software and tools for building and managing cloud computing environments for public and private clouds. It is considered a cloud operating system that controls large pools of compute, networking and storage resources throughout a data center, and it provides the following capabilities (see the short example after the list):

  • Networks
  • Virtual machines (VMs) on demand
  • Storage for VMs and arbitrary files
  • Multi-tenancy
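
To make these capabilities concrete, here is a minimal sketch of what "VMs on demand" looks like with the Icehouse-era command-line clients. The image, flavor, volume and VM names are illustrative assumptions, not values from any specific deployment:

    # List what the cloud offers
    nova image-list
    neutron net-list

    # Boot a VM on demand (replace <NET_ID> with an ID from neutron net-list)
    nova boot --flavor m1.small --image cirros --nic net-id=<NET_ID> demo-vm

    # Create a 10GB volume and attach it to the VM for persistent storage
    cinder create --display-name demo-vol 10
    nova volume-attach demo-vm <VOLUME_ID>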

If you’re a regular follower of our Implementer’s Lab Blog, however, chances are you’re technically savvy and already understand the benefits that OpenStack brings to the table for building private clouds and Infrastructure as a Service (IaaS) offerings. Our guess is that many of you have found yourself at the next stage: analyzing how to build an OpenStack cloud computing environment in a high performance 10Gb Ethernet (10GbE) or 40GbE network.


In anticipation of this, our engineers set out to configure OpenStack (Icehouse release) on Red Hat Enterprise Linux 6.5 with Emulex OneConnect® OCe14000 10GbE adapters using Emulex Network Interface Card (NIC) partitioning technology. The Emulex OneConnect OCe14000 family of 10GbE and 40GbE network adapters is optimized for virtualized data centers facing increased demand to accommodate multiple tenants in cloud computing applications. And with Emulex Universal Multi-Channel™ (UMC) and Emulex OneCommand™ Manager technology as the underlying networking essentials and tools, Emulex provides an ideal solution for building cloud computing environments.

OpenStack Cloud Convergence with Emulex OCe14000 Ethernet Adapters

After months of tests and validation, we created a solution design guide, “OpenStack Cloud Convergence with Emulex OCe14000 Ethernet Adapters”, to walk you through the steps to configure Emulex OneConnect OCe14000 adapters in a basic three-node OpenStack cloud configuration. It provides an easy-to-follow blueprint that leverages unique Emulex I/O connectivity capabilities for allocating bandwidth, converging multiple protocols, and safely isolating OpenStack core networks and applications.

The rest of this blog gives you an overview of the initial requirements for deploying OpenStack cloud convergence with UMC as the underlying technology, while the 47-page paper provides step-by-step instructions to configure the controller, compute and network nodes for OpenStack. You can download the complete design guide here.


Configure Emulex adapters with UMC

Universal Multi-Channel (UMC) is an adapter partitioning technology developed by Emulex that provides powerful traffic management and provisioning capabilities such as dynamic rate control, priorities, MAC configuration, and Virtual Local Area Network (VLAN) assignment. With UMC, the adapter’s physical functions are presented to an operating system or hypervisor as independent adapters: each UMC channel appears to the operating system as a physical port with its own MAC address and bandwidth assignment.

Servers deployed with Emulex OCe14000 10GbE and 40GbE adapters can use UMC technology for an OpenStack network, providing both traffic separation and bandwidth optimization. UMC delivers significant cost savings for cabling, adapters, switches, and power. It is switch-agnostic and works with any 10GbE or 40GbE switch.
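
A quick way to see UMC at work: once channels are configured, each one shows up to Linux as its own NIC function with its own MAC address. A minimal sketch using standard Linux tools (interface names will vary with your configuration):

    # Each UMC channel appears as a separate PCI network function
    lspci | grep -i emulex

    # Each function gets its own interface and MAC address
    ip -o link show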

UMC can provide up to 16 NIC functions per adapter, depending on the adapter model. For the basic three-node OpenStack solution covered in the guide, however, only a minimal set of NIC function assignments is defined, shown here:

[Figure: NIC function assignments for the basic three-node OpenStack solution]

Deploy a three-node OpenStack environment

Before configuring OpenStack, you will need to determine your physical network requirements, including switches, routers, subnets, IP addresses and VLAN assignments.

The following figure illustrates the basic three-node OpenStack network configuration we used in our own lab. Red Hat Enterprise Linux 6.5 is installed on all three servers. The nodes are accessed via Secure Shell (SSH) from a remote server. The nodes can also be reached via a KVM over IP (IPKVM) device, which provides an alternate management connection (not shown).
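
On the nodes themselves, each OpenStack network is typically bound to one of the UMC NIC functions through a standard RHEL 6.5 network script. A minimal sketch, with an assumed interface name and addressing (the design guide walks through the real assignments):

    # /etc/sysconfig/network-scripts/ifcfg-eth2
    # UMC NIC function dedicated to the OpenStack management network
    DEVICE=eth2
    BOOTPROTO=static
    IPADDR=192.168.10.11
    NETMASK=255.255.255.0
    ONBOOT=yes
    NM_CONTROLLED=no

Run "service network restart" (or bring up the single interface with "ifup eth2") to apply the change.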

Look to RoCE with OFED to Increase Data Center Throughput and Efficiency

Posted March 2nd, 2015 by Alex Amaya

As the world of networking continues to advance at faster rates than ever before, more and more demand is being placed on data centers and compute clusters to keep up with the vast amount of data traveling on their networks. In today’s high performance computing environments, traditional network protocols are simply not capable of providing the transfer speeds required for the smooth operation of a large cluster. Enter the network protocol RDMA over Converged Ethernet (RoCE). When used in conjunction with OpenFabrics Enterprise Distribution (OFED) software and the Emulex OneConnect® OCe14000 Ethernet network adapter, RoCE combines high throughput with low latency to provide the extreme transfer speeds coveted by these environments.

Enabling RoCE requires many separate components and technologies, both software and hardware, so it’s helpful to briefly review the key parts. The network technology underlying RoCE is Remote Direct Memory Access (RDMA): direct memory access from the memory of one host into that of another without involving either operating system. This is the main principle behind how RoCE achieves faster speeds than traditional networking. By itself, however, RoCE doesn’t cover all the networking steps needed to complete a successful data transfer, because it only works with the data at the lower network layers, closest to the hardware (adapter). This is where OFED software is needed. OFED is open source software for RDMA and kernel bypass applications from the OpenFabrics Alliance. It provides software drivers, core kernel code, middleware and user-level interfaces for multiple operating systems. In other words, RoCE can only get the data so far, and OFED software carries it the rest of the way.
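
To give a feel for where OFED sits in the stack, here is a minimal sketch of loading the kernel-side RDMA modules and confirming that OFED can see an RDMA-capable device on Linux (module availability varies by distribution and OFED build):

    # Load the core RDMA stack (the kernel side of OFED)
    modprobe rdma_cm
    modprobe ib_uverbs

    # User-space OFED utilities: list RDMA devices and their capabilities
    ibv_devices
    ibv_devinfo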

Another important aspect of RoCE is that it runs over Ethernet. Traditionally, Ethernet does not account for data loss; it relies on upper-layer protocols in standard networking models, such as TCP/IP, to do so. Since RoCE does not use these models, it relies on switches with Priority Flow Control (PFC) to provide data reliability in configurations with many hosts. Because currently available switches do not ship with a predefined priority group for RoCE traffic, the switches must be configured manually.
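
The switch commands themselves are vendor-specific, but for reference, here is a hedged sketch of the host-side DCB/PFC handling using dcbtool from the Linux lldpad package (interface name assumed; many deployments configure PFC entirely on the switch instead):

    # Enable DCB on the RoCE-facing interface and turn on PFC
    dcbtool sc eth2 dcb on
    dcbtool sc eth2 pfc e:1 a:1 w:1

    # Verify the resulting PFC configuration
    dcbtool gc eth2 pfc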

In summary, this blog explains how to set up RoCE and OFED on Linux systems featuring the OneConnect OCe14000 Ethernet network adapter. Components such as OFED and the Emulex adapter and driver have to be specifically configured to work with each other, as well as with various other pieces, including the switch, user interfaces and the host systems themselves. Since this blog is focused on Linux systems, it details a Network File System (NFS) over RDMA proof of concept. For thorough, step-by-step guidelines on the complete configuration process, download the tech note here.
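
For a taste of what the tech note covers, the heart of an NFS over RDMA proof of concept boils down to a few commands. This is a minimal sketch assuming a server at 192.168.10.1 exporting /export; see the tech note for the complete, validated procedure:

    # Server: load the NFS/RDMA transport and listen on the standard RDMA port
    modprobe svcrdma
    echo rdma 20049 > /proc/fs/nfsd/portlist

    # Client: load the client-side transport and mount the export over RDMA
    modprobe xprtrdma
    mount -t nfs -o rdma,port=20049 192.168.10.1:/export /mnt/nfs_rdma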

Redesigned Implementer’s Lab helps you find technical content for easier I/O implementation

Posted January 30th, 2015 by Alex Amaya

Today, I’d like to call your attention to the redesigned Implementer’s Lab, with a new logo and new organization of technical content to help you successfully implement I/O on leading server and storage technologies.


Continue reading…

Why Do We Need Hadoop?

Posted January 19th, 2015 by Alpika Singh

Why do we need Hadoop, and what problems does Hadoop solve in current data centers? The simple answer is that the rapid growth of social media, cellular advances and requirements for data analytics have challenged the traditional methods of data storage and data processing for many large business and government entities. To solve these storage and processing challenges, organizations are starting to deploy large clusters of Apache Hadoop, a solution that processes large data sets (commonly referred to as big data) in parallel and creates multiple replicas of the data to avoid any data loss. This is done across inexpensive, industry-standard servers that are used for both storing and processing the data.
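
To make the replication point concrete: HDFS keeps a configurable number of copies of every block (three by default), which can be checked or changed per path. A minimal sketch with the standard Hadoop CLI (the path is hypothetical):

    # Keep three copies of every block under /data/logs (-w waits until done)
    hdfs dfs -setrep -w 3 /data/logs

    # Verify block placement and replication health
    hdfs fsck /data/logs -files -blocks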

Continue reading…

Configuring SMB Direct with the Emulex OCe14000 Adapters

Posted November 6th, 2014 by Alpika Singh

Beginning with the Emulex 10.2 software release, all current Emulex OneConnect® OCe14000 adapters gained a new FREE feature called SMB Direct, which uses RDMA over Converged Ethernet (RoCE).

Remote Direct Memory Access, or RDMA, is direct memory access from the memory of one computer into that of another without involving either operating system. The figure below illustrates this direct memory access in a very simplified way.

Continue reading…

NVGRE with the Emulex OCe14000 Adapters: A peek under the hood

Posted October 28th, 2014 by Alpika Singh

Large scale virtualization and cloud computing, along with the need to reduce the costs of deploying and managing new servers, are driving the popularity of overlay networks.

Network Virtualization using Generic Routing Encapsulation (NVGRE) is a virtualized overlay network architecture designed to support multi-tenant infrastructure in public, private and hybrid clouds. It uses encapsulation and tunneling to create large numbers of virtual LANs (VLANs) for subnets that can extend across dispersed data centers, spanning both layer 2 (the data link layer) and layer 3 (the network layer) networks.

Continue reading…

What is a native mode driver in VMware vSphere ESXi 5.5?

Posted July 29th, 2014 by Alex Amaya

VMware recently introduced a new driver model, called native mode, in vSphere 5.5. VMware ESXi 5.5 has two driver models: “vmklinux,” the legacy driver model, and the new “native mode” driver model. Moving forward, Emulex supports the native mode driver model for ESXi 5.5. Emulex Fibre Channel (FC) adapters for the FC/Fibre Channel over Ethernet (FCoE) storage protocols are supported by the inbox native mode “lpfc” driver, and the Emulex Ethernet (Network Interface Card (NIC)) functionality has an inbox native mode driver called “elxnet.” As of this writing, the only driver still based on the legacy vmklinux model is the “be2iscsi” driver for iSCSI support.
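
A quick way to see which driver an adapter is actually bound to on an ESXi 5.5 host is from the ESXi shell, using standard esxcli commands:

    # Show NICs and the driver each one is using (elxnet = native mode)
    esxcli network nic list

    # Confirm which of the Emulex modules are loaded
    esxcli system module list | grep -iE 'elxnet|lpfc|be2iscsi'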

Continue reading…

Emulex OCe14000 family of Ethernet and Converged Network Adapters brings new levels of performance and efficiency

Posted July 8th, 2014 by Mark Jones

When we launched the OneConnect® OCe14000, our latest Ethernet and Converged Network Adapters (CNAs), we touched on a number of performance and data center efficiency claims that are significant enough to expand on. The design goals of the OCe14000 family of adapters were to take the next step beyond what we have already delivered with our three previous generations of adapters, by providing the performance, efficiency and scalability needed by data centers, Web-scale computing and evolving cloud networks. We believe we delivered on those goals and can claim some very innovative performance benchmarks.

Continue reading…

Microsoft Windows 2012/2012 R2 Hyper-V VMs losing network connectivity: a workaround

Posted June 19th, 2014 by Mark Jones

UPDATE as of 10/21/14: We have made some VMQ updates and as a result posted the new 10.2.413.1 certified NIC driver on Emulex.com. The links below are for Emulex branded customers only. Please read the release notes carefully for important implementation details. Should you have any questions or need assistance, contact Emulex tech support here.

Windows 2012 R2 page: http://www.emulex.com/downloads/emulex/drivers/windows/windows-server-2012-r2/drivers/

Windows 2012 page: http://www.emulex.com/downloads/emulex/drivers/windows/windows-server-2012/drivers/

Below are the driver and firmware combinations that should be used for our OEM products supplied by HP and IBM. Please read and follow the specific instructions supplied by the OEM. Should you have any questions or need assistance, contact the OEM technical support.

HP Customers: NIC driver 10.2.413.1, FW 10.2.431.2

IBM Customers: NIC driver 10.2.413.1, FW 10.2.377.24

UPDATE as of 9/9/14: For HP customers using Emulex 10GbE adapters, HP has made publicly available the latest code that addresses the VM disconnect issue when VMQ is enabled, among other enhancements. The download portal is currently located here: http://ow.ly/Bi7Yt. Please read, understand and follow the update documentation provided by HP and contact HP tech support for further information. Thank you for your continued patience.

~~

UPDATE as of 8/4/14: We are pleased to inform you that the July 2014 Special Release for Windows Server 2012 and Windows Server 2012 R2 CNA Ethernet Driver is now available for Emulex-branded (non-OEM) OCe111xx model adapters. Please refer to this link to download the driver kit and firmware. Please read and follow the special instructions within the Release Notes. For non-Emulex branded adapters, please contact Emulex Tech Support here.

~~

UPDATE as of 7/23/14: Emulex is in the process of rolling out updated Microsoft Windows 2012 and 2012 R2 VMQ solutions for our customers. Testing of a Windows WHCK certified NIC driver update will be completed in 1-2 weeks. This initial “hotfix” will be for Emulex branded OCe11102 and OCe11101 products and will include a required firmware update. As testing completes on hotfix solutions for additional product configurations, notices and links will be posted on this blog. Thanks for your continued patience.

~~

Continue reading…

Virtual Network Fabric Performance Improvements Using Emulex VNeX Technology

Posted January 22nd, 2014 by Mark Jones

The Emulex OneConnect OCe14000 family of 10Gb and 40Gb Ethernet (10GbE and 40GbE) Network Adapters and Converged Network Adapters (CNAs) are the first of their kind to be designed and optimized for Virtual Network Fabrics (VNFs). Key to this claim is Emulex Virtual Network Exceleration™ (VNeX) technology which, among other things, restores the hardware offloads that are normally lost because of the encapsulation that takes place with VNFs. In a VMware environment using a Virtual Extensible LAN (VXLAN) VirtualWire interface, most Network Interface Cards (NICs) see a significant reduction in throughput because the NIC hardware offloads are lost, along with a loss of hypervisor CPU efficiency, because the hypervisor must now perform much of the computation the NIC would otherwise have done. The OneConnect OCe14000 adapters use VNeX by default to restore the offload processing in hardware, thus providing non-virtualized network levels of throughput and hypervisor CPU efficiency in VNF environments.
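
As an aside, you can check whether a given NIC exposes this kind of encapsulation offload at all. On a Linux host, ethtool reports the tunnel-related offload features (feature names vary by driver and kernel version); on ESXi, the equivalent detail comes from the driver documentation:

    # List offload features related to segmentation and tunneled (encapsulated) traffic
    ethtool -k eth0 | grep -iE 'segmentation|tnl'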

To prove this point, we set up a VXLAN working model using two VMware ESXi 5.5 host hypervisors and configured a VXLAN network connection between them. Each server hosted eight RHEL 6.3 guest virtual machines (VMs), with network access between the hypervisors using the VMware VirtualWire interface. As a network load generator, we used IXIA IxChariot to perform network performance tests between the VMs. We compared two test cases: one with the hardware offloads enabled on the OCe14000 (the default behavior) and another with a NIC that does not provide hardware offloads for VXLAN.

You can see in chart 1 that the bi-directional throughput with hardware offloads is as much as 70 percent greater when compared to a NIC without the hardware offloads.

In chart 2, you can see the impact that hardware offloads have on hypervisor CPU utilization. By processing the offloads in hardware, the OCe14000 adapter with VNeX cuts CPU utilization by as much as half compared to standard NICs used for VMware VirtualWire connections, increasing the number of VMs supported per server.
