As the world of networking continues to advance at faster rates than ever before, more and more demand has been placed on data centers and compute clusters to keep up with the vast amount of data traveling on their networks. In high performance computing environments today, more traditional network protocols are not capable of providing the transfer speed required for the smooth operation of a large cluster. Enter the network protocol RDMA over Converged Ethernet (RoCE). RoCE, when used in conjunction with OpenFabrics Enterprise Distribution (OFED) software and the Emulex OneConnect® OCe14000 Ethernet network adapter, combines high throughput with low latency to provide the extreme transfer speeds coveted by these environments.
Enabling RoCE requires many separate components and technologies, both software and hardware, so it’s helpful to briefly review the key pieces. The network technology RoCE uses is Remote Direct Memory Access (RDMA), which is direct access from the memory of one host into the memory of another without involvement from either operating system. This is the main reason RoCE achieves faster speeds than traditional networking. By itself, however, RoCE doesn’t cover all the networking steps needed to complete a successful data transfer, because it only works with the data at the lower network layers, those closest to the hardware (adapter). This is where OFED software is needed. OFED is open source software for RDMA and kernel bypass applications from the OpenFabrics Alliance. It provides software drivers, core kernel code, middleware and user-level interfaces for multiple operating systems. In other words, RoCE can only get the data so far, and certain OFED software carries it the rest of the way.
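Assuming a Linux host with OFED installed, the standard OFED utilities can confirm that the RDMA stack actually sees the adapter. This is a diagnostic sketch; the output depends entirely on your hardware and installed OFED version.

```shell
# List RDMA-capable devices visible to the verbs layer (libibverbs, part of OFED).
ibv_devices

# Show port details for each device; for RoCE the reported link layer is Ethernet.
ibv_devinfo

# Confirm that the core RDMA kernel modules are loaded.
lsmod | grep -E 'rdma|ib_core'
```

If `ibv_devices` lists no devices, the adapter driver or the OFED stack is not set up correctly, and nothing built on top of RDMA will work.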
Another important aspect of RoCE is that it uses the Ethernet medium. Traditionally, Ethernet does not account for data loss, relying instead on upper-layer protocols in standard networking stacks such as TCP/IP to handle it. Since RoCE does not use those protocols, it relies on switches with Priority Flow Control (PFC) to take care of data reliability in configurations with many hosts. Since no switches currently available have a priority group preconfigured for RoCE traffic, the switch configuration must be performed manually.
In summary, this blog explains how to generally set up RoCE and OFED on Linux systems featuring the OneConnect OCe14000 Ethernet network adapter. Components such as OFED and the Emulex adapter and driver have to be specifically configured to work with each other, as well as various other pieces, including the switch, user interfaces and the host systems themselves. Since this blog is focused on Linux systems, it details a Network File System (NFS) over RDMA proof of concept. For thorough, step-by-step guidelines on the complete configuration process, download the tech note here.
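To give a flavor of what the NFS over RDMA proof of concept involves, the core Linux-side steps look roughly like the following. This is a sketch based on the kernel's NFS/RDMA documentation, not the full procedure from the tech note; the export path, mount point and server address are placeholders, and exact module names can vary with kernel and OFED versions.

```shell
# --- NFS server side ---
# Load the server-side RDMA transport for nfsd.
modprobe svcrdma
# Tell nfsd to listen for RDMA mounts on the standard NFS/RDMA port.
echo rdma 20049 > /proc/fs/nfsd/portlist

# --- NFS client side ---
# Load the client-side RDMA transport.
modprobe xprtrdma
# Mount the export over RDMA instead of TCP (server address and paths are examples).
mount -o rdma,port=20049 192.168.1.10:/export /mnt/nfs
```

Once mounted, reads and writes to /mnt/nfs travel over the RDMA transport rather than TCP, which is where the throughput and latency gains come from.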
Why do we need Hadoop, and what problem does it solve in current data centers? The simple answer is that the rapid growth of social media, advances in cellular technology and the demands of data analytics have challenged the traditional methods of data storage and data processing for many large business and government entities. To solve these storage and processing challenges, organizations are starting to deploy large clusters of Apache Hadoop—a solution that processes large data sets (commonly referred to as big data) in parallel and creates multiple replicas of the data to avoid any data loss. This is done across inexpensive, industry-standard servers that are used for both storing and processing the data.
Beginning with the Emulex 10.2 software release, all current Emulex OneConnect® OCe14000 adapters gained a new FREE feature: support for SMB Direct, Microsoft's SMB 3.0 transport over RDMA, delivered here in the form of RDMA over Converged Ethernet (RoCE).
Remote Direct Memory Access, or RDMA, in computing is direct memory access from the memory of one computer into that of another without involving either operating system. The figure below illustrates this direct memory access in a very simplified way.
VMware recently introduced a new driver model called native mode in vSphere 5.5. VMware ESXi 5.5 therefore has two driver models: “vmklinux,” the legacy driver model, and the new “native” mode driver model. Moving forward, Emulex supports the native mode driver model for ESXi 5.5. Emulex Fibre Channel (FC) adapters support the inbox native mode “lpfc” driver for the FC/Fibre Channel over Ethernet (FCoE) storage protocols. The Emulex Ethernet (or Network Interface Card (NIC)) functionality has an inbox native mode driver called “elxnet.” As of this writing, the only driver still based on the legacy vmklinux model is the “be2iscsi” driver for iSCSI support.
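Assuming shell access to the ESXi 5.5 host, you can check which of these Emulex drivers are installed and which driver each NIC is bound to. This is a diagnostic sketch; the output depends on the adapters present in your host.

```shell
# List installed Emulex driver VIBs (elxnet, lpfc, be2iscsi) on the ESXi host.
esxcli software vib list | grep -i -E 'elxnet|lpfc|be2iscsi'

# Show each NIC and its bound driver; Emulex NIC ports report "elxnet"
# when the native mode driver is in use.
esxcli network nic list
```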
Emulex OCe14000 family of Ethernet and Converged Network Adapters bring new levels of performance and efficiency
Posted July 8th, 2014 by Mark Jones
When we launched the OneConnect® OCe14000, our latest Ethernet and Converged Network Adapters (CNAs), we touched on a number of performance and data center efficiency claims that are significant enough to expand on. The design goals of the OCe14000 family of adapters were to take the next step beyond what we have already delivered with our three previous generations of adapters, by meeting the performance, efficiency and scalability needs of the data center, Web-scale computing and evolving cloud networks. We believe we delivered on those goals and can claim some very innovative performance benchmarks.
UPDATE as of 10/21/14: We have made some VMQ updates and as a result posted the new 10.2.413.1 certified NIC driver on Emulex.com. The links below are for Emulex branded customers only. Please read the release notes carefully for important implementation details. Should you have any questions or need assistance contact Emulex tech support here.
Windows 2012 R2 page: http://www.emulex.com/downloads/emulex/drivers/windows/windows-server-2012-r2/drivers/
Below are the driver and firmware combinations that should be used for our OEM products supplied by HP and IBM. Please read and follow the specific instructions supplied by the OEM. Should you have any questions or need assistance contact the OEM technical support.
HP Customers: NIC driver 10.2.413.1, FW 10.2.431.2
IBM Customers: NIC driver 10.2.413.1, FW 10.2.377.24
UPDATE as of 9/9/14: For HP customers using Emulex 10GbE adapters, HP has made publicly available the latest code that addresses the VM disconnect issue when VMQ is enabled among other enhancements. The download portal is currently located here: http://ow.ly/Bi7Yt. Please read, understand and follow the update documentation provided by HP and contact HP tech support for further information. Thank you for your continued patience.
UPDATE as of 8/4/14: We are pleased to inform you that the July 2014 Special Release for Windows Server 2012 and Windows Server 2012 R2 CNA Ethernet Driver is now available for Emulex branded (non OEM) OCe111xx model adapters. Please refer to this link to download the driver kit and firmware. Please read and follow the special instructions within the Release Notes. For non-Emulex branded adapters, please contact Emulex Tech Support here.
UPDATE as of 7/23/14: Emulex is in the process of rolling out updated Microsoft Windows 2012 and 2012 R2 VMQ solutions for our customers. Testing of a Windows WHCK certified NIC driver update will be completed in 1-2 weeks. This initial “hotfix” will be for Emulex branded OCe11102 and OCe11101 products and will include a required firmware update. As testing completes on hotfix solutions for additional product configurations, notices and links will be posted on this blog. Thanks for your continued patience.
The Emulex OneConnect OCe14000 family of 10Gb and 40Gb Ethernet (10GbE and 40GbE) Network Adapters and Converged Network Adapters (CNAs) are the first of their kind to be designed and optimized for Virtual Network Fabrics (VNFs). Key to this claim is Emulex Virtual Network ExcelerationTM (VNeX) technology which, among other things, restores the hardware offloads that are normally lost because of the encapsulation that takes place with VNFs. In a VMware environment using a Virtual Extensible LAN (VXLAN) VirtualWire interface, most Network Interface Cards (NICs) see a significant reduction in throughput, because the NIC hardware offloads are lost, and a drop in hypervisor CPU efficiency, because the hypervisor must now perform much of the computation the NIC would otherwise have done. The OneConnect OCe14000 adapters use VNeX by default to restore the offload processing in hardware, thus providing non-virtual network levels of throughput and hypervisor CPU efficiency in VNF environments.
To prove this point, we set up a VXLAN working model using two VMware ESXi 5.5 host hypervisors and configured a VXLAN network connection between them. Each server hosted eight RHEL 6.3 guest virtual machines (VMs) with network access between the hypervisors over the VMware VirtualWire interface. As a network load generator, we used IXIA IxChariot to run network performance tests between the VMs. We compared two test cases: one with the hardware offloads enabled on the OCe14000 (the default behavior), and one with a NIC that does not provide hardware offloads for VXLAN.
You can see in chart 1 that the bi-directional throughput with hardware offloads is as much as 70 percent greater when compared to a NIC without the hardware offloads.
In chart 2, you can see the impact that hardware offloads have on hypervisor CPU utilization. Because the OCe14000 adapter with VNeX processes the offloads in hardware, it reduces CPU utilization by as much as half compared to standard NICs on VMware VirtualWire connections, increasing the number of VMs supported per server.
Why am I unable to add a VMware vSphere ESXi 4.x and 5.x host to Emulex OneCommand® Manager for Windows?
Posted October 8th, 2013 by Alex Amaya
I recently talked to a few customers and OEM partners who are fairly new to Emulex OneCommand Manager for Windows. As of this writing, we are using Emulex OneCommand Manager for Windows version 6.3. An issue that keeps popping up is adding a vSphere ESXi 5.1 host in order to manage Emulex adapters. The process seems fairly straightforward, but with all of the management tools out there it can be confusing at times.
The question I hear is, “How do I add a vSphere ESXi host to OneCommand Manager when it will not appear in the table of contents for discovered hosts, or I get a host unreachable error?” These problems can be overcome by using the correct login information on the ESXi host and/or the correct namespace. We hope this blog helps answer these questions.
When you start Emulex’s OneCommand Manager for Windows, it shows the hosts that were previously added, as shown in Figure 1. To add a new host, select Discovery -> TCP/IP -> Add Host…
Figure 1 shows four hosts in the managed host section in the Emulex OneCommand Manager application.
If you leave the default values for either option, “Add using default credentials” or “Add using specific CIM credentials,” you will see an error message similar to the figure below stating the host is unreachable.
Figure 2. Host unreachable due to default and incorrect login information
In order to successfully add a VMware vSphere host to Emulex OneCommand Manager for Windows you need to know a few things:
- Protocol (http or https)
- Port (Default: 5988 or 5989)
- Host name or IP address of the host
- The root login name and password
The protocol will be either http or https for ESX hosts. For ESXi hosts the protocol will be https, since sfcb is disabled by default. The default port numbers for http and https are 5988 and 5989, respectively; we will use https with port 5989. The root login name and password are the ones you entered during the initial install of VMware vSphere. Do not use the default root name and password that appear automatically when you try to discover the hosts.
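The protocol-to-port mapping above is simple enough to capture in a tiny helper, shown here only as an illustration of the rule; the function name pick_port is ours, not part of OneCommand Manager.

```shell
# pick_port: print the default CIM server port for a given protocol.
pick_port() {
  case "$1" in
    http)  echo 5988 ;;
    https) echo 5989 ;;
  esac
}

pick_port https   # 5989, the default used for ESXi hosts
```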
As for namespaces, there are two you need to be aware of, and which one applies depends on the provider you use: either the inbox or the out-of-box provider. For example, if the latest Emulex CIM provider and VMware adapter driver are installed on the host, make sure to enter the correct namespace for the out-of-box provider. Figure 3 shows the namespace to use for the out-of-box provider.
Figure 3. The red outline shows the fields which are important for adding the host to the table of contents besides the IP address or host name.
When you use the inbox driver with ESX/ESXi 4.x, make sure the namespace field has the correct information. The table below, from the Emulex OneCommand Manager manual, shows the namespace and recommended provider to use.
Table 1. Namespaces Used for Providers
If you need to know which provider is being used, log in to the vSphere host and check the name of the provider. If the provider name begins with deb_, it’s the inbox provider; if it begins with cross_, it’s the out-of-box provider.
The example below shows how to find the provider being used.
~ # esxupdate --vib-view query | grep emulex-cim-provider
deb_emulex-cim-provider_410.2.0.32.1-207424      installed   2010-04-01T07:00:00+00:00
cross_emulex-cim-provider_410.3.1.16.1235786     installed   2010-10-11T09:39:04.047082+00:00
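The deb_/cross_ prefix check above can be wrapped in a small shell helper, shown as an illustrative sketch; the function name classify_provider is ours, not an Emulex or VMware tool.

```shell
# classify_provider: given a CIM provider VIB name, report whether it is the
# inbox (deb_ prefix) or out-of-box (cross_ prefix) provider.
classify_provider() {
  case "$1" in
    deb_*)   echo "inbox" ;;
    cross_*) echo "out-of-box" ;;
    *)       echo "unknown" ;;
  esac
}

classify_provider "deb_emulex-cim-provider_410.2.0.32.1-207424"    # inbox
classify_provider "cross_emulex-cim-provider_410.3.1.16.1235786"   # out-of-box
```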
Finally, when the correct user name, password and namespace are used, you will get a message window from OneCommand Manager stating the host has been added successfully, as shown in Figure 4.
Figure 4 shows the new vSphere ESXi 5.1 host on the discovered host table of contents
Hopefully this clarifies the challenge of adding a VMware vSphere ESXi host with Emulex OneCommand Manager for Windows. If you have any questions, please feel free to reach out to us at email@example.com