Emulex Blogs

The Emulex OCe14000 family of Ethernet and Converged Network Adapters brings new levels of performance and efficiency

Posted July 8th, 2014 by Mark Jones

When we launched the OneConnect® OCe14000, our latest Ethernet and Converged Network Adapters (CNAs), we touched on a number of performance and data center efficiency claims that are significant enough to expand on.  The design goals of the OCe14000 family of adapters were to take the next step beyond what we have already delivered with our three previous generations of adapters, by meeting the performance, efficiency and scalability needs of the data center, Web-scale computing and evolving cloud networks.  We believe we delivered on those goals and can claim some very innovative performance benchmarks.

 

Continue reading…

Microsoft Windows 2012/2012 R2 Hyper-V VMs losing network connectivity: a workaround

Posted June 19th, 2014 by Mark Jones

UPDATE AS OF 7/23/14: Emulex is in the process of rolling out updated Microsoft Windows 2012 and 2012 R2 VMQ solutions for our customers.  Testing of a Windows WHCK-certified NIC driver update will be completed in 1-2 weeks. This initial “hotfix” will be for Emulex-branded OCe11102 and OCe11101 products and will include a required firmware update.  As testing completes on hotfix solutions for additional product configurations, notices and links will be posted on this blog.  Thanks for your continued patience.

~~

It has been reported that some customers using Microsoft Windows 2012 or Windows 2012 R2 and Hyper-V may experience a loss of network connectivity with one or more virtual machines (VMs).  Emulex has identified some VMQ-specific cases involving our network adapters and the root causes for the issues.  We are working closely with our OEM partners to prepare a field-upgradeable solution.  As we track the releases of the update by Emulex and our server OEMs, we will use this blog to inform you of the current status.  In the interim, we will advise on operational alternatives to ensure VM connectivity and stability.

Continue reading…

Virtual Network Fabric Performance Improvements Using Emulex VNeX Technology

Posted January 22nd, 2014 by Mark Jones

The Emulex OneConnect OCe14000 family of 10Gb and 40Gb Ethernet (10GbE and 40GbE) Network Adapters and Converged Network Adapters (CNAs) are the first of their kind to be designed and optimized for Virtual Network Fabrics (VNFs).  Key to this claim is Emulex Virtual Network Exceleration™ (VNeX) technology which, among other things, restores the hardware offloads that are normally lost because of the encapsulation that takes place with VNFs.  In a VMware environment that uses a Virtual Extensible LAN (VXLAN) VirtualWire interface, most Network Interface Cards (NICs) see a significant reduction in throughput because the NIC hardware offloads are lost, along with a loss of hypervisor CPU efficiency because the hypervisor must now perform much of the computation that the NIC would otherwise have done. The OneConnect OCe14000 adapters use VNeX by default to restore the offload processing in hardware, providing non-virtual network levels of throughput and hypervisor CPU efficiency in VNF environments.

To prove this point, we set up a VXLAN working model using two VMware ESXi 5.5 host hypervisors and configured a VXLAN network connection between them.  Each server hosted eight RHEL 6.3 guest virtual machines (VMs), with network access between the hypervisors using the VMware VirtualWire interface.  As a network load generator, we used IXIA IxChariot to run network performance tests between the VMs.  We compared two test cases: one with the hardware offloads enabled on the OCe14000 (the default behavior) and another with a NIC that does not provide hardware offloads for VXLAN.

You can see in chart 1 that the bi-directional throughput with hardware offloads is as much as 70 percent greater when compared to a NIC without the hardware offloads.

In chart 2, you can see the impact that hardware offloads have on hypervisor CPU utilization. Because the OCe14000 adapter with VNeX can process the offloads in hardware, it reduces CPU utilization by as much as half compared to standard NICs used for VMware VirtualWire connections, increasing the number of VMs supported per server.

Why am I unable to add a VMware vSphere ESXi 4.x or 5.x host to Emulex OneCommand® Manager for Windows?

Posted October 8th, 2013 by Alex Amaya

I recently talked to a few customers and OEM partners who are fairly new to Emulex OneCommand Manager for Windows. As of this writing, we are using Emulex OneCommand Manager for Windows version 6.3. An issue that keeps popping up is adding vSphere ESXi 5.1 hosts in order to manage our Emulex adapters. The process seems fairly straightforward, but with all of the management tools out there it can be confusing at times.

The issue I hear is, “How do I add a vSphere ESXi host to OneCommand Manager? It will not appear in the table of contents for discovered hosts, or I get a host unreachable error.” These problems can be overcome by using the correct login information on the ESXi host and/or the correct namespace. We hope this blog helps answer these questions.

When you start Emulex’s OneCommand Manager for Windows, it shows the hosts that were previously added, as shown in Figure 1. To add a new host, select Discovery -> TCP/IP -> Add Host…

Figure 1 shows four hosts in the managed host section in the Emulex OneCommand Manager application.

If you leave the default values for either option, “Add using default credentials” or “Add using specific CIM credentials,” you will see an error message similar to the figure below stating that the host is unreachable.

Figure 2. Host unreachable due to default and incorrect login information

In order to successfully add a VMware vSphere host to Emulex OneCommand Manager for Windows you need to know a few things:

  1. Protocol (http or https)
  2. Port (Default: 5988 or 5989)
  3. Host name or IP address of the host
  4. The root login name and password

For ESX hosts, the protocol to use will be either http or https. For ESXi hosts, the protocol will be https, since http access to sfcb is disabled by default.  The default port numbers used for http and https are 5988 and 5989, respectively; we will use https with port 5989. The root login name and password are the ones you entered during the initial install of VMware vSphere. Do not use the default credentials that show up automatically when you try to discover the hosts.
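If discovery still fails with the right credentials, it is worth confirming that the CIM service (sfcbd) is actually running on the ESXi host and listening on port 5989. A minimal check, assuming SSH/shell access to an ESXi 5.x host (the exact commands can vary by release):

~ # /etc/init.d/sfcbd-watchdog status
~ # esxcli network ip connection list | grep 5989

If sfcbd is stopped, /etc/init.d/sfcbd-watchdog start will bring it up, after which discovery over https on port 5989 should succeed with the root credentials.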

As for namespaces, there are two namespaces you need to be aware of, and which one you use depends on the provider: inbox or out-of-box. For example, if the latest Emulex CIM provider and VMware adapter driver are installed on the host, you need to make sure to add the correct namespace for the out-of-box provider. Figure 3 shows the namespace to use for the out-of-box provider.

Figure 3. The red outline shows the fields, besides the IP address or host name, that are important for adding the host to the table of contents.

When you use the in-box driver with ESX/ESXi 4.x, make sure the namespace field has the correct information. The table below, from the Emulex OneCommand Manager manual, shows the namespace and recommended provider to use.

Table 1. Namespaces Used for Providers

If you need to know which provider is being used, log in to the vSphere host and check the name of the installed provider. If the name begins with deb_, it’s the inbox provider; if it begins with cross_, it’s the out-of-box provider.

The example below shows how to find the provider being used.

~ # esxupdate --vib-view query | grep emulex-cim-provider
deb_emulex-cim-provider_410.2.0.32.1-207424 installed 2010-04-01T07:00:00+00:00
cross_emulex-cim-provider_410.3.1.16.1235786 installed 2010-10-11T09:39:04.047082+00:00
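Note that esxupdate applies to ESX/ESXi 4.x. On ESXi 5.x, a roughly equivalent check (a sketch, assuming shell access; the package name may vary by OEM build) uses esxcli:

~ # esxcli software vib list | grep -i emulex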

Last, when the correct user name, password and namespace are used, you will get a message window from OneCommand Manager stating that the host has been added successfully, as shown in Figure 4.

Figure 4. The new vSphere ESXi 5.1 host in the discovered hosts table of contents

Hopefully this clarifies the challenge of adding a VMware vSphere ESXi host with Emulex OneCommand Manager for Windows. If you have any questions, please feel free to reach out to us at implementerslab@emulex.com

FreeBSD Networking with Emulex OneConnect® Ethernet Adapters

Posted August 20th, 2013 by Emulex

By Steve Perkins

A few months back, the question of “tuning” the Emulex FreeBSD driver came up, and it took me back to the days when I would spend evenings and weekends “tuning” largely unroadworthy 1960s and 1970s British cars (I live in the UK, so it seemed like a good idea at the time!). I believed this was “performance tuning,” but in reality, if the thing started without a push, it was a bonus. Still, it always felt like the hours spent tweaking timings, gapping spark plugs and balancing Skinner Union (SU) carburettors with a variety of tubes and tuning gadgets were worth all the time and blood lost. Network card driver tuning – what could be more fun?

If we look at the traditional customer base for Emulex products, it has been enterprise-level data centres that use traditional operating systems (OSes) from the likes of Red Hat, SuSE, Microsoft, VMware and OEM UNIX derivatives. These “paid for” OSes (money up front and continuing support) have formed the backbone of our IT world and have been the focus of our driver development for Fibre Channel and Ethernet products.

But the IT world is changing, and we are seeing a new, dynamic type of customer who is willing and able to use open source software to build new data centres for the world of big data and cloud solutions. One way Emulex has responded is to increase OS support beyond the “usual suspects” to embrace not only the community version of Red Hat (CentOS) but also Debian, Ubuntu and FreeBSD.

FreeBSD is an interesting OS that is often seen as a less showy alternative to the myriad of Linux distributions. Quietly getting its head down and getting on with the job, FreeBSD runs everything from small home routers through TVs, switches and storage systems to data centres, as well as being the basis for Apple’s OS X.

Emulex has FreeBSD Ethernet driver support for our range of 10GbE OneConnect (OCe1110x) Network Interface Cards (NICs), available for download here. But do the drivers “just” work, or do they “really” work? Have we kept the performance capabilities we have built into the OneConnect NICs for Linux, Windows, ESXi etc. for FreeBSD users, or are we just another NIC port? This is a question we’ve had a few times recently in EMEA from customers looking for FreeBSD support. Actually, the conversation usually goes along the lines of:

Customer: We use FreeBSD – do you have any drivers for your NICs?

Emulex: Yes we do, you can get them at ….

Customer: Are they any good?

Emulex: Yes, they’re good quality and dependable drivers.

Customer: No, really?

Emulex: Yes, we have embedded RISC CPUs which offload TCP/IP …

Customer: Hmmm…

So, fuelled by the customers’ healthy scepticism that we are simply paying lip service to FreeBSD and that all they get is basic NIC support, we thought it would be useful to check out and document not only the installation and configuration of the Emulex OneConnect NICs, but also the sort of performance that can be achieved.

Setting up a test environment always risks an argument, as we are faced with an almost infinite world of network configurations. Are we looking for database performance in a data centre, setting up video streaming servers, running a cloud data centre – the list goes on. Accepting that we’re not going to build a model of the Internet in the humble lab of the Emulex UK office, we settled on the simplest possible configuration: two servers connected back-to-back, so we could look at the raw capabilities of the OneConnect NICs with FreeBSD. Using the industry standard Netperf tool is a bit like using IOmeter for testing storage I/O. There is a view that it is irrelevant to real-world applications – just a trade show trick – but I believe there is real value in stripping back the complexities of the whole ecosystem to the raw components of performance. If basic connectivity is broken, Netperf (or IOmeter) very quickly shows that something is not set up correctly. For example, if you have plugged a NIC into a PCIe x1 slot by mistake, it is never going to give you full 10GbE line rate. Drive your car in first gear, and your journey is going to be long and loud.

This type of testing is important because we needed to understand the baseline capabilities before analysing the wider network environment in the final system. We aimed for a very simple and repeatable setup that could be quickly replicated as a starting point for broader system tuning.
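To give a flavour of what such a run looks like (a minimal sketch, not the exact script from the application note; the address below is just a placeholder for the back-to-back link), netperf is started as a listener on one host and driven from the other:

On the receiving host:

~ # netserver

On the sending host:

~ # netperf -H 192.168.10.2 -t TCP_STREAM -l 60
~ # netperf -H 192.168.10.2 -t UDP_STREAM -l 60 -- -m 8192

TCP_STREAM measures bulk TCP throughput, while UDP_STREAM with a large message size is exactly the sort of test where jumbo frames pay off.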

We produced an application note on the whole exercise at www.ImplementersLab.com which goes into detail on the installation of the driver, how the tests were done and the results. We’ve even included the script we used to run the tests so we could hit “go” and come back from lunch to a nice set of results.

I have already alluded to the “humble” nature of the Emulex UK labs, so the tests were run on fairly average systems – nothing exotic levered from Intel’s back door here! Even so, we could easily get very close to line-rate transfers without any great effort (see chart).

Basic performance chart

Interestingly enough, the default Maximum Transmission Unit (MTU) size of 1500 bytes showed very respectable performance compared to jumbo (9K) frames in the TCP tests, although UDP streaming really did benefit from the larger frames.

Performance tuning of a system is a subject that can generate long email trails, but we need to distinguish between tuning driver parameters and tuning the broader OS parameters that are typically more relevant to the final application. Fortunately, the Emulex driver defaults are optimised to maximise the use of hardware offload within the NIC CPU. Although the OneConnect NICs are theoretically capable of full TCP Offload Engine (TOE) operation, TOE has fallen out of favour, so we use stateless TCP offloads to effectively grab the subroutines required in TCP/IP and process them in hardware. This approach still gives offload performance but, compared to a full TOE implementation, allows the OS and applications access to all layers of the stack without a rewrite of the OS kernel. These stateless offload functions, such as hardware checksum calculation, VLAN tagging and TCP Segmentation Offload, are managed using the good old ifconfig command on the NIC. Running “ifconfig -m” on a NIC port will show the capabilities and which ones are enabled. For example:


root@ELXUKBSD91:/root # ifconfig -m oce0
oce0: flags=8002<BROADCAST,MULTICAST> metric 0 mtu 1500
options=507bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWFILTER,VLAN_HWTSO>

capabilities=507bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWFILTER,VLAN_HWTSO>

ether 00:00:c9:e6:67:a8

nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>

media: Ethernet autoselect

status: no carrier

supported media:

media autoselect

This is the Emulex oce driver’s default configuration, and we can see that all hardware offload options are enabled. Apart from playing with the MTU, there is no point in changing anything, as all you will do is load up your CPU by moving hardware-accelerated functions back onto the host system.
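For completeness, here is a sketch of the one knob worth touching – the MTU for jumbo frames (it must match at both ends of the link) – plus, for reference, the ifconfig toggles should you ever need to rule an individual offload in or out, using the oce0 interface from the listing above:

~ # ifconfig oce0 mtu 9000 up
~ # ifconfig oce0 -tso     (disables TCP segmentation offload – not recommended)
~ # ifconfig oce0 tso      (re-enables it)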

So that was easy. To return to the car analogy: these days my car just works. It is all computer controlled to run at optimum performance, with the right idle speed and perfect timing, and it starts the first time! Somewhere in my garage I have a collection of timing lights, Colortune plugs, tubes for balancing carbs and so on, but the evenings of frozen and skinned knuckles spent tinkering just to stand a chance of completing a journey are thankfully over. My car is German.

For more information on going beyond basic performance validation of a system, there is some good background information here, based on work done by the developer of the Nginx web server, and on the FreeBSD wiki. This should all be considered “work in progress”, as the world of IT never stops evolving; in the field of performance, the raw capabilities will be the building blocks for understanding the whole system.

To complete the analogy (in the style of a blog!), getting from A to B in the fastest time is not about “does the car start” (without a push) or “can I keep most of the cylinders firing long enough to complete a journey.” The car works as it should, just as the NIC runs predictably and reliably; journey times are about how I choose the best route. I have GPS to find routes and live traffic data to guide me through the crowded road network, and it is these tools that make the difference. Happy tinkering!


Some of these products may not be available in the U.S. Please contact your supplier for more information.

vExpert 2013!

Posted May 31st, 2013 by Alex Amaya

I was recently attending the VMware Professional Community vBrownBag for Latin America when Larry Gonzalez (@virtualizecr) mentioned that I had been selected as a vExpert for 2013! The selections were announced on May 28, 2013.

And what is a vExpert? vExperts have demonstrated significant contributions to the virtualization community and a willingness to share their virtualization evangelism with others around the world. This can be in the form of books, blogs, online forums and VMUGs, as well as privately with customers and VMware partners.

I am honored to have been selected for the first time for doing something I love to do, and that’s helping to contribute to an amazing and expanding VMware community. The VMware social media and community team, including VMware’s John Troyer and Cory Romero, selected 581 vExperts for 2013, and I would like to congratulate all of the other vExperts selected!

The vExperts list for 2013 can be found here.

Blog Series Part 2: Can the global advanced disk parameter Disk.DiskMaxIOSize make a difference with software or hardware FCoE adapters running large block I/O in VMware vSphere® 5.1?

Posted May 29th, 2013 by Alex Amaya

This blog is the second in a two-part series that examines Fibre Channel over Ethernet (FCoE) implementations with VMware vSphere 5.1 using VMware’s software FCoE and a hardware FCoE adapter. These blogs are intended to share our findings regarding the relative performance of software and hardware FCoE adapters when working with large-block, sequential I/O – in particular, the impact of the Disk.DiskMaxIOSize setting on storage performance. Keep in mind that your results will differ, as not all environments are the same; test to see the behavior in your own lab environment.

In previous lab tests with software FCoE and a few virtual machines (VMs), we encountered an unexpected drop in throughput (MB/s) starting at around 64K block I/O.  Once we made a change to Disk.DiskMaxIOSize, we were able to improve throughput; however, we continued to see poor latency response times.

As the second part of our experiment, we installed and tested a supported converged network adapter (CNA) featuring hardware FCoE (offload), using a single port. We left the default setting of 32767 in the advanced parameter settings. After running the tests, we looked at I/O operations per second (IOPS), throughput, CPU utilization and latency.

We first looked at the IOPS and throughput measurements. The chart below shows a similar sloping curve, in which IOPS are high with smaller block sizes along with high CPU utilization on the VM. Both software FCoE and hardware FCoE had similar slopes, but hardware FCoE produced more I/O operations with smaller block sizes. Both hardware and software FCoE offered similar IOPS performance for larger block sizes.

Figure 1 shows the results for hardware FCoE sequential I/O tests when we used the default setting for Disk.DiskMaxIOSize.

Figure 1. Hardware FCoE adapter I/Os with default Disk.DiskMaxIOSize setting

Next, we wanted to know if there was a difference in behavior between hardware FCoE and software FCoE in terms of throughput, especially since this is where the software FCoE implementation had demonstrated some problems. We were pleased to find that the hardware FCoE throughput results for the VMs were close to line rate, at around 2300MB/s, starting with the 32K block tests and beyond. The chart below shows a nice rising slope that flattens out at line rate. A single port with hardware FCoE sequential I/O should give you about 1150MB/s; this is where the adapter tops out, but in our testing with a 50/50 read/write (full duplex) workload we were able to reach 2300MB/s, as the targets were able to handle the throughput and the VMs were able to keep up with the block tests.

Figure 2 shows throughput during the same hardware FCoE adapter test and, in particular, the drop-off that occurred before with larger block sizes is not seen.

Figure 2. Results for hardware FCoE throughput with default Disk.DiskMaxIOSize setting

To be fair, we also looked at the latency times using esxtop on the host to see if there might be a concern. We looked at device average (DAVG) read and write latency at the different block sizes.  Results are shown in Table 1, which provides average, rather than median, values.
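For anyone reproducing this, the DAVG numbers can be watched interactively from the ESXi shell (a quick sketch):

~ # esxtop
(press ‘u’ for the per-device disk view; the DAVG/cmd column is the average device latency, in milliseconds, between the adapter and the target)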

For good information on storage performance in vSphere, check out this VMware vSphere Blog.

Table 1. Average latency values with default setting

Block size DAVG read DAVG write
256K 5 ms 6 ms
512K 11 ms 12 ms
1M 12 ms 13 ms

In this test case, we see that the latency times are much lower than with software FCoE. With hardware FCoE we did not need to change any parameters, since the CNA yielded good results.  In addition, according to VMware’s esxtop, the core CPU utilization across the three block size tests averaged around 3 percent.

Here are the key takeaways:

  • Hardware FCoE with the correct driver and firmware can handle large block I/O requests, resulting in higher throughput and keeping device latency for both reads and writes in an acceptable range.
  • With software FCoE, we needed to use Disk.DiskMaxIOSize to get past the throughput hurdle, and it came with some latency challenges.
  • With hardware FCoE, there was no need to change the parameter; the default setting, as VMware suggests, should already be tuned. Core CPU utilization with large block sizes averaged around 3 percent, as expected.

In conclusion, our testing was really about understanding the difference between software and hardware FCoE adapters. We decided to do this test when we noticed a sudden drop in throughput at larger block I/O sizes when using a software FCoE adapter. (We performed the same test with a Microsoft Windows Server and experienced similar behavior.)  We found a few suggestions online from bloggers about an advanced parameter setting and checked to see if it would make a difference with software FCoE, which in our case it did, and then compared it to a hardware FCoE adapter. The results that we’ve laid out in this blog series demonstrate the differences in throughput behavior between the adapters.

Overall, this testing should not keep you from your own internal tests, but encourage you to do them. The application and the storage array, which are only part of the infrastructure, will also have an impact on your results.

Blog Series Part 1: Can the disk parameter Disk.DiskMaxIOSize make a difference with large I/Os in VMware vSphere® 5.1?

Posted April 24th, 2013 by Alex Amaya

This blog is the first in a two-part series that examines Fibre Channel over Ethernet (FCoE) implementations with VMware vSphere 5.1 using VMware’s software FCoE and a hardware FCoE adapter. These blogs are intended to share our findings regarding the relative performance of software and hardware FCoE adapters when working with large-block, sequential I/O – in particular, the impact of the Disk.DiskMaxIOSize setting on storage performance.

In recent lab tests with software FCoE and a few virtual machines (VMs), we encountered an unexpected drop in throughput (MB/s) with large block I/O. We were using sequential I/O through a single physical 10Gb Ethernet (10GbE) port. The VMs were running Microsoft Windows 2008 R2; each was configured with four virtual CPUs (vCPUs) and 8GB of memory. Two raw device mapping (RDM) disks were mapped to each host. We enabled the software FCoE driver that comes with the hypervisor and made appropriate LUN mappings.

The IOmeter software tool was used to test a range of block sizes (512B – 1MB) across all RDM drives, with two workers per VM – one set to test 50% reads and the other to test 50% writes, for full duplex operation. The targets used in this case were four Linux-based storage memory emulators with four targets each, for a total of 16 targets.

Figure 1 shows the results for these sequential I/O tests when we used the default setting for Disk.DiskMaxIOSize. This figure represents the baseline performance for software FCoE.

Figure 1. I/Os with the default Disk.DiskMaxIOSize setting, using software FCoE

With larger block sizes, the array was unable to perform any I/Os.

Figure 2 shows throughput during the same test of software FCoE and, in particular, the drop-off that occurred with larger block sizes. At this point, we theorized that the array became stressed with blocks that were 64KB or larger.

Figure 2. Throughput with default Disk.DiskMaxIOSize setting running software FCoE.

We also observed latency times using esxtop on the host to see if they might be a concern. Results are shown in Table 1, which provides average rather than median values.

For more information on storage performance in vSphere, refer to the VMware vSphere Blog.

Table 1. Average latency values with default setting

Block size DAVG read DAVG write
256K 16 ms 16 ms
512K 17 ms 43 ms
1M 19 ms 50 ms

Note that, with the default Disk.DiskMaxIOSize setting, no I/Os were taking place at larger block sizes, as demonstrated by Figures 1 and 2. DAVG represents the latency between the adapter and the target device. Note that, according to VMware, latencies of 20 ms or more are a major storage performance concern.

To address this storage performance issue with large block sizes, we turned to a VMware KB article (kb:1003469), which suggests reducing the size of I/O requests passed to the storage device in order to enhance storage performance. You can achieve this size reduction by tuning the global parameter Disk.DiskMaxIOSize, which is found on the host under Configuration→Software→Advanced Settings→Disk. As shown in Figure 3, this parameter is defined as the maximum disk READ/WRITE I/O size, in KB, before splitting; larger guest I/Os are split into multiples of the Disk.DiskMaxIOSize setting (for example, with the setting at 64KB, a 1MB request is issued to the device as sixteen 64KB requests).

Kudos to Erik Zandboer, VMware expert and VMdamentals blogger, for bringing this article to our attention!

Figure 3. Displaying the default Disk.DiskMaxIOSize setting, which is 32MB

After reading this KB article, we decided to vary the setting of Disk.DiskMaxIOSize to determine if this would, indeed, enhance storage performance. Since we had noticed that performance was beginning to deteriorate with 64KB blocks, we restricted the maximum block size to 64KB, as shown in Figure 4.

Figure 4. Changing the Disk.DiskMaxIOSize setting
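The same change can also be made from the ESXi command line rather than the vSphere Client. A sketch, assuming an ESXi 5.x shell and remembering that the value is expressed in KB (so 64 here means 64KB):

~ # esxcli system settings advanced list -o /Disk/DiskMaxIOSize     (shows the current and default values)
~ # esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 64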

Next, we re-ran the test to see if there was any impact on IOPS, throughput and latency.

Note that we did not monitor CPU utilization, which should not be overlooked if you plan to tune Disk.DiskMaxIOSize.

Figure 5 shows that reducing the Disk.DiskMaxIOSize setting had little impact on read/write I/Os.

Figure 5. I/O performance with the new Disk.DiskMaxIOSize setting when running software FCoE

Figure 6 shows the throughput achieved with the new Disk.DiskMaxIOSize setting. Throughput was now able to approach line rate (2300MB/s) and, rather than collapsing as before, only dropped slightly with large-block I/Os (512KB and 1MB).

Figure 6. Throughput with the new Disk.DiskMaxIOSize setting

Table 2 shows that, with the new Disk.DiskMaxIOSize setting, latency began to even out between read and write I/Os, with 33ms for 512KB blocks and 68ms – 72ms for 1MB blocks.  However, these latency timings are still in the range of severe storage performance conditions.

Table 2. Average latency values with 64KB blocks

Block size DAVG read DAVG write
256K 13 ms 13 ms
512K 33 ms 33 ms
1M 68 ms 72 ms

Please note, these results are specific to our lab environment. You should perform your own tests to determine if changing the default Disk.DiskMaxIOSize setting would be beneficial in your particular environment. In addition, there may be trade-offs elsewhere in the storage stack that we are still investigating; we’ll also be comparing these software FCoE results with a hardware FCoE implementation.

So, do you really need to change the Disk.DiskMaxIOSize setting? We agree with Erik that you first need to determine the block size your VMs are executing and, if you are getting poor storage performance with large blocks, then tuning Disk.DiskMaxIOSize might be a consideration. Note that we performed these tests in order to validate that tuning Disk.DiskMaxIOSize would enhance storage performance in a lab environment with sequential reads and writes. However, in many real-world cases, traffic between ESX/ESXi hosts and the array tends to be more random.

Here are the key takeaways:

  • Software FCoE out of the box does not handle large block I/O requests well, resulting in lower throughput and latency outside of the range recommended by VMware.  Large block performance can negatively impact applications such as backup, streaming media and other large-block workloads.
  • Using VMware ESXi’s Disk.DiskMaxIOSize setting, we could change the performance dynamics.  However, latency still measured outside the acceptable range.

In part two of this blog series, we repeat this testing to evaluate the impact of Disk.DiskMaxIOSize on storage performance with a hardware FCoE implementation.  We will note that hardware FCoE has many advantages, including better CPU efficiency.  Stay tuned…

Emulex VMware vSphere® 5.1 Web Client plug-in and the missing step…

Posted January 18th, 2013 by Alex Amaya

Emulex recently announced support for the new VMware vSphere® 5.1 Web Client with the Emulex OneCommand Manager plug-in for VMware vCenter™ version 1.4.10. So, of course, I downloaded the plug-in and replaced my older version. I found that the original OneCommand Manager plug-in for the VMware vCenter desktop client works and installs the same way, but the Web Client is a bit different: I needed an extra step to get this puppy working with my Web Client.

My intent in this blog is to describe the step that’s different in the configuration process for the plug-in. After trying a few times to get it to appear correctly, I gave in and searched VMware’s documentation. That’s right – I read the manual. In my case, I came across the VMware vSphere 5.1 API/SDK documentation, which notes that, by default, the plug-in is disabled and does not show up in the Web Client. When you install the OneCommand Manager plug-in for VMware vCenter version 1.4.10, it includes the plug-in for the Web Client. If you are able to get the plug-in to work through the VMware vCenter desktop client, you should be able to install it for the Web Client. Of course, you must have VMware single sign-on working, the VMware vSphere 5.1 Web Client installed and working, your credentials all taken care of, and the correct CIM providers installed to get the plug-in registered and running.

So here’s what we had to do to get the “Classic Solutions” tab to appear in the Web Client for both cluster and host.

First, you need to get to the file called webclient.properties in the VMware vSphere Web Client install directory. Because it lives under the hidden %ProgramData% directory, you first need to show hidden files and folders:

  1. Open Windows Explorer
  2. Select the C: drive
  3. Press the Alt key to bring up the conventional menu bar
  4. Click Tools
  5. Click Folder Options
  6. Click the View tab
  7. Check “Show hidden files, folders, and drives”
  8. Click OK

By the way, if you don’t capitalize the ‘P’ in scriptPlugin in the step below, the Web Client won’t launch.

To activate the plug-in in the Web Client, a properties file needs to be modified on the server where the vSphere web client is installed.

Steps:

  • Locate the webclient.properties file in the VMware vSphere Web Client install directory, typically %ProgramData%\VMware\vSphere Web Client, and add the following line:
    scriptPlugin.enabled = true
  • Save and close the file.
  • Restart the VMware vSphere Web Client service.
  • Once the above change is made, log back into the Web Client.
  • Select Host and Clusters from the Home Tab.
  • Expand your VMware vCenter server by clicking the right arrow next to its name to see your clusters and hosts.
  • Select the host or the cluster; at the top, you should see a security certificate error. That’s where the plug-in will be registered.
  • After registering the plug-in, you should now see a tab called “Classic Solutions” for either the cluster or the host. See the image below.

Can you run both of the plug-ins? Sure! See the image below.

VMware vSphere 5 Web Client plug-in

The process to unlock and enable Fibre Channel over Ethernet (FCoE) capabilities with IBM BladeCenter HS23 using IBM’s Feature on Demand

Posted December 19th, 2012 by Alex Amaya

The purpose of this blog is to point you to a new application note, written by the Implementer’s Lab, on how to enable FCoE with IBM’s Feature on Demand (FoD).

This past October, IBM announced the IBM BladeCenter HS23, one of the first IBM BladeCenter servers to offer four integrated LAN ports. The Emulex 10Gb Ethernet (10GbE) Virtual Fabric Adapter II (VFA II) LAN on Motherboard (LOM) is integrated into select IBM BladeCenter HS23 blade servers. It features two physical 10GbE ports and two physical 1GbE ports. This LOM solution can be configured to provide up to eight virtual ports (four per physical 10GbE port), each of which can operate at 100Mb – 10Gb, with a maximum of 10Gb per physical port.

Emulex Technical Marketing received a pair of new IBM HS23 blades to test with FCoE. Given the Emulex 10GbE VFA II’s capabilities and flexibility, two virtual ports can be configured for storage connectivity for iSCSI or FCoE. Because this capability is disabled out of the box and must be unlocked using IBM’s FoD, the Implementer’s Lab team decided to write an application note to help with the unlocking process. Please check out the application note here for more detail on how to enable FCoE with IBM BladeCenter HS23 using IBM FoD, and let us know what you think!
