HP Lefthand Networks – New Patch to Address RAID Controller Issues

There is a new SAN/iQ patch (10056-001), delivered as an ISO: burn it to a CD, boot each storage module from it, and it will patch the module (the patch includes a firmware update plus a SAN/iQ software patch). It applies to both SAN/iQ v8.0 and 8.1. HP Lefthand recommends this patch to address certain instances where the RAID controller in the module can have issues and, in some cases, leave the storage node unresponsive.

As a reminder, a list of FTP sites to get the code upgrades can be found on our blog:

How To Configure vSphere 4.0 Software iSCSI with 2 Paths

GoingVirtual.wordpress.com has posted a great step-by-step How-To for configuring the software iSCSI initiator in vSphere 4 with multiple paths. This is a huge reason to move from ESX 3.x to vSphere 4, since ESX 3.x did not support multiple paths per target.

The How-To can be found here:

Hope it helps!
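At a high level, the multipath setup boils down to creating two VMkernel ports and binding both to the software iSCSI adapter. Here is a rough sketch of the service-console commands involved; the port group names, IP addresses, vmk numbers, and the vmhba number are examples only and will differ in your environment:

```shell
# Create two iSCSI port groups on an existing vSwitch (names are examples)
esxcfg-vswitch -A iSCSI-1 vSwitch1
esxcfg-vswitch -A iSCSI-2 vSwitch1

# Create a VMkernel port on each (example IPs on the iSCSI subnet)
esxcfg-vmknic -a -i 10.0.0.11 -n 255.255.255.0 iSCSI-1
esxcfg-vmknic -a -i 10.0.0.12 -n 255.255.255.0 iSCSI-2

# Bind both VMkernel ports to the software iSCSI adapter (vmhba number varies)
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# Verify the bindings
esxcli swiscsi nic list -d vmhba33
```

Each port group should be set with only one active uplink (the other set to unused) so each path maps to exactly one physical NIC; see the How-To for the full details.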

Easy FTP Links to HP Lefthand Networks Software and Code

Here are some easy links and FTP information to use when you need code for your HP Lefthand Networks modules. The code is available on the HP Lefthand Networks website but since it’s difficult to manage sometimes, I thought this info might be useful in a central concise location. Enjoy!

SAN/iQ 6.6 and Earlier
     FTP System:         hprc.external.hp.com
     Login:              anonymous
     Password:           your_email@address.com


SAN/iQ 7.0 SP1
     FTP System:         hprc.external.hp.com
     Login:              saniq7_1
     Password:           SANIQ7_1   (NOTE:  CASE-sensitive)


SAN/iQ 8.0
     FTP System:         hprc.external.hp.com
     Login:              saniq8_0
     Password:           SANIQ8_0   (NOTE:  CASE-sensitive)


SAN/iQ 8.1
     FTP System:         hprc.external.hp.com
     Login:              saniq8_1
     Password:           SANIQ8_1   (NOTE:  CASE-sensitive)


SAN/iQ 8.5
** Well, it was good while it lasted. The link for 8.5 no longer works, so it looks like you'll have to use the ITRC link to get the software. If this changes, we'll let you know! **

     FTP System:         ftp.usa.hp.com
     Login:              saniq85
     Password:           SAN85eap   (NOTE:  CASE-sensitive)
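With any of the credential sets above, a quick way to browse a release directory from the command line is curl (a sketch; the actual directory layout on the FTP server isn't shown here, so start at the root and drill down):

```shell
# List the top-level FTP directory using the SAN/iQ 8.1 credentials from above.
# Passwords are case-sensitive; swap in the login for whichever release you need.
curl --user saniq8_1:SANIQ8_1 ftp://hprc.external.hp.com/

# SAN/iQ 6.6 and earlier uses anonymous access:
curl --user anonymous:your_email@address.com ftp://hprc.external.hp.com/
```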


SAS vs. SATA Differences, Technology and Cost

Updated 2/4/13
One of our resources at HP (thanks, Ben!) made the following comment to one of our customers, and I thought it would make a perfect post for the blog since it contains useful information that some might not be aware of.

Here are the high-level differences between SAS and SATA disk drives:


  • SATA (now also called NL-SAS, for Nearline SAS) disk drives are the largest on the market.  The largest SATA/NL-SAS drives available with widespread distribution today are 3TB.
  • SAS disk drives are typically smaller than SATA.  The largest SAS drives available with widespread distribution today are 600GB or 900GB.
  • So, for capacity, a SATA/NL-SAS disk drive is 4x-5x as dense as SAS.
  • A good way to quantify the capacity comparison is $/GB.  SATA will have the best $/GB.
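To make the $/GB point concrete, here is a quick back-of-envelope calculation in Python. The drive capacities come from the bullets above; the prices are purely hypothetical placeholders for illustration, not real quotes.

```python
# Capacities from the post; prices are made-up examples, not real quotes.
sata_capacity_gb = 3000   # 3TB SATA/NL-SAS drive
sas_capacity_gb = 600     # 600GB SAS drive
sata_price = 300.0        # hypothetical price, USD
sas_price = 400.0         # hypothetical price, USD

sata_cost_per_gb = sata_price / sata_capacity_gb
sas_cost_per_gb = sas_price / sas_capacity_gb

print(f"SATA: ${sata_cost_per_gb:.2f}/GB")   # $0.10/GB
print(f"SAS:  ${sas_cost_per_gb:.2f}/GB")    # $0.67/GB
print(f"Capacity ratio: {sata_capacity_gb / sas_capacity_gb:.0f}x")  # 5x
```

Even with generous SAS pricing, the capacity gap dominates, which is why SATA wins on $/GB.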


  • SATA/NL-SAS disk drives spin at 7.2k RPMs.  Average seek time on SATA/NL-SAS is 9.5msec.  Raw Disk IOPS (IOs per second) are 106.
  • SAS disk drives spin at 15k RPMs.  Average seek time on SAS is 3.5msec.  Raw Disk IOPS (IOs per second) are 294.
  • So, for performance, a SAS hard drive is nearly 3X as fast as SATA.
  • A good way to quantify the performance comparison is $/IOP.  SAS will have the best $/IOP.
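The same back-of-envelope approach works for $/IOP, using the raw IOPS figures quoted above (again, the prices are hypothetical placeholders):

```python
# Raw disk IOPS from the post; prices are made-up examples, not real quotes.
sata_iops = 106
sas_iops = 294
sata_price = 300.0   # hypothetical price, USD
sas_price = 400.0    # hypothetical price, USD

sata_cost_per_iop = sata_price / sata_iops
sas_cost_per_iop = sas_price / sas_iops

print(f"SATA: ${sata_cost_per_iop:.2f}/IOP")
print(f"SAS:  ${sas_cost_per_iop:.2f}/IOP")
print(f"SAS performance advantage: {sas_iops / sata_iops:.1f}x")  # ~2.8x, nearly 3x
```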

Reliability: there are two reliability measures – MTBF and BER.

  • MTBF is mean time between failure.  MTBF is a statistical measure of drive reliability.
  • BER is Bit Error Rate.  BER is a measure of read error rates for disk drives.
  • SATA/NL-SAS drives have an MTBF of 1.2 million hours.  SAS drives have an MTBF of 1.6 million hours.  SAS drives are more reliable than SATA when looking at MTBF.
  • SATA drives have a BER of 1 read error in 10^15 bits read.  SAS drives have a BER of 1 read error in 10^16 bits read.  SAS drives are 10x more reliable for read errors.  Keep in mind that a read error means data loss unless other mechanisms (RAID or Network RAID) are in place to recover the data.
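To put the BER numbers in perspective, here is a rough calculation of how many unrecoverable read errors to expect when scanning an entire drive end-to-end, as happens during a RAID rebuild (a sketch using only the figures above; real-world error rates vary):

```python
# BER figures from the post: SATA = 1 error per 1e15 bits, SAS = 1 per 1e16.
sata_ber = 1e-15
sas_ber = 1e-16

# Bits read when scanning a full drive end-to-end (e.g. a RAID rebuild).
sata_bits = 3000e9 * 8   # 3TB SATA drive
sas_bits = 600e9 * 8     # 600GB SAS drive

expected_sata_errors = sata_bits * sata_ber   # ~0.024 errors per full read
expected_sas_errors = sas_bits * sas_ber      # ~0.0005 errors per full read

print(f"SATA 3TB full read:  {expected_sata_errors:.4f} expected errors")
print(f"SAS 600GB full read: {expected_sas_errors:.4f} expected errors")
```

Roughly a 1-in-40 chance of hitting a read error on every full pass of a 3TB SATA drive is exactly why RAID or Network RAID matters at these capacities.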

Here are some good links for comparing disk types:


List of the Remaining Fortune 1000 Not Using VMware in Production

This was brought up in the keynote at VMworld 2009 last week, and I thought it was kind of interesting. Here's a list of the remaining Fortune 1000 companies that are not using VMware in production. VMware offered the same bounty as last year to their partners/resellers: sell VMware into any of the accounts below, and they get a free VMworld registration for next year.

The Only 33 Fortune 1000 Companies Not Running VMware

VMworld session TA3438 – Top 10 Performance improvements in vSphere 4



  • IO overhead has been cut in half. Also, IO for a VM can execute on a different core than the one the VM Monitor is running on, which means a single-vCPU VM can actually use two cores.
  • The CPU scheduler is much better at scheduling SMP workloads. 4-way SMP VMs perform 20% better, and an 8-way delivers about 2x the performance of a 4-way with an Oracle OLTP workload, so performance scales well.
  • EPT improves performance a LOT. Turning it on also enables Large Pages by default (which can negatively affect TPS). Applications need Large Pages turned on to benefit, like SQL Server (which gains 7% performance).
  • Hardware iSCSI is 30% less overhead across the board, Software iSCSI is 30% better on reads, 60% better on writes!
  • Storage VMotion is significantly faster, because of block change tracking and no need to do a self-VMotion (Which also means it doesn’t need 2x RAM)
  • In vSphere the performance difference between RDM and VMFS is less than 5%, and while this is the same as ESX 3.5, performance of a VM on a VMFS volume where another operation (like a VM getting cloned) is running has improved.
  • Big improvement in VDI workloads – a boot storm of 512 VMs is five times faster in vSphere. 20 minutes reduced to 4.
  • PVSCSI does some very clever things like sharing the I/O queue depth with the underlying hypervisor, so you have one less queue.
  • vSphere TCP stack is improved (I know from other sessions that they're using the new tcpip2 stack end-to-end).
  • VMXNET3 gives big network I/O improvements, especially in Windows SMP VMs.
  • Network throughput scales much better, 80% performance improvement with 16 VMs running full blast.
  • VMotion 5x faster on active workloads, 2x faster at idle.
  • 350K IOPS per ESX Host, 120K IOPS per VM.
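A couple of the quoted numbers are easy to sanity-check with nothing but the figures from the list above:

```python
# Boot storm figure from the session: 20 minutes on ESX 3.x, 4 on vSphere 4.
boot_storm_esx3_min = 20
boot_storm_vsphere_min = 4
print(f"Boot storm speedup: {boot_storm_esx3_min / boot_storm_vsphere_min:.0f}x")  # 5x

# Quoted IOPS ceilings: roughly how many VMs at max per-VM IOPS fit in one host.
host_iops = 350_000
vm_iops = 120_000
print(f"VMs at full IOPS per host: {host_iops / vm_iops:.1f}")  # ~2.9
```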
VMware ESX 3.x to vSphere 4 Upgrade Path Overview

We've already given out some useful videos on how to perform an upgrade from ESX 3.x to vSphere 4, located on this blog here:

But here's a great graphical overview of the upgrade process, which should help you follow the steps during an upgrade and make sure you don't miss any. Hope it helps!

vSphere Upgrade Path Overview