Hi
I was conducting a health check today at a site using some very well-specced hardware. The site was using dual-chassis IBM System x servers. These are an amazing bit of kit, with a second chassis attached that is filled with CPU and RAM. Overall, each host had 32 cores and 128GB of RAM. I was onsite conducting a second health check after all the recommendations I had provided from the first health check had been implemented. Everything at the site was looking good until I started a performance analysis of disk, CPU and memory. All the hosts were sitting at around 40-60% CPU and 10-20% RAM, so we are talking around 23,500MHz of CPU and 20GB of RAM. But when looking at the disk performance over the FC fabric I was seeing some serious delays in storage command processing. The hosts had been built and paired with four single-port 4Gb FC cards... but the customer had only connected two and was only pushing traffic down a single HBA. This was causing some serious problems for high-IO VMs such as SQL and Exchange.
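For anyone who wants to catch this sort of misconfiguration before it hurts, here is a rough Python sketch using the pyVmomi SDK (not the tooling I used on site). It lists how many FC HBAs each host actually has and flags any LUN with fewer than two active paths, i.e. a LUN whose traffic is all going down a single HBA. The vCenter hostname and credentials are placeholders, and the two-path rule of thumb is my own assumption:

    # Rough sketch: count FC HBAs per host and flag LUNs with a single active path.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()            # lab use only; validate certs in production
    si = SmartConnect(host="vcenter.example.local",   # placeholder vCenter and credentials
                      user="administrator",
                      pwd="changeme",
                      sslContext=ctx)
    try:
        content = si.RetrieveContent()
        hosts = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True).view
        for host in hosts:
            storage = host.config.storageDevice
            fc_hbas = [h for h in storage.hostBusAdapter
                       if isinstance(h, vim.host.FibreChannelHba)]
            print(f"{host.name}: {len(fc_hbas)} FC HBA(s) installed")
            for lun in storage.multipathInfo.lun:
                active = [p for p in lun.path if p.state == "active"]
                if len(active) < 2:
                    print(f"  WARNING: {lun.id} has only {len(active)} active path(s)")
    finally:
        Disconnect(si)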
This is not the first customer where I have seen this, and it would now appear that the limiting factor in the majority of enterprise-size VMware environments is disk IO. I know there are now 8Gb FC systems available (I have worked with a Compellent 8Gb installation), but storage IO needs to be the first resource investigated when looking into performance problems. This is where I will be looking from now on.
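A quick way to make storage the first check is esxtop in batch mode (for example esxtop -b -d 5 -n 60 > esxtop.csv) and then a small script over the output. Here is a rough Python sketch that averages the physical-disk "MilliSec/Command" counters (the DAVG/KAVG/GAVG family) and flags anything over a threshold; the file name, the 25ms threshold and the column-name matching are my own assumptions, so adjust them to your environment:

    # Rough sketch: scan an esxtop batch-mode capture for slow disk devices.
    import csv
    import statistics

    THRESHOLD_MS = 25.0          # latency level I would start worrying at
    CAPTURE_FILE = "esxtop.csv"  # placeholder path to the esxtop batch output

    with open(CAPTURE_FILE, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        # Pick out the physical-disk latency counters by column name.
        latency_cols = [i for i, name in enumerate(header)
                        if "Physical Disk" in name and "MilliSec/Command" in name]
        samples = {i: [] for i in latency_cols}
        for row in reader:
            for i in latency_cols:
                try:
                    samples[i].append(float(row[i]))
                except (ValueError, IndexError):
                    pass  # skip blank or malformed samples

    for i, values in samples.items():
        if values and statistics.mean(values) > THRESHOLD_MS:
            print(f"{header[i]}: average {statistics.mean(values):.1f} ms")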