VMWare ESXi 3.5 u4 suffering slow performance in iSCSI RAID-5 of 15k rpm SAS SAN


h4x0r

Hi All,

I'm suffering very slow performance using a VM deployed on the iSCSI SAN VMFS datastore. The attachment below shows the deployment diagram, which I believe already follows the best practice found around the net of segregating the SAN network from the server network.

http://virtualgeek.typepad.com/virtual_geek/2009/01/a-multivendor-post-to-help-our-mutual-iscsi-customers-using-vmware.html

However, after reading the quoted article, it seems that no matter how fast the disks are, a SAN in a VMware environment will always be slow, around 160 MBps :-|

"This usually means that customers find that for a single iSCSI target (and however many LUNs may be behind that target, 1 or more), they can't drive more than 120-160MBps."

Is there anything I should do to boost performance?

Thanks.

--------------------------------------------------------------------------------------------------------------------------------------

FYI: This is a continuation from this thread:

[Attachment: deployment diagram (post-13561-1242111392_thumb.jpg)]



You are quoting megaBYTES of data. 160 megabytes per second is the equivalent of 1,280 megabits per second, roughly 1.28 gigabits per second. We all know it's almost impossible to get a full gigabit of throughput on a gigabit switch (overhead, packet drops, etc.), so I'd say that's pretty good. They used link aggregation to get that ~1.28 Gbps of throughput to the disks, which is how it exceeds 1 Gbps.

You'll never get that high on fibre channel (nothing close to link aggregation or etherchannel), so I'd say that example just showed iSCSI as a faster solution.
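As a quick sanity check on that conversion (plain arithmetic only; nothing here is vendor-specific):

```python
# Convert the quoted 160 MB/s figure into bits per second.
mbytes_per_sec = 160
mbits_per_sec = mbytes_per_sec * 8      # 1280 Mbit/s
gbits_per_sec = mbits_per_sec / 1000    # 1.28 Gbit/s

# A single gigabit link carries at most 1.0 Gbit/s on the wire,
# so 1.28 Gbit/s can only come from aggregated links.
print(mbits_per_sec, gbits_per_sec)     # 1280 1.28
```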



Hi Decepticon,

I was thinking of redesigning the network from scratch: instead of having a different subnet for each cable, would it be better to trunk directly from the ESXi servers into the SAN, using 2x 1 Gb Ethernet cables to boost data throughput for the VMs, without any switch between the SAN and the servers?

Well, the situation is this: 15 x 300 GB SAS HDDs in total.

I've created a large RAID-5 LUN from 14 of the 300 GB disks and then created a 1 TB VMFS partition on it, but the SQLIO benchmark for the VM on the SAN is really horrible as opposed to the local 7,200 rpm SATA.

See the results below:

Local HDD: 4x 500 GB SATA 7,200 rpm RAID-5

C:\SQLTEST>sqlio.exe
sqlio v1.5.SG
1 thread reading for 30 secs from file testfile.dat
using 2KB IOs over 128KB stripes with 64 IOs per run
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 8826.73
MBs/sec: 17.23

while

SAN HDD: 14x 300 GB SAS 15,000 rpm RAID-5

C:\SQLTEST>sqlio.exe
sqlio v1.5.SG
1 thread reading for 30 secs from file testfile.dat
using 2KB IOs over 128KB stripes with 64 IOs per run
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 2314.03
MBs/sec: 4.51
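For what it's worth, the two SQLIO figures above are internally consistent: MB/s is just IOPS multiplied by the 2 KB IO size, which suggests both disk sets are latency-bound at this tiny block size rather than bandwidth-bound. A quick check (numbers taken from the output above):

```python
# MB/s = IOPS * IO size; SQLIO here used 2 KB IOs.
io_size_kb = 2

local_iops = 8826.73   # 4x 500 GB SATA 7200 rpm RAID-5
san_iops   = 2314.03   # 14x 300 GB SAS 15k rpm RAID-5 over iSCSI

local_mbps = local_iops * io_size_kb / 1024   # ~17.24 MB/s
san_mbps   = san_iops   * io_size_kb / 1024   # ~4.52 MB/s

# At 2 KB per IO, neither result comes close to saturating a
# gigabit link (~125 MB/s raw), so per-IO latency rather than
# link bandwidth is what limits these runs.
print(round(local_mbps, 2), round(san_mbps, 2))
```

If that interpretation is right, rerunning SQLIO with larger IOs and more outstanding IOs should raise the SAN's MB/s figure substantially even as IOPS drops.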



Interesting results! I would assume the difference is the iSCSI processing done in the kernel. There's extra overhead on the CPU and OS (ESX) when iSCSI is used with no hardware iSCSI initiator: it's doing a software conversion from disk I/Os to IP packets, but I didn't think it would be so apparent.

If you could get ahold of an iSCSI card and put that in, I'd love to see the same test done and see how much that changes things. QLE4050C is the most popular/common one I see at work.

What do you get for CPU utilization during these tests?



Hi Decepticon,

Yes, that'd be another option, apart from getting a managed switch that supports VLAN trunking. Until now I just couldn't believe why a direct connection to my iSCSI SAN could not give me the same result as putting a managed switch in between and implementing VLAN trunking.

Basically, in my Windows 2003 x64 Std VM I'm using the Enhanced VMXNET NIC, and I only use the ESXi software initiator.

I wonder, if I install the MS iSCSI initiator, would that boost performance?



OK, Thanks for the reply Decepticon,

For your info, I'll share my hard-to-believe experience configuring my iSCSI SAN here:

The MD3000i is just a small entry-level SAN device that can only use a single cable to access the iSCSI target, so no matter how complex the configuration is, the I/O performance will never be as great as adding a managed switch and performing VLAN trunking.

http://virtualgeek.typepad.com/virtual_gee...ing-vmware.html --> the last question #4 is the eye opener

So with the deployment diagram I supplied above, it is not possible to achieve performance greater than a single cable connection :-|
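That single-cable ceiling can be roughly sanity-checked; note the protocol-overhead percentage below is an assumed, illustrative figure, not a measurement:

```python
# Theoretical throughput ceiling of one 1 GbE iSCSI path.
link_bits_per_sec = 1_000_000_000             # 1 GbE wire rate
raw_mb_per_sec = link_bits_per_sec / 8 / 1e6  # 125 MB/s before overhead

# Ethernet/IP/TCP/iSCSI headers consume a slice of every frame;
# 8% is an assumed figure for illustration.
overhead = 0.08
usable_mb_per_sec = raw_mb_per_sec * (1 - overhead)

print(raw_mb_per_sec, round(usable_mb_per_sec, 1))   # 125.0 115.0
```

That lines up with the 120-160 MBps range quoted in the article: anything much above ~115 MB/s implies more than one active path.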

hope that helps you in the future,

Cheers,

Devastated and desperate admin :-(

[Attachments: post-13561-1242288721_thumb.jpg, post-13561-1242288742_thumb.jpg]



Exactly right: NIC teaming and link aggregation are for failover purposes more than throughput. However, what I had in mind is what happens when you add a second VM on the same line. If you bundle your NICs onto a single virtual switch, the next VM gets a different NIC; allocation is round robin by default, so the second VM gets the second NIC in line. If they all shared the same NIC, I assume the throughput would go down dramatically.

Also, you mentioned not having a switch with VLAN capabilities. A better switch will have more processing power and thus faster throughput. I assume you are not running this test through some Netgear switch from Best Buy; a better switch will give better results.
