
Running ESXi using iSCSI vs. Fibre Channel for Shared SAN Access


h4x0r


Hi All,

I'm in the process of getting new virtualization infrastructure working in my office, using:

2x Dell PowerEdge 2950 III, each installed with an iSCSI controller, accessing the following SAN:

1x Dell PowerVault MD3000 (10x 300 GB 15k RPM SAS)

VMware ESXi is installed on an internal USB drive and loads 4x VMs running Solaris (serving as the project and home-directory Samba file server and the source code repository, and also compiling the project source code as a build server), plus

5x Windows Server 2003 VMs performing as application servers running Apache Tomcat.

I'm wondering whether there is any performance benefit in connecting those two physical servers to the shared SAN through iSCSI as opposed to Fibre Channel.

I'm aware that FC is faster and more expensive, but in this case I won't be running any VM with a database server on it.

I'm looking to get 2x dual-port Gigabit Ethernet cards so that each server can have 2 Gbit/s of bandwidth into it.
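(Back-of-the-envelope, assuming the ideal case: 2 x 1 Gbit/s = 2 Gbit/s, which is roughly 250 MB/s of raw link bandwidth per server before TCP/iSCSI overhead; for comparison, a single 4 Gbit FC port is good for about 400 MB/s.)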

Shall I go down the path of iSCSI, or stick with FC, considering the SAN is running on 15k RPM SAS disks used by the two ESXi servers?

Please share some thoughts regarding this configuration.

Thanks,


I answered your question in the other thread... I didn't see this one right away. :)

I don't think you have enough servers to tax that SAN; you have a pretty good setup there. I think you'll be just fine with iSCSI, speed-wise. You just need to configure your Ethernet switch properly: get a switch that supports VLANs and trunk a dedicated VLAN for iSCSI to each server. If you can dedicate a NIC to iSCSI, even better; see the sketch below.
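Something like this on the switch side, as a minimal sketch, assuming a Cisco-style CLI (the VLAN ID of 100 and the port names are made up; substitute your own):

vlan 100
 name iSCSI
!
! Access port for one host's dedicated iSCSI NIC (repeat per host/NIC)
interface GigabitEthernet0/1
 description ESXi-host1 iSCSI
 switchport mode access
 switchport access vlan 100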

Check whether you can team NICs in VMware; I don't think you can. You can EtherChannel a few NICs, but you need to configure it perfectly on both the ESX box and the switch, and I'm not sure ESXi has all the options needed for that. If you do NIC teaming inside a VM, you will gain no benefit: only when the ESX NICs are spread across an EtherChannel do you get load balancing. Otherwise you just get two virtual NICs running to one physical NIC.
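If you do go the EtherChannel route, the switch side looks roughly like this (again a Cisco-style sketch with made-up port numbers; ESX of this generation won't negotiate LACP or PAgP, so the channel has to be statically "on", and the vSwitch teaming policy has to be set to "Route based on IP hash" in the VI Client):

! Static EtherChannel; ESX does not speak LACP/PAgP
interface range GigabitEthernet0/3 - 4
 switchport mode access
 switchport access vlan 100
 channel-group 1 mode on
!
! Hash on source+destination IP so traffic actually spreads across links
port-channel load-balance src-dst-ip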

Let us know what kind of switch you plan on putting behind those. If you say Dell, I'm leaving. :P Dell doesn't make switches.


You can team/bond any number of NICs within ESXi as long as you have the drivers for them, and they will be presented to the VMs as one. The iSCSI support is pretty extensive, too.
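For example, from the remote CLI (or the unsupported console), something along these lines sets up a teamed storage vSwitch and the software iSCSI initiator. The vSwitch, vmnic, and IP values here are placeholders; list your NICs with esxcfg-nics -l first:

# Create a vSwitch for storage and team two physical NICs onto it
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
# Add a port group and a VMkernel interface for iSCSI traffic
esxcfg-vswitch -A iSCSI vSwitch1
esxcfg-vmknic -a -i 192.168.100.11 -n 255.255.255.0 iSCSI
# Enable the software iSCSI initiator; add targets in the VI Client afterwards
esxcfg-swiscsi -e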

