h4x0r Posted April 8, 2009

Hi all, I'm in the process of getting new virtualization technology working in my office using:

- 2x Dell PowerEdge 2950 III, each installed with an iSCSI controller, accessing the following SAN
- 1x Dell PowerVault MD3000 (10x 300 GB 15k rpm SAS)

VMware ESXi is installed on internal USB and loads:

- 4x Solaris VMs (serving project and home directories, a Samba file server, a source code repository, and compiling the project source as a build server too)
- 5x Windows Server 2003 VMs acting as application servers running Apache Tomcat

I wonder if there is any performance benefit in implementing the shared SAN for those two physical servers through iSCSI as opposed to Fibre Channel. I am aware that FC is faster and more expensive, but in this case I won't run any VM with a database server on it. I'm looking to get 2x dual-port Gigabit Ethernet cards so that each server has 2 Gb/s of bandwidth into it. Should I go down the iSCSI path or stick with FC, considering the SAN is running 15k rpm SAS and is used by two ESXi servers?

Please share some thoughts on this configuration. Thanks,
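For a rough sense of scale, here's a back-of-envelope comparison of the two transports against the array itself. All figures below are illustrative assumptions (typical 2009-era numbers), not benchmarks of this hardware:

```python
# Back-of-envelope throughput comparison; all constants are assumptions.
GBE_LINK_MBPS = 1000      # one Gigabit Ethernet link, line rate
FC_4G_MBPS = 4000         # a 4 Gb Fibre Channel link (common at the time)
ISCSI_EFFICIENCY = 0.85   # assumed TCP/IP + iSCSI protocol efficiency
FC_EFFICIENCY = 0.95      # assumed FC framing efficiency

# Two dedicated GigE links per host for iSCSI
iscsi_usable_mbps = 2 * GBE_LINK_MBPS * ISCSI_EFFICIENCY

# One 4 Gb FC link per host
fc_usable_mbps = FC_4G_MBPS * FC_EFFICIENCY

# Rough sequential ceiling of one 15k rpm SAS disk (~100 MB/s = 800 Mb/s),
# times ten disks, ignoring RAID and controller overhead
array_ceiling_mbps = 10 * 800

print(f"iSCSI usable:  {iscsi_usable_mbps:.0f} Mb/s")
print(f"FC usable:     {fc_usable_mbps:.0f} Mb/s")
print(f"Array ceiling: {array_ceiling_mbps:.0f} Mb/s")
```

The point of the sketch: with only two hosts and no database workload, the two-link iSCSI path is plausible, and random I/O on the spindles will usually be the bottleneck before the network is.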
decepticon_eazy_e Posted April 8, 2009

I answered your question in the other thread... I didn't see this one right away. :) I don't think you have enough servers to tax that SAN; you have a pretty good setup there. I think you'll be just fine with iSCSI, speed-wise. You just need to configure your Ethernet switch properly: get a switch that supports VLANs and trunk a dedicated VLAN for iSCSI to each server. If you can dedicate a NIC to iSCSI, even better. Check whether you can team NICs in VMware; I don't think you can. You can etherchannel a few NICs, but you need to configure it perfectly on both the ESX box and the switch, and I'm not sure ESXi has all the options needed for that. If you do NIC teaming inside a VM, you will gain no benefit. Only when the ESX NICs are spread across an etherchannel do you get load balancing.
Otherwise you just get two virtual NICs running to one physical NIC. Let us know what kind of switch you plan on putting behind those. If you say Dell, I'm leaving. :P Dell doesn't make switches.
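For what the switch side of that advice might look like, here is a minimal sketch on a Cisco-style switch. The interface names, VLAN ID, and channel number are all made up for illustration; the one real constraint is that classic ESX/ESXi IP-hash load balancing needs a static etherchannel ("mode on"), not LACP:

```
! Hypothetical Cisco IOS sketch; interface names and VLAN ID are assumptions
vlan 100
 name iSCSI
!
! Static etherchannel for the ESX host's two iSCSI uplinks
! (ESX IP-hash teaming requires channel-group mode "on", not LACP)
interface range GigabitEthernet0/1 - 2
 switchport mode access
 switchport access vlan 100
 channel-group 1 mode on
!
interface Port-channel1
 switchport mode access
 switchport access vlan 100
```

With a dedicated iSCSI VLAN like this, storage traffic never competes with VM traffic on the same broadcast domain, which is the main win the post above is describing.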
Cthobs Posted April 8, 2009

You can team/bond any number of NICs within ESXi as long as you have the drivers for them, and they will be presented to the VMs as one. And the iSCSI support is pretty extensive.
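As a sketch of how that teaming is set up on classic ESX/ESXi of that era (3.x/4.x), the host-side commands look roughly like this. The vmnic numbers and port group name are assumptions for this example:

```shell
# Sketch for classic ESX/ESXi (3.x/4.x era); vmnic numbers are assumptions.
# Create a vSwitch for storage traffic and attach two physical uplinks;
# ESXi then presents them to port groups as a single teamed switch.
esxcfg-vswitch -a vSwitch1          # add a new virtual switch
esxcfg-vswitch -L vmnic2 vSwitch1   # link the first physical uplink
esxcfg-vswitch -L vmnic3 vSwitch1   # link the second physical uplink
esxcfg-vswitch -A iSCSI vSwitch1    # add an "iSCSI" port group
esxcfg-vswitch -l                   # list switches to verify the layout
```

The load-balancing policy (e.g. route based on IP hash, to match the switch-side etherchannel) is then set on the vSwitch or port group properties in the VI client.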
h4x0r Posted April 12, 2009 (Author)

Thanks for the replies, guys. It's all clear now: I'll use ESXi with iSCSI. Sorry for the late reply due to the Easter holiday. ;)