h4x0r Posted April 12, 2009

Hi all, following up on the previous thread at http://hak5.org/forums/index.php?showtopic...3&st=0& (confirmed by Decepticon and Cthobs), I'm about to deploy VMware ESXi 3.5 on two servers that will share a SAN over iSCSI (2x teamed Gigabit Ethernet links).

Specs:

Dell PowerVault MD3000
- 10x 300 GB 15k rpm SAS
- 2x dual-port Gigabit Ethernet NIC (4 ports in total)

Dell PowerEdge 2950-III
- 2x Intel quad-core E5410
- 32 GB DDR2-667
- 4x 500 GB 7200 rpm SATA HDD internal (RAID 5) - I know it is slow for hosting the VMDKs
- Internal USB slot on the motherboard (but no USB flash disk?)

Here is the diagram: http://img25.imageshack.us/my.php?image=vmlan.jpg

Please let me know whether this makes sense and follows best practice. And one last thing: since I have about 1 TB spare on the internal SATA RAID, any ideas what I should do with it apart from installing the ~32 MB ESXi image? Thanks.
VaKo Posted April 12, 2009

In my similar setup (MD3000 replaced with Openfiler) I swapped the internal SAS disks in my 1950 IIIs for an internal USB stick to run ESXi from. For me this meant I could use the disks in another machine, and it also lowers the power consumption of the system. As for iSCSI, best practice dictates a separate fabric rather than VLANs. Your images aren't loading, by the way; try http://kimag.es/ instead.
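If it helps, this is roughly how I got ESXi 3.5 onto the stick from a Linux box. Treat it as a sketch only: the exact name and location of the raw image inside the installer ISO depends on the build (on some builds it's packed inside an install.tgz you have to extract first), and /dev/sdb is just an example, so triple-check which device your USB stick is before you dd anything.

# Mount the ESXi 3.5 Installable ISO and locate the embedded raw disk image
# (ISO filename below is a placeholder - yours will include the build number)
mkdir -p /mnt/esxi-iso
mount -o loop VMware-VMvisor-Installer-3.5.0.iso /mnt/esxi-iso
find /mnt/esxi-iso -name '*.dd.bz2'          # locate the image; extract install.tgz first if it's packed in there

# Write the image straight to the USB stick
# /dev/sdb is an assumption - check dmesg for your stick's device name!
bunzip2 -c /mnt/esxi-iso/VMware-VMvisor-big-3.5.0.i386.dd.bz2 | dd of=/dev/sdb bs=1M
sync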
h4x0r Posted April 12, 2009 (Author)

VaKo, here it is: http://kimag.es/share/84637437.jpg

Do you mean using a separate fibre-optic fabric? Could you please explain or draw it, as I'm confused here :-) Thanks for replying.
decepticon_eazy_e Posted April 13, 2009

He's talking about dedicated switches and cabling for iSCSI (in SAN terms any disk-access network is called a fabric, not necessarily optical). Best practice also says to get an iSCSI HBA, which is pretty unnecessary in a low-volume environment like this; if you start to see CPU spikes and bottlenecks, you can invest in one then. A dedicated iSCSI network gives you speed and security: you don't share the wire with anything, and the most secure network is a closed one, which is what you'd have. No security worries if nobody can reach the physical network.

My 2 cents: traffic measurements will show that nobody ever gets close to a gigabit; only big trunks on large networks reach that. Since you don't have to worry about bandwidth and you picked iSCSI because of cost, use it. If you wanted to spend the money on building a dedicated network for disk access, you would have picked Fibre Channel. Proper VLANs will give you proper security (by proper I mean configuring native VLANs, trunk limits, ACLs, etc.).

I think you have the right plan here; the important part is to draw everything out ahead of time. You will then see where your holes are and how to fill them.
h4x0r Posted April 13, 2009 (Author)

OK. My Dell PowerEdge 2950-III comes with 2x integrated Broadcom Gigabit Ethernet ports, and I'm adding an Intel dual-port Gigabit card for another two, so four ports in total per server. In the diagram I've colour-coded the SAN traffic blue and green, while the red lines are for management console access. Perhaps I can just remove all of the red lines (no dedicated management console) and use that pair for guest traffic from the network into the servers instead?
decepticon_eazy_e Posted April 13, 2009

Dedicated management is a good idea; more layers (like ogres). So you will have at least this many VLANs:

1x management
1x iSCSI
1x production traffic

Just keep a list like that going as you plan it out.
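On the ESXi side, each of those ends up as a port group with its own VLAN tag. A rough sketch from the console is below; the vSwitch names, VLAN numbers and uplink assignments are made-up examples, so substitute your own:

esxcfg-vswitch -a vSwitch1                        # second vSwitch for storage + management
esxcfg-vswitch -L vmnic2 vSwitch1                 # uplink that goes to the SAN/management switch
esxcfg-vswitch -A "Management" vSwitch1           # management port group
esxcfg-vswitch -v 10 -p "Management" vSwitch1     # tag it as VLAN 10
esxcfg-vswitch -A "iSCSI" vSwitch1
esxcfg-vswitch -v 20 -p "iSCSI" vSwitch1
esxcfg-vswitch -A "Production" vSwitch0           # guest traffic stays on the default vSwitch
esxcfg-vswitch -v 30 -p "Production" vSwitch0
esxcfg-vswitch -l                                 # list everything and sanity-check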
h4x0r Posted April 13, 2009 (Author)

Great. That means I can use two direct patch-cable connections to the SAN from each server and leave one cable for production access traffic. Here is the final diagram: http://img245.imageshack.us/img245/2832/iscsisanr.jpg

Thanks for all of your comments, guys. Cheers.
h4x0r Posted April 21, 2009 (Author)

To all: after reading up on using an iSCSI SAN with ESXi, it seems I also need to use a vSwitch so that the separate SAN subnet can communicate with my clients on the LAN subnet. The updated diagram is below; please correct me if I'm wrong.

Cheers, Albert
decepticon_eazy_e Posted April 21, 2009

You have to do quite a bit of work on the vSwitch; that's where you define the VLAN tagging and the iSCSI initiator. None of the guest OSes will see or know that the storage is iSCSI (or FC, or SATA, etc.). The clients/guest OSes should not have any interaction with the actual iSCSI network unless you have a very specific need to allow them in. Attach a VMkernel port to the existing vSwitch and configure the iSCSI initiator there; only the kernel of the ESXi box will communicate with the SAN.
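In command terms it boils down to something like the sketch below (via the RCLI vicfg-* equivalents or the unsupported console). The IPs, port group name and vmhba number are examples only; the software iSCSI adapter shows up under Storage Adapters in the VI Client, typically as vmhba32 or similar on your build:

esxcfg-vswitch -A "iSCSI-VMkernel" vSwitch1                          # port group for the kernel port
esxcfg-vmknic -a -i 192.168.20.11 -n 255.255.255.0 "iSCSI-VMkernel"  # VMkernel NIC on the SAN subnet
esxcfg-swiscsi -e                                                    # enable the software iSCSI initiator
vmkiscsi-tool -D -a 192.168.20.100 vmhba32                           # point dynamic discovery at the array's iSCSI portal
esxcfg-rescan vmhba32                                                # rescan so the new LUNs show up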
h4x0r Posted April 26, 2009 (Author)

Yes, that's very true :-) Anyway, I'll be implementing the SAN without a switch, as at the moment I don't have any plan to share the SAN with any other servers apart from the ESXi guest VMs. Thanks for your reply.