
Sharing SAN running ESXi 3.5 using iSCSI on the LAN


h4x0r


Hi all, following up on the previous thread at http://hak5.org/forums/index.php?showtopic...3&st=0& which was confirmed by Decepticon and Cthobs: I'm about to deploy VMware ESXi 3.5 on two servers that will share the SAN using iSCSI (2x Gigabit Ethernet, teamed).

Specs:

Dell PowerVault MD3000

10x 300 GB SAS 15k rpm

2x dual-port Gigabit Ethernet NICs (4 ports in total)

Dell PowerEdge 2950 III

2x Intel quad-core Xeon E5410

32 GB DDR2 667 MHz

Internal 4x 500 GB SATA 7200 rpm HDDs (RAID 5) - I know they're slow for hosting the VMDKs

Internal USB slot on the motherboard (but no USB flash disk included)

Here is the diagram: http://img25.imageshack.us/my.php?image=vmlan.jpg

Please let me know if this makes sense and follows best practice.

One last thing: since I have a spare 1 TB on the internal RAID 10 SATA drives, any ideas on what I should do with it apart from installing the ~32 MB ESXi?

Thanks.


In my similar setup (MD3000 replaced with Openfiler) I replaced the internal SAS disks in my 1950 IIIs with an internal USB stick to run ESXi from. For me this meant I could use the disks in another machine, and it also lowers the power consumption of the system.

As for iSCSI, best practice dictates a separate fabric rather than VLANs.

Your images aren't loading btw, try http://kimag.es/ instead.


VaKo,

Here it is: 84637437.jpg

or at: http://kimag.es/share/84637437.jpg

Do you mean using a separate fibre-optic fabric?

Could you please explain or draw it, as I'm confused here :-)

Thanks for replying.



He's talking about dedicated switches and wires for iSCSI (in SAN terms any disk-access network is called a fabric, not necessarily an optical one). Best practice also says to get an iSCSI HBA, which is pretty unnecessary in a low-volume environment such as this. If you start to see CPU spikes and bottlenecks, you could invest in one then.

A dedicated iSCSI network gives you speed and security: you don't share the wire with anything. The most secure network is a closed one, which is what you have - no security worries if nobody can get to the physical network.

My 2 cents: traffic measurements will show that nobody ever gets close to a gigabit; only the big trunks on large networks reach a gig. Since you don't have to worry about bandwidth and you picked iSCSI because of cost, use it - if you had wanted to spend the money on a dedicated network for disk access, you would have picked Fibre Channel. Proper VLANs will give you proper security (by proper I mean configuring native VLANs, trunk limits, ACLs, etc.). I think you have the right plan here; the important part is to draw everything out ahead of time. You will then see where your holes are and how to fill them. A rough sketch of the VLAN side of that on the ESXi host follows below.
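If you do go the VLAN route instead of a physically separate fabric, the ESXi side of the separation is just a dedicated vSwitch with a tagged port group for storage traffic. A minimal sketch, assuming the ESX-style esxcfg commands (reachable on ESXi 3.5 through the unsupported tech support mode, or via the equivalent remote CLI vicfg-* commands) - the vSwitch1 name, the vmnic2/vmnic3 uplinks and VLAN 20 are all placeholder assumptions, not taken from your diagram:

    # create a vSwitch dedicated to storage and attach the two Intel ports to it
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1
    esxcfg-vswitch -L vmnic3 vSwitch1

    # add an iSCSI port group and tag it with its own VLAN (20 is just an example)
    esxcfg-vswitch -A "iSCSI" vSwitch1
    esxcfg-vswitch -v 20 -p "iSCSI" vSwitch1

The same VLAN ID obviously has to exist on the physical switch ports those uplinks plug into.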


OK,

Since my Dell PowerEdge 2950 III comes with 2x integrated Broadcom Gigabit Ethernet ports, and I'm adding the Intel card as 2 additional Gigabit ports, I have a total of 4 ports per server.

I've colour-coded the SAN traffic blue and green, while the red line is for management console access. In that case, perhaps I can just remove all of the red lines (no dedicated management console) and make another pair for guest traffic from the network into the servers?
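For what it's worth, once ESXi is up you can check how those four ports enumerate (the onboard Broadcoms versus the add-in Intel card) before deciding which vmnic carries SAN, management, or guest traffic. A quick check, again assuming console or remote CLI access to the hosts:

    # list physical NICs with driver, speed and link state
    esxcfg-nics -l

    # show which vmnics are uplinked to which vSwitch and port group
    esxcfg-vswitch -l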


Dedicated management is a good idea - more layers (like ogres). So you will have at least this many VLANs:

1 Management

1 iSCSI

1 production traffic

Just keep a list like that going as you plan it out - a sketch of how those could map to tagged port groups is below.
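As a concrete version of that list, the three could look something like this as tagged port groups on each host. The VLAN IDs 10/20/30 are arbitrary placeholders, and "Management Network" / "VM Network" are just the default ESXi port group names - adjust everything to what you actually build:

    # management, storage and production traffic on separate VLANs
    esxcfg-vswitch -v 10 -p "Management Network" vSwitch0
    esxcfg-vswitch -v 20 -p "iSCSI" vSwitch1
    esxcfg-vswitch -v 30 -p "VM Network" vSwitch0

    # verify port groups, VLAN IDs and uplinks
    esxcfg-vswitch -l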


Great, this means I can use 2x direct patch-cable connections to the SAN from each server and leave just one cable for production access.

Here is the final diagram: http://img245.imageshack.us/img245/2832/iscsisanr.jpg

Thanks for all of your comments, guys.

Cheers.


To All,

After reading around the internet about using an iSCSI SAN with ESXi, it seems that I also need to use a vSwitch so that the separate SAN subnet can communicate with my clients on the LAN subnet.

So here is the diagram below; please correct me if I'm wrong.

Cheers,

Albert

[Attached diagram: post-13561-1240288661_thumb.jpg]


You have to do quite a bit of work on the vSwitch - that's where you define the VLAN tagging and the iSCSI initiator. None of the guest OSes will see or know that it's iSCSI (or FC, or SATA, etc.). The clients/guest OSes should not have any interaction with the actual iSCSI network unless you have a very specific need to let them in.

Attach a VMkernel port to the existing vSwitch and configure the iSCSI initiator there; only the VMkernel of the ESXi box will communicate with the SAN.
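A minimal sketch of that VMkernel-port-plus-initiator step, again assuming console or remote CLI access. The IP addresses, the "iSCSI" port group on vSwitch1, and the vmhba32 adapter name are placeholders (the software initiator's vmhba number varies per host), so adapt them to your actual storage subnet:

    # give the VMkernel an interface on the storage network
    esxcfg-vswitch -A "iSCSI" vSwitch1
    esxcfg-vmknic -a -i 192.168.20.11 -n 255.255.255.0 "iSCSI"

    # enable the software iSCSI initiator and point it at the array
    esxcfg-swiscsi -e
    vmkiscsi-tool -D -a 192.168.20.100 vmhba32

    # rescan so the new LUNs show up as VMFS candidates
    esxcfg-rescan vmhba32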


Yes, that's very true :-)

Anyway, I'll be implementing the SAN without a switch, as at the moment I have no plans to share the SAN with any servers other than the ESXi hosts and their guest VMs.

Thanks for your reply.
