VMware Infrastructure - LAN configuration


Razorhog

Hello all - I am the network admin for a public school. I'm going virtual with my servers and could use some help, especially with the LAN configuration. I am getting two servers (Dell R710s, dual Xeon 5520s, 48 GB RAM each) and an MD3000i SAN (15 x 146 GB 15K RPM SAS drives). VMware Infrastructure Enterprise edition, VirtualCenter Foundation.

I don't have a gigabit switch, but plan on getting a gigabit copper module for my HP ProCurve 5308XL. That switch is the core switch for the entire district.

Would putting that module in the 5308XL and using it for iSCSI connections be OK? I know you should have two switches for redundancy, but I figure if the core switch goes down, everything goes down regardless. If that would be OK, do all of the iSCSI connections need to be on a separate subnet, with a vSwitch handling VLANs? Or are the VLANs handled by the 5308XL?

I have a lot of questions and things I need to get straight in my mind. Any help would be greatly appreciated. I'm a bit scatterbrained right now, so please let me know if you need more information about anything. None of this equipment has been ordered, so I can make modifications as needed. Thanks!


Alright, let's go through this one at a time.

Hello all - I am the network admin for a public school. I'm going virtual with my servers and could use some help, especially with the LAN configuration. I am getting two servers (Dell R710s, dual Xeon 5520s, 48 GB RAM each) and an MD3000i SAN (15 x 146 GB 15K RPM SAS drives). VMware Infrastructure Enterprise edition, VirtualCenter Foundation.

Perfect, that's a good start.

I don't have a gigabit switch, but plan on getting a gigabit copper module for my HP ProCurve 5308XL. That switch is the core switch for the entire district.

Good, those are nice switches. You won't be doing much here until you get that switch though...

Would putting that module in the 5308XL and using it for iSCSI connections be OK? I know you should have two switches for redundancy, but I figure if the core switch goes down, everything goes down regardless. If that would be OK, do all of the iSCSI connections need to be on a separate subnet, with a vSwitch handling VLANs? Or are the VLANs handled by the 5308XL?

Yes, iSCSI runs just fine on that. iSCSI should sit on a dedicated VLAN, that's correct. The vSwitch does not create VLANs at all; it tags packets on the way out of the physical box with whatever VLAN you assigned to the port group. So the switch port the server is directly attached to should be a trunk, with the appropriate VLANs allowed.
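
To make that concrete, here is a minimal sketch of the switch side on a ProCurve. The VLAN IDs (20 for iSCSI, 30 for vMotion) and the ports (A1 on one module, B1 on another) are made-up examples, not anything from your network:

vlan 20
   name "iSCSI"
   tagged A1,B1
   exit
vlan 30
   name "vMotion"
   tagged A1,B1
   exit

The production VLAN stays untagged on those same ports, and the matching VLAN IDs go on the ESX port groups.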

Redundancy. You are getting a switch with redundant power supplies and hot-swap modules, so you have the equivalent of multiple standalone switches; take advantage of that. The diagram Matt did in the last episode that showed all the switches separated and isolated is only best practice, and best practice and real-world possibilities only meet on Sundays for church; they rarely work together. Lump as many of the ports you have on each server together on the vSwitch; you will have the most failover that way. Put them all on as trunks on the HP switch. Spread the ports over as many modules as you can, so that if a backplane, module, or supervisor engine goes down, the server stays up. Do the same on the other server and the SAN and you should have a pretty bulletproof system.

As none of the equipment is ordered, you should add as many gigabit NICs to the servers as possible. If you feel that disk read/write access will be your bottleneck, swap out a NIC for an iSCSI TOE card. Check the VMware HCL (hardware compatibility list) before you order. I can't make that call for you since I don't know what will be running on there.

Swap out all of the modules in the 5308 for gigabit; there's no reason not to. Your core switch should not be 10/100.


First off, thank you for the reply!

I think there is a bit of confusion about the core switch. I already have the 5308XL; it just doesn't have a copper gigabit module in it yet. It is currently the core for my network (fiber modules connect campuses/buildings), and I'm not sure if putting a gigabit copper module in it for the ESXi servers/SAN is the right way to go about this project. Maybe getting a new separate gigabit switch would be better? I would only be putting in one 14-port copper module - would that even be enough? If each server has 6 NIC ports, that would eat up 12 right there. I might need two modules...

The following diagram is taken from Blue Gears: http://www.networkworld.com/community/node/36691

pNIC0 -> vSwitch0 -> Portgroup0 (VMKernel VMotion)

pNIC1 -> vSwitch0 -> Portgroup0 (VMKernel VMotion)

pNIC2 -> vSwitch1 -> Portgroup1 (VMKernel iSCSI)

pNIC3 -> vSwitch1 -> Portgroup1 (VMKernel iSCSI)

pNIC4 -> vSwitch2 -> Portgroup2 (VM Network)

pNIC5 -> vSwitch2 -> Portgroup2 (VM Network)

I'm still kind of fuzzy on the whole Portgroup thing, but I'm getting there.

Each vSwitch will assign a VLAN tag. The corresponding ports on the 5308XL will allow that VLAN traffic. Is that correct? If so, I don't understand what happens to the traffic after that - how does it get to the SAN?

Another option - with the MD3000i, it looks like I might be able to simply connect the two servers directly.

You can cable from the Ethernet ports of your host servers directly to your MD3000i RAID controller iSCSI ports. Direct attachments support single path configurations (for up to four servers) and dual path data configurations (for up to two servers) for both single and dual controller modules.

Thanks for any help/suggestions. This stuff is fun but complicated...


I'm still kind of fuzzy on the whole Portgroup thing, but I'm getting there.

Each vSwitch will assign a VLAN tag. The corresponding ports on the 5308XL will allow that VLAN traffic. Is that correct? If so, I don't understand what happens to the traffic after that - how does it get to the SAN?

Traffic gets to and from the SAN via VLANs; you need to configure them properly on the switch and the SAN as well as on the ESX servers. I recommend you study up on VLANs and figure out how to configure them on a ProCurve, as well as on the SAN, before you implement.
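
For the ESX side, the short version is that each host gets a VMkernel port for iSCSI that lives in that VLAN and subnet, and the SAN's iSCSI ports get addresses in the same subnet. Here is a rough sketch from the ESX service console, where vSwitch1, vmnic2, VLAN 20 and the 10.10.20.x address are placeholders rather than settings from your network:

esxcfg-vswitch -a vSwitch1                    # create a vSwitch for storage traffic
esxcfg-vswitch -L vmnic2 vSwitch1             # attach a physical NIC as an uplink
esxcfg-vswitch -A "iSCSI" vSwitch1            # add an iSCSI port group
esxcfg-vswitch -v 20 -p "iSCSI" vSwitch1      # tag that port group with the iSCSI VLAN
esxcfg-vmknic -a -i 10.10.20.11 -n 255.255.255.0 "iSCSI"    # VMkernel IP on the iSCSI subnet

Whether the switch ports facing the MD3000i should be tagged or untagged members of that VLAN depends on whether the array does its own VLAN tagging, so check Dell's documentation for that side.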

I think there is a bit of confusion about the core switch. I already have the 5308XL; it just doesn't have a copper gigabit module in it yet. It is currently the core for my network (fiber modules connect campuses/buildings), and I'm not sure if putting a gigabit copper module in it for the ESXi servers/SAN is the right way to go about this project. Maybe getting a new separate gigabit switch would be better? I would only be putting in one 14-port copper module - would that even be enough? If each server has 6 NIC ports, that would eat up 12 right there. I might need two modules...

Ok, I didn't realize you had the switch already. You need to decide if having the server and SAN traffic on the core is right for you. We can't see where your data goes, so we can't say. If all the data on those servers hits the core and needs to be gigabit, then I would say yes, put it on the core. If the majority of your traffic is server to server, or doesn't need to be delivered at top speed, then maybe offload it to another switch.

Our typical configuration/recommendation is two or more Cisco 3750s stacked. You then spread your NIC connections over the two or more switches for failover. Everything has at least two paths to everything else, so you have a pretty safe setup that way. HP has similar options.

Also, I would never recommend a direct-attached solution when you have a SAN. You bought a SAN with redundant iSCSI controllers for a reason; if you directly attach it to the servers, you might as well have bought a JBOD and saved the money. You will also have to completely rebuild your setup if you want to add a third server down the road. Always plan for expansion and you will never outgrow your hardware.


Traffic gets to and from the SAN via VLANs; you need to configure them properly on the switch and the SAN as well as on the ESX servers.

I was under the impression that different subnets could be used, rather than VLANs - am I wrong or do you have to do both?

Ok, I didn't realize you had the switch already. You need to decide if having the server and SAN traffic on the core is right for you. We can't see where your data goes, so we can't say. If all the data on those servers hits the core and needs to be gigabit, then I would say yes, put it on the core. If the majority of your traffic is server to server, or doesn't need to be delivered at top speed, then maybe offload it to another switch.

Our typical configuration/recommendation is two or more Cisco 3750s stacked. You then spread your NIC connections over the two or more switches for failover. Everything has at least two paths to everything else, so you have a pretty safe setup that way. HP has similar options.

Well, I like the idea of putting the iSCSI traffic on a different switch or switches. All of the switches in my network are HP; maybe a couple of 24-port 2810s would work. At this point I'm confused as to how the VMs will be visible to my LAN.

Also, I would never recommend a direct-attached solution when you have a SAN. You bought a SAN with redundant iSCSI controllers for a reason; if you directly attach it to the servers, you might as well have bought a JBOD and saved the money. You will also have to completely rebuild your setup if you want to add a third server down the road. Always plan for expansion and you will never outgrow your hardware.

Excellent advice, thank you.


I was under the impression that different subnets could be used, rather than VLANs - am I wrong or do you have to do both?

Well, I like the idea of putting the iSCSI traffic on a different switch or switches. All of the switches in my network are HP; maybe a couple of 24-port 2810s would work. At this point I'm confused as to how the VMs will be visible to my LAN.

Subnets and VLANs usually correspond, but they don't always have to. Best practice usually dictates no overlapping subnets, i.e. each subnet belongs to a specific VLAN. But I already said the thing about best practices...

You can have as many subnets in a single VLAN as you want; nothing will prevent you from configuring it that way. The problem lies in having broadcasts that belong to subnet X overlapping with subnet Y. For example, 10.10.10.10 is an address in subnet 10.x.x.x, but also in subnet 10.10.10.x, so which broadcasts should it respond to? VLANs segregate those broadcasts from each other (which is why a VLAN is called a broadcast domain) and mitigate that specific problem. It's up to you to plan it out correctly and avoid such problems. Putting iSCSI in a closed, dedicated VLAN makes sure that traffic does not overlap with, or get interfered with by, another subnet or VLAN. I explained that more thoroughly in another thread.
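
As a concrete example of the one-subnet-per-VLAN idea (the IDs and ranges here are placeholders, not a recommendation for your specific network):

VLAN 1    192.168.1.0/24    production LAN (your existing network)
VLAN 20   10.10.20.0/24     iSCSI - no default gateway, not routed anywhere
VLAN 30   10.10.30.0/24     vMotion - no default gateway, not routed anywhere

Each subnet lives in exactly one VLAN, none of the ranges overlap, and the storage and vMotion traffic stays walled off from the production broadcast domain.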

The VMs that will be visible to your network are up to you: your LAN has a VLAN, so put the VMs in that same VLAN and there you go. I will assume you planned ahead and didn't make VLAN 1 your production VLAN...

If you are confused by all this, put it all into a diagram and map it out specifically. If you plan it right, you can implement it right. If you plan it wrong, you can never implement it right.


The VMs that will be visible to your network are up to you: your LAN has a VLAN, so put the VMs in that same VLAN and there you go. I will assume you planned ahead and didn't make VLAN 1 your production VLAN...

VLAN 1 is the production VLAN. Isn't that the default setting? This network was already in place when I was hired, and only had the default VLAN. Does it really matter that VLAN 1 is the production VLAN?

If you are confused by all this, put it all into a diagram and map it out specifically. If you plan it right, you can implement it right. If you plan it wrong, you can never implement it right.

I plan on trying to map it out soon. Diagrams help, and all I've been able to find are generic diagrams in the setup documents.


VLAN 1 is the production VLAN. Isn't that the default setting? This network was already in place when I was hired, and only had the default VLAN. Does it really matter that VLAN 1 is the production VLAN?

VLAN 1 is always the default setting on every switch, which is exactly why you shouldn't use it on any of your switches.


Here is my beginning diagram.


Ok, that's a good start.

I would recommend a minimum of six ports per server with that setup: two for each of those "networks". Now take those two ports for each of those "networks" and put them on different modules on the core switch. This gives you a failover option on each network and on the physical switch, since the modules can be swapped out if they fail. Pick modules on separate backplanes (left and right) and you get failover from the backplane as well.

This is all best practice according to VMware, but I'll throw out a different solution. Take the same 6 ports on each server, join them to the same vSwitch and use port groups on the vSwitch to divide up the "networks". Now instead of 2 physical ports for failover, you have 6. Meaning 5 physical ports on the server, or on the switch, can fail before you have an outage on the server. Much better, yes?

Those ports on the switch need to be trunks, with a VLAN for each of the "networks". The SAN would reside in your iSCSI VLAN, and nothing but those trunk ports from the servers will exist in your vMotion VLAN. The vMotion "network" will get its own range of IPs, which is trivial, but no default gateway is set. That ensures that traffic can't get in or out of that subnet/VLAN/"network".
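
Putting the last two paragraphs together, a rough sketch from the ESX service console would look something like the following. The vmnic numbers, VLAN IDs and the vMotion IP are placeholders, and if your production VLAN arrives untagged you would leave its port group with no VLAN ID at all:

# link all six physical NICs to the default vSwitch (vmnic0 is usually already attached; repeat for the rest)
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0
# one port group per "network", tagged with its VLAN where needed
esxcfg-vswitch -A "Production" vSwitch0
esxcfg-vswitch -A "iSCSI" vSwitch0
esxcfg-vswitch -v 20 -p "iSCSI" vSwitch0
esxcfg-vswitch -A "vMotion" vSwitch0
esxcfg-vswitch -v 30 -p "vMotion" vSwitch0
# VMkernel IP for vMotion - note that no default gateway is configured for it
esxcfg-vmknic -a -i 10.10.30.11 -n 255.255.255.0 "vMotion"

Enabling vMotion on that VMkernel port, and choosing which uplinks are active or standby for each port group, is done in the VI Client rather than from the command line.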

I assume there's some kind of management port on the SAN; if that's rolled up on the same ports, you'll have to figure out the trunking there. I guarantee the SAN manufacturer provides some solution for this.

Your production "network" will be VLAN 1, since that's what you started with already. Your iSCSI and vMotion VLANs will be something else between 2 and 4094.

If you have the ports available, I recommend you put the SAN and the servers on the core switch. It's your fastest and most resilient switch, with the best failover options. Either way, put the SAN and the servers on the same switch to eliminate as much latency as possible.

Research "vlan hopping" to explain why you should never use vlan 1 in your production network.

