[Enterprise] VMware ESXi 3.5


buzzinh

At work we are thinking about getting a Dell server with the ESXi hypervisor, and we need to figure out what configuration of storage/server we need.

What we are trying to achieve:

- We are going to be hosting several Windows Server 2003 VMs for storing staff user areas, shared document areas, user profiles and possibly user application data.

- Run other VMs such as a pfSense/FREESCO router and Windows XP test machines for building MSIs, etc.

What we are unsure of:

- The best configuration for a server of this type, given that it's going to be serving files used by roughly 1,700 people.

We initially thought about a Dell PowerEdge 2900 III with 5x 300GB 15K SAS disks and 4-8GB of RAM... but is this going to be capable of doing the job?

I know it's a bit open-ended, but if there is anyone out there with experience in this area, any kind of advice would be greatly welcomed.

This solution will need to do the job (we are having speed issues at the moment and the money holders want things to be faster, plus we want a VM solution), so that's why we are asking for any help or experience anyone can offer.

Cheers

Ollie

'BuzzinH'


We've built a very nice solution for a standalone ESXi box based around a Dell PowerEdge 2950 III with 2x 2.5GHz quad-core CPUs, 16GB of RAM, 2x 146GB 15K SAS in RAID1 and 4x 300GB 10K SAS in RAID5. It only cost £4K, so it's the cheap option, but for 1,700 users it might be pushing it, depending on your I/O load.
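To get a feel for whether an array like that can keep up, it helps to do the back-of-envelope maths on usable capacity and write-penalty-adjusted IOPS. This is only a rough sketch; the per-disk IOPS figure is an assumed ballpark for 10K SAS drives of that era, not a measured value.

```python
# Rough sizing sketch for the RAID sets mentioned above.
# Per-disk IOPS (130 for 10K SAS) is an assumption, not a measurement.

def usable_gb(disks, size_gb, level):
    """Usable capacity for a simple single-group RAID set."""
    if level == "raid1":
        return size_gb              # two-disk mirror: capacity of one disk
    if level == "raid5":
        return (disks - 1) * size_gb  # one disk's worth lost to parity
    raise ValueError(level)

def effective_iops(disks, per_disk_iops, level, write_fraction):
    """Host-visible IOPS after the RAID write penalty (RAID1: 2, RAID5: 4)."""
    penalty = {"raid1": 2, "raid5": 4}[level]
    raw = disks * per_disk_iops
    return raw / (1 - write_fraction + write_fraction * penalty)

print(usable_gb(4, 300, "raid5"))                    # 900 GB usable
print(round(effective_iops(4, 130, "raid5", 0.3)))   # ~274 IOPS at 30% writes
```

With a 30% write mix, four 10K spindles in RAID5 only deliver a few hundred effective IOPS, which is the kind of number to sanity-check against 1,700 users hammering file shares.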

A better option would be to use a few dual quad-core 1950s or R805s and have a PowerVault MD3000i for storage using iSCSI, unless you need the bandwidth of Fibre Channel. If you integrate this into VirtualCenter you can do all the cool shit like VMotion, HA and so forth.

If you fancy spending a bit more, look at the Dell EqualLogic kit: you can get a basic model for around £25-£30K, so still cheap, but it does all the fancy EMC-style stuff.


Check your hardware against the HCL (hardware compatibility list); it gets updated weekly.

http://www.vmware.com/resources/compatibil...Category=server

The internal RAID group is unnecessary. Use a couple of small, low-RPM drives and mirror them; 2x 36GB or 18GB drives is plenty. If that dies and you have to reinstall ESXi, you are down an hour or so, tops. The OS drives are only accessed during boot-up, and the entire OS is loaded into RAM, so there's no access-time latency (no need for fast/redundant disks!). VMware says you can do this off a USB drive; I wouldn't put a production system on a USB drive, no matter what they say.

If you want HA, VMotion or Storage VMotion, you need to buy the license and attach the servers to a SAN. That way the guest OSes reside on the SAN and are accessible to the other ESX servers. ESXi does not offer those functions out of the box.

When you get to the point of putting this in production and trusting it, upgrade to the full ESX 3.5. You'll have to go through a vendor, and at that point you can look into all the options and hardware required.

Keep the questions coming; I work for a consulting company, and VMware installations are our biggest money-maker.


The last Dell rep I spoke to was talking about add-on SD cards for 2950s and 1950s which let you forgo the hard disks altogether. And I know some of the newer Dell boxes come with internal USB ports for this purpose.

Out of interest, what are the benefits of ESX over ESXi? As far as I can see, the main push seems to be with ESXi, especially as a lot of servers come with ESXi embedded. What do I gain by going with ESX?



1. ESX 3.x has a full Linux operating system behind it; ESXi does not. ESXi has a hidden CLI and you can enable SSH, but the options there are pretty limited, and totally undocumented and unsupported. The back end of ESX 3.x lets you do everything via the CLI that the GUI does, and some features that have not made it to the GUI are available in the CLI; Storage VMotion is one of them. Why would anyone want this? A couple of reasons. Disaster recovery: what if you only have a telnet/SSH option from the outside? Another is scripting: you can automate an ESX install, which is nice when rolling out a cluster of servers that will all have similar settings. This is also the reason ESXi can fit on a USB drive and ESX 3.5 cannot.
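The scripting point above can be sketched like this. `vmware-cmd` is a real ESX 3.x service-console tool, but the datastore path, VM name and the idea of building commands for an SSH wrapper are illustrative assumptions, not anything from this thread:

```python
# Hypothetical sketch: building ESX 3.x service-console command lines
# that a script could push over SSH. Paths/names are made-up examples.

def vm_command(vmx_path, action):
    """Build a vmware-cmd line for one registered VM."""
    allowed = {"start", "stop", "suspend", "getstate"}
    if action not in allowed:
        raise ValueError(f"unsupported action: {action}")
    return f"vmware-cmd {vmx_path} {action}"

def list_vms_command():
    """vmware-cmd -l lists the .vmx path of every registered VM."""
    return "vmware-cmd -l"

cmd = vm_command("/vmfs/volumes/datastore1/dc01/dc01.vmx", "start")
print(cmd)  # vmware-cmd /vmfs/volumes/datastore1/dc01/dc01.vmx start
```

Feed strings like these to an SSH wrapper against the service console and you can automate bulk VM operations across a cluster, which is exactly what the standalone ESXi CLI doesn't really support.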

2. The big features: ESX gives you VMotion, HA, Storage VMotion, a bigger HCL, cloning and templates, Consolidated Backup, Update Manager, snapshots, etc. ESXi does not; you can clone in a half-assed way, but it's not nearly as efficient. VirtualCenter is what makes all of that possible. If you were to upgrade from ESXi, you would buy a VirtualCenter license and install it on a Server 2003 VM. When you connect to the VC via your client, you get all these options (license allowing, of course!).

3. Another big reason is support. ESXi does not have formal support; the help you will find is in forums and white papers, but it's officially unsupported. With ESX you get a support contract from VMware and a 1-800 number to call. It seems like a trivial perk, but at 2am on a Monday night it's better than scouring Experts Exchange or whatever top hit Google spits out while you search for the fix to get ALL your servers back online.

If you want to put this in production and depend on it, you need to move up to the full version. The cost is pretty significant, but then again the requirements are multi-core, big-RAM, redundant servers and a SAN, which are not cheap either. If your business can afford those things, it should be able to afford this.

Their big push is to get the hypervisor out there in the real world and get everybody comfortable with it, kind of like a crack dealer: the first hit is free, the next one will cost ya! Same thing with VMware Server; it's the free counterpart of Workstation, which can do much more.


OK, when I talk about ESXi I basically mean ESXi + vCenter with all the correct licenses. I know standalone ESXi is a poor man's solution, but is it limited in the same fashion when implemented in a full Virtual Infrastructure?


  • 1 month later...
Are there any significant benefits to using iSCSI (2x 1Gb Ethernet) vs. Fibre Channel, apart from price?

Is there a difference in the disks behind those technologies? The R/W speed and RPM of the drives will make more of an impact.

FC runs at 1, 2, 4 or 8Gbps... so which FC speed are you comparing that 1Gbps Ethernet to?

iSCSI is great for keeping costs down; the switches and infrastructure are much cheaper than FC. However, if you just put it in and forget about it, you'll pay for it. iSCSI runs over TCP/IP on ordinary Ethernet, so it is susceptible to the same problems: congestion, dropped packets, QoS, broadcasts, etc. You need to plan an iSCSI implementation more carefully. You need VLANs and ACLs; you really don't want that traffic running on the same wire as your WWW traffic. IP addresses can be spoofed, DoS'd, rerouted, and all the other fun things we discuss on this forum. If you take all that into consideration when designing the data centre, you should have no problem.
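A trivial way to enforce the "keep iSCSI on its own subnet/VLAN" rule is to check initiator and target addresses against the dedicated storage subnet. The 10.10.50.0/24 subnet and the addresses here are made-up examples:

```python
# Sanity check for iSCSI network isolation. Subnet and IPs are hypothetical.
import ipaddress

STORAGE_NET = ipaddress.ip_network("10.10.50.0/24")  # assumed dedicated iSCSI VLAN

def on_storage_net(ip):
    """True if the address belongs to the dedicated iSCSI subnet."""
    return ipaddress.ip_address(ip) in STORAGE_NET

print(on_storage_net("10.10.50.12"))   # True  - SAN-facing NIC, OK
print(on_storage_net("192.168.1.40"))  # False - LAN address, keep it off the SAN VLAN
```

A check like this belongs in whatever provisioning script hands out SAN-facing addresses, so storage traffic never ends up mixed with general LAN traffic by accident.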

FC is a bit easier for the inexperienced to implement. Zoning and aliasing really should be done and in place before production; if you don't do those things it will still work, but it could work better and more easily. FC usually stays contained within your FC switches, which are generally protected by a lock on the door. That's a bit more secure than iSCSI, but you still need to protect it.

VMware guest OSes will see all disks as directly attached SCSI drives; there is no difference. ESX will need to be configured appropriately for either technology. If you don't use an iSCSI HBA in the ESX box (some kind of TOE card), the CPU must do all that processing. FC HBAs do the processing on the HBA, so there's much less CPU load. How much less depends on how much disk access you generate.



Decepticon, your suggestions and explanations are great.

I'm planning to use 2x Gigabit Ethernet per server to access the SAN (15K RPM SAS drives).

The plan is to create a separate network subnet spanning the servers, the SAN and the two Gigabit switches.

