Computer Build


Sprouty

Hi,

I'm looking to build a new computer, mainly to host VMs running under Ubuntu Desktop and VirtualBox, and I'm looking for some advice.

I'm not too sure what I should be looking for hardware-wise, and I'm hoping for some input from the forums. I've got a short list of the hardware I was thinking of:

https://docs.google.com/spreadsheet/ccc?key=0Ag4csyckunWvdDBzVDZwUG9ha0N1dkItTmpua0VZZlE

I was hoping to reuse the power supply and case from my current machine. Just wondering if you have any thoughts, or whether you would swap certain parts in or out?

I would be grateful for any advice.

Many Thanks,

Sprouty


What are you storing the VMs themselves on?

Stay simple on the motherboard; maybe use an H77 instead of a Z77 if you don't want to overclock. I would also really take a hard look at the CPU and consider the i7-3770 (non-K) if you are primarily using it for VMs. Also, get a good NIC (the Intel CT gigabit is fine at under $30).

Edited by hexophrenic

I'd like to try to build myself a little test lab out of the VMs, along with using some of them for development and staging.

I guess the boost in performance from the CPU is worth the extra cash, but would the motherboard be?

Many Thanks,

Sprouty


I am currently running a third lab server for pentesting on an AMD A8-3850 Llano APU / Gigabyte A75M-UD2H mobo with 32GB RAM (total price about $300 including HDD and Intel NIC), and it currently has 9 virtual machines running on it 24/7 with no problems. The hardest thing to get right when selecting the hardware, other than the NICs, was the SATA chipset. Honestly this thing runs more smoothly than my 1U servers :)

Alan


The main bottleneck to consider with VMs, regardless of CPU and mobo (although you do want one with a VM support track record), is disk I/O: each VM's virtual disk is a file stored on your HDDs. Having a RAID setup helps, as does putting each VM on a different disk drive when they all run at the same time. One drive with 5 VMs running simultaneously will be your main bottleneck. For obvious reasons, more RAM and CPU mean better performance, but don't forget disk I/O when considering virtualization of any kind, as that's where your biggest performance hurdles will be the more VMs you load at one time.
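The disk throughput point above is easy to sanity-check before committing VM images to a drive. A minimal sketch with dd, assuming a throwaway file path on the drive you want to measure (adjust the path to your target disk):

```shell
# Rough sequential-write check of a candidate VM drive.
# conv=fsync flushes the data to disk before dd reports its rate,
# so the number reflects the drive rather than the page cache.
dd if=/dev/zero of=/tmp/vm-io-test bs=1M count=64 conv=fsync
rm /tmp/vm-io-test
```

This only measures sequential writes; several VMs running at once generate mostly random I/O, which will be considerably slower on a single spinning disk.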


Personally, when spec'ing a machine for hosting a set of VMs, there are a number of considerations to be made. The first is the number of cores available; in fact, I would suggest you will do better to sacrifice some GHz from a CPU if it gains you more cores (if you can get both cores and speed in your price range, then great).

The second factor to consider is storage. Do you need size or performance? If you are planning on a number of small servers (say, each requiring 10GB - 20GB of disk space) then SSDs could be the best way forward. If, on the other hand, your machines will need a large quantity of disk space, then you should consider going with RAID, both to boost performance and to give yourself some redundancy in case a disk fails (though you will have good backup procedures in place for all your VMs, won't you?).

The third factor is memory, which is quite easy: just total up the quantity you want for your initial set of VMs, add some for your host, and then add a bit more. E.g. if you want to have 6 VMs with 2GB of memory each, then you would need at least 12GB + 2GB for the host + 6GB (enough for another 3 VMs). This gives you a minimum requirement of 20GB; rounding up to 32GB gives you plenty of memory for your VMs and is also an easy quantity to purchase.
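The sizing rule above can be written out as simple arithmetic, using the figures from the example (6 guests at 2GB each, 2GB reserved for the host, headroom for 3 more guests):

```shell
# Memory sizing from the worked example above; all figures are the
# example's assumptions, not fixed requirements.
GUESTS=6
GB_PER_GUEST=2
HOST_GB=2
HEADROOM_GB=6          # room for another 3 guests at 2 GB each
MINIMUM_GB=$((GUESTS * GB_PER_GUEST + HOST_GB + HEADROOM_GB))
echo "Minimum: ${MINIMUM_GB} GB"   # 20 GB, rounded up to 32 GB in practice
```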

The final factor to consider is networking. This depends on what you require your VMs to do and what your current networking infrastructure is like. If your VMs will be streaming video out across your network, then you may want multiple gigabit NICs in the machine. On the other hand, if they aren't going to be network-intensive, then they could all share the one gigabit NIC on the motherboard. Finally, if you only have a 100Mb network, then you would want multiple NICs again, as it would be very easy for one VM to saturate the single network connection (effectively DoSing the rest of your VMs).
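Since the OP is on VirtualBox, per-VM NIC assignment can be done from the command line with VBoxManage. A rough sketch, where the VM name "staging" and the host interface "eth0" are placeholders for whatever your setup actually uses:

```shell
# Bridge the first virtual NIC of a hypothetical "staging" VM onto the
# host's eth0, so the guest gets its own address on the physical network.
VBoxManage modifyvm "staging" --nic1 bridged --bridgeadapter1 eth0

# Confirm the NIC configuration took effect.
VBoxManage showvminfo "staging" | grep "NIC 1"
```

With multiple physical NICs you can bridge different VMs onto different adapters, which is one way to avoid the saturation problem described above.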

If you are planning on leaving your VMs running permanently, then I would recommend investigating alternatives to VirtualBox. Xen, VMware and KVM are all very good options. Personally I have found Xen to be very easy and reliable, but Linux seems to be heading down the KVM route these days, so that could be a good way to go for the experience.
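One thing worth checking before going the KVM (or Xen HVM) route: whether the CPU you're buying, and the one you have, advertise hardware virtualization. On a Linux host this is a one-liner:

```shell
# A non-zero count means the CPU advertises hardware virtualization
# (Intel VT-x shows up as "vmx", AMD-V as "svm"), which KVM requires.
# "|| true" keeps the command from failing when the count is zero.
grep -Ec '(vmx|svm)' /proc/cpuinfo || true
```

Note that some motherboards ship with these features disabled in the BIOS, so a zero here may just mean a toggle needs flipping rather than an unsupported CPU.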

