
A few Beowulf puzzles


Sparda


OK, so it's fairly obvious that distributed processing is a good idea, since ten fast computers are far more financially efficient than buying one really fast computer (and you avoid putting all your eggs in one basket). However, to make full use of the processor power available, it's a good idea to run two or three servers on the cluster using Xen. Again, that's easy to understand; it's the network implications that I'm lost about.

Say this cluster is running three servers for a big network, and you want each server to have its own IP address (obviously). How would you achieve this?

Would you, for example, have four network cards in the master node: one to connect to the network that the nodes are on (since they should be on their own network to increase security), and the other three so that each of the three servers has its own unique IP address and MAC address?


I'm not sure I understand your question, but I'll give an answer a shot anyway.

To run the cluster you're going to need a master node that distributes the load to the slave nodes. That master node could have one, two, or more NICs in it. If it only has one, the slaves would theoretically be visible to the outside world. If you have two NICs, you could set it up just like a NAT router. The master node would have to understand some way of distributing the work to the slaves (e.g. round robin).
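For what it's worth, a two-NIC master node acting as a NAT router is just standard Linux forwarding plus masquerading. A minimal sketch (the interface names are assumptions: eth0 faces the outside world, eth1 faces the slave network; run as root):

```shell
# Turn on IP forwarding so the master routes between its two NICs.
echo 1 > /proc/sys/net/ipv4/ip_forward

# Rewrite the slaves' source addresses to the master's outside address.
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Allow the slaves' outbound traffic, and replies back in.
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
```

With this in place the slaves sit behind the master and aren't directly reachable from outside; the round-robin distribution itself would be done by whatever cluster software you run on top.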

I hope that answers your question. If it doesn't, could you please explain a bit more about what you're trying to understand?

Ben


No, what I want to know is how you would give each Xen session its own IP address, and preferably its own MAC address.

Wouldn't this already be implied, since each NIC has its own MAC? You would then be required to statically assign the unique IP.
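For reference, Xen can give each guest's virtual NIC its own MAC without extra physical cards: dom0 bridges the virtual interfaces onto the real NIC. A sketch using Xen 3.x's `xm` tool (the guest name `server1` and the MAC are made-up examples; `00:16:3e` is the OUI reserved for Xen guests):

```shell
# Attach a virtual NIC with a fixed MAC to a running guest (run as root in dom0).
xm network-attach server1 mac=00:16:3e:00:00:01 bridge=xenbr0

# Or set it permanently in the guest's config file, e.g. /etc/xen/server1.cfg:
#   vif = [ 'mac=00:16:3e:00:00:01, bridge=xenbr0' ]
```

Inside the guest you'd then statically assign the IP address just as you would on any normal machine, so each Xen session ends up with its own IP and MAC on the network.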


Well, I'm not that familiar with Xen, but could you use IP aliasing (Linux tutorial here) to set up virtual interfaces on your system? This would save you from having to put multiple NICs in a system; however, you'll have reduced throughput for each Xen session since they're sharing the same physical device.
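To illustrate, IP aliasing just hangs extra addresses off one physical interface. A rough sketch (interface names and addresses are examples only; run as root):

```shell
# Classic ifconfig style: a virtual sub-interface eth0:0 with its own address.
ifconfig eth0:0 192.168.0.11 netmask 255.255.255.0 up

# Equivalent with the newer iproute2 tools, labelled so ifconfig can see it.
ip addr add 192.168.0.12/24 dev eth0 label eth0:1
```

Each alias gets its own IP, but note they all share the physical NIC's MAC address, so this covers the per-IP half of the question but not per-MAC.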

Of course, you'll have reduced throughput if you have more than one Xen session anyway since you're probably running multiple NICs off of the same PCI bus.

Ben


The PCI bus has a maximum throughput of 133 MB/s; since 100 Mbps is only 12.5 MB/s, I don't see much of a problem there. If, however, you are using gigabit network cards (and all servers should have a gigabit network card), then you are going to have a problem: one card alone is 1000 / 8 = 125 MB/s, so if the master node has two there would be a slowdown. It would get worse if you were Xen'ing three servers on the cluster and each one had its own gigabit network card in the master node. Perhaps it would be wise to run the main server (say, the network user authentication server) on the master node, so it only needs two gigabit network cards, and then run the other two servers (Xen'ed, obviously) on another two nodes, fitting those nodes with an extra gigabit network card each. That keeps the processing distributed but eases the load on the master node.
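The back-of-the-envelope numbers above, worked in one unit (megabits per second, decimal) for comparison:

```shell
# 32-bit/33 MHz PCI: 133 MB/s shared across every card on the bus.
pci_mbps=$((133 * 8))        # 1064 Mb/s of bus bandwidth
fast_eth=100                 # one Fast Ethernet NIC: 100 Mb/s, no problem
two_gige=$((1000 * 2))       # two gigabit NICs wanting 2000 Mb/s

echo "PCI bus:   ${pci_mbps} Mb/s"
echo "2x GigE:   ${two_gige} Mb/s"
# 2000 > 1064, so two gigabit cards alone can saturate a single PCI bus.
```

One gigabit card (1000 Mb/s = 125 MB/s) already uses nearly the whole 133 MB/s bus, which is why spreading the Xen'ed servers across nodes helps.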


The PCI bus has a maximum throughput of 133 MB/s; since 100 Mbps is only 12.5 MB/s, I don't see much of a problem there. If, however, you are using gigabit network cards (and all servers should have a gigabit network card), then you are going to have a problem: one card alone is 1000 / 8 = 125 MB/s, so if the master node has two there would be a slowdown. It would get worse if you were Xen'ing three servers on the cluster and each one had its own gigabit network card in the master node. Perhaps it would be wise to run the main server (say, the network user authentication server) on the master node, so it only needs two gigabit network cards, and then run the other two servers (Xen'ed, obviously) on another two nodes, fitting those nodes with an extra gigabit network card each. That keeps the processing distributed but eases the load on the master node.

Basically a bottleneck?


Say this cluster is running three servers for a big network, and you want each server to have its own IP address (obviously). How would you achieve this?

I was under the impression that each node _has_ its own IP address. This allows you to contact a specific node for administrative purposes, and that's pretty much the only reason you would want this. From a user's perspective, the whole idea behind a cluster is that you don't know which node is handling your request, since it doesn't matter (to you).

Would you, for example, have four network cards in the master node: one to connect to the network that the nodes are on (since they should be on their own network to increase security), and the other three so that each of the three servers has its own unique IP address and MAC address?

With that configuration the master node would become a single point of failure: precisely the thing you're trying to prevent in a cluster.

What you probably should have is a switch that connects the nodes to each other as well as to the outer world. The master node listens for requests on the cluster IP and passes any work received on to one of the nodes, which then handles the user request using the cluster IP. I have no idea whether this means the communication between the client and the cluster involves some generated MAC address for the cluster itself that all the nodes spoof, or whether it doesn't, in which case this would become a way for you to determine which of the nodes serviced your request.

The main idea is that if the master node drops dead, one of the other nodes takes over and becomes the master until the original master node is revived.

If you want inter-node communication to be faster or more private, you would have a secondary, typically faster, network, such as InfiniBand, that all the nodes are attached to. This is also typically the point where things start to get rather expensive, so most home-use clusters simply communicate over the generic switch. Alternatives include a secondary switch that all nodes are attached to, or having the nodes connect to each other using FireWire (possibly it was USB, but that's not very likely).
