
Posted
I have spare time, a couple of old computers (3 laptops and 3 desktops), and would like to put them to use by constructing a cluster. It should be a practical, general-purpose cluster (by that I mean it shouldn't be specific to mathematical/scientific processing). My very (VERY) minimal programming experience is in mathematical function programs I've written in Python during precalculus. Project links? Good tutorials? What software/OS should I use? Willing to spend up to $400 on extra hardware.
Posted
Pelican HPC is a good place to start. It has MPI built in; I'm not too sure about Python wrappers for it, but I'd imagine there must be some around. From what I understand, you have a front node that delegates parts of an MPI program to the other nodes over a fast network.
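That scatter/gather delegation can be sketched in plain Python. This is only a toy stand-in: real MPI code would use a wrapper like mpi4py and run under mpirun across the nodes, whereas here threads play the part of the worker nodes.

```python
# Toy illustration of the MPI-style scatter/gather pattern, using
# threads from the standard library as stand-ins for worker nodes.
# (A real cluster would use an MPI wrapper such as mpi4py.)
from concurrent.futures import ThreadPoolExecutor

def work(chunk):
    # Each "node" squares its share of the numbers.
    return [x * x for x in chunk]

def scatter_gather(data, n_workers=3):
    # The front node splits the data into one chunk per worker...
    chunks = [data[i::n_workers] for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        # ...delegates the chunks, then gathers and merges the results.
        results = pool.map(work, chunks)
    return sorted(x for part in results for x in part)

print(scatter_gather(list(range(10))))  # squares of 0..9
```

The key idea is the same either way: the front node owns the full problem, hands out pieces, and merges the answers.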
Posted

Alright, thanks, I'll check it out. I've heard CHAOS, openMosix, and ClusterKnoppix referenced a lot for running a cluster. Do you know anything about them?

Posted (edited)

I know it's not hard to set up a cluster of computers, but it would be nice if someone could make a tutorial video. There are so many projects out there on clustering that it can be intimidating for a novice user to start on their own.

Edited by Infiltrator
Posted
I know it's not hard to set up a cluster of computers, but it would be nice if someone could make a tutorial video. There are so many projects out there on clustering that it can be intimidating for a novice user to start on their own.

Actually I might see if I can make a video tomorrow if I can figure it out.

Posted

High availability is probably what I'm looking for then. I feel like I have a rough grasp of the idea: connect the computers to an Ethernet hub so they can communicate, run one as the main node, and use MPI software to distribute the workload.

Posted
High availability is probably what I'm looking for then. I feel like I have a rough grasp of the idea: connect the computers to an Ethernet hub so they can communicate, run one as the main node, and use MPI software to distribute the workload.

No, what you're actually looking for is a High Performance Cluster. High Availability Clusters are designed so that when software (or hardware) fails, the system automatically recovers by switching over to backup software/hardware.

Have a look at the Wikipedia page for High-availability Cluster.

Posted

I see. Is high availability solely for failover/backup? Because I'm not too worried about that, since I don't plan on running a website or even storing too much valuable info on my final product. BTW, I just caught episode 409 (the Halloween special) and they did say they were going to build their cluster with Pelican HPC. I think first I'm going to try the CHAOS OS from http://www.midnightcode.org. For the actual cluster construction, does the quality/type of the Ethernet cables and network hub matter much? Can I use an external hard drive as the storage drive that they all save to?

Posted
..is high availability solely for failover/backup? Because I'm not too worried about that, since I don't plan on running a website..

nvm, just answered my own question. Didn't read the earlier comments very carefully :(

Posted
Is high availability solely for failover/backup? Because I'm not too worried about that, since I don't plan on running a website or even storing too much valuable info on my final product.

That's pretty much what the two words mean: "High Availability". You want a system up and running 24/7 without any disruptions, so if one server goes down due to technical problems, the remaining servers in the cluster take over.

For the clients there will be no difference, since it's all transparent to them.
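That transparent-failover idea can be sketched in a few lines of Python. The server names and the health check below are made up purely for illustration; a real setup would use heartbeats and virtual IPs rather than anything this simple.

```python
# Toy illustration of high-availability failover: try each server in
# order and return the first healthy one, so callers never notice
# that the primary went down. Server names are hypothetical.
def pick_server(servers, is_up):
    """Return the first reachable server, or raise if all are down."""
    for server in servers:
        if is_up(server):  # in real life: a heartbeat/health check
            return server
    raise RuntimeError("all servers in the cluster are down")

# Simulate the primary failing: the client transparently gets a backup.
cluster = ["primary", "backup1", "backup2"]
print(pick_server(cluster, is_up=lambda s: s != "primary"))  # -> backup1
```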

Posted
That's pretty much what the two words mean: "High Availability". You want a system up and running 24/7 without any disruptions, so if one server goes down due to technical problems, the remaining servers in the cluster take over.

For the clients there will be no difference, since it's all transparent to them.

Did not mean to put a kink in the works. Most people think HPC is the only kind of cluster, and that's not exactly right. Technically you do get failover with an HPC: if one node fails, another node will take over. If the main node fails, that's another issue, depending on how it's set up.

Posted

does this sound like all the equipment I'll need?

1. NETGEAR GS108 10/100/1000Mbps ProSafe Gigabit Ethernet Desktop Switch with Jumbo Frame support

2. Ethernet cables (is 550 MHz good enough?)

3. Western Digital WDH1NC15000N 1.5TB My Book World Edition Network Hard Drive

4. An MPI program, or an OS with one built in

5. 5 PCs with Windows XP or newer (the hard drives failed on 2 of them, so is booting from an ISO disc OK?)

Posted
does this sound like all the equipment I'll need?

1. NETGEAR GS108 10/100/1000Mbps ProSafe Gigabit Ethernet Desktop Switch with Jumbo Frame support

2. Ethernet cables (is 550 MHz good enough?)

3. Western Digital WDH1NC15000N 1.5TB My Book World Edition Network Hard Drive

4. An MPI program, or an OS with one built in

5. 5 PCs with Windows XP or newer (the hard drives failed on 2 of them, so is booting from an ISO disc OK?)

Yeah, that'll be enough, but once you get started you'll just end up wanting more power :P

Here's a video I did of setting up Pelican HPC. It was soooo easy. Sorry about the poor quality.

Posted

Watched your vid. Looks easy enough, lol. Can you run "regular" programs on the cluster? Firefox, games, Linux programs, etc.? Or are there only certain programs that can be run across multiple computers?

Posted
Watched your vid. Looks easy enough, lol. Can you run "regular" programs on the cluster? Firefox, games, Linux programs, etc.? Or are there only certain programs that can be run across multiple computers?

You should be able to, but I don't think they'll run on all the nodes, just your frontend node. If you want to add software, just do an "apt-get install <program>" and that'll download and install the software for you. The problem is that if you're only running in RAM, then the next time you boot it will have been flushed.

Posted

I was watching some YouTube videos last night on how to set up Pelican HPC. Man, it sure does look very easy and simple to set up. But I was wondering, how does one get an app to operate in this environment, like Cain for instance? Do I just run the application on the master node, and from there it will split the workload across the slave nodes?

Or is there more tweaking to do than just running the app?

Thanks.

Posted
what ISO discs should I put in my slaves if using Pelican?

What do you mean, to configure it?

Posted
I was watching some YouTube videos last night on how to set up Pelican HPC. Man, it sure does look very easy and simple to set up. But I was wondering, how does one get an app to operate in this environment, like Cain for instance? Do I just run the application on the master node, and from there it will split the workload across the slave nodes?

Or is there more tweaking to do than just running the app?

Thanks.

It has to be a program specifically written to use MPI; then you can compile and run it on all the nodes. Realistically, High Performance Clusters like this aren't really useful for running normal programs; they're much more useful for in-house scientific simulations, data manipulation, etc.

what ISO discs should I put in my slaves if using Pelican?

You don't need any discs in your slaves, as they boot over the network. Just set "Ethernet" or "LAN" or something similar as the number one option in the boot priority in your BIOS. Then when a slave boots, it will check whether it can connect to your frontend node; if it can, it will download the OS from the frontend node and boot that.

You'll only need an ISO if your BIOS doesn't support booting from the network. In that case, you can use ROM-o-Matic to create a boot image that will then boot from the network.

It can be a bit complicated, but once you've done it a couple of times it's really easy.

Posted
It has to be a program specifically written to use MPI; then you can compile and run it on all the nodes. Realistically, High Performance Clusters like this aren't really useful for running normal programs; they're much more useful for in-house scientific simulations, data manipulation, etc.

You don't need any discs in your slaves, as they boot over the network. Just set "Ethernet" or "LAN" or something similar as the number one option in the boot priority in your BIOS. Then when a slave boots, it will check whether it can connect to your frontend node; if it can, it will download the OS from the frontend node and boot that.

You'll only need an ISO if your BIOS doesn't support booting from the network. In that case, you can use ROM-o-Matic to create a boot image that will then boot from the network.

It can be a bit complicated, but once you've done it a couple of times it's really easy.

I love Etherboot (boot from floppy) on old systems, for almost-diskless thin clients.

Posted

Yeah, at first one node was not cooperating. I made a ROM-o-Matic boot disk, so it's working now. How can I shut down my cluster? I tried to shut it down with the shutdown button on the desktop, but it messed up the nodes so they didn't boot correctly the next time. How do I shut everything down at once? Or should I manually shut down each node before shutting down the main one?

Posted

Yeah, at first one node was not cooperating. I made a ROM-o-Matic boot disk, so it's working now. How can I shut down my cluster? I tried to shut it down with the shutdown button on the desktop, but it messed up the nodes so they didn't boot correctly the next time. How do I shut everything down at once? Or should I manually shut down each node before shutting down the main one?

I just force-shut down (if that's a word) all of the nodes by holding the power buttons. It's quick, and it's not going to do any short-term damage. I suppose you could log in to each of the nodes and shut them down normally, then log in to the frontend node and shut that one down as well.

To be honest, I'm not sure if there is a proper way to do it, although you could probably whip up a bash/Python script to do it automatically and remotely.
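A sketch of such a script, assuming the frontend has passwordless SSH to each node. The hostnames and the root login are made-up placeholders; adjust for your own setup, and note the dry run at the bottom only prints the commands instead of executing them.

```python
# Sketch of a remote-shutdown script for the cluster. Assumes the
# frontend can SSH to each node without a password; node names are
# hypothetical. Shut the slaves down first, the frontend last.
import subprocess

NODES = ["node1", "node2", "node3"]  # hypothetical hostnames

def shutdown_command(node):
    # Build the ssh command without executing it, so it can be inspected.
    return ["ssh", f"root@{node}", "poweroff"]

def shutdown_cluster(nodes, run=subprocess.run):
    # Pass run=subprocess.run for real; a fake runner makes a dry run.
    for node in nodes:
        run(shutdown_command(node), check=False)

# Dry run: print each command instead of executing it.
shutdown_cluster(NODES, run=lambda cmd, check: print(" ".join(cmd)))
```

For a real run you'd call `shutdown_cluster(NODES)` and then power off the frontend itself last.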
