
distributed computing/cpu pool


dclay

Recommended Posts

Unless your workload distributes, you won't gain a thing. It's a case of the "If 1 woman's pregnancy takes 9 months, 9 women's pregnancies take only 1 month" fallacy.

So, what is it you want these CPUs to be doing?



Yeah, I understand the fallacy, but I was picturing more of a DBZ Spirit Bomb "lend me your power" type of thing... but how do you distribute the workload? I want to use the CPUs for programs like pyrit, aircrack-ng, etc.

Edited by dclay

The best way to manage this is to split up your wordlist in line with the processing speed of the systems you're going to run it on. With a bit of *nix script-fu you can have one command trigger the tools on all the remote systems you have:

#!/bin/bash
# Usage: pass the hash as the first argument. Opens one xterm per machine
# listed in remote_machines.txt and runs pyrit on the hash over SSH inside it.
HASH="$1"
for box in $(cat remote_machines.txt)
do
    xterm -e ssh "pyrit@${box}" pyrit "$HASH" &
done

So if you have a hash (and assuming that's how you invoke pyrit, which it probably isn't) you'd run that script with the hash as a parameter and get a stack of xterms on your screen, one for the pyrit process on each remote machine, each chugging away at the hash using its specific subset of the complete wordlist.

To properly chop up the workload, run some benchmarks with the tool of choice and find the rate at which each machine operates. Use that to determine what percentage of the total keyspace each machine should work on, and ensure only that portion is used by the tool when it's working for you.
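
As a minimal sketch (not from the original post), assuming two machines benchmarked at roughly 20,000 and 15,000 keys per second and a wordlist called wordlist.txt, the proportional split could look like this:

#!/bin/bash
# Hypothetical example: split wordlist.txt 4:3 between a faster and a slower
# box, matching benchmark rates of 20,000 and 15,000 keys/s respectively.
TOTAL=$(wc -l < wordlist.txt)
FAST=$(( TOTAL * 20000 / (20000 + 15000) ))
head -n "$FAST" wordlist.txt            > wordlist.part.fastbox
tail -n +"$(( FAST + 1 ))" wordlist.txt > wordlist.part.slowbox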

If you don't use a wordlist but instead generate something like 'alphanumeric, case-sensitive, 8 characters long', divide the 62 possible values of the first character across the machines and drop one character from the part that's produced by the generator.
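
A rough sketch of that idea (my own illustration, assuming a generator feeding 7-character suffixes on stdin): each node takes every Nth symbol of the 62-symbol set as its first character.

#!/bin/bash
# Hypothetical: ./prefix.sh <node_index> <node_count>
# Reads 7-character suffixes from stdin (your generator) and prepends this
# node's share of the 62 possible first characters.
CHARSET=( {a..z} {A..Z} {0..9} )   # 62 symbols
NODE="$1"
NODES="$2"
while read -r suffix; do
  for (( i = NODE; i < ${#CHARSET[@]}; i += NODES )); do
    echo "${CHARSET[$i]}${suffix}"
  done
done

Piped into the cracking tool, each node then only ever sees candidates starting with its own slice of the alphabet.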


  • 5 weeks later...

Heh. Okay, so you want to run Pyrit in a distributed fashion. Problem is, natively Pyrit doesn't do this. There isn't some Pyrit server program that provides multiple Pyrit client programs with work, so if you want to scale a Pyrit job across multiple machines you're going to have to do it differently. To find out how, we need to look at the program in question, which in this specific situation is Pyrit but could be any other program as well (aircrack-ng, etc.).

Pyrit does the following:

[WORDLIST] >--> Pyrit >--> [output]


What you want is

              /---> Pyrit >--\
              |---> Pyrit >--|
[WORDLIST] >--|---> Pyrit >--+---> [output]
              |---> Pyrit >--|
              \---> Pyrit >--/


But since Pyrit natively can't provide this you need to achieve it in a different way. By far the easiest way is this:

[WORDLIST Pt1] >---> Pyrit >---> [output]
[WORDLIST Pt2] >---> Pyrit >---> [output]
[WORDLIST Pt3] >---> Pyrit >---> [output]
[WORDLIST Pt4] >---> Pyrit >---> [output]
[WORDLIST Pt5] >---> Pyrit >---> [output]

You split up your wordlist. Fast nodes on your network get a larger section, slower nodes a smaller one, with the underlying goal that all nodes, assuming no solution is found, work on the wordlist for roughly the same amount of time.
One way to do this is to put WORDLIST Pt1 on NODE1 as wordlist.txt, WORDLIST Pt2 on NODE2 also as wordlist.txt, and so on. Whenever you want to crack a hash, you SSH into each of your nodes and start Pyrit with the exact same command as on the other nodes, something like "pyrit -hash $HASH -wordlist wordlist.txt" (and yes, I pulled this command out of my ass; read the pyrit docs for the correct invocation). You could give the wordlist part on each node a specific name, but then you'd have to keep that in mind every time you want to crack a hash, which is cumbersome.
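
Distributing the parts is a one-off job; a minimal sketch (my own, assuming the parts are named wordlist.part.1, wordlist.part.2, ... and the nodes are listed in remote_machines.txt) could be:

#!/bin/bash
# Hypothetical: copy part N of the wordlist to the Nth node as wordlist.txt.
i=1
for box in $(cat remote_machines.txt); do
  scp "wordlist.part.${i}" "pyrit@${box}:wordlist.txt"
  i=$(( i + 1 ))
done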
An arguably better way is to use a fileserver that distributes the wordlist parts to all of the nodes. You would have a Samba/NFS/whatever server exporting all the parts to all the nodes, using filenames like "wordlist.part.[TARGET_NODE_IP_ADDRESS]", and a shell script on each node that essentially does

#!/bin/bash
# Each node picks the wordlist part named after its own IP address.
# The pyrit flags are placeholders; check the pyrit docs for the real syntax.
THIS_NODE_IP=$(hostname -i)
pyrit -hash "$1" -wordlist "/mount/fileserver/hashcracking/wordlist.part.$THIS_NODE_IP"

So now when you have a hash you can SSH into each node and just run this 'run_pyrit.sh' (or whatever) script with the hash as a parameter and it will go to work on its own part of the wordlist. You could consider having your script redirect the output of the pyrit program to some file kept on the fileserver, like this "pyrit ... > /mount/fileserver/hashcracking/result.$HASH.$NODE_IP.txt" but that's of course entirely optional.
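
For instance, a variant of that run_pyrit.sh with the optional redirect added (same caveat: the pyrit flags here are made up, check the real docs):

#!/bin/bash
# Hypothetical run_pyrit.sh variant: same as above, but writes the output to
# a per-hash, per-node result file on the fileserver.
THIS_NODE_IP=$(hostname -i)
HASH="$1"
pyrit -hash "$HASH" -wordlist "/mount/fileserver/hashcracking/wordlist.part.${THIS_NODE_IP}" \
  > "/mount/fileserver/hashcracking/result.${HASH}.${THIS_NODE_IP}.txt"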

The problem now is that when you want to have all your nodes run pyrit on a hash, you need to manually SSH into each and every node to start this process. There are 2 options for you:

1. Have the nodes poll a file with hashes for them to work on. This would have to be a per-node file so the node can remove/rename the file once it has finished working on it. Doing this only makes sense when you continually have new hashes for your nodes to try, and you'll want some way to stop the other nodes when one of them finds a solution to a given hash (a minimal polling sketch follows after this list).

2. Manually start the process on all nodes for a specific hash via a script. This is the better solution for more ad-hoc hash cracking.
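
For option 1, a bare-bones polling loop might look like this (my own sketch; the queue file location and the run_pyrit.sh helper are assumptions, not anything Pyrit provides):

#!/bin/bash
# Hypothetical per-node poller: watch this node's queue file on the
# fileserver and run pyrit (via run_pyrit.sh) on each hash found in it.
QUEUE="/mount/fileserver/hashcracking/queue.$(hostname -i)"
while true; do
  if [ -s "$QUEUE" ]; then
    HASH=$(head -n 1 "$QUEUE")
    ./run_pyrit.sh "$HASH"
    sed -i '1d' "$QUEUE"      # remove the hash we just processed
  fi
  sleep 60
done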

The script in my previous post is for #2. It starts a new local xterm process in the background (the '&' character at the end), within which an SSH session is made to each of the nodes you've listed in the remote_machines.txt file (one hostname and/or IP address per line), and within that SSH session, once established, it runs the pyrit command (pyrit $HASH). To prevent having to provide a password for each of these SSH sessions, set up each of your nodes for passwordless SSH login.
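
Setting that up is a one-time job per node; something along these lines should do (the 'pyrit' user follows the earlier script and is an assumption about your setup):

# On the machine you launch the script from:
ssh-keygen -t ed25519                    # accept the defaults
for box in $(cat remote_machines.txt); do
  ssh-copy-id "pyrit@${box}"             # one password prompt per node, then never again
done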

The idea is that if you have, say, 6 nodes listed in that text file, then when you run the script from your main machine it'll open 6 xterm windows, each connected to one of your nodes, which will instantly get to work cracking the provided hash using their allotted part of the wordlist. The only remaining 'problem' is that once one of the nodes finds a solution the others could stop cracking, but there's no mechanism to inform them of this, so you'll have to do that manually. I doubt you'll see this as a really big problem.
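
If doing it manually gets old, a tiny helper (again my own sketch, assuming the process is simply called 'pyrit' on the nodes) can tell every node to stop at once:

#!/bin/bash
# Hypothetical 'stop everything' helper: kill any running pyrit process on
# every node listed in remote_machines.txt.
for box in $(cat remote_machines.txt); do
  ssh "pyrit@${box}" "pkill -f pyrit" &
done
wait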

Edited by Cooper

I have tested pyrit's cluster mode. It was faster to split the wordlist into chunks yourself.

I split mine into chunks based on each machine's performance... math is always fun.

250 million word wordlist

20,000 keys per second on the laptop

15,000 keys per second on the desktop

With pyrit's built-in cluster function I only got 25,000 per second. Maybe a network bottleneck...

I achieved 35,000 per second when the wordlist was split properly.
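
Working that out with those rates (my arithmetic, not from the post): the laptop should get 20000/35000 of the list and the desktop the rest, so both finish at about the same time.

# 250,000,000 words total:
#   laptop  (20,000/s):  250M * 20000/35000 ~= 142,857,143 words
#   desktop (15,000/s):  250M * 15000/35000 ~= 107,142,857 words
# Either machine then runs for roughly 142.9M / 20,000 ~= 7,150 s (~2 hours).
split -l 142857143 wordlist.txt part.    # part.aa -> laptop, part.ab -> desktop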

