
Cool way to look up hashes with your web browser


vailixi


So I came up with a fun way to look up hashes.


The idea is to write each hash/plaintext pair to an individual file named for the hash, without a .txt file extension.

So the file looks something like this.

7dff371b14986821e1778231479afdf93e698fa0
donkeypuncher

And the filename is something like this:

7dff371b14986821e1778231479afdf93e698fa0

Here's a simple script that does this with SHA-1 hashes. It could be adapted to pretty much any hash type.

#!/bin/bash
# For each word in all.txt, create a file named after its SHA-1 hash,
# containing the hash on line 1 and the plaintext on line 2.
while read -r word
do
    # printf avoids hashing the trailing newline that echo would add;
    # the sed strips openssl's "(stdin)= " prefix from the digest.
    hash=$(printf '%s' "$word" | openssl sha1 | sed 's/^.*= //')
    printf '%s\n%s\n' "$hash" "$word" > "$hash"
done < all.txt

So basically you read through the wordlist and create a hash/plaintext pair file for every plaintext in the list. Lookups are easy: cd into the directory where you stored your hashes and cat the file named for the hash. It's that simple, with almost zero lookup time, because you're just reading a file by name.
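
For example, looking up the pair from above (assuming the pair files were written into a directory called sha1):

cd sha1
cat 7dff371b14986821e1778231479afdf93e698fa0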

Cooler still, you can upload them to a web server and you or anyone else can look up hashes in a web browser. Just type in the address: yoursite/hashtype/hash
If you get a hit, it's your plaintext/hash pair. If it's not in your dictionary, you get a 404 error.
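
The same lookup works from the command line. A quick sketch, with yoursite and the sha1 directory standing in for wherever you actually host the files:

curl http://yoursite/sha1/7dff371b14986821e1778231479afdf93e698fa0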

Or, for extra added awesomeness, you can create an HTML file for each pair with proper titles, tags, etc. Make a sitemap and pretty soon people will be able to look up your hashes on Google.
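
A rough sketch of how that page generation could look (sha1 and site are hypothetical directory names):

#!/bin/bash
# Wrap each hash/plaintext pair file in a minimal HTML page.
mkdir -p site
for f in sha1/*; do
    hash=$(basename "$f")
    plain=$(sed -n '2p' "$f")   # plaintext is on line 2 of each pair file
    cat > "site/${hash}.html" <<EOF
<!DOCTYPE html>
<html>
<head><title>SHA-1 ${hash}</title></head>
<body><p>${hash} = ${plain}</p></body>
</html>
EOF
done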

The cool thing here is you don't have to cat, sort, sed, nawk, grep, split, or generate new tables when you add words. You just move your new hash/plaintext pair files into the directory where the others are stored. You can skip or overwrite existing files and store the new ones with little hassle. As an added bonus, all of your friends can use your lookup files.
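
For instance (new_pairs and sha1 being hypothetical directory names), -n skips files that already exist, while plain mv would overwrite them:

mv -n new_pairs/* sha1/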

The main problem I'm running into is hosting. I'm looking for a cheap host that will let me store a pretty much unlimited number of files.

If you are interested in working on something like this hit me up.


It's that simple, with almost zero lookup time, because you're just reading a file by name.

Try this for a sec:

#!/bin/bash
mkdir hashes
cd hashes || exit
# A C-style loop; {1..99999999999} would make bash expand all
# 99,999,999,999 numbers in memory before the loop even starts.
for ((filenumber = 1; filenumber <= 99999999999; filenumber++))
do    touch "file_$filenumber"
done

Now look up any file in there.

time ls -l hashes/file_35132474655

I think you're going to find that the reported time is going to be anything but 'almost zero', particularly when the filesystem cache for this directory is cold.

You've also managed to waste a *TON* of disk space. Storage on your hard disk is, by default, allocated in 4 KiB blocks on most filesystems. So even when you store just 1 byte, 4 KiB is used on the hard disk. The useful payload you're storing (the plaintext) is almost certainly less than 16 bytes, which means your method needs at least 256 times the storage you'd use if you naively chucked everything into 1 file. And mind you, I'm not yet counting the inode itself, which doesn't come free and which is, itself, a limited resource.

I know this from experience: I'm using symlinks to unify 3 large hard disks in my file server via a small SD card, and I specifically formatted that card with a small block size so it can store all those small files (a symlink is a special file that only stores the name of the file it refers to).
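
If you want to see the overhead for yourself, GNU du can compare the logical size of the directory's contents against the space actually allocated to it:

du -sh --apparent-size hashes/
du -sh hashes/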

There are 2 options for retaining precomputed hashes: a simple lookup file that you can grep, which will be fine for small datasets, and rainbow tables / databases, which are much better suited for really large datasets.
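
The lookup-file variant is a one-liner. A sketch, assuming a flat file all_pairs.txt with one "hash plaintext" entry per line; -m 1 stops at the first match:

grep -m 1 '^7dff371b14986821e1778231479afdf93e698fa0' all_pairs.txt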

Link to comment
Share on other sites

I agree with Cooper. While I applaud your efforts, there are better ways of doing this. If you don't want to use rainbow tables and just want to come up with your own solution, I would at least consider using a relational database. Something small like MySQL or SQLite will definitely work better than creating a ton of files.
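
A rough sketch of the SQLite approach (hashes.db and all.txt are example names). Generating all the SQL first and piping it through one sqlite3 process loads the whole wordlist in a single transaction:

#!/bin/bash
{
    echo "CREATE TABLE IF NOT EXISTS pairs (hash TEXT PRIMARY KEY, plain TEXT);"
    echo "BEGIN;"
    while read -r word; do
        h=$(printf '%s' "$word" | openssl sha1 | sed 's/^.*= //')
        esc=$(printf '%s' "$word" | sed "s/'/''/g")   # escape quotes for SQL
        printf "INSERT OR IGNORE INTO pairs VALUES ('%s', '%s');\n" "$h" "$esc"
    done < all.txt
    echo "COMMIT;"
} | sqlite3 hashes.db

Lookups then hit the primary-key index instead of the filesystem:

sqlite3 hashes.db "SELECT plain FROM pairs WHERE hash = '7dff371b14986821e1778231479afdf93e698fa0';"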

Link to comment
Share on other sites

Cooper, so if my file sizes are even multiples of the page size and block size, that would be more optimal? 4k, 8k, 16k, or 64k file size?

I agree. Rainbow tables are awesome. I'm just working on something new.

Also, is there an easy way to port a rainbow table to a text file or database?


More optimal, certainly. But I feel you're better off splitting up your problem and finding the optimal solution to each part.

I think the parts for this problem are:

1. Have a user enter a value on a website.

2. Verify the value in (1) is a valid hash key to begin with (things like length checking, making sure it's using only valid chars, that sort of thing).

3. Perform the most efficient lookup for that hash in your precomputed keystore. If it's running on an 8-core machine, divide the keystore up in 8 chunks and have each thread check one chunk. Also find some way to store the data more efficiently: those 40 printable bytes can be stored in 20 bytes when represented as binary data. (See the sketch below for points 2 and 3.)
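
A minimal sketch of the validation in (2) and the binary packing idea in (3), assuming the candidate hash arrives as the first script argument:

#!/bin/bash
hash=$1

# Step 2: a SHA-1 hash is exactly 40 hex characters.
if [[ ! "$hash" =~ ^[0-9a-fA-F]{40}$ ]]; then
    echo "not a valid SHA-1 hash" >&2
    exit 1
fi

# Step 3's packing idea: decoding the 40 hex characters to raw
# binary halves the storage to 20 bytes.
printf '%s' "$hash" | xxd -r -p | wc -c   # prints 20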

