Hak5 Forums

The Power Company

Active Members
  • Content count

    60
  • Joined

  • Last visited

  • Days Won

    1

About The Power Company

  • Rank
    Hak5 Fan +

Profile Information

  • Location
    Anywhere With Lights
  • Interests
    Zapping

Recent Profile Visitors

373 profile views
  1. The Power Company

    WarBox

    Looks like a fun time! I once stuck my Raspberry Pi into a tissue box with antennas poking through, but this looks a lot better.
  2. The Power Company

    Elite Field kit has changed...

    Yeah, I bought an Elite Kit a few months back and can confirm it has not changed (except for it currently being sold out).
  3. The Power Company

    Large Capacity MP3 Players

    I use Google Play Music, and even though my entire library is easily too large for my phone's 64 GB of storage, I can still have all my favorite songs downloaded. There aren't many cases where I lose WiFi access for a long time, but even then I still have about 72 hours' worth of songs I can listen to without access.
  4. Really? I thought that hackers were supposed to be as noisy as possible when infiltrating a network!
  5. The Power Company

    Non-Malicious Botnet?

    Perhaps botnet isn't the correct terminology, but I have a few old laptops sitting around unused. I was thinking that if you were running a program that handles some multi-threaded task and does processing on a large dataset, you could have a centralized system keep track of overall progress, assigning the next item in the dataset to be processed as soon as one of the PCs finishes its current task.
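    Something like this rough sketch is what I have in mind, using Python's multiprocessing managers to serve a shared work queue over the LAN. The IP address, port, authkey, and the squaring step are all just placeholders, not a finished design:

      # coordinator.py -- run on whichever box holds the dataset
      from multiprocessing.managers import BaseManager
      import queue

      work = queue.Queue()
      for item in range(1000):               # stand-in for the real dataset
          work.put(item)

      class Coordinator(BaseManager):
          pass

      Coordinator.register('get_work', callable=lambda: work)
      srv = Coordinator(address=('', 50000), authkey=b'change-me').get_server()
      srv.serve_forever()                    # workers connect here and pull items

      # worker.py -- run on each old laptop
      from multiprocessing.managers import BaseManager
      import queue

      class Coordinator(BaseManager):
          pass

      Coordinator.register('get_work')
      mgr = Coordinator(address=('192.168.1.10', 50000), authkey=b'change-me')
      mgr.connect()
      work = mgr.get_work()

      while True:
          try:
              item = work.get(timeout=5)     # grab the next unprocessed item
          except queue.Empty:
              break                          # queue drained, dataset finished
          print('processed', item, '->', item * item)   # placeholder processing step

    Each laptop just runs the worker script and asks the coordinator for the next item whenever it frees up, so the faster machines naturally end up doing more of the work.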
  6. The Power Company

    Non-Malicious Botnet?

    Hey guys, I was wondering what the best/most efficient way is to get multiple devices to act in unison, as a botnet would, but without malicious intentions, as a botnet wouldn't. Would the best choice be a cloud/cluster platform like Apache Mesos, or some Docker-style setup? Amazon Web Services, maybe? Would designing an actual botnet make any sense? Does anyone have experience with this sort of thing?
  7. The Power Company

    Piratebox

    Is it possible to run Piratebox without OpenWrt? I know the Nano already supports OpenWrt, and I'm pretty sure the Tetra does too, but it isn't in OpenWrt's Table of Hardware yet... EDIT: I wish I could say that I meant the stock version of OpenWrt, but honestly it was so late when I posted this that I completely forgot that both Pineapples already run OpenWrt. I mean, it's not like it says "with OpenWrt" in the ASCII art that appears when you ssh into one... oh wait...
  8. The Power Company

    Deep Web Crawler Building 101

    Makes sense. It's funny, the slowness of navigating the Tor network is usually seen as a disadvantage, but from a security standpoint it is actually quite beneficial.
  9. The Power Company

    DownloadExecSMB non powershell payload need

    I figured as much. From looking around a little, it seems Windows XP supports PowerShell anyway, so unless the target manually removed it (which isn't possible without breaking things on Windows versions past XP) there shouldn't be any problem... unless I'm completely out of the loop and winxp stands for something other than Windows XP.
  10. The Power Company

    DownloadExecSMB non powershell payload need

    I haven't looked into those specific payloads, but many commands that run in PowerShell are identical to those in the normal command prompt. Does the script use any cmdlets or other PowerShell-specific commands? If it doesn't, it may still work if you just change the line where it opens PowerShell to open cmd instead.
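    For example (these lines are purely illustrative, not the actual payload contents), a PowerShell launch along the lines of

      powershell -WindowStyle Hidden -Command "& '\\SERVER\share\run.exe'"

    could become a plain cmd equivalent such as

      cmd /c start /b \\SERVER\share\run.exe

    as long as nothing else in the script relies on cmdlets, .NET calls, or other PowerShell-only features.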
  11. The Power Company

    Deep Web Crawler Building 101

    Multi-threading would probably help. I think I'll try implementing some of that sweet CUDA GPU acceleration sauce as well; it works wonders for deep learning and password cracking.
  12. The Power Company

    Deep Web Crawler Building 101

    Hey guys, I've recently been getting into web crawling, and I've been considering ways one could make a web crawler that detects onion sites on the Tor network. I know there are already lots of deep-web/dark-web/dank-web indexing sites, such as Ahmia and the onion crate, where one can go to find active onions. However, because new onions appear and disappear daily, it would be handy to have a personal tool that automatically detects onions, possibly extracts some basic information, and logs the findings for later. Maybe catch some sweet hacks before the feds get to them, or accidentally infect yourself with cutting-edge malware.

    Idea 1: Brute Force
    The obvious (and naive) implementation would be to brute-force onion names and run something like requests.get from Python's requests library. Assuming you are routing traffic into the Tor network, requests.get will return 200 when an onion site exists and is online at the given address, so any combination returning 200 should be flagged for later. If another status comes back, such as 404, no action is taken and the loop continues to iterate. By iterating through all possible onion addresses, one would eventually hit some real onions. This design is very similar to password brute-forcing, in both concept and effectiveness. (A minimal probe along these lines is sketched after this post.)

    All onion addresses are 16-character names made up of any letter of the alphabet (case insensitive) and the decimal digits 2 through 7, thus representing an 80-bit number in base32. An example of an actual onion address is http://msydqstlz2kzerdg.onion/, which is the onion link to the Ahmia search engine for Tor sites. This leaves roughly 1,208,925,820,000,000,000,000,000 (2^80) possible character combinations for onion addresses. For reference, the largest possible value of a "long", the largest primitive data type for storing integers in Java, is 9,223,372,036,854,775,807, a solid six digits too short to contain the number of potential onions. If you designed a simple program to count from 0 to that number it would take... a long-ass time to run (my PC takes about a minute to get into 7-digit territory counting by one, and about eight minutes to get into 8-digit territory... the destination number has 25 digits).

    It isn't that important to me if the web crawler takes several days or even weeks to run through every possible combination, since the majority of onion sites with actual content persist for a while anyway. As for fresh sites that may not last long, you would have to get lucky for your crawler to hit the correct address during the short period when the site is online. This crawler would be designed to run continuously, looping through every possible combination over and over to continually update the list, with periodic checks of whether onions already in the list are still online.

    Pros: relatively straightforward to program and maintain, could potentially discover onions not contained in other indexes
    Cons: inefficient and ineffective unless you have a supercomputer lying around

    Idea 2: Crawler Crawler
    The next possible implementation would be to leverage the work already done by others by creating an index of indexes. By checking popular existing indexes for changes at arbitrary intervals, my onion list would update itself with far less computation and time. The one downside is that we can already access these indexes anyway, so we wouldn't get any juicy information before our deep-web peers do.
    Each site stores its index info in a different format, so the crawler would have to be tailored to read each index differently. We would also have to manually account for index sites going down or new ones being discovered.

    Pros: less heavy lifting for my PC, doesn't need to run constantly
    Cons: must be tailored to each individual index, more work to code, indexes could go down or change formats, and the onion sites discovered are ones I could already find anyway

    Idea 3: Google-Style Crawler
    The last idea I have is to implement a crawler algorithm similar to the ones used by Google's own web spiders. My above crawler designs only consider the main 'home' addresses, consisting of the 16 characters plus .onion, even though most sites have many pages (fh5kdigeivkfjgk4.onion would be indexed, fh5kdigeivkfjgk4.onion/home would not). Professional-grade search-engine crawlers build their indexes by following links on the current site. The algorithm would follow links contained in the page source to navigate around the website, and if addresses belonging to new onion sites are found (i.e. the 16 characters are different), it would add them to the index. This would be especially handy upon discovering sites similar to the Hidden Wiki, which are stuffed full of links to other active (or inactive) onions. (A link-harvesting sketch follows below as well.)

    Pros: can take advantage of onion links discovered within new sites, so the index fills faster
    Cons: the Tor network is often quite slow, so navigating through sites could be time-consuming

    Right now I have some basic test code running to try out a few things, but nothing worth posting quite yet. I will post any progress I make here. Let me know if you guys have any recommendations.
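    To make Idea 1 concrete, here is roughly the probe I'm picturing. It assumes a local Tor client listening on 127.0.0.1:9050 and requests installed with SOCKS support (pip install requests[socks]); the output file name and timeout are arbitrary, and the multi-threading and re-check logic are left out:

      # probe_onions.py -- minimal sketch of the Idea 1 brute-force check
      import itertools
      import requests

      ALPHABET = 'abcdefghijklmnopqrstuvwxyz234567'      # base32 set used by onion names
      PROXIES = {'http':  'socks5h://127.0.0.1:9050',    # socks5h = resolve inside Tor
                 'https': 'socks5h://127.0.0.1:9050'}

      def check_onion(name):
          """Return True if http://<name>.onion/ answers with HTTP 200."""
          try:
              r = requests.get('http://%s.onion/' % name, proxies=PROXIES, timeout=30)
              return r.status_code == 200
          except requests.RequestException:
              return False                               # unreachable or timed out, skip it

      # The loop that takes a long-ass time: every possible 16-character combination.
      for combo in itertools.product(ALPHABET, repeat=16):
          name = ''.join(combo)
          if check_onion(name):
              with open('found_onions.txt', 'a') as f:
                  f.write(name + '.onion\n')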
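    And for Idea 3, a rough link-harvesting loop might look like the following. It just regexes 16-character onion names out of the page source rather than parsing HTML properly, and the seed is the Ahmia address from above:

      # harvest_links.py -- sketch of the Idea 3 behaviour: follow links, log new onions
      import re
      import requests

      PROXIES = {'http':  'socks5h://127.0.0.1:9050',
                 'https': 'socks5h://127.0.0.1:9050'}
      ONION_RE = re.compile(r'\b([a-z2-7]{16})\.onion\b')    # 16-character onion names

      def harvest_onions(url, known):
          """Fetch one page over Tor and return onion names not already in `known`."""
          try:
              html = requests.get(url, proxies=PROXIES, timeout=30).text
          except requests.RequestException:
              return set()
          return set(ONION_RE.findall(html)) - known

      known = {'msydqstlz2kzerdg'}                      # seed: Ahmia, from the post above
      to_visit = ['http://msydqstlz2kzerdg.onion/']
      while to_visit:
          page = to_visit.pop()
          for name in harvest_onions(page, known):
              known.add(name)
              to_visit.append('http://%s.onion/' % name)
              print('new onion:', name)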
  13. The Power Company

    Tetra Tactical Shoulder Bag

    Most laptops don't fit in it, but it is great if you are traveling light (and I mean very light)
  14. The Power Company

    Pineapples With Kismet Web Interface?

    I've gotten the web interface working on Ubuntu 17 but I haven't tried configuring it for pineapples yet.
  15. The Power Company

    Pineapples With Kismet Web Interface?

    Sweet, not sure how I missed that. Thanks!