The Power Company

Active Members
  • Posts: 78
  • Days Won: 2

Everything posted by The Power Company

  1. There are tons of CTF resources listed here: https://laptophackingcoffee.org/doku.php?id=wiki:resources. I've used a bunch of them; they're very helpful.
  2. I would spend some time learning about grep, piping, and regular expressions. These are ridiculously important things to know, especially when you need to locate files, identify rogue processes, and generally stay in control of your system in a hostile environment, whether as an invading red team or a besieged blue team. For some reason very few "hacking" tutorials mention them; I guess they just assume you already know about them. There are plenty of interactive tutorials for things like regex, though.
  3. I heard about the Hak5 products from... Hak5 themselves. I watched a few of their tutorials on Linux stuff and here we are.
  4. Hi guys, it's been a while! I'm currently playing with some... legacy... hardware, and my old machine doesn't want to boot from a CD and is too old to boot from USB. Luckily, I have a bunch of old floppies lying around, plenty large enough to fit Damn Small Linux or TinyCore onto! What is the best way to write/burn/inscribe a boot ISO onto a floppy disk from Linux? I tried to Google it, but all I found were Windows utilities...
  5. Just a heads up: you can get the CPEH (Certified Professional Ethical Hacker) and CEH (Certified Ethical Hacker, up to v9) course materials from places like LimeTorrents and The Pirate Bay; their lab exercises are very hands-on and are a great way to expand your knowledge.
  6. Only elite hackers can access the sticker set page?
  7. Since the Hak5 stickers are no longer available for standalone purchase in the shop, I just had a great idea for a new product. Well, actually not great at all, but you know... Everyone knows that hackers love their stickers, and we often stick them to our laptops and other devices. So why not leverage stickers as a potential attack vector? A few of you may remember that concept iPhone case from a while back that claimed to absorb leftover RF signals to charge your phone (the same concept underlies some wireless charging), but it charged the phone so slowly that it wasn't really the infinite power source it claimed to be. However... an entire smartphone is a very power-hungry device. What if you could incorporate that same technology into a tiny IoT sticker that could, say, broadcast its GPS location or send deauth packets everywhere? It could use the stray static electricity from a laptop or 802.11 waves or whatever to slowly charge itself up, and once it reaches a certain threshold it would use that power to broadcast something or other, could be anything really, using a thin copper lining as an antenna. You could give these stickers out at conventions and other places, and all of the poor nerds who slap your sticker onto their stuff will be officially h4k3d! **rubs hands together connivingly**
  8. You can find lots of cameras that are publicly exposed to the internet through shodan.io
  9. Just got in, looks like a fun site!
  10. Why hello there, I was wondering what sort of software I should use for visual analysis of the XML files that nmap or masscan can generate. I want to be able to visualize the connections between several large networks in a fashion similar to Zenmap's topology map, but Zenmap has a limit on how many points it can plot. Are there any good alternatives?
  11. Hey guys, I've recently been trying out some hacker wargames, and I am working on the Blowfish game from Smash the Stack (http://smashthestack.org/wargames.html). Getting into level2 was super easy, but as I am such an amateur, I cannot get to level3... Has anyone been through this one? I'm not asking for the solution, just some hints at what to look for. EDIT: I got into level3! Muahahah! Only ten more to go...
  12. Like these folks have been saying, the most important part is knowledge. If you approach the computer field with the mindset of learning your stuff and increasing your tech savvy rather than the mindset of becoming a hacker, you will be far more successful. And one day you will wake up and realize you already are the hacker you dreamed of becoming at the beginning.
  13. looks like a fun time! I once stuck my raspberry pi into a tissue box with antennas poking through but this looks a lot better
  14. Yeah I bought an elite kit a few months back, can confirm it has not changed (except for it being currently sold out)
  15. I use Google Play Music, and even though my entire library is easily too large for my phone's 64 GB of storage, I can still keep all my favorite songs downloaded. There aren't many cases where I lose WiFi access for a long time, but even then I still have about 72 hours' worth of songs I can listen to without a connection.
  16. Perhaps botnet isn't the correct terminology, but I have a few old laptops sitting around unused. I was thinking that if you were running a program that handles some multi-threaded task and processes a large dataset, you could have a centralized system that keeps track of overall progress and assigns the next item in the dataset as soon as one of the PCs finishes its current task (a rough sketch of this pattern is included after the post list below).
  17. Hey guys, I was wondering what the best/most efficient way would be to get multiple devices to act in unison, as a botnet would, but without malicious intent, as a botnet wouldn't. Would the best choice be to use some cluster platform like Apache Mesos, or some sort of Docker application? Amazon Web Services maybe? Would designing an actual botnet make any sense? Anyone have any experience with this sort of thing?
  18. Is it possible to run PirateBox without OpenWrt? I know the Nano already supports OpenWrt, and I'm pretty sure the Tetra also does, but it isn't in OpenWrt's Table of Hardware yet... EDIT: I wish I could say that I meant the stock version of OpenWrt, but honestly it was so late when I posted this that I completely forgot that both Pineapples already run OpenWrt. I mean, it's not like it says "with OpenWrt" in the ASCII art that appears when you SSH into one... oh wait...
  19. Makes sense. It's funny, the slowness of navigating the Tor network is usually seen as a disadvantage, but from a security standpoint it is actually quite beneficial.
  20. I figured as much. From looking around a little, it seems that Windows XP has PowerShell anyway, so unless the target manually removed it (which isn't possible without breaking Windows on versions past XP) there shouldn't be any problem... unless I'm completely out of the loop and winxp stands for something other than Windows XP.
  21. I haven't looked into those specific payloads, but many commands that run in PowerShell are identical to those in the normal command prompt. Does the script use any cmdlets or other PowerShell-specific commands? If it doesn't, it may still work if you just change the line that opens PowerShell to open cmd instead.
  22. Multi-threading would probably help. I think I'll try adding some of that sweet CUDA GPU acceleration sauce as well; it works wonders for deep learning and password cracking.
  23. Hey guys, I've recently been getting into web crawling, and I've been considering ways to build a web crawler that detects onion sites on the Tor network. I know there are already lots of deep-web/dark-web/dank-web indexing sites, such as Ahmia and the onion crate, where one can go to find active onions. However, because new onions appear and disappear daily, it would be handy to have a personal tool that automatically detects onions, possibly even extracts some basic information, and logs the findings for later. Maybe catch some sweet hacks before the feds get to them, or accidentally infect yourself with cutting-edge malware.
      Idea 1: Brute Force. The obvious (and naive) implementation would be to brute-force onion names and run something like requests.get from Python's requests library. Assuming you are routing traffic into the Tor network, requests.get will return a 200 status when an onion site exists and is online at the given address, so any combination returning 200 should be flagged for later. If another status is returned, such as 404, no action is taken and the loop continues to iterate. By iterating through all possible onion links, one would eventually hit some real onions. This design is very similar to password brute-forcing, in both concept and effectiveness. All onion addresses consist of 16-character hashes made up of any letter of the alphabet (case insensitive) and the decimal digits 2 through 7, thus representing an 80-bit number in base32. An example of an actual onion address is http://msydqstlz2kzerdg.onion/, which is the onion link to the Ahmia search engine for Tor sites. This leaves roughly 1208925820000000000000000 possible character combinations for onion addresses. For reference, the largest possible value of a "long", the largest primitive data type for storing integers in Java, is 9223372036854775807, a solid six digits too short to contain the number of potential onions. If you designed a simple program to count from 0 to 1208925820000000000000000, it would take... a long-ass time to run (my PC takes about a minute to get into 7-digit territory counting by one, and about eight minutes to get into 8-digit territory... the destination number has 25 digits). It isn't that important to me if the web crawler takes several days or even weeks to run through every possible combination, since the majority of onion sites with actual content do persist for a while anyway. As for fresh sites that may not last long, you would have to get lucky for your crawler to hit the correct address during the short period when the site is online. This crawler would be designed to run continuously, looping through every possible combination over and over to continually update the list. There would also be periodic checks of whether onions in the list are still online.
      Pros: relatively straightforward to program and maintain; could potentially discover onions not contained in other indexes.
      Cons: inefficient and ineffective unless you have a supercomputer lying around.
      Idea 2: Crawler Crawler. The next possible implementation would be to leverage the work already done by others by creating an index of indexes. By checking for changes in popular existing indexes at set intervals, my onion list would update itself with far less computation and time. The one downside is that we can already access these indexes anyway, so we wouldn't get any juicy information before our deep-web peers do. Each site stores its index info in a different format, so the crawler would have to be tailored to read sites from each index differently. We would also have to manually account for index sites going down or new index sites being discovered.
      Pros: less heavy lifting for my PC; doesn't need to run constantly.
      Cons: must be tailored to each individual index; more work to code; indexes could go down or change formats; onion sites discovered are ones I could already find anyway.
      Idea 3: Google-Style Crawler. The last idea I have is to implement a crawler algorithm similar to the ones used by Google's own web spiders. My above crawler algorithms only consider the main 'home' addresses, consisting of the 16 characters and the .onion, even though most sites have many pages (fh5kdigeivkfjgk4.onion would be indexed, fh5kdigeivkfjgk4.onion/home would not). One thing professional-grade search-engine crawlers do is build their indexes by following links on the pages they visit. The algorithm would follow links contained in the page source to navigate around the website, and if addresses belonging to new onion sites are found (i.e. the 16 characters are different) it would add them to the index. This would be especially handy upon discovery of sites similar to the Hidden Wiki, which are stuffed full of links to other active (or inactive) onions.
      Pros: can take advantage of onion links discovered within new sites; the index will fill faster.
      Cons: the Tor network is often quite slow, so navigating through sites could be time-consuming.
      Right now I have some basic test code running to try out a few things, but nothing worth posting quite yet. I will post any progress I make here (a rough sketch of the link-following checker is included after the post list below). Let me know if you guys have any recommendations.
  24. I love me a good spectrum analyzer! While $200 is certainly less than a lot of high-end ones cost, the site seems to have a good return policy, so if you don't like what you get, just send it back. If a quick Google search of "is website.com a scam" doesn't turn up any worrying results, you should be good. As with a lot of products online, sometimes you simply don't know until you buy it.
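
For the distributed-processing question in posts 16 and 17 above, here is a minimal sketch of the centralized-queue idea using only Python's standard library. It assumes one machine acts as the coordinator and the spare laptops connect to it over the LAN; the host name, port, authkey, dataset, and process() function are all placeholders for illustration, not a hardened or complete setup.

    # coordinator.py - run on the machine that owns the dataset.
    # Serves a central job queue over the network using the standard
    # library's multiprocessing.managers, so workers can pull items remotely.
    import queue
    from multiprocessing.managers import BaseManager

    job_queue = queue.Queue()      # items waiting to be processed
    result_queue = queue.Queue()   # finished results coming back

    class QueueManager(BaseManager):
        pass

    # Expose the two queues to remote workers.
    QueueManager.register('get_jobs', callable=lambda: job_queue)
    QueueManager.register('get_results', callable=lambda: result_queue)

    if __name__ == '__main__':
        for item in range(1000):   # placeholder for the real dataset
            job_queue.put(item)

        # Port 50000 and the authkey are placeholders - pick your own.
        manager = QueueManager(address=('0.0.0.0', 50000), authkey=b'change-me')
        server = manager.get_server()
        print('Coordinator listening on port 50000...')
        server.serve_forever()

    # worker.py - run on each spare laptop.
    import queue
    from multiprocessing.managers import BaseManager

    class QueueManager(BaseManager):
        pass

    QueueManager.register('get_jobs')
    QueueManager.register('get_results')

    def process(item):
        return item * item         # placeholder for the real work

    if __name__ == '__main__':
        # 'coordinator.local' stands in for the coordinator's real address.
        manager = QueueManager(address=('coordinator.local', 50000),
                               authkey=b'change-me')
        manager.connect()
        jobs, results = manager.get_jobs(), manager.get_results()

        while True:
            try:
                item = jobs.get(timeout=5)   # grab the next unclaimed item
            except queue.Empty:
                break                        # nothing left to do
            results.put((item, process(item)))

Each worker grabs the next item as soon as it finishes its current one, which gives the "assign the next item when a PC is free" behaviour described in post 16 without writing any networking code by hand; heavier tooling like Mesos or AWS only becomes worthwhile if the fleet grows past a handful of machines.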
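For the onion-crawler post (number 23 above), here is a minimal sketch of the "check an address, then harvest any new .onion links from the page source" approach, roughly Ideas 1 and 3 combined. It is not the author's test code: it assumes a local Tor client is running with its SOCKS proxy on 127.0.0.1:9050, that requests is installed with SOCKS support (pip install "requests[socks]"), and it only matches the 16-character v2-style addresses described in the post; the seed list, limit, and timeout are placeholder values.

    # onion_check.py - sketch of a link-following onion checker.
    import re
    from collections import deque

    import requests

    # Route all traffic through Tor; socks5h lets Tor resolve .onion names.
    TOR_PROXIES = {
        'http': 'socks5h://127.0.0.1:9050',
        'https': 'socks5h://127.0.0.1:9050',
    }

    # 16-character base32 names (letters a-z and digits 2-7), as in the post.
    ONION_RE = re.compile(r'[a-z2-7]{16}\.onion', re.IGNORECASE)

    def check_onion(address):
        """Return (is_up, page_text) for a bare onion hostname."""
        try:
            resp = requests.get(f'http://{address}/', proxies=TOR_PROXIES, timeout=30)
            return resp.status_code == 200, resp.text
        except requests.RequestException:
            return False, ''

    def crawl(seeds, limit=50):
        """Breadth-first walk: flag live onions, harvest new ones from page source."""
        pending = deque(seeds)
        seen, live = set(seeds), []
        while pending and len(seen) <= limit:
            address = pending.popleft()
            is_up, text = check_onion(address)
            if not is_up:
                continue
            live.append(address)
            # Idea 3: follow any onion addresses discovered in the page source.
            for found in set(ONION_RE.findall(text)):
                found = found.lower()
                if found not in seen:
                    seen.add(found)
                    pending.append(found)
        return live

    if __name__ == '__main__':
        # Seed with a known index (Ahmia's v2 address quoted in the post).
        for onion in crawl(['msydqstlz2kzerdg.onion']):
            print('live:', onion)

Seeding the queue with addresses scraped from existing index sites would give the Idea 2 behaviour, while replacing the seed list with a generator over all 32^16 candidate names would give Idea 1, though as the post notes, that keyspace makes pure brute force impractical.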