flyingpoptartcat Posted February 12, 2012 (edited)

I started a project on Google Code. I'd love your feedback! Contribute if you wish. http://code.google.com/p/web-sorrow/

A Perl-based tool for checking a web server for misconfiguration, version detection, enumeration, and server information. I will build more functionality in the future.

What it's NOT: a vulnerability scanner, inspection proxy, DDoS tool, or exploitation framework. It's entirely focused on enumeration and collecting info on the target server.

CURRENT functionality:

-S - stands for standard: a set of standard tests including directory-indexing checks, banner grabbing, language detection (should be obvious), robots.txt, 200-response testing (some servers send a 200 OK for every request), and thumbs.db scanning
-auth - looks for login pages and admin consoles using a list of the most common login files and dirs. The list doesn't need to be very big, because what else are you going to name it? notAlogin.php?
-Cp - scans with a huge list of plugin dirs. The list is a bit dated (the Drupal and WordPress plugin databases are now current, but sorry, Joomla's is still a bit old)
-I - searches the responses for interesting strings
-Ws - looks for web services such as hosting provider, blogging services, favicon fingerprinting, and CMS version info
-Fd - looks for things people generally don't want you to see. The list is generated from a TON of robots.txt files, so whatever it finds should be interesting
-ninja - a lightweight, stealthy scan that uses bits and pieces from the other scans
-R - uses HTTP Range headers to make scans faster
-Shadow - uses Google's cache instead of requesting from the target host
-Sd - brute-forces subdomains
-Db - brute-forces directories with the big DirBuster database
-ua - use a custom User-Agent. PUT THE UA IN QUOTES if it contains spaces
-proxy - send all HTTP requests via a proxy, e.g. 255.255.255.254:8080
-e - run all the scans in the tool

Web-Sorrow also does false-positive checking on most of its requests (it's pretty accurate, but not perfect). See the usage sketch below.

Edited June 2, 2012 by flyingpoptartcat
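By way of example, invocations might look something like this (a sketch only: the script name Wsorrow.pl and the -host flag aren't spelled out in this post, so check the project docs for the exact syntax; the other flags are from the list above):

# hypothetical standard scan plus interesting-string search
perl Wsorrow.pl -host example.com -S -I

# hypothetical low-profile scan routed through a proxy
perl Wsorrow.pl -host example.com -ninja -proxy 255.255.255.254:8080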
flyingpoptartcat (Author) Posted February 12, 2012 (edited)

Remember to check for updates.

Edited May 25, 2012 by flyingpoptartcat
Infiltrator Posted February 13, 2012

Just wondering if you would be able to upload a video demonstrating how it works.
flyingpoptartcat (Author) Posted February 13, 2012 · Hidden by flyingpoptartcat, June 8, 2012 - irrelevant content

Infiltrator wrote: Just wondering if you would be able to upload a video demonstrating how it works.

I'd love to. I'm constantly adding features and fixing bugs, so stay up to date! v1.2 coming soon
flyingpoptartcat (Author) Posted February 16, 2012 · Hidden by flyingpoptartcat, June 8, 2012 - irrelevant content

Whenever I try to edit the footage of the tool, it comes out looking like cr4p and it's like 200 MB. Do I need a special encoder, or am I missing a setting?
flyingpoptartcat (Author) Posted February 23, 2012 (edited)

Infiltrator wrote: Just wondering if you would be able to upload a video demonstrating how it works.

I've added some pretty detailed docs at http://code.google.com/p/web-sorrow/

Edited June 8, 2012 by flyingpoptartcat
bwall Posted May 20, 2012

The major thing that gives away a web scan is that scanners don't really throttle their requests. I drop most web scans just by putting a SYN rate limit in my iptables rules; it works against port scans and SYN floods too. I would look into HTTP/1.1 Keep-Alive, as it lets you keep the same connection open to check multiple pages. That would stop you from getting picked up by the same rules that leave other easily avoidable attacks out in the cold. (See the example rules below.)
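For illustration, a SYN rate limit of the kind described might look like this (a sketch, not bwall's actual rules; the 5/s rate and burst of 10 are invented thresholds):

# accept new TCP connections only up to a rate limit, drop the overflow
iptables -A INPUT -p tcp --syn -m limit --limit 5/s --limit-burst 10 -j ACCEPT
iptables -A INPUT -p tcp --syn -j DROP

A scanner that opens a fresh connection per request sends a SYN each time and trips the limit quickly; a Keep-Alive scanner sends one SYN and then issues all its requests over that established connection, so the rule never sees it.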
flyingpoptartcat (Author) Posted May 25, 2012

bwall wrote: The major thing that gives away a web scan is that scanners don't really throttle their requests. I drop most web scans just by putting a SYN rate limit in my iptables rules. [...] I would look into HTTP/1.1 Keep-Alive, as it lets you keep the same connection open to check multiple pages.

If you want stealth, try -ninja. Also, Web-Sorrow uses connection caching, a.k.a. everything on one socket. Don't forget to update; I posted this thread a while ago.
bwall Posted May 25, 2012

flyingpoptartcat wrote: If you want stealth, try -ninja. Also, Web-Sorrow uses connection caching, a.k.a. everything on one socket. [...]

I hadn't tried it out; I was just suggesting based on how I detect scans. The fact that yours does that puts it above most. I see scans from w00tw00t all the time, to the point where I started blocking IPs that scan my server for vulnerabilities in iptables, and made taunting 404 messages. :P
flyingpoptartcat (Author) Posted May 27, 2012

bwall wrote: [...] I started blocking IPs that scan my server for vulnerabilities in iptables, and made taunting 404 messages. :P

Well, thank you. BTW, custom 404 pages are always fun to see.
bwall Posted May 31, 2012 (edited)

flyingpoptartcat wrote: Well, thank you. BTW, custom 404 pages are always fun to see.

So I got to rethinking how I block web scanners, and had an idea for putting some code in my custom 404 page. Here is the rundown of my idea: http://pastebin.com/Nf2YyAGe It blocks any IP that visits those pages (when they don't exist, that is). I'm going to try your tool against that idea and let you know the results. I'm not sure whether PHP code will run on a HEAD request, but if it doesn't, that could be one way to bypass my idea.

Edit: The custom 404/403 method does work against your tool. This includes ninja mode.

Edited May 31, 2012 by bwall
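The pastebin holds bwall's actual code; purely as an illustration of the idea described (the bait paths and blocklist location here are invented), a 404 trap might look like:

<?php
// custom 404 handler: flag any IP that requests scanner-bait paths
$bait = array('/phpmyadmin/', '/admin.php', '/wp-login.php'); // hypothetical list
$uri  = $_SERVER['REQUEST_URI'];
$ip   = $_SERVER['REMOTE_ADDR'];
foreach ($bait as $path) {
    if (strpos($uri, $path) === 0) {
        // append to a blocklist that something like a cron job feeds into iptables
        file_put_contents('/var/log/scanner-blocklist.txt', $ip . "\n", FILE_APPEND);
        break;
    }
}
header('HTTP/1.0 404 Not Found');
echo 'Not Found';
?>

Because the handler runs whenever one of those nonexistent pages is requested, any client walking a wordlist of common paths bans itself almost immediately.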
flyingpoptartcat (Author) Posted June 2, 2012

bwall wrote: So I got to rethinking how I block web scanners, and had an idea for putting some code in my custom 404 page. [...] Edit: The custom 404/403 method does work against your tool. This includes ninja mode.

Neat idea! If you want a good list of things to block, go to Web-Sorrow_v(version number)/DB/small-tests.db and open it in a text editor. In Web-Sorrow, -ninja does NOT make the other scans stealthy; it is itself a scan that uses very few requests. BTW, I've just updated Web-Sorrow to v1.3.7.
bwall Posted June 2, 2012

flyingpoptartcat wrote: Neat idea! If you want a good list of things to block, go to Web-Sorrow_v(version number)/DB/small-tests.db and open it in a text editor. [...]

Hey, I tried using -R, and it's still only sending GETs, not HEADs. I'm not sure whether the command-line argument is taking effect.
flyingpoptartcat (Author) Posted June 2, 2012

bwall wrote: Hey, I tried using -R, and it's still only sending GETs, not HEADs. [...]

http://restpatterns.org/HTTP_Headers/Content-Range
bwall Posted June 2, 2012

flyingpoptartcat wrote: http://restpatterns.org/HTTP_Headers/Content-Range

OK, I figured -R would send HEAD requests, since that's a good way to check whether a file exists. I see what you're doing with Content-Range, though; that's pretty neat. That way only bytes 0 through 1 get returned. The server still processes the whole PHP page, though, meaning the 404/403 script will block it. Would it be difficult to add a flag to send HEADs instead of GETs? http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html section 9.4
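For reference, here's what the two approaches look like on the wire (raw HTTP sketch; the path, host, and sizes are placeholders). Strictly speaking, the client sends a Range header, and the server answers 206 Partial Content with a Content-Range header in the response:

GET /some/page.php HTTP/1.1
Host: example.com
Range: bytes=0-1

HTTP/1.1 206 Partial Content
Content-Range: bytes 0-1/4096

HEAD /some/page.php HTTP/1.1
Host: example.com

Either way, a server-side script at that URL typically still executes in full; only the bytes sent back differ, which is why the 404/403 trap fires regardless.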
flyingpoptartcat (Author) Posted June 2, 2012

bwall wrote: OK, I figured -R would send HEAD requests, since that's a good way to check whether a file exists. [...] Would it be difficult to add a flag to send HEADs instead of GETs?

Well, I would have to make a subroutine for issuing requests and check whether the flag is set every time a request is made, so yes, it would be hard. Also, you might want to use a regex when checking the URL, to avoid evasions like ././././././passwords.txt?tgvfjhgvjhg=rgtrf444
bwall Posted June 2, 2012

flyingpoptartcat wrote: [...] you might want to use a regex when checking the URL, to avoid evasions like ././././././passwords.txt?tgvfjhgvjhg=rgtrf444

Good idea. I was planning on adding to the methods I use to sanitize URLs before checking them. Right now it removes one extra / at the beginning, but I think I'm going to loop that and change how the values are stored. I'll post the next version. I also want to switch everything over to a startsWith check, so excess stuff like the GET args gets ignored.
bwall Posted June 2, 2012

Double posting, sorry. So I did some work with netcat and Apache. Apache seems to remove the "./" automatically, but I'll keep that functionality in for other web servers. Also, HEADs do run PHP code, so using a HEAD will not avoid detection. Here is my current version: http://pastebin.com/h7SPzftp I'm going to add more locations to it later.
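Again, the pastebin is the real code; as a sketch of the normalization being discussed (loop the cleanup until the URI stops changing, drop the GET args, then prefix-match), in PHP:

<?php
// illustrative URI normalization before matching against bait paths
function normalize_uri($uri) {
    $uri = preg_replace('/\?.*/', '', $uri);        // drop GET args ("?junk=junk")
    do {
        $prev = $uri;
        $uri  = str_replace('./', '', $uri);        // strip "./" evasion segments
        $uri  = preg_replace('#/{2,}#', '/', $uri); // collapse duplicate slashes
    } while ($uri !== $prev);                       // loop until nothing changes
    return $uri;
}

$bait  = array('/passwords.txt', '/admin.php');     // hypothetical list
$clean = normalize_uri($_SERVER['REQUEST_URI']);
foreach ($bait as $path) {
    if (strpos($clean, $path) === 0) {              // startsWith-style prefix check
        // block this IP, as in the 404 trap above
    }
}
?>

With this, a request for /././././passwords.txt?tgvfjhgvjhg=rgtrf444 normalizes to /passwords.txt and matches the bait list.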
flyingpoptartcat (Author) Posted June 2, 2012

There are hundreds of different server daemons, and even more versions and forks of those, so it would be a good idea to keep it in.