
Jason Cooper

Dedicated Members
  • Posts: 520
  • Joined
  • Last visited
  • Days Won: 8

Everything posted by Jason Cooper

  1. If you have a stable connection then distance won't affect the ARP spoofing, as it isn't a race-condition type of attack. In your example, as long as you can connect reliably, you could mount a MITM attack with SSLStrip.
  2. It also indicates that they haven't thought about their firewalls either, as any sane admin will have services blocked at the firewall unless they are actively being used.
  3. Using a directional antenna will increase your range (in some directions, at the expense of others), and if there are any wireless networks within that range then you will be able to connect to them. Of course, most wireless networks are protected with encryption, so you still won't be able to use them (unless you crack the encryption). "Transmitting over spec" refers to the amount of power that your wireless interface is set to use when transmitting. Most wireless interfaces will transmit below 1 Watt; there are a few that will transmit at that level or higher, but not by much.
  4. It depends on the methods used to verify the cookie. There are two common methods for handling session IDs; let's call them time based and session based. Session-based cookies put a pseudo-random ID in the cookie on the browser and also store it in a database on the server (along with the session details: user, expiry date, etc.). When a request comes in with a cookie, the server looks up the associated entry in the database and, if valid, uses it. When the user actively logs out, it drops the entry from the table, so that session ID is no longer valid. Time-based cookies create a session ID consisting of a user identifier (e.g. username), the time issued, a pseudo-random salt for variety, and a hash. The hash is created from those values plus a password/magic value that is kept private on the server. The server doesn't store any information about the session itself; instead, when it receives a request with the cookie, it verifies that the hash is still valid, checks the issued time, and if the cookie hasn't timed out it uses the data in the cookie to get the user's details. If the session-based method is being used, then when the user logs out the server actively removes the session and any future attempt to use that session cookie will fail. The downside is that you have to keep details of every active session, which can be a very large set of data for big systems, and search that list on every request. Time-based cookies, on the other hand, remain valid for a certain length of time; the user logging out doesn't invalidate them, it just has the browser throw the details away. The benefits of this method are that your server doesn't have to store the session details, as everything it needs is in the cookie (saving space), and that the same session cookie can be used for more than one host in the same domain, even when they don't share a database.
The downside is that you can't easily invalidate an individual session cookie (you can invalidate all of them at once by changing the server's password used to generate/validate them). If you have the resources then, from a security point of view, session-based cookies are usually a better choice than time based. Having said that, there are plenty of real-world cases where time based is the better choice, which makes it worth knowing both methods.
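The time-based scheme can be sketched in a few lines. This is a minimal illustration, not production code: the secret value, the field layout (user|issued|salt|HMAC) and the timeout are all assumptions made for the example.

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # the private value; known only to your servers

def issue_cookie(username, salt, now=None):
    """Build user|issued|salt|HMAC. The server stores nothing per session."""
    issued = int(now if now is not None else time.time())
    payload = f"{username}|{issued}|{salt}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_cookie(cookie, max_age=3600, now=None):
    """Return the username if the HMAC checks out and the cookie hasn't timed out."""
    try:
        username, issued, salt, sig = cookie.split("|")
    except ValueError:
        return None
    payload = f"{username}|{issued}|{salt}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # forged or tampered with
    if (now if now is not None else time.time()) - int(issued) > max_age:
        return None  # timed out; note that logout can't revoke it, only expiry can
    return username
```

Note the trade-off from the post is visible right in the code: there is no session table anywhere, but the only ways to kill a cookie early are to wait for `max_age` or to change `SECRET` (which kills everyone's).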
  5. Generally (and without being there to try it for Mike, we can only talk in generalisations) if you can receive a signal from an access point (AP), you should be able to transmit back at the same power the AP uses and have it received. Unless the AP is transmitting over spec, you shouldn't have to transmit over spec either. Having said that, the real world doesn't always cooperate, and the closer you are to the limit of your range the less reliable the connection will be, as the signal-to-noise ratio will be lower. Hence the recommendation that the only way Mike will find out whether a reliable connection can be made is to try it.
  6. If you are viewing a production web site then you should never be shown error messages. Hiding error messages doesn't stop people from abusing any errors in the site, but it does make their job harder. If you just came across the error while browsing, I would suggest reporting it to the site's owner, so that they are at least aware their site is broken (after all, obscure pages can easily be missed when a change elsewhere causes them to break).
  7. It also wouldn't stop people connecting normally to a network where the DHCP server has a MAC-address-to-IP mapping, as it will only hand out those IPs to the devices with those MAC addresses (very useful for client devices that you move between a number of networks but would like to have a known IP on some or all of them).
  8. Sounds like network-manager to me. If you have got network manager installed then try uninstalling it and configuring your network manually. Network-manager is great when you just want it to look after one interface but when you start to have multiple interfaces it can get confused quite quickly.
  9. What credentials is it that you are worried about making it easy for script kiddies to get? Is it just credit card details that you are worried about? If so, then you could always get your tool to replace the first 12 digits with X's and leave the last 4 digits. That should be enough for pen-testers to show that credit card details were obtainable without putting the details at further risk of being used. It would also mean that if a script kiddie wants to use your tool for nefarious purposes they would have to actively alter it.
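The masking idea is only a few lines of logic. A minimal sketch (the function name and input format are just for illustration; it keeps the last four digits and leaves separators alone):

```python
def mask_pan(pan):
    """Replace every digit except the last four with X, leaving separators alone."""
    remaining = sum(c.isdigit() for c in pan)  # total digits in the input
    out = []
    for c in pan:
        if c.isdigit():
            remaining -= 1
            out.append(c if remaining < 4 else "X")  # keep only the final four digits
        else:
            out.append(c)  # spaces/dashes pass through untouched
    return "".join(out)

print(mask_pan("4111 1111 1111 1234"))  # XXXX XXXX XXXX 1234
```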
  10. The first thing to check is that you aren't using the same SSID as someone else nearby, as some enterprise wireless networks will automatically deauth people from other networks if they have the same SSID and the access point isn't registered as belonging to their network (effectively a kind of defence against people using a pineapple close to the network). You can try to triangulate the attacker's location if you have a directional antenna (the more directional the better) and a laptop. Use the directional antenna with the laptop and run Wireshark to show live packet dumps (perhaps with a filter for deauth packets). Then watch the number of packets received as you slowly rotate the directional antenna through 360 degrees. Make a note of where you start picking up the packets and where you stop picking them up, and plot this on a map of your local area. Move to a different location and repeat the process. Once you have plotted both sets onto the map you should see the area where your attacker is most likely situated. If the area shown on the map is still too large to identify the attacker, repeat the process from locations closer to the area you have identified.
  11. If you are using a high-gain antenna then you will receive the gain on both signals being sent and received. Really the only way to know if it will work reliably over the distance is to try it. I have seen some places where there is so much interference in the 2.4GHz range that you can barely connect when stood next to the access point, but there are other places where you can connect from half a mile away (with a good antenna).
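For a rough feel of why distance matters, the textbook free-space path loss formula gives a lower bound on how much signal you lose over a given distance; real environments (interference, obstacles) always lose more. A quick sketch:

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Half a mile (~0.8 km) at 2.4 GHz, the best case before any real-world losses:
print(round(fspl_db(0.8, 2400), 1))  # 98.1
```

That ~98 dB is why every dB of antenna gain at either end counts double for the link, as it applies to both transmit and receive.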
  12. Looking at your current blocked-page message, I guess your router is already doing a similar thing with the DNS. What router are you using and what version of firmware is it running? It might be possible to edit the page that it displays to the user (even if it is changed to a simple redirect to another web server).
  13. If you have your own DNS server that your machines use for lookups, then you could simply put entries in it for the sites you want to block that return your web server's IP. Then set up your web server to return the blocked page no matter which page is requested.
  14. If you only use the method when you feel you need to be anonymous then you will eventually make a mistake and give your actual location away. That is what happened to Sabu: he made one mistake that exposed his real IP, and that was enough of a lead for the authorities to locate him, watch him and build a case good enough to arrest him. At the same time, if you use the method all the time then you will give them a large number of packets to watch, the timing of which can be used to narrow down your location. After all, there will be fluctuations in the response times which can be charted over time and then matched against a list of recorded events that would affect them (weather, etc.) to find a probable location. It isn't easy, but given access to enough data it is possible. Having said that, it would certainly be harder to locate someone using your method than someone using just a VPN or a single proxy.
  15. Will your university notice you connecting from your home? If it notices you, it won't be because you are connecting from home. Doing illegal things or breaking any IT policies you signed when joining the University will get you noticed, however. Seriously though, you can always cover yourself by simply contacting the IT department at your University and saying something like "I do a lot of work for my studies at home. From home I can see the University's wireless network and just wanted to check that I would not be infringing any IT policies by connecting to it from my home."
  16. What helps one person learn something doesn't always help others. The best advice I can give would be to see if the books are available at your local library first; if so, borrow them and see which works best for you. If they aren't available at your local library then see if the library can get them for you (there may be a small charge to order them, but nowhere near the cost of buying them yourself). When looking for a good book on a subject, I personally would start with O'Reilly, as I have always found their books to be both very useful to learn from and an excellent reference afterwards.
  17. Also, if it is a web service running through Apache or similar then you can always check its access logs to see who accessed which pages and when. Depending on the authentication method used, you may or may not have usernames listed with the access, but you should have IP addresses listed, which should help you narrow it down to a specific machine. If it does run through Apache or similar but the system doesn't have any access controls built in, then you can always add them at the Apache stage (if you have a directory service like Active Directory or LDAP, you can configure Apache to authenticate users against that). That way the username should appear in the logs along with the other details. If the service isn't web based but runs on a server that people physically log into, then you should be able to check the file that changed to get the date and time it changed, then look through the machine's logs to see who was logged into the server at that time.
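Pulling who-accessed-what out of those access logs can be scripted in a few lines. The line layout below is an assumed Apache combined-format example; adjust the pattern to whatever your actual LogFormat emits:

```python
import re

# Assumed combined-format layout: ip ident user [time] "METHOD path proto" status bytes.
# The regex only captures the fields discussed above: IP, user, time, method, path.
LINE = re.compile(r'^(\S+) \S+ (\S+) \[([^\]]+)\] "(\S+) (\S+)')

def summarise(lines):
    """Return (ip, user, timestamp, method, path) for each parsable log line."""
    hits = []
    for line in lines:
        m = LINE.match(line)
        if m:
            hits.append(m.groups())
    return hits

sample = ['10.0.0.5 - alice [01/Jan/2024:10:00:00 +0000] "GET /report HTTP/1.1" 200 123']
print(summarise(sample))  # [('10.0.0.5', 'alice', '01/Jan/2024:10:00:00 +0000', 'GET', '/report')]
```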
  18. I would suggest targeting the zip files first, as the last time I had to do a similar task they were a lot faster to crack. Once you have a password, try it on all the other zip and rar files (after all, he wouldn't be the first person to use the same password for everything). Also, you did try a dictionary attack before going for the brute-force attack, didn't you? If he is any good with password choice it will fail, but given that the cost of running a dictionary attack is so low, you might as well give it a shot (perhaps on an old machine you have laying around).
  19. A few things to consider. A) I have yet to see untraceable payment methods accepted by satellite internet providers, so this would push you down the fake-ID route, potentially opening yourself up to fraud charges as well as whatever charges you get for your actions through the account. B) Connecting through a consistent route would make watching you easier; they may not be able to find you immediately, but they could quite easily build a profile on you to help narrow down their search. E.g. if you suddenly drop your connection it could indicate a power outage in your area, and suddenly they have a theory about which country and city you are in. C) Since they know exactly where the satellite is located, they can use your ping times to estimate a rough location for you (kind of like a reverse GPS).
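The "reverse GPS" in point C) is just speed-of-light arithmetic. A sketch with illustrative figures (real links add processing and queueing delay on top, so these are bounds, not measurements):

```python
# Radio travels at light speed, so a round-trip time bounds the path length.
C_KM_PER_MS = 299_792.458 / 1000  # ~299.8 km per millisecond

def max_one_way_km(rtt_ms):
    """Upper bound on the one-way path implied by an RTT (ignores processing delay)."""
    return (rtt_ms / 2) * C_KM_PER_MS

# A geostationary satellite orbits ~35,786 km up, so a bent-pipe hop
# (up and down, in both directions) costs at least ~477 ms of RTT:
min_rtt_ms = 4 * 35_786 / C_KM_PER_MS
print(round(min_rtt_ms))  # 477
```

Anything you measure above that floor is ground-segment and processing delay, and watching how that residue varies is exactly the profiling described in B).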
  20. Personally, when spec'ing a machine for hosting a set of VMs there are a number of considerations to make. The first is the number of cores available; in fact, I would suggest that you will do better to sacrifice some GHz from a CPU if it gains you more cores (if you can get both cores and speed in your price range then great). The second factor is storage. Do you need size or performance? If you are planning on a number of small servers (say each requiring 10GB - 20GB of disk space) then SSDs could be the best way forward. If on the other hand your machines will need a large quantity of disk space, then you should consider RAID, both to boost performance and to give yourself some redundancy in case a disk fails (though you will have good backup procedures in place for all your VMs, won't you?). The third factor is memory, which is quite easy: just total up the quantity you want for your initial set of VMs, add some for your host, and then add a bit more. E.g. if you want to have 6 VMs with 2GB of memory each then you would need at least 12GB + 2GB for the host + 6GB (enough for another 3 VMs). This gives you a minimum requirement of 20GB; rounding it up to 32GB gives you both plenty of memory for your VMs and an easy enough quantity to purchase. The final factor is networking. This depends on what you require your VMs to do and what your current networking infrastructure is like. If your VMs will be streaming video out across your network then you may want multiple gigabit NICs on the machine. On the other hand, if they aren't going to be network intensive then they could all share the one gigabit NIC on the motherboard. Finally, if you only have a 100Mb network then you would want multiple NICs again, as it would be very easy for one VM to saturate the single network connection (effectively DOSing the rest of your VMs).
If you are planning on leaving your VMs running permanently then I would recommend investigating alternatives to VirtualBox. Xen, VMware and KVM are all very good alternatives. Personally I have found Xen to be very easy and reliable, but Linux seems to be heading down the KVM line these days, so that could be a good way to go for the experience.
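The memory rule of thumb in the third factor can be written down as a tiny helper. A sketch, with the host allowance and "headroom for a few more VMs" figures as assumptions you should tune:

```python
def vm_host_memory_gb(vm_count, gb_per_vm, host_gb=2, spare_vms=3):
    """Total up VM memory, add some for the host, then headroom for a few more VMs."""
    needed = vm_count * gb_per_vm + host_gb + spare_vms * gb_per_vm
    size = 1
    while size < needed:  # round up to the next power of two you can actually buy
        size *= 2
    return needed, size

print(vm_host_memory_gb(6, 2))  # (20, 32), matching the worked example above
```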
  21. Whatever way you go about securing your files, don't forget that losing the USB drive isn't the only risk involved with putting confidential files on one. Some applications make cached copies of files locally while using them and don't clear them out when they close. Misconfigured search programs can cache metadata about your files that may even include a preview version of an image. So if you aren't careful you could be leaving your confidential files on other people's machines in a form they can read, and possibly even presented to them when they innocently search for their own files. You will also need to remember to clear out the recently-opened list in any programs you use (and possibly the Windows start menu), as even just the filename and path can give information away about your file. Having said all that, TrueCrypt is awesome and definitely harder to attack, and to detect, than an encrypted zip file.
  22. I have encountered a number of situations where regular expressions were the only way to effectively process the necessary data out of some XML files in the time required. Generally it was when the XML files I was dealing with were so large, both physically and logically, that the overheads of fully parsing the XML would tie up the server's available resources for far too long. In one case we were dealing with a processing time of over a day on a server with 16GB of memory available to it; switching to regular expressions brought that time down to about an hour. Other times I have had to use regular expressions to recover data from corrupt XML files that actually break the parser. You do have to be careful when using regular expressions to process data from XML files, though, and usually you will be better off using a proper XML parser. Generally the best way for programmers to think about dealing with XML is: "If you don't have a reason why you can't use a proper XML parser, then you should be using one." As for what airman_dopey is trying to do, I suspect that it might be easier to create a program/script that sucks data in from both logs while p0f and ettercap are running, uses that data to update its own data, and then regularly uses its data to output a report.
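As a toy illustration of the careful-regex approach (the tag names and record format below are invented, and the pattern is only safe-ish because the layout is known and fixed; in the real large-file case you would read in chunks rather than hold one string):

```python
import re

# Invented record-style XML standing in for a file too large or too broken to parse.
xml = "<records><rec id='1'>alpha</rec><rec id='2'>beta</rec></records>"

# Anchored to the exact known layout; a proper parser is still the default choice.
values = re.findall(r"<rec id='(\d+)'>([^<]*)</rec>", xml)
print(values)  # [('1', 'alpha'), ('2', 'beta')]
```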
  23. This is where I would reach for good old reliable Perl. perl -e'$x=join("",<STDIN>);$x=~s/\s*[\r\n]+\s*//gs; $x=~s/^.*?(<host.*<\/host>).*?$/$1/i;$x=~s/<\/host>/<\/host>\n/gi;print $x;' <InputFile.xml First we load the whole input into a variable ($x), and then there are just three stages, as digip suggested. The first is to remove all the newlines (including any spaces before and after those characters), which leaves us with a single line. Secondly, we strip everything outside of the host tags ($x=~s/^.*?(<host.*<\/host>).*?$/$1/i;). Finally, we put a newline on the end of every closing host tag. If you want to do this regularly then I would suggest putting it into an actual Perl script file that you can just run and direct your XML into. The next question is what you are planning to do once you have the files in this format. I assume you are planning on looking for changes over time by comparing older files with newer files, but if you are planning on doing something more complex then it could pay off to expand the script further.
  24. There are a number of options available, assuming a Unix-based OS (Linux, BSD, etc.). You can run one of the following to see information about the process (ps lists processes, and piping its output through grep reduces the lines down to those you are interested in). Linux style: ps -elf | grep spbin BSD style: ps aux | grep spbin If you have it running in a terminal/console then you could also press 'CTRL+z' to pause the job and get your prompt back, and then search your history for your full command. Once you have paused a job you can use 'fg' to start it again in the foreground or 'bg' to start it in the background (very useful when you realise that you have missed the '&' off the end of a command you meant to start in the background). In this case you will probably want to start it again in the foreground so you don't accidentally close the terminal and terminate the process.
  25. For those that get the "unexpected '[' error": it is caused by syntax that requires version 5.4 or newer of PHP. Looking at the code again, I can see it will take a while to devise a way to attack this without simply replacing the code with a logger to grab the password when the attacker attempts to run a command. If anyone else is looking at this, then as we know you can assume that $e = "system". I also believe that $e1 = "passthru" and $fex = "function_exists". I have based these assumptions on the number of characters returned from the pack function when called with the 'I', 'S' and 'C' options (4, 2 and 1 respectively), which gives us a length of 8 for the $e1 function name (passthru) and 15 for the $fex function name (function_exists).
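The length reasoning is easy to cross-check. PHP's pack('I'), pack('S') and pack('C') emit 4-, 2- and 1-byte strings, and Python's struct module uses the same sizes for its standard 'I', 'H' and 'B' formats, so a quick sanity check (the PHP-to-Python format mapping here is the assumption):

```python
import struct

# PHP format -> Python struct format with the same width (standard size mode "=").
sizes = {php: struct.calcsize("=" + py) for php, py in (("I", "I"), ("S", "H"), ("C", "B"))}
print(sizes)  # {'I': 4, 'S': 2, 'C': 1}
```

With those widths, counting the packed calls gives the 8- and 15-character name lengths that point at passthru and function_exists.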