
Jason Cooper

Everything posted by Jason Cooper

  1. I haven't got my machine with airodump-ng to hand, but it might be easier for you to parse the output files it can generate rather than the screen output. Just use something like airodump-ng wlan0 -w OutputData and then, while it is running, tail the file with tail -f OutputData-01.csv (airodump-ng appends a run number and extension to the prefix you give it). If you see things appearing in the file as time goes on then you could just open the file in your script and read from that.
  2. Using hostapd you can turn any Linux distro into an access point.
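A minimal hostapd.conf sketch of that (interface name, SSID and passphrase are placeholders, and the options are worth checking against the documentation for your hostapd version):

```
# /etc/hostapd/hostapd.conf -- minimal WPA2 access point (placeholder values)
interface=wlan0
driver=nl80211
ssid=ExampleAP
hw_mode=g
channel=6
wpa=2
wpa_key_mgmt=WPA-PSK
wpa_passphrase=ChangeThisPassphrase
rsn_pairwise=CCMP
```

Start it with hostapd /etc/hostapd/hostapd.conf; you would still need to hand out addresses (e.g. with dnsmasq) and bridge or route the traffic for clients to get anywhere.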
  3. If you still have trouble and you have any access to the router then you might be able to get it to tell you what networks it thinks are connected to each interface. That would hopefully give you a set of network ranges to start searching with nmap.
  4. To keep things simple I will avoid talking about layer 3 switches, VLANs, etc. and will just talk about the very basic difference between the devices. A wireless access point provides a connection for a number of wireless devices to the network. A hub shares the connection between all the devices: any packets it receives get sent out on all interfaces (i.e. all cables plugged in). A switch is smarter than a hub and will look at the packet it has received and send it out only on the interface it knows the destination device is connected to. These build up the basis of a network; a router sits between two or more networks, takes the packets passed to it and, depending on the destination network of each packet, puts it out on a specific network. The difference between a router and a switch is the level they work at: routers know which network is connected to which interface, and switches know which devices are connected to each interface. These are very simplified explanations and things get more complex when you really delve into networks, but hopefully they are good enough to give you a rough idea of what each device is doing. Modern wireless routers are really a switch, router and wireless access point in one package. The router part decides whether to pass packets to the switch or the internet interface; the switch passes the packet out on the correct interface (one of the cabled ones, the one connected to the wireless access point, or the one connected to the router).
  5. Kind of. The V8 engine is designed to be included in other applications; node.js takes the V8 engine, as you say, but combines it with other libraries to give a really good environment for developing server side code. The general advice I would give is to use the V8 engine if you are looking to include JavaScript support in an application, but to use node.js if you are intending to write server side JavaScript, especially if you want a lot of documentation and examples to look at when things get tricky.
  6. It all depends on the pentesting activities. Port scanning, as already said, shouldn't require ports to be forwarded on the local NAT. Reverse-connecting shellcode will require you to forward the ports it wants to connect back on. The easy way to think of it is that if the connection is initiated from your machine then the NAT will handle it; if it is initiated from the target machine then you will need to forward any required ports, or the NAT won't know where to pass the connection on to.
  7. It sounds to me like you are using it as a switch rather than a router. Not a problem in this case considering the layout, but if you wanted to use it as a router rather than just an additional switch then you would want to disable NAT on it, and you may need to manually set up its routing tables depending on the routers being used. The reason to disable NAT on the router is that you don't want to get into the nightmare of multiple levels of NAT, where machines at the lower levels can connect to those on the higher level but the higher level machines can't connect to the lower level ones. Personally, unless you have a good reason to really separate the two networks, using the router as a switch sounds fine for this environment.
  8. JavaScript can also now be run server side with node.js. A language that can be used to code both client and server side on the web is a very powerful thing to learn, and while it hasn't yet prised Perl from my hands as my language of choice it is running a close second.
  9. TrueCrypt is great for this sort of thing. I have a couple of SDHC cards holding TrueCrypt volumes that I back up my eeePC to; it works great and I don't have to worry about losing the SD card (easily done with the size of them). Given the bandwidth of USB 2.0 you probably won't even notice a loss in performance from the encryption. The one bit of advice I would give for this method is to seriously think about the filesystem you use in the TrueCrypt volume. If you are only going to be accessing it through Linux then one of the ext filesystems will be fine, but if you want to reliably access it through Windows as well as Linux then you may prefer FAT32 or NTFS. If you do want to move to full disk encryption on your Linux machine then check out http://tldp.org/HOWTO/html_single/Disk-Encryption-HOWTO/ which should help you get started. Just remember to back up beforehand.
  10. Most distributions use a separate directory (conf.d, sites-available, etc.) to house local config files; this helps avoid the problem of an update replacing Apache's main config file and erasing all local configuration. Of course there are some people who still make all their configuration changes in Apache's main config file rather than using local config files, so if it did get replaced they would lose any changes they had made. Another big advantage of using local config files for most of your settings is that you can easily replicate security settings from one server to another by simply copying one file, rather than cutting and pasting lines and hoping you have got all the relevant ones.
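As a sketch, the wiring for this is usually a single include line in the main file, with your own settings kept in a drop-in file (paths and the exact directive vary by distribution and Apache version):

```
# In the main config (httpd.conf / apache2.conf):
Include conf.d/*.conf          # "IncludeOptional" on Apache 2.4+

# conf.d/local-security.conf -- survives package updates and can be
# copied to another server wholesale:
ServerTokens Prod
ServerSignature Off
TraceEnable Off
```

The drop-in file names here are placeholders; the point is that the package manager owns the main file while you own everything under conf.d.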
  11. I use Gentoo on my eeePC 900 (5 years old now) and if you have the patience it can definitely be lean and fast. The main downside is that you need to make sure you have plenty of time when you want to run a full update, as it can require a lot of compiling on the machine. Having said that, I also have a 7 year old Toshiba laptop (Pentium 4 2.66GHz, 512MB memory, 30GB hard disk) which runs the latest version of Debian fine.
  12. You could try a simple dictionary attack, if it is a simple/common password being used then it could get you in.
  13. AI is a vast area so don't be disheartened if you don't understand some of it. Just concentrate on playing with the bits you think you understand and then go back to the bits you don't understand later, as the extra experience can help. Also read around the area in other maths and computer science subjects, especially probability, statistics, game theory and algorithm analysis.
  14. The more advanced attacks like SSL strip are very good reasons why whole sites should be available via https. As long as you explicitly visit the https version you will either get a secure connection, a warning about an invalid certificate, or no connection at all. So if an https version of a site is available then bookmark that one and use it.
  15. True, but SSL (provided you always use SSL for connecting to the site) can help reduce the risk of a number of other forms of attack where the attacker is on your local network and doesn't control the server.
  16. Yes there are some pretty cool tools that can crack password hashes quickly for certain levels of complexity, but that is no reason to continue passing passwords around in plain text.
  17. What is especially depressing is that a few years ago the site was hacked and users' passwords were released. Looking at the username/password combinations that were released, they were obviously grabbed by a packet sniffer and not obtained by cracking the hashes from the database. At that point you would have thought that moving to SSL would have been a given. Then at least you are making them work to get the passwords :)
  18. Personally I wouldn't bother with denyhosts, I find that requiring key authentication stops brute force attempts without any risk of denying your own hosts access.
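For reference, a sketch of the sshd_config lines that enforce this (check the option names against the sshd_config man page for your OpenSSH version):

```
# /etc/ssh/sshd_config -- keys only, no passwords to brute force
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin no
```

Install your public key and test a second login session before restarting sshd, or you can lock yourself out.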
  19. Have you eliminated a misconfiguration of the shared folder's permissions? If so, have you asked the students how they did it? Assuming they have hacked their way in, anything we can suggest would be pure guesswork on our part. It might be a server that is missing a patch, it could be a keylogger that logged an admin's login, it could be an easily guessable password, etc. The students who have gained access will know how they did it (even if it is just running an exploit).
  20. Personally there are a few times that I like to use FTP over SFTP. For most file transfers I will use ssh (either sftp or scp). The few times I prefer FTP are when I have a lot of files being made available to the general public, or when I need to transfer large files to a low powered system. The making-files-available-to-the-public scenario falls more often under a web server's role now than an FTP server's, but if you are using an FTP server then chroot it into its own environment and really lock down which directories can be seen. For transferring large files to a low powered system, set up a write-only account and encrypt your backup files before transferring. That way someone may be able to sniff your login credentials but they can't read back any of the files, and if they sniff the whole file while it is being transferred they still have to break the encryption before they have anything. As always, really lock down the access that FTP user has so that they can't change to other directories and cause mischief if they do sniff your login details. The large-file scenario is commonly seen when transferring backups from a server to a NAS box. The server has plenty of power for its side of a copy over ssh, but the NAS box doesn't have enough power to do the decryption on its side. I have seen a server that could only copy a file to a NAS box at about 2MB/s over sftp but could hit 10MB/s over FTP. When transferring a large backup file regularly it made sense to encrypt on the local machine and then use FTP to transfer the file.
  21. In theory, assuming there are bugs in all software that can be exploited, then by increasing the number of open services you are increasing your chances of being exploited. Of course this can be offset by using well-maintained mainstream packages (e.g. OpenSSH) which should have fewer bugs than exotic, poorly maintained packages. From a security point of view when using FTP: don't use the same credentials that you use for other systems; if possible use Kerberos for authentication rather than a username and password; limit the directories that can be accessed through FTP to only those you require; and chroot the ftp daemon so that if someone does get access they are limited in what they can access and do.
  22. Think loops rather than arrays. Start with asking the user how many they want to compare and then use a for loop to compare that many letters and finish with telling the user what the lowest value letter was.
  23. You are bang on with how the else works, have a play and you will soon get the hang of them. The && is a logical AND, so if both the left hand and right hand conditions are true it is true, if either is false then it is false. A single & is bitwise AND which is very different, e.g. if X has a value of 1 (01 in binary) and Y has a value of 3 (11 in binary) then X&Y would equal 1. The bitwise logic works on each bit in the values, so the first bits are compared from X and Y, which are 0 and 1. As they are not both 1 then the value is 0. The second bits are then compared (1 and 1) and as they are both 1 the value of the second bit is 1. Things to read up on are the Boolean logic operators AND (&&), OR (||) and NOT (!), and the bitwise operators AND (&), OR (|), NOT (~) and XOR (^).
  24. As it is C then this works, just remember to input your letters in the format "b, a, c"

        #include <stdio.h>
        #include <stdlib.h>

        int main()
        {
            char let1;
            char let2;
            char let3;

            printf("Input three letters:");
            scanf("%c, %c, %c", &let1, &let2, &let3);

            if (let1 <= let2 && let1 <= let3) {
                printf("%c is the lowest\n", let1);
            } else if (let2 <= let3) {
                printf("%c is the lowest\n", let2);
            } else {
                printf("%c is the lowest\n", let3);
            }

            return 0;
        }

     Now that we have done your homework tell us why it works and then write the code to allow you to compare as many characters as you like.
  25. That would get past the DNS as it would never need to be resolved. The main reason I required this was for a kiosk type machine that was limited to a couple of sites; the sites themselves may have links out to other sites, but we didn't want the users to be able to follow them. The point of redirecting to the local machine is that it is running a web server that serves out a page for any request, giving the user the choice of returning to the previous page or resetting the kiosk back to the homepage. Of course the machines sit behind a firewall that drops outgoing connections to machines not in the list of allowed hosts, so they wouldn't be able to get out anyway. However, they would get stuck on a browser error page informing them of a failed connection, and as the browser is in kiosk mode they don't get access to the address bar and so wouldn't be able to get back without restarting the machine. Redirecting other hosts to the web server on the local machine pretty much avoids this situation (as long as none of the allowed sites has a link to a blocked site's IP address).
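For a handful of known hostnames the redirect can be sketched with hosts-file entries like these (the hostnames are placeholders; catching every other name would need a local DNS server with a wildcard record instead):

```
# /etc/hosts on the kiosk machine -- anything listed here resolves
# straight back to the local web server's "return or reset" page
127.0.0.1   www.blocked-example.com
127.0.0.1   ads.blocked-example.com
```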