
Jason Cooper


Everything posted by Jason Cooper

  1. Assuming I have understood the question, the simple answer is no; if you try to run two ADSL modems on one phone line they will just interfere with each other. To get a connection down to your nerd lab you could run an Ethernet cable down and connect it to the old router (remembering to switch off things on the old router like DHCP, and to give it an unused IP address). Or, if your routers have wireless, you might be able to get a wireless connection in your nerd lab, with your old router set up to bridge its switch over wireless (assuming it has that option or can be flashed with OpenWrt or DD-WRT). If you can't run a cable or get a wireless connection, a third option is to look at Ethernet over power adapters, which let you use the power lines in your house as an Ethernet cable.
  2. You could try sniffing the network for a bit till you know the MAC address of someone who has authenticated. Once they have stopped using the network try using their MAC address. It is quite common for captive portals to keep a list of MAC addresses that have authenticated and then capture any web connections that don't come from one of those MAC addresses.
  3. This page has some interesting information on your script problem. I think you should probably start looking at using XML for the returned values rather than just pushing HTML. That way your JavaScript and HTML can be passed back as two separate entries. You can then eval the JavaScript to run it and put the HTML into the div. If you do go down this road you will need to remember to encode your JavaScript and HTML to stop them causing problems when the XML is parsed. (You will want to be using responseXML rather than responseText when you are using values returned in XML.) Example of returned XML:

    <returns>
      <javascript>alert("Hello World");</javascript>
      <html>Example of AJAX</html>
    </returns>
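The encoding step mentioned above matters because raw JavaScript and HTML usually contain <, > and & characters that would break the XML. A minimal sketch of that escaping, plus reading the two entries back out on the browser side (the function names xmlEscape and handleReturns are my own invention, not from the original post):

```javascript
// Escape text so it can sit safely inside an XML element.
function xmlEscape(text) {
    return text.replace(/&/g, "&amp;")
               .replace(/</g, "&lt;")
               .replace(/>/g, "&gt;");
}

// Browser-side sketch: pull the two entries out of responseXML,
// eval the script and drop the markup into the div.
function handleReturns(xmlDoc, div) {
    var js = xmlDoc.getElementsByTagName("javascript")[0].firstChild.nodeValue;
    var html = xmlDoc.getElementsByTagName("html")[0].firstChild.nodeValue;
    eval(js);             // run the returned JavaScript
    div.innerHTML = html; // show the returned HTML
}
```

The server would apply xmlEscape to each entry before wrapping it in its tag; the XML parser undoes the escaping on the way back out.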
  4. I know that Linux on my netbook (DebianEeePC) is set up so that it looks for a network called pineapple as the highest priority, and that it probes for its networks. I have also got it set up so that if it connects to a network called pineapple it breaks my interface's IP settings so that I won't be able to send or receive packets, which is a fairly good way to stop it. I don't know if Windows can be set up in a similar way, so it would be interesting to hear from anyone who has tried.
  5. This page might be of help. It also might be worth trying the standard Windows MBR rather than GRUB, to see if the problem is GRUB or Windows.
  6. That is why you need to make the Jasager network the highest priority as then your machine will send out probes for Jasager and the real AP won't respond to that as it is only looking for probes for its network, but the pineapple will respond to it as it responds to everything. Of course it may be possible for someone to specifically set Jasager to only respond to one or two networks, which would be harder to detect but it would also limit the traffic that it manages to collect.
  7. Is it just Windows that GRUB doesn't boot, or does it fail on Linux as well? Looking at the man page for ntfsclone shows that you may not be able to get Windows booting easily on the new disk.
  8. The point is to set a preferred network that shouldn't exist. Then, if you do connect to it, you know it is an evil twin and not a safe network. If you really wanted to remove the need to look at your network SSID every time you connect, you could create a program/script that runs after you have connected and either sets up a configuration on your interface that won't work or pops up a warning message letting you know that you are not on a safe network.
  9. You can avoid the need for checking state with a small change to your getData function. By adding a line at the start that makes httprequest local to the getData function, a fresh object is created with each call of the function. This object is available in your event handler thanks to the way that JavaScript scopes its variables.

    function getData(dataSource, id){
        var httprequest = undefined;
        if (window.XMLHttpRequest){ // Mozilla, Safari
            httprequest = new XMLHttpRequest();
            if (httprequest.overrideMimeType){
                httprequest.overrideMimeType('text/xml');
            }
        } else if (window.ActiveXObject){ // IE
            try {
                httprequest = new ActiveXObject("Msxml2.XMLHTTP");
            } catch (e){
                try {
                    httprequest = new ActiveXObject("Microsoft.XMLHTTP");
                } catch (e){}
            }
        }
        if (httprequest){
            var obj = document.getElementById(id);
            httprequest.open("GET", dataSource);
            httprequest.onreadystatechange = function(){
                if (httprequest.readyState == 4 && httprequest.status == 200){
                    obj.innerHTML = httprequest.responseText;
                }
            };
            httprequest.send(null);
        }
    }

    This can be a little tricky to get your head round when starting off, but the way it works is this. The first call creates a local httprequest object for that call of the getData function. Copies of these variables are available to the function defined within it, in this case the event handler. The second time the function is called a new httprequest object is created, because when we declare httprequest we set it to undefined. Setting httprequest to undefined doesn't destroy the object it was pointing at, as it is still referenced by the first copy of the variable in the first instance of the event handler. So at this stage we have the old httprequest object existing to deal with the first call and a new one that we can use for the second call. We can repeatedly call this function, and each call will use a new httprequest object and leave it available to its event handler.
The downside to this method is that you do leave httprequest objects in memory (garbage collection won't reclaim them while they are still referenced by their own event handlers). The benefit this method has over the one you are currently using is that the second call doesn't have to wait for the first httprequest to have completed before it can be sent, which will usually get you better performance from your AJAX code. There is a third alternative where you have a global array that stores your httprequest objects, and when making a fresh request you loop through the array to find an available object (i.e. one with a readyState of 4 or 0). If you don't find one, you create a fresh one and push it into the array. Either way you get a usable httprequest that won't be reused until it has completed its request. This is a little more code, but it doesn't leave objects around that can never be used again.
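The pool lookup in that third alternative could be sketched like this (a minimal sketch; the names getPooledRequest and pool are my own, and in a browser you would pass in a factory that builds an XMLHttpRequest or ActiveXObject, as in the getData example above):

```javascript
// Scan the pool for an idle request object (readyState 0 = unsent,
// 4 = done) and reuse it; otherwise build a fresh one with the
// supplied factory function and remember it for next time.
function getPooledRequest(pool, createRequest) {
    for (var i = 0; i < pool.length; i++) {
        var state = pool[i].readyState;
        if (state === 0 || state === 4) {
            return pool[i]; // idle, so safe to reuse
        }
    }
    var fresh = createRequest(); // none idle, so grow the pool
    pool.push(fresh);
    return fresh;
}

// In the browser you would keep one global array and call, for example:
//   var httprequest = getPooledRequest(myPool, function () {
//       return new XMLHttpRequest();
//   });
```

Because an object is only handed out when its readyState is 0 or 4, a request in flight can never be clobbered by a later call.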
  10. The quick theoretical answer is yes, they would be of equal strength as they are the same cipher. In practice it would depend on the coding of the encryption routines, and whether there were any mistakes in either that make it weaker than the other.
  11. I upgraded my netbook to 2GB quite soon after getting it, so I can't say if you notice a difference as I wasn't running for long enough on 1GB to compare. I will say though that with the cost of memory these days you might as well upgrade it as memory usage always increases.
  12. In which case you will have to use multiple httprequest objects. You can keep yourself sane by writing a set of routines to handle your AJAX calls.
  13. Your problem is actually coming from combining two asynchronous calls with just one httprequest object. Here is a quick walk-through of what is happening. call_one uses the global httprequest object to make a GET request for some data. The HTTP request will take a while, so the code continues to run, relying on the onreadystatechange event handler to call the function that handles the returned data. As your code is still running, call_one finishes and call_two is then called. This uses the same httprequest object to make a different GET request. The httprequest object forgets about its old request and only remembers the new one. Again, as it is asynchronous, your code continues to run, relying on the event handler to call your function when the request completes. There is nothing else for your code to do after call_two, so the next thing that happens is that the result from the first request comes back. Unfortunately the httprequest object has thrown away the details of that request, so it ignores it. Then the result from the second request comes back and, for some reason, httprequest calls its onreadystatechange event handler twice with the result of the second call. The first event handler call uses the scope of call_one, so the obj variable points to targetDIV_one. The second uses the scope of call_two, so the obj variable points to targetDIV_two. I would recommend that you use jQuery rather than rolling your own AJAX handlers. It makes your life easier and you can concentrate on your code rather than the code that speaks to the server. It also makes it nice and easy to choose whether you want an asynchronous call or not (in other words, whether you want your code to wait for the response or to get on with other things while waiting). Its asynchronous calls also use a separate httprequest object for each call, which avoids the problem you have encountered.
  14. I figured that I would have a go at running some tests of ext2, ext3, ext4 and NTFS on a spare USB flash drive and see if my results match up with those I have seen on other sites on the internet. I looked at testing the read, write and delete performance of each filesystem, and there were some interesting results. ext2, ext4 and NTFS all performed pretty consistently as the size of the files being dealt with increased. Interestingly, ext3 seemed to struggle more with the larger file sizes. From the results I would still recommend ext4. NTFS performed a lot better than I thought it would, and it would be interesting to try this sort of test on a well-used filesystem. That would take a lot longer to test though, as I would need to write a program to write lots of files to the flash drive and delete some of them, then write some more and delete some more, and so on. I have put more details of the tests and the results in a post on my blog.
  15. First, I know there is a speed/resilience trade-off between NTFS and FAT32 on USB drives: NTFS is faster, but FAT32 is more resilient to being unplugged without having been unmounted. If you are looking for the best performance then I would suggest considering ext4, as there are a few articles I have seen where it is shown to be faster than most other formats.
  16. The UUID of the Subversion repository has changed. The chances are that someone has had to recreate the repository for some reason. You will need to find out what it is trying to update from the repository and check out a new copy of it; the UUID will then match and it will let you update. If you are using BackTrack as a live CD then you could try getting the latest copy of it and see if the problem still exists.
  17. My first computer was an Oric Atmos followed up a couple of years later by a ZX Spectrum. Both were great machines and I learned so much about computing from using them.
  18. The other thing you need to consider is where you will store your backup tapes. If you have a safety deposit box then that is a very good place to keep them, as it is out of your house. If you work in an office, have a drawer that you can keep locked, and trust the other people in your office, then that can also be a reasonable place to keep them. If you have nowhere secure out of your house to store them then you may want to consider looking into fire safes, as a house fire is the sort of disaster that will easily destroy your computers and NAS boxes, and that is when you will really want to be able to get hold of the data you have backed up (especially if it includes a list of the equipment the fire destroyed that you want the insurance company to replace).
  19. We use a Dell unit that takes Ultrium LTO 3 tapes. It is getting a bit old now but has worked perfectly for over 3 years. The main reason to use tapes is that it is traditionally a lot faster to write to them than to a USB device, and if you have a large set of data to back up, it is important to be able to write the backup in a reasonable length of time.
  20. Room Security

    I don't really like keypads for security, as you can normally figure out which numbers are used in the code by simply dusting the keypad for fingerprints. So for a 6 digit code there are only 720 possible orderings once the 6 digits are known, and a room mate can easily try all of them over a couple of weeks.
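That 720 figure is just 6 factorial (6 × 5 × 4 × 3 × 2 × 1), the number of ways to order the six known digits. A quick sketch (the function name orderings is my own):

```javascript
// Number of orderings of n distinct known digits: n factorial.
function orderings(n) {
    var total = 1;
    for (var i = 2; i <= n; i++) {
        total *= i; // multiply 2 * 3 * ... * n
    }
    return total;
}
// orderings(6) gives 720, matching the figure above.
```

At a few dozen attempts a day, 720 tries really is only a couple of weeks of effort.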
  21. scponly is a great little program that can help with this sort of setup. It lets you limit a user to only doing file transfers, rather than also letting them get a shell on the machine. You should probably run it in a chroot environment to really limit which files they can access.
  22. When starting up, the service didn't know the passphrase for your keys, so it couldn't decrypt them to use them.
  23. Room Security

    Get a fire safe: once you have some security in place, your valuables are more likely to be damaged in a fire than stolen. As others have said, I wouldn't go for a biometric lock, as I have no way of testing how secure one is, at least without having paid for it first.
  24. I use an LTO 3 drive as that gives us 400GB of room per tape, which is enough to cover a set of backups (thanks to tar and gzip/bzip2). The key thing is that our script that writes the backups to the tape emails us afterwards and tells us to swap the tape, or we would forget to replace it each week. The script also keeps track of when a cleaning tape was last used and reminds us to put one in every 8 tape swaps, just to keep it all clean. If you have very large backups then you will want to look at either a newer LTO generation and/or a tape robot, which, rather than using one tape, will let you span the archive over multiple tapes.
  25. Personally, we take nightly backups on each server and pass them over the network to two machines. One is away from our server room; the other is in our server room but writes each week's backups to tape (the tapes are kept off site). This means that for recent backups I can recover them from one of the two local machines, and if we have to go back further I can get the backup from tape, or if disaster strikes we can rebuild from our latest tape backups. For the drivers try http://download.cnet.com/Adaptec-SCSI-RAID-2610SA-Controller/3000-18492_4-82764.html. I don't know if they will work with your OS, but if you do a Google search you get plenty of results.