[Support] Portal Auth


sud0nick
oooh I didn't think of that. Good idea, DataHead. I'll work on it.

Awesome. I've been doing it manually for a lot of the pure Cisco portals, so it would be a great time saver :-)

Also, a lot of the Cisco ones I've been working with have bigger nested directory structures. One had 8 folders recursively nested in its /image/ directory. Yikes.



Have you been using PortalAuth to clone them? Is it working out well for you? I'm currently looking into speeding up the saving of images as I think that is where a lot of the delay is coming from.


Here are a few updates.

1. A new test version is up on my site. I have added DataHead's request to parse associated CSS files and download the images. The script accounts for multiple images across multiple CSS files and updates the links in each to point to the new location.
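
The CSS rewriting step can be sketched roughly as follows. This is a simplified regex-based illustration, not the infusion's actual tinycss-based code; the function name and the new directory are made up:

```python
import re

# Matches url(...) references in CSS, with optional quotes around the path.
URL_RE = re.compile(r"url\(\s*['\"]?([^'\")]+)['\"]?\s*\)")

def rewrite_css_urls(css_text, new_dir):
    """Return (rewritten_css, original_urls).

    Every url(...) reference is pointed at new_dir/<basename> so the
    images can be downloaded once and served from the cloned portal.
    """
    found = []

    def repl(match):
        url = match.group(1)
        found.append(url)
        basename = url.rsplit('/', 1)[-1]
        return "url(%s/%s)" % (new_dir.rstrip('/'), basename)

    return URL_RE.sub(repl, css_text), found
```

On a stylesheet like `body { background: url('/images/logo.png'); }` this rewrites the reference to `url(portalauth/images/logo.png)` and collects `/images/logo.png` for downloading.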

2. I still need to work on downloading images faster. I have come to the conclusion that Cheeto's issue, where the Pineapple becomes unresponsive, is due to two problems. The first is that my script takes forever to download large images (they don't even have to be that large; something as small as 900 KB can take a long time), and the particular portal he was attempting to clone contains one very large image. The second problem is that the system will not send a separate AJAX request while one is currently active, and there is no way to multithread in JavaScript. While the AJAX connection to clone the portal is open, all other AJAX requests are queued up waiting for the open one to close. Almost everything the Pineapple does through the web interface requires an AJAX request, which is why it appears to "lock up" even though you can still SSH into it and run commands. This is also what makes it hard to create a kill switch for the cloning process. In the meantime, if the cloning process appears unresponsive and you want to kill it, simply SSH into your Pineapple and run the following command.

kill $(ps | grep "python /sd/infusions/portalauth/includes/scripts/por" | awk '{print $1}')

This will stop the cloning process and log an error in PortalAuth stating it was terminated.
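
One caveat with patterns like this: on some systems the grep process itself shows up in the ps output and matches the pattern, so piping through grep -v grep filters it out. A self-contained sketch against canned ps output (the full script name here is hypothetical; only the truncated path appears in the thread):

```shell
# Fake ps output: one real cloning process, plus the grep that is
# searching for it (the script name portalclone.py is made up).
printf '1234 root python /sd/infusions/portalauth/includes/scripts/portalclone.py\n' \
  > /tmp/ps_sample.txt
printf '5678 root grep python /sd/infusions/portalauth/includes/scripts/por\n' \
  >> /tmp/ps_sample.txt

# 'grep -v grep' drops the grep process itself, so only the real PID prints.
grep "python /sd/infusions/portalauth" /tmp/ps_sample.txt \
  | grep -v grep \
  | awk '{print $1}'
# prints: 1234
```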

3. Another dependency has been added (the Python module tinycss) in order to parse CSS files. Along with that, I have modified the dependency scripts to install all dependencies directly in /sd/depends. My reasoning: in previous versions the depends were first installed internally and could then be moved to the SD card, but if there wasn't enough internal space in the first place you were out of luck. Now there is no need to download and then move; it is all done at once. The time to install dependencies has also been reduced greatly, as I no longer run the setup.py script for each one. I simply copy the module directories into /sd/depends/ and create symlinks in /usr/lib/python2.7/site-packages so they can be accessed from any script.
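
The copy-and-symlink idea can be sketched like this (a hedged illustration using throwaway paths under /tmp; the real installer targets /sd/depends and /usr/lib/python2.7/site-packages):

```shell
# Pretend /tmp/pa_demo/sd/depends is the SD card and
# /tmp/pa_demo/site-packages is /usr/lib/python2.7/site-packages.
DEPENDS=/tmp/pa_demo/sd/depends
SITEPKG=/tmp/pa_demo/site-packages
mkdir -p "$DEPENDS/bs4" "$SITEPKG"
echo "# module stub" > "$DEPENDS/bs4/__init__.py"

# Instead of running setup.py, just symlink the module directory into
# site-packages so any script can import it from anywhere.
ln -sfn "$DEPENDS/bs4" "$SITEPKG/bs4"
ls -l "$SITEPKG"
```

Uninstalling is then just removing the symlink and the directory on the SD card, which is why it is so much faster than a setup.py install and uninstall cycle.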

Before installing the new test version, please uninstall your dependencies; there is still a bug I'm working out there as well. As always, if there are any problems please let me know.


Looking forward to testing this.

BTW guys, the more feedback we give sud0nick, the better PortalAuth will be. So if you can, spare just a couple of minutes and try PortalAuth out on a captive portal.

If there are any issues send sud0nick your error log.

Let's get this infusion rolling.

Thanks sud0nick!


Progress is being made...

A portal that was once falling into a loop is now showing an error. I prefer this 100 times over a loop.

The good thing is that we can now send the error logs to Sud0nick. Also, make sure to save the captive portal's source code too.

The more information we send him, the better this infusion is going to be.

Additional information could be a screenshot of the captive portal, etc.

cheers


cheeto,

I know you thought it was entering an endless loop before, but what really happened is what I described above (read #2 in post #131). I know this because I took the time to test and prove it on the back end, not just look at the web interface and wait for it to respond. Also, like I've said before, make sure you uninstall your depends before moving to a new test version. I looked at the errors you sent me; they are the same ones I ran into in testing, and they are why I recommend uninstalling depends first. Once you uninstall and reinstall depends, that error should disappear.


Ok thanks for the tip.

How would I go about uninstalling the dependencies?

Would I have to go to /sd/depends/ (and what do I delete in there?)

I thought they would have been uninstalled once I removed the PortalAuth infusion. I guess I was wrong. :(

thx


Open the large tile, and in the Config tab you should see a link at the bottom that says Uninstall Dependencies.

One thing I've included in v2.3 is that it attempts a full uninstall before installing depends, so it can clean up. This will help when switching between versions, and only one additional dependency is required. If you look in /sd/depends/bs4 you should not see a symlink labeled bs4; one has appeared before when installing depends that already existed, and it has been causing problems. Since the installation process is quicker now, you should be able to do this in less than 30 seconds. Let me know if you are still having trouble.


OK, I followed your instructions, went to the config window, and uninstalled the dependencies. (Can't believe I didn't see that option before.)

After that, I went to the Pineapple bar and uninstalled PortalAuth.

After uninstalling, I simply went to your page and followed the download/install procedure.

Once installed, PortalAuth asked me to install the dependencies again, so I did. (Was I supposed to do that? I would assume these are an updated version of the dependencies.)

And finally, I had to configure the server info. I used your webpage. Would it be possible to use www.google.com instead?

If you look in /sd/depends/bs4 you should not see a symlink labeled bs4

That's correct.

Anyway, if my procedures mentioned above are correct, I'll give it another go tomorrow.

Many thanks!!


You didn't need to uninstall PortalAuth. The dependencies are separate and after uninstalling you would see the Install Dependencies link in the small tile.

For the server info, you could maybe find a Google page that displays basic info, but you would need the Expected Data field to match exactly what the page returns; otherwise Portal Auth will always think there is a captive portal present.
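
That exact-match check can be sketched as a tiny helper (a hypothetical function, not PortalAuth's actual code): if the body returned by the test server differs from the Expected Data string, assume a captive portal intercepted the request.

```python
def portal_detected(response_body, expected_data):
    """Return True when the fetched page does not exactly match the
    Expected Data string, i.e. something (a captive portal) rewrote it."""
    return response_body.strip() != expected_data.strip()
```

This is why a dynamic page like www.google.com makes a poor test target: its body varies between requests, so the comparison would report a portal even on an open network.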


Right now the test server field is not the same as what the cloning script reaches out to. I can make it that way if you want, but I think my script is built more for portals, which involve less content than a full website. I tried cloning my own website with it once and got an error saying the web server reset the connection, so the clone failed.



Yeah, I saw the bit with your hardcoded URL; I changed it to my liking and cloned the other target. More for DNSSpoof than a portal :-)


I don't know much Python, otherwise I would whip up a bigger version of what I modified and put together a patch for it. For cloning any given login site (Facebook, for example), just grab the front page / specified URL and don't allow the crawler to follow links too deep, say a limit of around 100 links to grab/crawl/download from the main target URL.

But that functionality, in my opinion, goes beyond the scope of this infusion.


But it's nothing too strenuous to implement. You just have to get the full wget from the repo and run something like: wget -mkEpnp http://site.org

Then add some scripted checks for what I posted above and, voilà, a mirrored front page of the target. Then have the PortalAuth infusion crawl the mirrored copy, make the injects/strips, and profit.


If this has piqued your interest, here are some useful wget mirroring options:

wget \
    --recursive \
    --no-clobber \
    --page-requisites \
    --html-extension \
    --convert-links \
    --restrict-file-names=windows \
    --domains website.org \
    --no-parent \
    www.website.org/tutorials/html/

This command downloads the website www.website.org/tutorials/html/.

The options are:

--recursive: download the entire website.
--domains website.org: don't follow links outside website.org.
--no-parent: don't follow links outside the directory tutorials/html/.
--page-requisites: get all the elements that compose the page (images, CSS, and so on).
--html-extension: save files with the .html extension.
--convert-links: convert links so that they work locally, offline.
--restrict-file-names=windows: modify filenames so that they also work on Windows.
--no-clobber: don't overwrite existing files (useful in case the download is interrupted and resumed).


Yeah, I've used wget many times before, and I even found the exact article you copied this from :tongue: lol. I remember looking into it for PortalAuth when cloning was first brought up many pages back in this thread, but I also remember coming across some specific reasons why I couldn't use it (although I can't remember them now). Regardless, I can make the Target URL the same as the one the cloning script reaches out to, and there should be no problem cloning a page. Adding wget would be an unnecessary extra step for the infusion; wget would only do what the requests library in my cloning script already does.

EDIT:

I forgot you mentioned you wanted it for DNSSpoof, in which case it would make more sense to copy an entire site rather than do what my cloning script does now. Since PortalAuth is meant for cloning portals to present with nodogsplash, I'm wondering whether this extra feature really belongs in PortalAuth or whether it's beyond its scope, as you said before. What do you guys think? If you feel it belongs, I can probably make it happen (after I'm done with my finals this week).
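
For reference, the single-page cloning approach discussed in this thread boils down to: fetch the page, extract asset references, download each one, and rewrite the links. A toy sketch of just the extraction step (regex-based; the real script reportedly uses the requests library, so this is only an approximation):

```python
import re

# Matches src/href attributes that point at common static assets.
ASSET_RE = re.compile(
    r'(?:src|href)=["\']([^"\']+\.(?:png|jpg|gif|css|js))["\']',
    re.IGNORECASE)

def extract_assets(html):
    """Return the image/CSS/JS paths referenced by a page, in order."""
    return ASSET_RE.findall(html)
```

A full-site mirror, by contrast, has to recurse into every extracted page link as well, which is exactly the crawl-depth problem DataHead raised.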


We get to vote? I'd say:

  • PortalAuth for captive portals,
  • SET to clone web sites, and
  • Wget, Burp Suite, ZAP, etc. for spidering sites.

I wouldn't clone or spider sites from the Pineapple; it just doesn't seem like the best tool for the job.

Just my opinion since you asked.

