
Google Hijacking problem



I love Hak5...

Two days ago I was searching on Google, and every time I clicked a link it redirected to some other page. At first I thought it was a virus, but I scanned my PC seven times with all the well-known internet security suites and nothing was found. Is there any way to get rid of this problem? First it only happened in Internet Explorer, then Firefox, then Chrome - every single browser. I researched a little online and found a page saying it's some "Exploit 302" problem. I have no idea what that means. I'm posting the link that says something about Exploit 302. Please help me - I get so annoyed when it redirects to another link; I always have to go back four times to the Google search before the page opens. I don't know what the problem is.



Note the bolded part of item #4 in the list above. At a very early stage the connection between your page and the hijacking page simply breaks. This means that you can not put a script on your page that identifies if this is taking place. You can not "tell Googlebot" that your URL is the right URL for your page either.

Here are some common misconceptions. The first thoughts of technically skilled webmasters will be along the lines of "banning" something, i.e. detecting the hijack by means of some kind of script and then performing some kind of action. Let's clear up the misunderstandings first:

You can't ban 302 referrers as such

Why? Because your server will never know that a 302 is used for reaching it. This information is never passed to your server, so you can't instruct your server to react to it.

You can't ban a "go.php?someURL" redirect script

Why? Because your server will never know that a "go.php?someURL" redirect script is used for reaching it. This information is never passed to your server, so you can't instruct your server to react to it.

Even if you could, it would have no effect with Google

Why? Because Googlebot does not carry a referrer with it when it spiders, so you don't know where it's been before it visited you. As already mentioned, Googlebot could have seen a link to your page a lot of places, so it can't "just pick one". Visits by Googlebot have no referrers, so you can't tell Googlebot that one link that points to your site is good while another is bad.

You CAN ban click through from the page holding the 302 script - but it's no good

Yes you can - but this will only hit legitimate traffic, meaning that surfers clicking from the redirect URL will not be able to view your page. It also means that you will have to maintain an ever-increasing list of individual pages linking to your site. For Googlebot (and any other SE spider) those links will still work, as they pass on no referrer. So, if you do this Googlebot will never know it.
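As an illustration, a referrer ban of this kind would look something like the following mod_rewrite sketch (badsite.example is a placeholder, not a real host):

```apache
# Hypothetical sketch only: block visitors who click through from a known redirect page.
RewriteEngine On
RewriteCond %{HTTP_REFERER} badsite\.example [NC]
RewriteRule .* - [F]
```

As explained above, this 403 only ever fires for human click-throughs; Googlebot sends no Referer header at all, so the condition never matches for the spider.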

You CAN request removal of URLs from Google's index in some cases

This is definitely not for the faint of heart. I will not recommend this, only note that some webmasters seem to have had success with it. If you feel it's not for you, then don't do it. The point here is that you as webmaster could try to get the redirect script deleted from Google.

Google does accept requests for removal, as long as the page you wish to remove has one of these three properties:

It returns a "404 Not Found" status code (or, perhaps even a "410 Gone" status code)

It has this meta tag: <meta name="robots" content="noindex">

It is disallowed in the "robots.txt" file of the domain it belongs to

Only the first can be influenced by webmasters that do not control the redirect script, and the way to do it will not be appealing to all. Simply, you have to make sure that the target page returns a 404, which means that the target page must be unavailable (with sufficient technical skills you can do this so that it only returns a 404 if there is no referrer). Then you have to request removal of the redirect script URL, i.e. not the URL of the target page. Use extreme caution: If you request that the target page should be removed while it returns a 404 error, then it will be removed from Google's index. You don't want to remove your own page, only the redirect script.
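The referrer-conditional 404 mentioned above could be sketched in .htaccess, assuming Apache 2.4 (where an R flag status outside the 3xx range ends the request with that status). The filename my-page.html is a placeholder for the hijacked target page:

```apache
# Sketch: answer 404 only to referrer-less requests (Googlebot sends no Referer),
# so human visitors following links still get the real page.
RewriteEngine On
RewriteCond %{HTTP_REFERER} ^$
RewriteRule ^my-page\.html$ - [R=404,L]
```

Remember to remove the rule again once Google has recrawled the redirect-script URL and dropped it from the index.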

After the request is submitted, Google will spider the URL to examine whether the requirements are met. Once Googlebot has visited your page via the redirect script and received a 404 error, you can put your page back up.

Precautions against being hijacked

I have tracked this and related problems with the search engines literally for years. If there were something that you could easily do to fix it as a webmaster, I would have published it a long time ago. That said, the points listed below will most likely make your pages harder to hijack. I cannot and will not promise immunity, though, and I specifically don't want to spread false hopes by promising that these will help you once a hijack has already taken place. On the other hand, once hijacked you will lose nothing by trying them.

Always redirect your "non-www" domain (example.com) to the www version (www.example.com) - or the other way round (I personally prefer non-www domains, but that's just because it appeals to my personal sense of convenience). The direction is not important. It is important that you do it with a 301 redirect and not a 302, as the 302 is the one leading to duplicate pages. If you use the Apache web server, the way to do this is to insert the following in your root ".htaccess" file:

RewriteCond %{HTTP_HOST} !^www\.example\.com

RewriteRule (.*) http://www.example.com/$1 [R=301,L]

Or, for www-to-non-www redirection, use this syntax:

RewriteCond %{HTTP_HOST} !^example\.com

RewriteRule (.*) http://example.com/$1 [R=301,L]

Always use absolute internal linking on your web site (i.e. include your full domain name in links that point from one page of your site to another page on your site)

Include a bit of always updated content on your pages (e.g. a timestamp, a random quote, a page counter, or whatever)

Use the <base href=""> meta tag on all your pages
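For example, a page could confirm its own location like this (the URL shown is an assumption for illustration):

```html
<head>
  <!-- absolute URL of this very page; tells browsers and scrapers its true location -->
  <base href="http://www.example.com/articles/my-page.html">
</head>
```

Note that <base> also changes how relative links on the page resolve, so set it to the page's own URL (or its directory) and test your internal links afterwards.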

Just like redirecting the non-www version of your domain to the www version, you can make all your pages "confirm their URL" artificially by inserting a 301 redirect from any duplicate URL to the one canonical URL, and then serving a "200 OK" status code as usual. This is not trivial, as it can easily throw your server into a loop.
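One loop-free way to approach this kind of URL self-confirmation is to match the raw request line via %{THE_REQUEST}, which internal rewrites never alter. This sketch (with example.com as the placeholder domain) collapses the duplicate /index.html address onto the canonical / address:

```apache
# Sketch: 301 the duplicate /index.html URL to the canonical / URL.
# %{THE_REQUEST} holds the client's literal request line, so after the
# redirect the new request for "/" no longer matches and no loop occurs.
RewriteEngine On
RewriteCond %{THE_REQUEST} \s/index\.html[?\s]
RewriteRule ^index\.html$ http://www.example.com/ [R=301,L]
```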



Judging by the English used in the first post, I'm presuming he's not a webmaster, and the above info will just confuse him even more.

He looks to be an end user, not a webmaster ;) (Personal observation)


Well, thank you for helping. As Plunk said, I really appreciate that you replied and tried to help me out, but I'm still feeling a little confused. If you could just tell me in steps what to do first and then go on from there, that would be great. I really appreciate your reply, thank you.


Sounds like ID 10t error is in effect, as well as the PEBKAC sploit. :blink:


All right, I found the solution to my own problem. In my case it was a DNS problem: when I checked my computer with an anti-malware scanner, three trojans were found, among them Trojan.DNSChanger. For now this has fixed the problem. I can't guarantee it's a permanent fix, but I would recommend that anyone who has the same problem with Google search use the Malwarebytes software - it will solve the problem in most cases. Anyway, below is the log showing where these trojans had affected my computer.

Malwarebytes' Anti-Malware 1.31

Database version: 1466

Windows 6.0.6001 Service Pack 1

12/6/2008 9:16:32 AM

mbam-log-2008-12-06 (09-16-32).txt

Scan type: Quick Scan

Objects scanned: 52963

Time elapsed: 3 minute(s), 16 second(s)

Memory Processes Infected: 0

Memory Modules Infected: 0

Registry Keys Infected: 2

Registry Values Infected: 1

Registry Data Items Infected: 3

Folders Infected: 1

Files Infected: 2

Memory Processes Infected:

(No malicious items detected)

Memory Modules Infected:

(No malicious items detected)

Registry Keys Infected:

HKEY_CLASSES_ROOT\CLSID\{147a976f-eee1-4377-8ea7-4716e4cdd239} (Adware.MyWebSearch) -> Quarantined and deleted successfully.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Windows Tribute Service (Trojan.Agent) -> Delete on reboot.

Registry Values Infected:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run\c53.tmp (Trojan.Agent) -> Quarantined and deleted successfully.

Registry Data Items Infected:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{cd384f7e-f098-400c-aa96-6fb2fff73209}\NameServer (Trojan.DNSChanger) -> Data:; -> Delete on reboot.

HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\Tcpip\Parameters\Interfaces\{cd384f7e-f098-400c-aa96-6fb2fff73209}\NameServer (Trojan.DNSChanger) -> Data:; -> Delete on reboot.

HKEY_LOCAL_MACHINE\SYSTEM\ControlSet002\Services\Tcpip\Parameters\Interfaces\{cd384f7e-f098-400c-aa96-6fb2fff73209}\NameServer (Trojan.DNSChanger) -> Data:; -> Quarantined and deleted successfully.

Folders Infected:

C:\resycled (Trojan.DNSChanger) -> Quarantined and deleted successfully.

Files Infected:

C:\resycled\boot.com (Trojan.DNSChanger) -> Quarantined and deleted successfully.

C:\Windows\Temp\C53.tmp (Trojan.Agent) -> Quarantined and deleted successfully.

WARNING: these trojans were not detected by any antivirus releases until September 2008, so use an anti-malware tool to remove them if possible, or manually search for these locations and delete the files.
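If you suspect a DNS changer is still active, you can inspect the same registry values the log above refers to from a Windows command prompt (both are standard Windows commands):

```
reg query "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces" /s /f NameServer
ipconfig /all
```

Compare the DNS servers listed against the ones your ISP or router should be handing out; unfamiliar addresses there are a sign the infection (or its leftover settings) is still in place.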

