
Finding NSA back doors in commonly used online software.



Indeed. Cory Doctorow and Bruce Schneier have interesting input on this.

I doubt any backdoor would be easy to find, as the code would typically be proprietary and/or running on a server somewhere, but it's inevitable that eventually someone would exploit it.


  • 2 weeks later...

It's probably common practice for some companies to leave critical bugs in working code and then tailor an exploit to the vulnerability, which can then be sold. Ethically questionable, but highly profitable. I have 2067 packages installed on my machine right now, and many users have more than that. More installed programs means more lines of code, and more lines of code usually means more bugs. Plenty of people will install an application just because a friend recommended it, and mobile apps and browser plugins are often installed without much discretion. How many Facebook games use Flash? What is Adobe's motivation for giving Flash away for free? Plenty of programs make use of deprecated libraries. Do devs do this on purpose because they know there are vulns in the code? I'm sure it has happened before. It would be easy enough to deny that anything like that was going on if the bugs are discovered and published down the road.

Just think of a developer who is hard up for cash. He is one of a few hundred people working at a software company. Some intelligence agency, foreign or domestic, or some cybercriminal pays said dev a large sum of money, and all he has to do is include some buggy code that will probably go unnoticed for the foreseeable future. The software product is installed on millions of computers, and so it goes. Say there are 1,000 or so software packages with critical bugs in circulation at any given time. Keeping the alphabet agencies or hackers who are in the know out of your computer may prove quite difficult.

Incidentally I just figured out something cool. You can pipe the output of dpkg to wc like this:

dpkg --get-selections | wc -l
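
As a small refinement (assuming a Debian/Ubuntu-style system): --get-selections also lists packages that were removed but not purged, so filtering on the selection state gives a slightly more accurate count of what's actually installed.

# count only packages whose selection state is "install"
dpkg --get-selections | awk '$2 == "install"' | wc -l

# roughly the same thing via dpkg's status column ("ii" = installed)
dpkg -l | grep -c '^ii'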

I also leave you with this question, since I'm not really up on forensics and bug-hunting type stuff. Antivirus programs use signatures to scan for viruses. Is there software to scan for bad code? For example, take an object dump of each binary and compare its strings and functions against strings and functions in a database of known-vulnerable code, then alert the end user to possible security risks that may arise from installing the software. I can see how hackers would just use this to find security holes in systems and software, and I understand that code can be obfuscated or encrypted to avoid detection. But nonetheless it might be useful to come up with something that does this.

So rather than scanning for malicious software you would be scanning for the vulnerabilities that will allow malicious software to be installed on the target system.
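
A very crude sketch of that idea using nothing but standard shell tools - signatures.txt and the target binary path here are hypothetical placeholders, not a real database:

# signatures.txt: one known-bad string or symbol name per line,
# e.g. function names from library versions with published vulns
strings /usr/bin/someprogram | sort -u > /tmp/prog_strings.txt

# -F treats each signature as a fixed string, -f reads them from the file
if grep -q -F -f signatures.txt /tmp/prog_strings.txt; then
    echo "possible match against known-bad signatures - review before installing"
fi

A real tool would have to work on symbols and call graphs rather than raw strings, and as you say obfuscation defeats it, but the matching step itself really is that simple.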

You could also create an online database of known checksums for each version of every library or object, to verify that a file is what it claims to be. Software could query the database and validate each package cryptographically. I realize it's possible to create two packages with identical checksums that are totally different: basically keep re-encoding the file until you get a collision, and Bob's your uncle. Most people just don't have the compute power for that, though.
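
On the verification side, this is already a one-liner with standard tools; a minimal sketch, where somepackage_1.0_amd64.deb and expected.sha256 are made-up names:

# hash the downloaded package
sha256sum somepackage_1.0_amd64.deb

# or verify it against a published list of "<hash>  <filename>" lines
sha256sum -c expected.sha256

Debian-style repositories already do something along these lines - apt checks package hashes against signed Release/Packages metadata - so the hard part isn't the hashing, it's deciding who you trust to publish the database.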


I don't believe companies would willingly leave in bugs and inform the powers that be about them so they can exploit them to their heart's content. Any company caught doing that would be unable to sell anything anywhere ever again. And let's not even talk about how viciously the shareholders are going to shove it up the rear of the management team.

My view is that it's a lot more benign, like with the Hacking Team / Flash 0-day thing. Someone finds a bug, reports it to the vendor, the vendor says "thanks" and he gets an honorable mention somewhere. The same person feels good about himself but then learns that people pay big money for that sort of thing. He finds another bug and decides to sell it. The vendor is kept in the dark while anybody with a fat enough wallet goes apeshit on their targets.

In a sense this protects most of us lowly nobodies: it costs the attacker considerable money to attack you, and unless they can recoup that somehow, they're not likely to do it. Once the thing is out in the open and wholesale exploitation is occurring, everybody is attacked indiscriminately, so the money that needs to be recouped per victim for the process to be viable goes down, but the visibility goes *WAY* up, meaning the bug is likely to be found and fixed. So long as we the lowly nobodies keep our software up to date, we're mostly in the clear.

The difference between 'normal' exploits and one that came from a government institution is that the government probably paid or hacked the vendor to get read access to the source code of their product. They find the problems in the code, occasionally mention a few of them to the vendor, but keep quiet about the really big issues and go wild with exploiting those on their very, very specific targets. I don't believe the government wants to hack everybody. For the most part they already can if they want to; it's just prohibitively expensive, even for them, and the return is too low to make it viable. Once you make yourself sufficiently visible it gets more likely, so the smart thing to do is to present yourself as meek civilian #89236467122247. If you do anything that might raise the interest of an agency, cover your tracks, don't talk to anybody about it ever, and change identities frequently - which is HARD. See the anonymity topics on this forum for the complications you're faced with.

Moving on, Adobe's reason for giving away Flash for free is simple - the player is free, the creator isn't. If people need to pay for the player, nobody will have it, meaning nobody will want to buy your creator. You see this everywhere, even the other way around, where you pay for the player but the creator is free - think map editors for games and such. Game makers tend to do this because those additional maps are an incentive for players to keep playing the game. Each unique player is money in the bank, and the longer there's a buzz around your game, the longer people will actually be purchasing it. Sure, it'll end up in the digital bargain bin eventually, but you'll have made a tidy profit on a mostly fixed investment. It just makes economic sense to have a model like this.

Devs end up using deprecated libraries because, for the parts they're using them for, they work, and the devs are too busy creating some stupid visual gizmo that marketing says is VITAL to selling the program as opposed to improving its core. This is also something you see everywhere. People don't buy the next version because you updated the libraries; they buy it because of $new_feature, so unless you provide that you're going to lose. Also, you never EVER create perfect code from start to finish. There are always assumptions about how a method/function should be used which seemed reasonable at the time but end up being shit for what's being asked of it now - but it works in 99% of cases and there are bigger fish to fry.

What's worse, most code is written by teams of people who don't always have the same opinion on what qualifies as "good enough". Outsourcing your development, I believe, is a MASSIVE part of this problem. While the local devs try their best to make the program great (if you're lucky), since it's their company's product and they feel a personal responsibility, the outsourced devs get paid for delivering chunks of features. The main bar to clear is 'does it work'. If it's complete shit code that doesn't follow the architecture, that's secondary. And code reviewers (if your company was even willing to invest in those) get spat in the face by management when they keep sending code back to those devs for being shit. Eventually someone's going to tell those reviewers to lower their standards because their "quest for perfection" is preventing them from delivering the next release on schedule.

Problems in code are rarely malicious acts. Just a product of the pressures put on the dev team and how they manage to handle that.

There are quite a few static code analysis tools which can find a lot of problems effectively. There's also valgrind, which tests a compiled program's memory behaviour by getting between it and the memory functions exposed by the C shared library. It's slow as hell, and your binary had better be compiled with debug symbols (commercial software is typically stripped of these) so valgrind can tell you which function in the program you're testing is doing the nasty, allowing you to fix the issue. When it's a plugin inside some massive program such as a browser, this simply will not fly.
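
For reference, a minimal valgrind run looks something like this (testprog.c is just a stand-in for whatever you're testing):

# build with debug symbols so valgrind can name the offending functions
gcc -g -O0 -o testprog testprog.c

# memcheck is the default tool; these flags report where leaked blocks
# and uninitialised values originally came from
valgrind --leak-check=full --track-origins=yes ./testprog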

Having a database of known-bad software is, in a sense, what the CVE database is. Basically, the best thing to do is to run the latest version of everything. No guarantee that it prevents problems, but you'll get the maximum number of fixes for known problems, which is the next best thing.
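
And "run the latest of everything" mostly boils down to this, assuming a Debian/Ubuntu box:

# refresh the package index and see which updates (security fixes included) are pending
sudo apt update
apt list --upgradable

# apply them
sudo apt upgrade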


Leaving an authorized cert in the root user's .ssh folder was, I would suspect, done to ease development, and nobody stopped to check that the cert never ended up in production.

Ignorance and stupidity are a lot more common than malice.
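
For what it's worth, checking your own box for that kind of leftover takes seconds (paths as on a typical Linux install):

# see whether anything is authorized to log in as root over SSH
sudo ls -la /root/.ssh/
sudo cat /root/.ssh/authorized_keys 2>/dev/null

# and whether root login is allowed at all
sudo grep -i '^PermitRootLogin' /etc/ssh/sshd_config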


With respect to Cisco, there was an article not too long ago about how they "honeypotted" the discovery of the NSA intercepting their devices. More or less, they were able to track when their stuff was being tampered with, which seems to be an ongoing issue not just for them but for a lot of tech companies who want to ship communication equipment worldwide. From what I remember, they set up a dead drop, so devices were shipped to what was essentially a fake address, letting them monitor the interception of packages that were being tampered with. Here is an example of what typically happens when security agencies intercept devices:

http://arstechnica.com/tech-policy/2014/05/photos-of-an-nsa-upgrade-factory-show-cisco-router-getting-implant/

http://www.theregister.co.uk/2015/03/18/want_to_dodge_nsa_supply_chain_taps_ask_cisco_for_a_dead_drop/


Send a router to "Allah Ackbar Death To The Infidel NOC" with the address of someone you know (preferably with their consent - it helps if this someone is afraid of flying). See what happens.


This could be someone's golden ticket. If you have a few thousand dollars to blow... buy some cheap routers and ship them all off to some destination... maybe some hard drives with Windows installed...

Anonymous, ISIS, billion-dollar corporate industry, China... etc.


  • 2 weeks later...


True, it's more likely somebody was butthurt about not getting a bug bounty and then sold or published the exploit.

As far as the profitability of commercial exploitation goes, I'm not sure; I've never tried it. But it seems commercial blackhats use the oldest possible exploit that gets the job done. The upgraded version of a kit usually has the newer exploits.

Hypothetically, if one were really good at e-whoring, social media marketing, or search engine optimization, mass exploitation via a web exploit kit could be fairly profitable. A user visits the site and becomes part of the botnet.

