
Compiling on the pineapple?


Recommended Posts

So I was reading a book titled "Hacking: The Art of Exploitation" by Jon Erickson, and I came across some neat code, specifically the countermeasure to port scanning that he proposes in the book. If I wanted to compile code on my pineapple, how would I go about installing GCC and doing that? The libraries it uses are libnet and libpcap. Basically, if somebody port scans the pineapple, they see hundreds of open ports that really aren't open.


It would, wouldn't it? I have been trying to get the same code working on Windows, and I finally figured out how to compile the libraries there, but ran into another problem where I was missing functions. I think I figured out why: I didn't include one of the headers he has in the book. Anyway, compiling libraries has been something of a nemesis of mine for a while; they just don't teach you how to go out and get third-party libraries in school. I think the program would have to be modified to run in the background as a system service, however.

The code works perfectly fine on his companion CD.

Edited by overwraith

The pineapple as a compilation platform is a bit of a challenge, particularly because it's starved for RAM. 64 megs isn't going to get you very far when compiling things like GCC itself.

If you *must* do it on the pineapple, get a big and relatively high-end SD card. Create a swap partition of, I would say at least 2 gigs but more if you think you'll need it and can spare it, and get cracking. The CPU is slow as [CENSORED] but it'll get there eventually.

The alternative is to create a cross-compiler on your main box, but that's easiest when you're running Linux, and you mentioned Windows. I've never heard of people running a cross-compiling GCC on Windows... I'm sure it can be done somehow.

To put the CPU power of the pineapple into perspective, the OpenWrt project ran some OpenSSL benchmarks on a number of systems they support.

697.95 1.0.1e 13225220  4608750 2451050 1546240 2159960  790190 2747390 2405380 2143570 1.7  58.2  5.8  4.7
266.24 1.0.1e 21197480 13745150 6514560 2891090 3101700 1108310 5160960 4460390 3923050 4.8 165.0 16.6 13.4

The first line is the CPU in the Raspberry Pi, the second the CPU in the pineapple. The numbers are provided in the order they are reported on that link; you can look up what they mean specifically, but what this shows is that the CPU in the pineapple achieves about twice the performance of a Pi on OpenSSL-related tasks. I find that amazing considering the vast difference in clock speed, and can only assume there's some hardware feature on the MIPS 24K that OpenSSL exploits to the fullest.
From what I've seen so far I wouldn't be amazed if compiling just GCC on the pineapple took more than half a day, not least because of the constant swapping.

Edited by Cooper

8 years ago I was running ELKS on an original Ericsson PC sporting an 8086 processor and a whopping 20 MB Winchester drive...


They're all good. Pick whichever one you're most comfortable with.

What you're going to end up doing is creating a cross-compilation toolchain, which consists of gcc, binutils (ld, as, etc.) and one or two other programs. From that point on you can compile anything. The tricky bit after that will not be how to cross-compile something, but how to package it in such a way that you can move it and all its dependencies over. Most software's build system supports a DESTDIR variable (or some such) that should help you. Google for tutorials on toolchain building, and please keep us posted on your progress.


Oh, I should add: regular gcc will remain gcc when you create your toolchain, and the same goes for all the other programs. Your cross-compilation toolchain programs will be called something like gcc-mips24k instead. The tutorial should explain this in greater detail.


If you all want to have a look at the code, you can download it here:

http://www.nostarch.com/hacking2.htm

Unfortunately I think that some of the functions in the code have been deprecated, and they will need to be replaced. I don't think the book says which version of libnet it uses, and the oldest version I can find online is about 1.0.2. I am tinkering around with it in Visual Studio before I tackle the OpenWrt issue.

The particular program I am viewing is shroud.c.

Edited by overwraith

Is this any help to you? It's about 1.0.2 and how to get a proper install out of it in spite of the developers' best efforts to thwart you...


OK, so I was reading the book, trying to figure out which version of libnet they use, and I found it: libnet version 1.0 (page 254, noting it here so I don't forget later). If we want to use the original code, this may be a problem, because I can't find a 1.0 version anywhere. My live CD started glitching, so I am downloading a new one.

Like I said, the farthest back I can get is 1.0.2, but it does look like some of the deprecated functions are still in it.

Edited by overwraith

Typically a point release (1.0.0 -> 1.0.1 -> 1.0.2) must not change the public API. Just try it before you assume the worst.


Interesting little fact. LLVM/Clang supports the MIPS 24K target.

Not according to the Clang page itself, but LLVM says so, and I tend to believe them as they are the actual generators of the binaries. With the current 3.4 release they also support MSA, the MIPS SIMD Architecture, which is MIPS' version of x86's MMX or ARM's NEON. Code gets optimised for this either automagically at the compiler's own discretion (= very rare, and not very effective), by a programmer using intrinsics in the code (= quite rare to see, but typically 65% better performance relative to plain C), or by a programmer using (inline) ASM in the code (= still rare to see; it requires a very motivated programmer, but gives 400% better performance than intrinsics and 650% better than plain C).

Normally you don't see these optimisations in software unless it's absolutely necessary and yields an obviously massive performance gain, such as with video, where you know you need to do a fixed set of operations on each pixel on the screen. What I'm wondering is to what extent an LLVM/Clang-generated binary can outperform a GCC-generated binary...

