
Some basic questions about exploits and Linux


NicholasVA


Hello.

I am new to hacking/pen-testing and not super familiar with Linux. I'm looking for a bit of clarity on a few basic questions:

1. Assuming I have credentials to a Linux server, how would I access the box remotely? For instance, in Windows there is Remote Desktop. Is there something similar in Linux? If the command line is the only option, would SSH be equivalent to a console login, or is it more limited?

2. I found a C exploit on Exploit-DB. I would like to compile it with gcc, but I read in an article/post that it is best to compile on the target system because a locally compiled executable may have incompatibilities. Is this indeed a problem?

3. (Continuing on No. 2) Is gcc part of all Linux distributions, or will I need to get the gcc compiler (and/or its libraries) onto the target system myself?

4. In Metasploit, some exploits require a SESSION parameter. What does this refer to? Does it imply that I first have to establish a session to my target (through another exploit) and then launch the second exploit through the first one's session?

5. Assuming I have some kind of access to the target system (i.e. an SSH session), can I use that "channel" to launch a Metasploit exploit?

Thanks

Nicholas


1. Okay, let's start with a prime distinction between Windows and, well, non-Windows really: in Windows, the program window you interact with is inescapably tied to the actual program doing the magic, and your only option for interacting with it remotely is something that exports the full desktop, such as RDP. In non-Windows, the program window you interact with, or at least the visual parts of it, is wholly separate from the rest of the program.

Now, when you interact with a non-Windows box you would pretty much exclusively use SSH, which immediately gives you access to the command line. There's virtually *nothing* you can do using a GUI on Linux that you can't also do on the command line; you just have to remember the program parameters and such, which can be challenging, particularly to people who are new to the system and/or have a mild aversion to using the keyboard.
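To make that concrete, here's a minimal sketch of such a session; the hostname and username are placeholders:

ssh alice@server.example.com    # log in; you now have a shell on the remote machine
uname -a                        # any command you type here runs on the remote box

For practical purposes, that shell is equivalent to sitting at the machine's console.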

A result of that difference between Windows and non-Windows is best demonstrated with an example. Let's say your own machine is a Linux machine running X-Windows (xorg to most package managers), so you have all the windowy goodness you know and love. You need to do something with remote server S and, since it's a server in a rack somewhere without even a screen attached and possibly no graphics card installed, the admin never bothered installing X-Windows on it. So you SSH to the box, which lets you securely interact with it.

Let's now assume that you need to browse to a website from that server S, this *must* be done from the actual machine for whatever reason, and the stack of text-based browsers doesn't cut it (say the website uses javascript, which isn't processed by text-based browsers). You might think you need to install X-Windows on this server, and you'd be wrong. You can install the browser, which will bring in a number of X-Windows libraries, but not the actual X-Windows program itself. Before you start the browser up, you set the DISPLAY environment variable such that it points at your machine - SSH can do this automatically for you and will tunnel it over its secure connection, so it'll be safe as well. You now run the browser on server S and the window will pop up on your local machine. You can interact with it as you would any other locally running program. It'll lag a bit because the program logic is still being executed on server S, but everything else will be identical.
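A minimal sketch of that trick, assuming the server permits X11 forwarding and has a graphical browser such as firefox installed (both are assumptions about your setup):

ssh -X alice@serverS    # -X enables X11 forwarding; SSH sets DISPLAY on the remote end for you
firefox &               # runs on serverS, but the window appears on your local display

The & puts the browser in the background so you keep your remote shell.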

In my personal opinion, GUIs are great for the 'happy flow' of any process, but the second you need to do something that's even slightly off the beaten path, you're pretty much forced to use the CLI, and because of this I rarely bother with GUIs. Also, you can create your own shell scripts to automate things you do often but that involve a number of specialist commands in a specific order. One example I posted here recently is my need to generate a good password. The command I use for that is

tr -cd '[:graph:]' < /dev/urandom | fold -w64 | head -n1

I put that command in a 'genpwd.sh' script, allowing me to forget about these commands and their specific invocations.
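For illustration, the script can be as small as this (the filename is just my convention):

#!/bin/sh
# genpwd.sh - print one 64-character password built from random printable characters
tr -cd '[:graph:]' < /dev/urandom | fold -w64 | head -n1

Make it executable with chmod +x genpwd.sh and it's ready to run.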

Similarly, I've got a set of scripts to manage my MP3 collection, allowing me to go from downloaded alphabet-soup names and strange encodings to a strict, common structure that works for me, in record time.

Bottom line: the command line is, by far, the most powerful method of interacting with a non-Windows machine. The GUI exists almost exclusively to make things less cumbersome for people who don't want to be bothered with the command line, and the GUI is actually the more limited option (but you can always start an xterm, the Linux equivalent of the Command Prompt, and get going).

2. When you compile code, you turn it into a binary which is typically designed to run on your own machine. How closely it is tailored to your specific machine depends on how you invoke GCC. If you're on a 32-bit x86 machine, GCC will by default create a 32-bit x86 binary. If the machine you want to run this exploit on is a 64-bit machine and support for 32-bit compatibility was disabled in the kernel, the binary you created will not run on that target machine.

For larger programs, dependencies rear their ugly head. Say you compile your program locally and want to run it on a machine that's identical to yours in terms of hardware, but you have some library installed and the configure script detects it and decides to include code that depends on it, since, after all, you have that library. You then take this binary to the other machine, which doesn't have this library installed. Depending on how well the program was written, it might simply refuse to run, citing the missing library as the reason. A similar situation can happen when you link against a library both machines have, but your machine has a different version of it: between versions some function was removed or renamed, and while the source code is designed to accommodate both, the compiled binary only supports the one that was there during compilation.
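You can check for both problems before copying a binary anywhere. These commands are standard, though the library names in the output will of course differ per system:

file ./exploit                       # shows the architecture the binary was built for (32- vs 64-bit, etc.)
ldd ./exploit                        # lists the shared libraries it expects; anything 'not found' on the target is trouble
gcc -static exploit.c -o exploit     # one workaround: link statically so there are no runtime library dependencies

Static linking makes a much bigger binary, but it sidesteps the missing-library problem entirely.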

It's also very easy to think every computer on earth is a 64-bit Intel machine, but there are TONS of other, incompatible processor architectures out there. Raspberry Pis, for instance, use an ARM processor. Good luck starting an x64 binary on that. HP still makes machines with Itanium processors (our clients use them), which are 64-bit Intel processors but incompatible with what you know as a 64-bit Intel (x86-64) processor.
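Checking what you're actually dealing with is a single command on either end:

uname -m    # prints the machine architecture, e.g. x86_64 on 64-bit intel, armv7l on a Raspberry Pi

If the two outputs don't match, assume the binary won't transfer.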

The point here is that you're not required to build your binary on the target machine (google the term "cross-compiler"), but building it there tends to be the easiest way to achieve the desired result.
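As a sketch, cross-compiling for a 32-bit ARM target from an x86 box looks like this; the package and toolchain-prefix names are Debian/Ubuntu's and vary per distribution:

apt-get install gcc-arm-linux-gnueabihf       # the cross toolchain (run as root)
arm-linux-gnueabihf-gcc exploit.c -o exploit  # same invocation as gcc, different target architecture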

3. Depends on the distribution. Many include it by default, but an admin is free to remove it again and, from a security standpoint, it's recommended that they do so on production servers whose programs don't require a compiler to be present.
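Checking for a compiler takes one command, and installing one where you have the privileges is a package-manager call; the install command depends on the distribution:

which gcc || echo "no compiler found"    # quick presence check
apt-get install gcc                      # Debian/Ubuntu (as root)
yum install gcc                          # Red Hat/CentOS (as root)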

4. I believe that's essentially it: the SESSION parameter refers to a session you've already established, typically through another exploit. But it can also be that session 1 is you hacking machine X and session 2 is you hacking machine Y; the only thing separating your actions against these machines is the session.
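As a sketch in msfconsole - the module named here is just an example of a post-exploitation module that takes SESSION:

sessions -l                          # list the sessions you currently have open
use post/linux/gather/enum_system    # a post module that runs through an existing session
set SESSION 1                        # point it at session 1
run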

5. It's my understanding that that's quite specifically what Metasploit exploits try to achieve: the exploit does something to the remote service with the goal of giving you a session on the machine, one that provides you with a command prompt there, preferably with significant privileges.
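In fact, if you already have valid SSH credentials, you don't even need a memory-corruption exploit to get a Metasploit session over that channel; the ssh_login auxiliary module will open one for you (the address and credentials here are placeholders):

use auxiliary/scanner/ssh/ssh_login
set RHOSTS 192.0.2.10     # placeholder target address
set USERNAME nicholas     # placeholder credentials
set PASSWORD secret
run                       # on success, this opens a shell session usable by post modules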


Wow, thanks for the fantastic explanations! This is truly superb, thanks!!! I am moderately familiar with SSH and the command line, but for some reason an SSH session doesn't feel like I am "owning" the machine. I suppose I am so used to GUIs nowadays...

Regarding compiling an exploit, let's assume I want to do it on the target machine but gcc is not on it. Would I bring my own gcc to the party? I actually tried this: downloaded gcc on my machine, FTPed it onto the target, then SSHed in and made sure (chmod) I had rwx on both gcc and the exploit. Despite all that, gcc refused to run. Are there different gcc versions? Would I have to find the proper gcc for my target?


You need to provide the correct GCC for the platform (much like with the exploits: if you know your target type you can build an exploit on one such box and use it on another, but once the machine types diverge, things get unreliable quite quickly). And much like how in Windows copying over just the .exe often isn't sufficient, with GCC you should install the program via the package manager if at all possible. Alternatively, you can grab the package which the package manager would normally provide and unpack it in a folder you have access to. You'll have to fiddle a bit with the directory tree you end up with, but that should be fairly doable too.

Do note that you can set, via the /etc/fstab file, mount options for partitions which make everything on them non-executable, regardless of permission bits. This is commonly used for /tmp, to name one. If it still fails, google the error message you're getting; that should tell you plenty.
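That noexec gotcha looks like this in /etc/fstab, and you can check the options a partition is actually mounted with; the exact line will differ per system:

tmpfs  /tmp  tmpfs  defaults,noexec,nosuid,nodev  0  0    # typical /tmp entry; noexec blocks execution regardless of chmod
mount | grep /tmp                                         # shows the live mount options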

