
LXC: a new vulnerability? No, just a display bug!


Brain 0verride


Sometimes you discover a vulnerability when you are not looking for one, and sometimes, like this time, it simply turns out to be a false alarm. More than 70 percent of the vulnerabilities I have found in my life had nothing to do with deliberate security research; they came from pure chance, for example while doing administration tasks.
 
That day, I decided, for a customer whose website gets millions of hits because of a holiday game, to put the site's content directly in memory so as not to have IOPS problems anymore. To do this I simply used a ramdisk and synchronized the code from disk (where it is stored) to the ramdisk every minute via rsync.
 
This customer is on an LXC container with 8 GB of RAM, connected to a separate MySQL server over a private network. The web server uses less than 1 GB of RAM and the application less than 500 MB of disk space.
 
So I just created a ramdisk like this:


# create the mount point
mkdir /home/ramdisk
# add a tmpfs entry to /etc/fstab (note: no size= option, so the
# kernel default applies)
echo "shm /home/ramdisk tmpfs nodev,nosuid,noexec 0 0" >> /etc/fstab
mount /home/ramdisk
# initial copy of the site code into the ramdisk
rsync -avz --stats --delete /home/xxxx /home/ramdisk/
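
The per-minute synchronization mentioned earlier can be driven by cron; a minimal sketch, reusing the same paths (the customer path is anonymized as /home/xxxx in this post, and the cron file name is hypothetical):

```shell
# /etc/cron.d/ramdisk-sync (hypothetical file name)
# Re-sync the on-disk code into the ramdisk every minute;
# --delete drops files from the ramdisk copy that were removed on disk.
* * * * * root rsync -a --delete /home/xxxx /home/ramdisk/
```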


 
After this, I checked with a simple df -h and got a big surprise:


~# df -h
Filesystem Size Used Avail Use% Mounted on
zfstore/zfs-containers/subvol-9202234-disk-1 32G 1.4G 31G 5% /
none 492K 0 492K 0% /dev
tmpfs 26G 68K 26G 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 1.7G 0 1.7G 0% /run/shm
shm 126G 410M 126G 1% /home/ramdisk


 
My /home/ramdisk has a size of 126 GB. Right afterwards, I compared RAM usage with and without it, to see whether the RAM seemed consumed by it, but the RAM was in exactly the same state. Very excited at probably having found a new vulnerability, I checked on a new container on another cluster and reproduced the problem successfully. At the same time I sent an email to someone I know who works on an implementation of this product, and it is finally just a display problem: privileged containers merely fail to *show* the used memory (it's an accounting issue), but after hitting the specified limits you'll be writing to swap space instead, and ultimately the kernel's OOM killer will kill the container before it starts using more RAM than assigned (note that both the RAM and swap limits have to be hit). End of the story :)
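
If you want df to show a sane number anyway (and to hard-cap the ramdisk regardless of display), you can give the tmpfs an explicit size; a minimal sketch, with a hypothetical 2G cap chosen for the ~500 MB application:

```shell
# Remount the existing tmpfs with an explicit cap (hypothetical 2G value);
# without size=, tmpfs defaults to half of the RAM the kernel sees,
# which in a privileged container is the *host* RAM, hence the 126G.
mount -o remount,size=2G /home/ramdisk

# For a persistent cap, add size=2G to the options in /etc/fstab:
# shm /home/ramdisk tmpfs nodev,nosuid,noexec,size=2G 0 0
```

After this, df -h reports the tmpfs size as 2.0G instead of half of the host's memory.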

--
Christophe Casalegno
https://twitter.com/Brain0verride
