Monday, September 24, 2012

The joys and hazards of multi-process browser security

Web browsers with some form of multi-process model are becoming increasingly common. Depending on the exact setup, there can be significant consequences for security posture and exploitation methods.

Spray techniques

Probably the most significant security effect of multi-process models is the effect on spraying. Spraying, of course, is a technique in which parts of a process's heap or address space are filled with data helpful for exploitation. It's sometimes useful to spray the heap with a certain pattern of data, or spray the address space in general with executable JIT mappings, or both.

In the good ol' days, when every part of the browser and all the plug-ins were run in the same process, there were many possible attack permutations:

  • Spray Java JIT pages to exploit a browser bug.
  • Spray Java JIT pages to exploit a Flash bug.
  • Spray Flash JIT pages to exploit a browser bug.
  • Spray Java JIT pages to exploit Java.
  • You could even spray browser JS JIT pages to exploit Java if you wanted to ;-)
  • ...etc.

Since the good ol' days, various things happened to lock all this down:

  • The Java plug-in was rearchitected so that it runs out-of-process in most browsers.
  • IE and Chromium placed page limits on JavaScript-derived JIT pages (covered a little in the famous Accuvant paper).
  • Firefox introduced its out-of-process plug-ins feature (for some plug-ins, most notably Flash) and Chromium had all plug-ins out-of-process since the first release.

The end result is trickier exploitation, although it's worth noting that one worrisome combination remains: IE still runs Flash in-process, and this has been abused by attackers in many of the recent IE 0days.

One-shot vs. multi-shot

The terms "one-shot" and "multi-shot" have long been used in the world of server-side exploitation. "One-shot" refers to a service that is dead after just one crash -- so your exploit had better be reliable! "Multi-shot" refers to a service whereby it remains running after your lousy exploit causes a crash. This could be because the service has a parent process that launches new children if they die or it could simply be because the service is launched by a framework that automatically restarts dead services.

Although moving to a multi-process browser is generally a very positive thing for security and stability, you do run the risk of introducing "multi-shot" attacks.

In other words, let's say your exploit isn't 100% reliable. Wouldn't it be nice if you could just use a bit of JavaScript to run the exploit over and over in a child process until it works? Perhaps you simply weren't able to defeat ASLR and you're left with a 1/256 chance of your hard-coded address being correct. Again, this could be brute-forced in a "multi-shot" attack.

The most likely "multi-shot" attacks are against plug-ins that are run out-of-process, or against browser tabs, if browser tabs can have separate processes.

These attacks can be defended against by limiting the rate of child process crashes or spawns. Chromium deploys some tricks in this area.
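
For illustration, here is a minimal sketch -- not Chromium's actual logic -- of what such rate limiting might look like in the broker. The idea is simply to refuse to respawn a crashing child more than a few times per minute, turning a would-be "multi-shot" brute force back into something closer to "one-shot":

/* Hypothetical broker-side helper: called whenever a sandboxed child exits
 * abnormally. Returns 1 if spawning a replacement is acceptable, 0 if the
 * recent crash rate looks suspicious. */
#include <stdio.h>
#include <time.h>

#define MAX_CRASHES 3
#define WINDOW_SECONDS 60

static time_t crash_times[MAX_CRASHES];  /* circular buffer of crash times */
static int next_slot;

int may_respawn_after_crash(void) {
    time_t now = time(NULL);
    int recent = 0;
    int i;
    for (i = 0; i < MAX_CRASHES; ++i) {
        if (crash_times[i] != 0 && now - crash_times[i] < WINDOW_SECONDS)
            ++recent;
    }
    if (recent >= MAX_CRASHES) {
        fprintf(stderr, "too many recent child crashes; not respawning\n");
        return 0;
    }
    crash_times[next_slot] = now;
    next_slot = (next_slot + 1) % MAX_CRASHES;
    return 1;
}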

Broker escalation

Once an attack has gained code execution inside a sandbox, there are various directions it might go next. It might attack the OS kernel. Or for the purposes of this discussion, it might attack the privileged broker. The privileged broker typically runs outside of the sandbox, so any memory corruption vulnerability in the broker is a possible avenue for sandbox escape.

To exploit a memory corruption bug in the broker, you'll likely need to defeat DEP / ASLR in the broker process. An interesting question is how far along you already are, by virtue of having code execution in the sandboxed process. Obviously, you know the full memory map layout of the compromised sandboxed process.

The answer is that it depends on your OS and the way the various processes relate to each other. The situation is not ideal on Windows: due to the way the OS works, certain system-critical DLLs are typically located at the same address across all processes. So ASLR in the broker process is already compromised to an extent, no matter how the sandboxed processes are created. I found this interesting.

The situation is better on Linux, where each process can have a totally different address space layout, including system libraries, executable, heap, etc. This is taken advantage of by the Chromium "zygote" process model for the sandboxed processes. So a compromise of a sandboxed process does not give any direct details about the address space layout of the broker process. There may be ways to leak it, but not directly, and /proc certainly isn't mapped in the sandboxed context! All this is another reason I recommend 64-bit Linux running Chrome as a browsing platform.

Wednesday, July 4, 2012

Chrome 20 on Linux and Flash sandboxing

[Very behind on blog posts so time to crank some out]

A week or so ago, Chrome 20 was released to the stable channel. There was little fanfare and even the official Chrome blog didn't have much to declare apart from bugfixes.

There were some things going on under the hood for the Linux platform, though. Security things, and some of them I implemented and am quite excited by.

The biggest item is an improvement to Flash security. Traditionally, Linux -- across all browsers -- hasn't had great Flash security, due to lack of sandboxing options. That just changed: so-called Pepper Flash shipped to the stable channel on Linux with Chrome 20 (other platforms to follow real soon). I went into a little detail about the technical sandbox measures in Pepper Flash for Linux in an older blog post.

As mentioned in the previous blog post, native 64-bit Flash also gives a useful security boost on 64-bit Linux platforms.

There's more. Perhaps you're running 64-bit Ubuntu 12.04? Courtesy of Kees Cook, this release sneaked in Will Drewry's seccomp filter patches, which I blogged about earlier this year in the context of vsftpd-3.0.0's usage of seccomp filter sandboxing.

So why have just one Flash sandbox if you can have two? A bit of double-bagging, if you like. Assuming you're running 64-bit Ubuntu 12.04 and Chrome 20 or newer, you'll also have a seccomp filter policy slapped on Flash -- in addition to the chroot() and PID namespace. This may impede attackers trying to perform a local privilege escalation, who can no longer call crazy brand-new syscalls or use socket() to load crazy protocol modules, etc.

No sandbox or combination of sandboxes will ever be perfect, but "some" is better than "none". For people who want to run Flash, Chrome 20 on 64-bit Ubuntu 12.04 is one of the more locked-down ways to do it.

Monday, April 9, 2012

vsftpd-3.0.0 and seccomp filter sandboxing is here!

vsftpd-3.0.0 is released.

Aside from the usual few fixes, I'm excited about built-in support for Will Drewry's seccomp filter, which landed in Ubuntu. To give it a whirl, you'll need a 64-bit Ubuntu 12.04 (beta at time of writing), and a 64-bit build of vsftpd.

Why all the excitement?

vsftpd has always piled on all of the Linux sandboxing / privilege facilities available, including chroot, capabilities, file descriptor passing, pid / network / etc. namespaces, rlimits, and even a ptrace-based demo (which never quite made it to production).

seccomp filter brings a new level of power and granularity, in the form of the ability to permit, deny or emulate raw syscalls, with some control over the arguments. In many ways it's similar to what can be achieved with a clunky ptrace-based sandbox -- but it will go a lot faster, have far fewer bugs and not be prone to various fail-open conditions. In other words, it's designed to be used as a security technology, whereas ptrace() is not.

Some of the more compelling points of seccomp filter include:

  • Ability to restrict access to the kernel API. In all likelihood, a compromise of a vsftpd process wouldn't be much use to an attacker, due to the use of chroot() and namespaces. The attacker would be looking to escalate privileges, and the most fruitful way to do this would be going after a kernel bug. By permitting only a small subset of syscalls, the number of kernel APIs exposed to attack is kept minimal (a bare-bones allowlist filter is sketched after this list).

  • Application-defined. An unprivileged application can install a filter. This has various benefits. For example, a future Chromium will likely ship without the need for a "setuid helper". A future vsftpd might offer robust sandboxing even when not started as root.

  • Compatible with syscall emulation. Doing access control on user-space pointer arguments is racy with ptrace and impossible with seccomp filter. However, a denied syscall can be emulated via a SIGSYS signal. In the signal handler, something like an open() call can be "faked", perhaps even to the extent of sending the filename over a local socketpair for validation and a delegated open (a rough sketch of that pattern also follows this list). Very tasty. I'll look at writing a general wrapper if no-one else does.

  • Defense against glibc vulnerabilities. I'll go into this in more detail in another post, but a recent glibc memory corruption vulnerability illustrated that glibc takes an "interesting" code path in response to detecting bad situations. This failure code path ended up making the glibc bug highly exploitable. Fortunately, the syscalls needed by the "interesting" code path don't need to be permitted in a seccomp filter policy, thus blocking much of the problem.

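As a concrete illustration of the first three points, here is a minimal, self-contained sketch -- emphatically not vsftpd's actual policy -- of a filter that allowlists a handful of syscalls and traps everything else with SIGSYS so it could be emulated. It assumes x86_64 and a kernel with seccomp filter support (such as the Ubuntu 12.04 kernel):

/* Build with: gcc -o seccomp_sketch seccomp_sketch.c */
#include <stddef.h>
#include <string.h>
#include <signal.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/audit.h>
#include <linux/filter.h>
#include <linux/seccomp.h>

/* Allow syscall number "nr"; otherwise fall through to the next rule. */
#define ALLOW(nr) \
    BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, (nr), 0, 1), \
    BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW)

static void sigsys_handler(int sig, siginfo_t *info, void *ctx) {
    /* A real emulator would inspect the registers in ctx, fake the denied
     * syscall (e.g. proxy an open() to a broker over a socketpair) and fix
     * up the return value. Here we just note that we were called. */
    static const char msg[] = "SIGSYS: denied syscall trapped\n";
    (void)sig; (void)info; (void)ctx;
    write(2, msg, sizeof(msg) - 1);
}

int main(void) {
    struct sock_filter filter[] = {
        /* Refuse to run under a foreign syscall ABI. */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                 offsetof(struct seccomp_data, arch)),
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, AUDIT_ARCH_X86_64, 1, 0),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
        /* Load the syscall number and allow only a small set. */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                 offsetof(struct seccomp_data, nr)),
        ALLOW(__NR_read), ALLOW(__NR_write),
        ALLOW(__NR_rt_sigreturn), ALLOW(__NR_exit_group),
        /* Everything else raises SIGSYS instead of killing the process. */
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_TRAP),
    };
    struct sock_fprog prog = {
        .len = sizeof(filter) / sizeof(filter[0]),
        .filter = filter,
    };
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_sigaction = sigsys_handler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSYS, &sa, NULL);

    /* NO_NEW_PRIVS is what lets an unprivileged process install a filter. */
    prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
    prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);

    getpid();                          /* not allowlisted: triggers SIGSYS */
    write(1, "still running\n", 14);   /* allowlisted: works normally */
    return 0;
}
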
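The delegated open mentioned in the last point might look roughly like this (a sketch only; the path prefix, helper names and minimal error handling are mine, not vsftpd's): the sandboxed side writes a filename to a socketpair, and an unsandboxed broker validates it, opens the file and passes the descriptor back with SCM_RIGHTS.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <sys/wait.h>
#include <unistd.h>

/* Broker side: receive one path, apply policy, send back an open fd. */
static void broker_serve_one(int sock) {
    char path[256];
    char ok = 'k';
    char cmsgbuf[CMSG_SPACE(sizeof(int))];
    struct iovec iov = { &ok, 1 };
    struct msghdr msg;
    struct cmsghdr *cmsg;
    int fd;
    ssize_t n = read(sock, path, sizeof(path) - 1);
    if (n <= 0) return;
    path[n] = '\0';
    if (strncmp(path, "/srv/ftp/", 9) != 0) return;   /* the policy check */
    fd = open(path, O_RDONLY);
    if (fd < 0) return;
    memset(&msg, 0, sizeof(msg));
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cmsgbuf;
    msg.msg_controllen = sizeof(cmsgbuf);
    cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));
    sendmsg(sock, &msg, 0);
    close(fd);
}

/* Sandboxed side: ask the broker to open a file on our behalf. */
static int delegated_open(int sock, const char *path) {
    char byte;
    char cmsgbuf[CMSG_SPACE(sizeof(int))];
    struct iovec iov = { &byte, 1 };
    struct msghdr msg;
    struct cmsghdr *cmsg;
    int fd;
    write(sock, path, strlen(path));
    memset(&msg, 0, sizeof(msg));
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cmsgbuf;
    msg.msg_controllen = sizeof(cmsgbuf);
    if (recvmsg(sock, &msg, 0) <= 0) return -1;
    cmsg = CMSG_FIRSTHDR(&msg);
    if (cmsg == NULL || cmsg->cmsg_type != SCM_RIGHTS) return -1;
    memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
    return fd;
}

int main(void) {
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0) return 1;
    if (fork() == 0) {                 /* child plays the sandboxed role */
        int fd;
        close(sv[0]);
        fd = delegated_open(sv[1], "/srv/ftp/welcome.txt");
        printf("delegated open returned fd %d\n", fd);
        return 0;
    }
    close(sv[1]);                      /* parent plays the broker role */
    broker_serve_one(sv[0]);
    wait(NULL);
    return 0;
}
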
It's all very powerful, and vsftpd isn't the only excited consumer. There's already a patch in OpenSSH, to be released with version 6.

Personally, I'm not sure I have the skill to attack vsftpd + seccomp filter. Even if I were to achieve code execution, the set of permitted syscalls is pretty limited. If you look at some of the memorable Linux kernel vulns of recent years -- AF_CAN by Ben Hawkes, sock_sendpage by Julien Tinnes and Tavis Ormandy, or sys_tee -- all of these would be blocked either at the syscall level or at the syscall argument validation level. If you go back to 2003, there's brk(), which would probably have done the trick. If you know of any other examples, I'd love to collect them.

Tuesday, April 3, 2012

vsftpd-3.0.0-pre2

Just a quick note that vsftpd-3.0.0 is imminent. The big-ticket item is the new seccomp filter sandboxing support.

Please test this, particularly on 64-bit Ubuntu Precise Beta 2 (or newer) or if you use SSL support.

I would love to get a quick note (e-mail or comment here) even if just to say it seems to work in your configuration.

https://security.appspot.com/downloads/vsftpd-3.0.0-pre2.tar.gz

https://security.appspot.com/downloads/vsftpd-3.0.0-pre2.tar.gz.asc

Wednesday, March 28, 2012

vsftpd-3.0.0-pre1 and seccomp filter

For the brave, there now exists a pre-release version of vsftpd-3.0.0:

https://security.appspot.com/downloads/vsftpd-3.0.0-pre1.tar.gz

https://security.appspot.com/downloads/vsftpd-3.0.0-pre1.tar.gz.asc

The most significant change is an initial implementation of a secondary sandbox based on seccomp filter, as recently merged to Ubuntu 12.04. This secondary sandbox is pretty powerful, but I'll go into more details in a subsequent post.

For now, suffice to say I'm interested in testing of this new build, e.g.
  • Does it compile for you? (I've added various new gcc flags, etc.)

  • Any runtime regressions?

  • Does it run ok on 64-bit Ubuntu 12.04-beta2 or newer?

This last question is key as that is the configuration that will automatically use a seccomp filter. The astute among you will note that beta2 is not due out until tomorrow, but an apt-get dist-upgrade from beta1 will pull in the kernel that you need.

Will Drewry's excellent work on seccomp filter is the most exciting Linux security feature in a long time and the eventual vsftpd combined sandbox that will result should be a very tough nut to crack indeed.

Thursday, March 22, 2012

On the failings of Pwn2Own 2012

This year's Pwn2Own and Pwnium contests were interesting for many reasons. If you look at the results closely, there are many interesting observations and conclusions to be made.

$60k is more than enough to encourage disclosure of full exploits

As evidenced by the Pwnium results, $60k is certainly enough to motivate researchers into disclosing full exploits, including sandbox escapes or bypasses.

There was some minor controversy on this point leading up to the competitions, culminating in this post from ZDI. The post unfortunately was a little strong in its statements including "In fact, we don't believe that even the entirety of the $105,000 we are offering would be considered an acceptable bounty", "for the $60,000 they are offering, it is incredibly unlikely that anyone will participate" and "such an exploit against Chrome will never see the light of day at CanSecWest". At least we all now have data; I don't expect ZDI to make this mistake again. Without data, it's an understandable mistake to have made.

Bad actors will find loopholes and punk you

One of the stated -- and laudable -- goals of both Pwn2Own and Pwnium is to make users safer by getting bugs fixed. As recently noted by the EFF, there are some who are not interested in getting bugs fixed. At face value, it would seem to be counterproductive for these greyhat or blackhat parties to participate.

Enter VUPEN, who somehow managed to turn up and get the best of all worlds: $60k, tons of free publicity for their dubious business model and... minimal cost. To explore the minimal cost, let's look at one of the bugs they used: a Flash bug (not Chrome as widely reported), present in Flash 11.1 but already fixed in Flash 11.2. In other words, the bug they used already had a fixed lifetime. Using such a bug enabled them to collect a large prize whilst only handing over a doomed asset in return.

Although operating within the rules, their entry did not do much to advance user security and safety -- the bug fix was already in the pipeline to users. They did, however, punk $60k out of Pwn2Own and turn the whole contest into a VUPEN marketing spree.

Game theory

At the last minute at Pwn2Own, contestants Vincenzo and Willem swooped in with a Firefox exploit to collect a $30k second place prize. The timing suggests that they were waiting to see if their single 0-day would net them a prize or not. It did. We'll never know what they would have done if the $30k reward was already sewn up by someone else, but one possibility is a non-disclosure -- which wouldn't help make anyone safer.

Fixing future contests

The data collected suggests some possible structure to future contests to ensure they bring maximal benefit to user safety:
  • Require full exploits, including sandbox escapes or bypasses.

  • Do not pay out for bugs already fixed in development releases or repositories.

  • Have a fixed reward value per exploit.

Saturday, March 17, 2012

Some random observations on Linux ASLR

I've had cause to be staring at memory maps recently across a variety of systems. No surprise then that some suboptimal or at least interesting ASLR quirks have come to light.

1) Partial failure of ASLR on 32-bit Fedora

My Fedora is a couple of releases behind, so I have no idea whether this has been fixed. It seems that the desire to pack all the shared libraries into virtual addresses of the form 0x00nnnnnn has a catastrophic failure mode when there are too many libraries: something always ends up at 0x00110000. You can see it with repeated invocations of ldd /opt/google/chrome/chrome | grep 0x0011:

libglib-2.0.so.0 => /lib/libglib-2.0.so.0 (0x00110000)
libXext.so.6 => /usr/lib/libXext.so.6 (0x00110000)
libdl.so.2 => /lib/libdl.so.2 (0x00110000)

Exactly which library is placed at the fixed address is random. However, any fixed address can be a real problem for ASLR. For example, in the browser context, take a bug such as Chris Rohlf's older but interesting CSS type confusion. Without a fixed address, a crash is the likely outcome. With a fixed address, the exact library mapped there could easily be fingerprinted, and its BSS section read to leak heap pointers (e.g. via singleton patterns). Bye bye to both NX and ASLR.

Aside: in the 32-bit browser context with plenty of physical memory, a JavaScript-based heap spray could easily fill most of the address space, such that the attacker's dereference has a low chance of failure.

Aside #2: my guess is that this scheme is designed to prevent a return-to-glibc attack vs. strcpy(), by making sure that all executable addresses contain a NULL byte. I'm probably missing something, but it seems like the fact that strcpy() NULL-terminates, combined with the little-endianness of Intel, makes this not so strong.


2) Missed opportunity to use more entropy on 64-bit

If you look at the maps of a 64-bit process, you'll see that most virtual memory areas follow the pattern 0x7fnnxxxxxxxx, where all your stuff is piled together in the xxxxxxxx range and nn is random. At least nothing is in, or near, a predictable location. One way to see how this could be better: if you emit a 4GB heap spray, you have a ~1/256 chance of guessing where it is. Using the additional 7 bits of entropy might be useful, especially for the heap.


3) Bad mmap() randomization

Although the stack, heap and binary are placed at reasonably random locations, unhinted mmap() chunks are sort of just piled up adjacently, typically at descending virtual addresses. This can lead to problems where a buffer overflow crashes into a sensitive mapping -- such as a JIT mapping. (This is one reason JIT mappings have their own randomizing allocator in v8.)
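
You can watch this happening with a trivial test program (a sketch; the exact behaviour varies by kernel version and settings). Successive unhinted mmap() calls come back at adjacent, descending addresses rather than at independently random ones:

#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    int i;
    for (i = 0; i < 4; ++i) {
        void *p = mmap(NULL, 1 << 20, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        printf("mapping %d at %p\n", i, p);
    }
    return 0;
}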


4) Heap / stack collision likely with ASLR binary

On a 32-bit kernel you might see:

b8105000-b8124000 rw-p 00000000 00:00 0 [heap]
bfae5000-bfb0a000 rw-p 00000000 00:00 0 [stack]

Or on a 64-bit kernel running a 32-bit process:

f7c52000-f7c73000 rw-p 00000000 00:00 0 [heap]
ff948000-ff96d000 rw-p 00000000 00:00 0 [stack]

In both cases, the heap doesn't have to grow too large before it cannot grow any larger. When this happens, most heap implementations fall back to mmap() allocations, and suffer the problems of 3) above. Chained together with a very minor infoleak, such as my cross-browser XSLT heap address leak, these behaviours could in fact reveal the location of the executable, leading to a full NX/ASLR bypass.


Conclusion

A 32-bit address space just isn't very big any more, compared with today's large binaries, large numbers of shared library dependencies and large heaps. It's no surprise that everything is looking a little crammed in. The good news is that there are no obvious and severe problems with the 64-bit situation, although the full entropy isn't used. Applications (such as v8 / Chromium) can and do fix that situation for the most sensitive mappings themselves.

Wednesday, February 29, 2012

Chrome Linux 64-bit and Pepper Flash

Flash on Linux hasn't always been the best experience in the stability and security departments. Users of 64-bit Linux, in particular, have to put up with NSPluginWrapper, a technology which bridges a 64-bit browser process to the 32-bit Flash library.

In terms of sandboxing, your distribution might slap a clunky SELinux or AppArmor policy on Flash, but it may or may not be on by default.

Given the above, and the fact I'm a 64-bit Linux user, I was really happy to see Chrome's latest dev channel include a native 64-bit Pepper Flash plug-in. What does this mean?
  • Security: sandboxing. Pepper plug-ins run inside Chrome's renderer sandbox. On Linux, this is chroot() and PID namespace based, so Flash in this context has no filesystem access, nor the ability to interfere with other processes (a rough sketch of these two mechanisms follows this list).

  • Stability: native 64-bit build. Generally, stability and performance should be better than NSPluginWrapper on account of not having to bounce through an extra layer and process.

  • Security: 64-bit address space. It's harder to heap spray or JIT spray a 64-bit address space. Physical memory will typically run out long before the spray achieves a statistical likelihood of being at any particular memory location.

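For the curious, here is a rough sketch (not Chrome's actual setuid helper) of those two mechanisms in isolation: clone() into a fresh PID namespace, then chroot() into an empty directory and drop privileges. Both steps require root, which is why a small setuid helper gets involved in Chrome's case. The directory and uid used here are purely illustrative:

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

static int sandboxed_child(void *arg) {
    (void)arg;
    /* In the new PID namespace this process is pid 1 and cannot see or
     * signal anything outside the namespace. */
    if (chroot("/var/empty") != 0 || chdir("/") != 0) {   /* any empty dir */
        perror("chroot");
        return 1;
    }
    /* Drop root before running any untrusted code. */
    if (setgid(65534) != 0 || setuid(65534) != 0) {
        perror("drop privileges");
        return 1;
    }
    printf("sandboxed: pid=%d, no filesystem, no foreign processes\n",
           (int)getpid());
    return 0;
}

int main(void) {
    static char stack[1024 * 1024];
    pid_t pid = clone(sandboxed_child, stack + sizeof(stack),
                      CLONE_NEWPID | SIGCHLD, NULL);
    if (pid < 0) {
        perror("clone");
        return 1;
    }
    waitpid(pid, NULL, 0);
    return 0;
}
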
There are some warts, of course. Although it works OK on my Ubuntu box, there are lots of comments on the releases blog indicating that Flash is broken, particularly from Fedora users. There's also an ASLR failure (a missing position-independent executable) which will be fixed in the next revision.

Overall, though, seems like a promising boost to Linux Flash security is heading towards the Chrome stable channel.

Saturday, January 28, 2012

The dirty secret of browser security #1

Here's a curiosity that's developing in modern browser security: The security of a given browser is dominated by how much effort it puts into other peoples' problems.

This may sound absurd at first but we're heading towards a world where the main browsers will have (with a few notable exceptions):
  • Rapid autoupdate to fix security issues.

  • Some form of sandboxing.

  • A long history of fuzzing and security research.
These factors, combined with an ever more balanced distribution of browser usage, are making it uneconomical for mass malware to go after the browsers themselves.

Enter plug-ins

Plug-ins are an attractive target because some of them have drastically more market share than even the most popular browser. And a lot of plug-ins haven't received the same security attention that browsers have over the past years.

The traditional view in security is to look after your own house and let others look after theirs. But is this conscionable in a world where -- as a browser vendor -- you have the power to defend users from other peoples' bugs?

As a robust illustrative point, a lot of security professionals recently noticed some interesting exploit kit data, showing a big difference in exploitation success between Chrome (~0%) and IE / Firefox (~15%).

The particular exploits successfully targeted are largely old, fixed plug-in bugs in Java, Flash and Reader. So why the big difference between browsers?

The answer is largely the investment Chrome's security team has made in defending against other peoples' problems, with initiatives such as:
  • Blocking out-of-date plug-ins by default and encouraging the user to update.

  • Blocking lesser-used plug-ins (such as Java, RealPlayer, Shockwave, etc.) by default.

  • Having the Flash plug-in bundled such that it is autoupdated using Chrome's fast autoupdate strategy (this is why Chrome probably has the best Flash security story).

  • The inclusion of a lightweight and reasonably sandboxed default PDF viewer (not all sandboxes are created equal!)

  • The Open Type Sanitizer, which defends against a subset of Windows kernel bugs and Freetype bugs. Chrome often autoupdates OTS faster than e.g. Microsoft / Apple / Linux vendors fix the underlying bug.

  • Certificate public key pinning. This new technology defends against the generally gnarly SSL Certificate Authority problem, and caught a serious CA compromise being abused in Iran last year.
In conclusion, some of the biggest browser security wins over the past couple of years have come from browser vendors defending against other peoples' problems. So I repeat the hypothesis:

The security of a given browser is dominated by how much effort it puts into other peoples' problems

Funny world we live in.