
Re: [svadev] SafeCode and VM's to secure sensitive material


  • From: John Criswell <criswell AT illinois.edu>
  • To: Fábio Resner <fabiusks AT gmail.com>
  • Cc: svadev AT cs.uiuc.edu
  • Subject: Re: [svadev] SafeCode and VM's to secure sensitive material
  • Date: Mon, 20 Aug 2012 14:30:25 -0500
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/svadev>
  • List-id: <svadev.cs.uiuc.edu>
  • Organization: University of Illinois

On 8/20/12 12:09 PM, Fábio Resner wrote:
Hi,

I'm doing some research on how to secure applications on an embedded device.

By "research," do you mean that you are conducting academic research into novel solutions, or do you mean that you're search for existing solutions to the problem within the research literature?

The main goal actually is to guarantee the safety of sensitive material like cryptographic keys.

It would be perfect if the whole system could be guaranteed secure, but if I can only guarantee
that the sensitive material stays untouched even if the whole system goes down, that would be enough.

It sounds like you want to enforce an information flow policy of some sort.  Information flow policies can be enforced at the programming language level (see the Jif language from Andrew Myers at Cornell), at the operating system level (see the Asbestos OS work and the old MITRE work on the Compartmented Mode Workstation), and at the compiler level (see the General Information Flow (GIFT) paper from the ACSAC conference from a few years back).  They can also be enforced at the hardware level, although I don't recall any of the papers on that topic off the top of my head.
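As a rough sketch of what a label-based policy means in practice (the levels and function names below are invented for illustration, not taken from any of those systems), a Bell-LaPadula style check in C looks like this:

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical security levels, ordered from low to high. */
    typedef enum { PUBLIC = 0, CONFIDENTIAL = 1, SECRET = 2 } level_t;

    /* "No read up": a subject may only read objects at or below
       its own level. */
    static bool may_read(level_t subject, level_t object) {
        return subject >= object;
    }

    /* "No write down": a subject may only write objects at or above
       its own level, so secrets cannot flow to lower levels. */
    static bool may_write(level_t subject, level_t object) {
        return subject <= object;
    }

    int main(void) {
        level_t app = CONFIDENTIAL;
        printf("read SECRET:  %s\n", may_read(app, SECRET) ? "allowed" : "denied");
        printf("write PUBLIC: %s\n", may_write(app, PUBLIC) ? "allowed" : "denied");
        return 0;
    }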

SAFECode itself does not enforce information flow policies, but you could use it (and its related SVA research work) to protect software written in C/C++ from memory safety attacks (e.g., buffer overflows, invalid read/write errors, return-oriented programming attacks, etc.).
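To make the memory-safety part concrete, this is the kind of bug such checks catch (the example is mine, not taken from the SAFECode sources):

    #include <string.h>

    void copy_key(const char *untrusted_input) {
        char key_buf[16];
        /* Classic stack buffer overflow: if untrusted_input holds more
           than 15 bytes plus the terminator, strcpy writes past key_buf
           and can corrupt adjacent data, including the return address.
           A memory-safety compiler inserts a bounds check here and
           traps instead of letting the out-of-bounds write happen. */
        strcpy(key_buf, untrusted_input);
    }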



I really didn't understand SAFECode's functionality at all, so I thought about compiling the applications

The simplest (in my opinion) explanation of what SAFECode does is in my SVA paper: http://llvm.org/pubs/2007-SOSP-SVA.html (although you should note that the pool/metapool feature is currently disabled until we make the code sufficiently robust for day-to-day use).  Dinakar's PLDI paper (http://llvm.org/pubs/2006-06-12-PLDI-SAFECode.html) and ICSE paper (http://llvm.org/pubs/2006-05-24-SAFECode-BoundsCheck.html) provide more details on how things work.
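In rough terms, the transformation those papers describe turns an unchecked write into a checked one.  The sketch below captures the shape of it, though it is hand-written for illustration; the real runtime consults pool metadata rather than an inline size test:

    #include <stddef.h>
    #include <stdlib.h>

    /* A hand-written stand-in for the checked code a memory-safety
       compiler would emit for a copy into a 16-byte stack buffer. */
    void copy_key_checked(const char *input, size_t n) {
        char key_buf[16];
        for (size_t i = 0; i < n; i++) {
            if (i >= sizeof key_buf)  /* inserted check: stay in bounds */
                abort();              /* trap instead of corrupting memory */
            key_buf[i] = input[i];
        }
    }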


that are going to be embedded with SAFECode, and running each one in a separate virtual machine, because,
for example, even if an attacker were able to exploit the SAFECode-secured program, he would not be
able to access the memory area of the other apps, so the whole system would not be affected.

Am I being redundant? I mean, compiling with SAFECode and running in VMs?

Yes and no.  While SAFECode can be used to isolate multiple programs within a single address space (http://llvm.org/pubs/2005-02-TECS-SAFECode.html), it was an older version of SAFECode that did this, and it imposed some restrictions on the C code that it could support.  We have since updated SAFECode to support the full generality of C and the use of externally compiled libraries, but as a consequence, SAFECode no longer enforces the isolation guarantee as it did in the TECS paper.

Of course, if you read the TECS paper, you could modify SAFECode to do what it did then, and you could use this in place of multiple VMs or OS-level process isolation.

Using multiple VMs (or OS process isolation) can solve security problems that SAFECode is not intended to solve (e.g., information flow) and provide defense-in-depth (in case SAFECode has a bug).  Whether it is overkill or not depends on the performance overhead and the value of what you're protecting.

I'm doing a kind of individual research in this area, so I welcome suggestions or recommended readings (and I'll be very glad if you give me some advice).

The Memory Safety Menagerie (http://sva.cs.illinois.edu/menagerie/) contains a set of papers on automatic memory safety enforcement, attacks that violate memory safety, and some other related papers.  My opinion on it is obviously biased, but I think it's a good read.

You may also want to read the papers on Software Fault Isolation (SFI); example systems include RockSalt, Google's Native Client, and XFI.
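The core SFI idea, stripped to a sketch (the sandbox size and setup below are made up for illustration), is to mask every computed address so a store can never leave the sandbox region:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical sandbox: a power-of-two-sized region, so that
       masking the low bits of any address yields a valid offset. */
    #define SANDBOX_SIZE (1u << 20)          /* 1 MiB */
    #define SANDBOX_MASK (SANDBOX_SIZE - 1)

    static uint8_t *sandbox;

    /* The SFI transformation: before each store, force the address
       into the sandbox.  A wild pointer can clobber the sandbox's
       own data but never memory outside it. */
    static void sandboxed_store(uintptr_t addr, uint8_t value) {
        sandbox[addr & SANDBOX_MASK] = value;
    }

    int main(void) {
        sandbox = malloc(SANDBOX_SIZE);
        if (!sandbox) return 1;
        sandboxed_store(0xDEADBEEF, 42);  /* wild address, safely masked */
        printf("%d\n", sandbox[0xDEADBEEF & SANDBOX_MASK]);
        free(sandbox);
        return 0;
    }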

I don't have a similar menagerie for information flow, but I recommend reading about Jif and Asbestos, the information flow paper by Peter and Dorothy Denning, and how Bell-LaPadula and Biba labels work.

-- John T.



