Re: [svadev] Proposal: Change DebugRuntime Compilation Settings for Debug build


  • From: John Criswell <criswell AT illinois.edu>
  • To: Santosh Nagarakatte <santoshn AT cis.upenn.edu>
  • Cc: svadev AT cs.uiuc.edu
  • Subject: Re: [svadev] Proposal: Change DebugRuntime Compilation Settings for Debug build
  • Date: Wed, 14 Dec 2011 12:07:36 -0600
  • List-archive: <http://lists.cs.uiuc.edu/pipermail/svadev>
  • List-id: <svadev.cs.uiuc.edu>
  • Organization: University of Illinois

On 12/14/11 2:05 AM, Santosh Nagarakatte wrote:
> Hi all,
>
> I noticed that SAFECode compiles the runtime in Debug mode when the
> SAFECode compiler is compiled in Debug+Asserts mode. However, since the
> runtime is injected into the binary, this translates into execution-time
> overhead for any binary built with a Debug-mode compiler. This implies
> that to generate optimized code with SAFECode, we need to compile the
> compiler itself in Release mode.

Yes, that is correct. However, the 3x figure I measured with SAFECode for LLVM 2.6/2.7 was obtained with a Debug build of the runtime and without many of the special optimizations described in our SAFECode research papers. Even at that speed, SAFECode was able to do more accurate checking than Valgrind's memcheck tool while being much, much faster (though 3x is still too slow for production-use memory safety).

In other words, a Debug build of the runtime seems to have been fast enough in the past when SAFECode was used as a debugging tool.


> This can be very cumbersome, especially when we want to measure the
> performance of an application that is compiled with optimizations while
> the compiler itself is built in Debug+Asserts mode.

We actually used to do what you're suggesting, but I found it cumbersome. The problem is that the LLVM build system really isn't designed to mix Debug and Release files: I would either forget to compile the runtime library in Release mode, or the runtime library would be hard-coded to Release mode and I'd have to hack the Makefiles to enable Debug mode for debugging, or I'd hit some other such issue.

IMHO, we should keep it simple: Release builds should be Release builds, and Debug builds should be Debug builds. If you want the fastest runtime possible, do a Release build. Since multiple object trees can be built from a single source tree, this only costs extra compile time.

That said, I would not oppose a configure option or a variable on the "make" command line that enables the behavior you want. I just think that that behavior is confusing and should be enabled explicitly by someone who wants it.


> For example, SAFECode has 50x overhead for a simple test when the
> program is compiled with -O3 and the compiler is compiled in Debug mode.

Do you know what the overhead is when SAFECode is compiled in Release mode?

As I've stated before, SAFECode suffered a serious performance regression when we moved it to LLVM 3.0. There are several possible culprits (that are probably working together, no less):

1) The refactoring to merge it into Clang has changed the optimization order. SAFECode is instrumenting code before all link-time optimizations have been applied. Previously, we ran SAFECode on whole-program bitcode files in which all optimization (including LTO) was done first and then SAFECode instrumented the program.

2) There are several optimizations turned off because the code is not sufficiently robust or has not been updated to LLVM 3.0 (most notably the type-safety check elimination pass and the Monotonic loop optimization pass). We have become far pickier about what gets enabled by default.

3) We are not using the pool allocator run-time (which may have sped up malloc() and friends).

4) We have new checks on the C standard library functions which were not enabled in LLVM 2.6 and 2.7.

5) Incomplete load/store checks actually do something now whereas they were no-ops before (and thus never added to the code).

6) The run-time is not linked in as an LLVM bitcode file. This prevents the inlining of the run-time checks. This could be especially bad for the fastlscheck/exactcheck checks (which, when inlined, can sometimes be eliminated entirely by standard compiler optimizations).
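To make point 6 concrete, here is a minimal C sketch. This is not the actual SAFECode runtime API; exact_check() below is a hypothetical stand-in for a fastlscheck/exactcheck-style check. The point is only that once the check body is visible to the optimizer, a check against a known-size local object can fold away completely, whereas a call into a separately linked (non-bitcode) runtime library has to remain.

/* Minimal sketch, not the actual SAFECode runtime: exact_check() is a
 * hypothetical stand-in for a fastlscheck/exactcheck-style check. */
#include <stdint.h>
#include <stdlib.h>

/* Abort if [ptr, ptr + access_size) is not inside [base, base + size). */
static inline void exact_check(const char *base, size_t size,
                               const char *ptr, size_t access_size) {
  uintptr_t offset = (uintptr_t)(ptr - base);
  if (access_size > size || offset > size - access_size)
    abort();
}

int sum_first_two(void) {
  char buf[8];
  buf[0] = 1;
  buf[1] = 2;
  /* With the check inlined, the optimizer sees that offset 1 and access
   * size 1 always fit inside the 8-byte object, so the comparison and the
   * abort() branch are deleted as dead code.  If exact_check() instead
   * lives in a separately linked runtime library, the call must stay. */
  exact_check(buf, sizeof(buf), &buf[1], 1);
  return buf[0] + buf[1];
}

A typical optimizing compiler can constant-fold the inlined comparison here and drop the check entirely; linking the runtime in as bitcode is what makes that kind of folding possible across the call boundary.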

I doubt the runtime library is the only issue.

-- John T.




