author    bors <bors@rust-lang.org>  2013-07-07 11:16:59 -0700
committer bors <bors@rust-lang.org>  2013-07-07 11:16:59 -0700
commit    52abd1cc32f6d08d58dc2b1087ca8c4e129d762e (patch)
tree      e5c264b66e36713dc8bd49f75a061eec534947a7 /src/libsyntax/parse
parent    3c44265d8791d54fa64550c60dc820eef87f9cf5 (diff)
parent    e41e4358516190bf84172f21d9e25e45da81caf4 (diff)
download  rust-52abd1cc32f6d08d58dc2b1087ca8c4e129d762e.tar.gz
          rust-52abd1cc32f6d08d58dc2b1087ca8c4e129d762e.zip
auto merge of #7636 : dotdash/rust/scope_cleanup, r=graydon
Currently, scopes are tied to LLVM basic blocks. Each scope introduces two
new basic blocks, which means two extra jumps in the unoptimized IR. These
blocks aren't actually required; they only serve as boundaries for cleanups.

By keeping track of the current scope within a single basic block, we
can avoid those extra blocks and jumps, shrinking the pre-optimization
IR quite considerably. For example, the IR for trans_intrinsic goes
from ~22k lines to ~16k lines, almost 30% less.
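
A minimal sketch of the scheme described above, in modern Rust and with
entirely hypothetical names (BlockContext, ScopeInfo, CleanupFn) rather than
the actual trans code: instead of opening and branching to a fresh basic block
for every scope, the block under construction carries a stack of scope records
that only exist to hold pending cleanups.

    // Hypothetical illustration, not the real rustc trans types.
    type CleanupFn = Box<dyn Fn()>;

    /// Cleanup records for one lexical scope.
    struct ScopeInfo {
        cleanups: Vec<CleanupFn>,
    }

    /// One LLVM basic block under construction, carrying a stack of scopes
    /// instead of opening a fresh block (plus a jump) per scope.
    struct BlockContext {
        scopes: Vec<ScopeInfo>,
    }

    impl BlockContext {
        /// Old approach (schematically): entering a scope created a new basic
        /// block and an unconditional jump into it. New approach: push a
        /// record onto the scope stack and keep emitting into the same block.
        fn push_scope(&mut self) {
            self.scopes.push(ScopeInfo { cleanups: Vec::new() });
        }

        /// Register a cleanup (e.g. a drop call) with the innermost scope.
        fn add_cleanup(&mut self, cleanup: CleanupFn) {
            self.scopes
                .last_mut()
                .expect("add_cleanup outside any scope")
                .cleanups
                .push(cleanup);
        }

        /// Leaving a scope runs its cleanups in reverse order and pops it,
        /// again without creating or branching to any extra basic block.
        fn pop_scope(&mut self) {
            let scope = self.scopes.pop().expect("pop_scope without a scope");
            for cleanup in scope.cleanups.iter().rev() {
                cleanup();
            }
        }
    }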

The impact on the build times of optimized builds is rather small (about
1%), but unoptimized builds are about 11% faster. The test suite for
unoptimized builds runs 15% faster in CPU time and 7.5% faster in wall-clock
time (on my i7).

Also, in some situations this helps LLVM generate better code by inlining
functions that it previously considered too large, likely because of the
pointless blocks/jumps that were still present at the time the inlining
pass runs.

Refs #7462
Diffstat (limited to 'src/libsyntax/parse')
0 files changed, 0 insertions, 0 deletions