| author | Mazdak Farrokhzad <twingoow@gmail.com> | 2019-06-30 21:57:24 +0200 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2019-06-30 21:57:24 +0200 |
| commit | 70ea57bcb37bb57444fafbb80be8f6d1846090f3 (patch) | |
| tree | 0908476f929ea46ea1fc0e6583013de8281ddabc /src | |
| parent | 543c4648bd2028672d091cec378a50034c8709a0 (diff) | |
| parent | e2479e263ef1135dcd1318cce51ee8fafeae62ae (diff) | |
Rollup merge of #62063 - ecstatic-morse:dataflow-backward-order, r=nagisa
Use a more efficient iteration order for backward dataflow

This applies the same basic principle as #62062 to the reverse dataflow analysis used to compute liveness information. It is functionally equivalent, except that post-order is used instead of reverse post-order. In the long term, `BitDenotation` should probably be extended to support both forward and backward dataflow, but there's some more work needed to get to that point.
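As a minimal illustration of the principle (a self-contained sketch, not rustc's own `traversal::postorder`; the toy CFG and the `postorder` helper here are hypothetical), visiting blocks in post-order emits each block only after all of its successors, which is exactly the order a backward analysis like liveness wants:

```rust
/// Post-order over a toy CFG given as adjacency lists (hypothetical helper,
/// standing in for rustc's `traversal::postorder`).
fn postorder(successors: &[Vec<usize>], start: usize) -> Vec<usize> {
    fn dfs(succ: &[Vec<usize>], node: usize, visited: &mut [bool], out: &mut Vec<usize>) {
        visited[node] = true;
        for &s in &succ[node] {
            if !visited[s] {
                dfs(succ, s, visited, out);
            }
        }
        // A node is recorded only after all of its successors: post-order.
        out.push(node);
    }
    let mut visited = vec![false; successors.len()];
    let mut out = Vec::new();
    dfs(successors, start, &mut visited, &mut out);
    out
}

fn main() {
    // bb0 -> bb1 -> bb2 -> bb3, with a back edge bb2 -> bb1.
    let successors: Vec<Vec<usize>> = vec![vec![1], vec![2], vec![1, 3], vec![]];
    // Liveness flows backward: a block's out-state depends on its successors,
    // so processing [3, 2, 1, 0] gives most blocks up-to-date successor
    // states on the very first pass.
    println!("{:?}", postorder(&successors, 0)); // prints [3, 2, 1, 0]
}
```

Reverse post-order on the reverse CFG (the FIXME in the diff below) might order blocks in cyclic regions even better, but only at the cost of materializing the reverse CFG.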
Diffstat (limited to 'src')
| mode | file | lines |
|---|---|---|
| -rw-r--r-- | src/librustc_mir/util/liveness.rs | 21 |

1 file changed, 18 insertions, 3 deletions
```diff
diff --git a/src/librustc_mir/util/liveness.rs b/src/librustc_mir/util/liveness.rs
index cf0fc09472b..8ead571d966 100644
--- a/src/librustc_mir/util/liveness.rs
+++ b/src/librustc_mir/util/liveness.rs
@@ -75,9 +75,24 @@ pub fn liveness_of_locals<'tcx>(
 
     let mut bits = LiveVarSet::new_empty(num_live_vars);
 
-    // queue of things that need to be re-processed, and a set containing
-    // the things currently in the queue
-    let mut dirty_queue: WorkQueue<BasicBlock> = WorkQueue::with_all(body.basic_blocks().len());
+    // The dirty queue contains the set of basic blocks whose entry sets have changed since they
+    // were last processed. At the start of the analysis, we initialize the queue in post-order to
+    // make it more likely that the entry set for a given basic block will have the effects of all
+    // its successors in the CFG applied before it is processed.
+    //
+    // FIXME(ecstaticmorse): Reverse post-order on the reverse CFG may generate a better iteration
+    // order when cycles are present, but the overhead of computing the reverse CFG may outweigh
+    // any benefits. Benchmark this and find out.
+    let mut dirty_queue: WorkQueue<BasicBlock> = WorkQueue::with_none(body.basic_blocks().len());
+    for (bb, _) in traversal::postorder(body) {
+        dirty_queue.insert(bb);
+    }
+
+    // Add blocks which are not reachable from START_BLOCK to the work queue. These blocks will
+    // be processed after the ones added above.
+    for bb in body.basic_blocks().indices() {
+        dirty_queue.insert(bb);
+    }
 
     let predecessors = body.predecessors();
```
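For context, `WorkQueue` pairs a FIFO deque with a membership set, so `insert` is a no-op for an element that is already pending. A simplified stand-in (a sketch under that assumption, not rustc's actual `rustc_data_structures` implementation; the `usize` block indices and the `main` driver are illustrative) shows why the two seeding loops in the diff enqueue every block exactly once, with unreachable blocks last:

```rust
use std::collections::VecDeque;

/// Simplified stand-in for rustc's `WorkQueue`: a deque plus a pending set.
struct WorkQueue {
    deque: VecDeque<usize>,
    pending: Vec<bool>, // rustc uses a bitset here
}

impl WorkQueue {
    /// An empty queue sized for `len` elements, like `with_none` in the diff.
    fn with_none(len: usize) -> Self {
        WorkQueue { deque: VecDeque::with_capacity(len), pending: vec![false; len] }
    }

    /// Enqueue `bb` unless it is already pending; returns whether it was added.
    fn insert(&mut self, bb: usize) -> bool {
        if self.pending[bb] {
            return false;
        }
        self.pending[bb] = true;
        self.deque.push_back(bb);
        true
    }

    /// Dequeue the next block, clearing its pending bit.
    fn pop(&mut self) -> Option<usize> {
        let bb = self.deque.pop_front()?;
        self.pending[bb] = false;
        Some(bb)
    }
}

fn main() {
    // Toy function with blocks bb0 -> bb1 -> bb2; bb3 is unreachable.
    let num_blocks = 4;
    let mut dirty_queue = WorkQueue::with_none(num_blocks);
    // First loop: seed in post-order (hard-coded for the chain above).
    for bb in [2, 1, 0] {
        dirty_queue.insert(bb);
    }
    // Second loop: sweep all indices; only unreachable bb3 is newly added,
    // because the deduplicating `insert` skips blocks already pending.
    for bb in 0..num_blocks {
        dirty_queue.insert(bb);
    }
    while let Some(bb) = dirty_queue.pop() {
        println!("process bb{bb}");
    } // prints bb2, bb1, bb0, bb3
}
```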
