author    Aaron Hill <aa1ronham@gmail.com>  2020-09-26 21:56:29 -0400
committer Aaron Hill <aa1ronham@gmail.com>  2020-10-19 13:59:18 -0400
commit    593fdd3d45d7565e34dc429788fa81ca2e25a2d4 (patch)
tree      2d3ae4c7a1cb800273757906d1464db7333ee977
parent    cb2462c53f2cc3f140c0f1ea0976261cab968a34 (diff)
Rewrite `collect_tokens` implementations to use a flattened buffer
Instead of trying to collect tokens at each depth, we 'flatten' the
stream as we go along, pushing open/close delimiters to our buffer
just like regular tokens. Once capturing is complete, we reconstruct a
nested `TokenTree::Delimited` structure, producing a normal
`TokenStream`.
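
As an illustration of the flatten-then-reconstruct idea, here is a minimal,
self-contained sketch; the `FlatToken`, `Delim`, and `Tree` types are
simplified stand-ins invented for this example, not the compiler's actual
types:

    #[derive(Clone, Debug, PartialEq)]
    enum Delim { Paren, Brace }

    #[derive(Clone, Debug)]
    enum FlatToken {
        Token(String),      // an ordinary token, kept as text for brevity
        OpenDelim(Delim),   // delimiters are pushed just like regular tokens
        CloseDelim(Delim),
    }

    #[derive(Debug)]
    enum Tree {
        Token(String),
        Delimited(Delim, Vec<Tree>), // analogous to `TokenTree::Delimited`
    }

    // Rebuild the nested structure from the flat buffer once capturing is done.
    fn reconstruct(flat: &[FlatToken]) -> Vec<Tree> {
        let mut stack: Vec<(Delim, Vec<Tree>)> = Vec::new(); // currently open groups
        let mut top: Vec<Tree> = Vec::new();
        for tok in flat {
            match tok {
                FlatToken::Token(s) => stack
                    .last_mut()
                    .map(|(_, v)| v)
                    .unwrap_or(&mut top)
                    .push(Tree::Token(s.clone())),
                FlatToken::OpenDelim(d) => stack.push((d.clone(), Vec::new())),
                FlatToken::CloseDelim(d) => {
                    let (open, children) = stack.pop().expect("unbalanced delimiters");
                    assert_eq!(&open, d, "mismatched delimiters");
                    stack
                        .last_mut()
                        .map(|(_, v)| v)
                        .unwrap_or(&mut top)
                        .push(Tree::Delimited(open, children));
                }
            }
        }
        top
    }

    fn main() {
        // `( a b )` as a flat buffer, reconstructed into one delimited group.
        let flat = vec![
            FlatToken::OpenDelim(Delim::Paren),
            FlatToken::Token("a".into()),
            FlatToken::Token("b".into()),
            FlatToken::CloseDelim(Delim::Paren),
        ];
        println!("{:?}", reconstruct(&flat));
    }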

The reconstructed `TokenStream` is not created immediately - instead, it is
produced on-demand by a closure (wrapped in a new `LazyTokenStream` type). This
closure stores a clone of the original `TokenCursor`, plus a record of the
number of calls to `next()/next_desugared()`. This is sufficient to reconstruct
the tokenstream seen by the callback without storing any additional state. If
the tokenstream is never used (e.g. when a captured `macro_rules!` argument is
never passed to a proc macro), we never actually create a `TokenStream`.
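
A hypothetical, much-simplified illustration of that replay trick follows; the
`Cursor` and `LazyTokens` types are invented for the example, and the real
`TokenCursor` and `LazyTokenStream` carry more state:

    #[derive(Clone)]
    struct Cursor {
        tokens: Vec<String>,
        pos: usize,
    }

    impl Cursor {
        fn next_token(&mut self) -> Option<String> {
            let tok = self.tokens.get(self.pos).cloned();
            if tok.is_some() {
                self.pos += 1;
            }
            tok
        }
    }

    // Enough state to rebuild the captured tokens on demand: a clone of the
    // cursor taken when capturing started, plus the number of calls the
    // parser made while capturing.
    struct LazyTokens {
        start_cursor: Cursor,
        num_calls: usize,
    }

    impl LazyTokens {
        // Only materialize the tokens if someone actually asks for them,
        // e.g. when the captured item ends up being passed to a proc macro.
        fn create_tokens(&self) -> Vec<String> {
            let mut cursor = self.start_cursor.clone();
            (0..self.num_calls)
                .filter_map(|_| cursor.next_token())
                .collect()
        }
    }

If `create_tokens` is never called, no buffer is ever built, which mirrors how
an unused captured `macro_rules!` argument never pays for a `TokenStream`.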

This implementation has a number of advantages over the previous one:

* It is significantly simpler, with no edge cases around capturing the
  start/end of a delimited group.

* It can be easily extended to allow replacing tokens at an arbitrary
  'depth' by just using `Vec::splice` at the proper position (see the
  sketch after this list). This is important for PR #76130, which
  requires us to track information about attributes along with tokens.

* The lazy approach to `TokenStream` construction allows us to easily
  parse an AST struct, and then decide after the fact whether we need a
  `TokenStream`. This will be useful when we start collecting tokens for
  `Attribute` - we can discard the `LazyTokenStream` if the parsed
  attribute doesn't need tokens (e.g. is a builtin attribute).
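
To make the `Vec::splice` point concrete, here is a tiny hypothetical example
on a flattened buffer of plain strings; the buffer contents and indices are
invented just for the demo:

    fn main() {
        // A flattened buffer for `#[attr] ( a b )`; delimiters are ordinary entries.
        let mut flat: Vec<&str> = vec!["#", "[", "attr", "]", "(", "a", "b", ")"];

        // Replace the tokens at positions 5..7 (`a b`), which sit inside the
        // delimited group, without recursing into any nested tree structure.
        flat.splice(5..7, ["x", "y", "z"]);

        assert_eq!(flat, ["#", "[", "attr", "]", "(", "x", "y", "z", ")"]);
    }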

The performance impact seems to be negligible (see
https://github.com/rust-lang/rust/pull/77250#issuecomment-703960604). There is a
small slowdown on a few benchmarks, but it only rises above 1% for incremental
builds, where it represents a larger fraction of the much smaller instruction
count. There is a ~1% speedup on a few other incremental benchmarks - my guess
is that the speedups and slowdowns will usually cancel out in practice.