author: bors <bors@rust-lang.org> 2020-10-21 15:03:14 +0000
committer: bors <bors@rust-lang.org> 2020-10-21 15:03:14 +0000
commit: 22e6b9c68941996daa45786b4145e6196e51f0f4 (patch)
tree: 7790ebe8062a83f9eda365328e94140a8daa9d27 /compiler/rustc_mir/src/transform/coverage/graph.rs
parent: 19356453cbfb734bc60a1853c10e3095d05e0342 (diff)
parent: 593fdd3d45d7565e34dc429788fa81ca2e25a2d4 (diff)
Auto merge of #77250 - Aaron1011:feature/flat-token-collection, r=petrochenkov
Rewrite `collect_tokens` implementations to use a flattened buffer

Instead of trying to collect tokens at each depth, we 'flatten' the
stream as we go along, pushing open/close delimiters to our buffer
just like regular tokens. Once capturing is complete, we reconstruct a
nested `TokenTree::Delimited` structure, producing a normal
`TokenStream`.
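
For illustration, here is a minimal sketch of the flattening idea. It uses
made-up stand-in types (`FlatToken`, `Tree`, `Delim`) rather than rustc's real
`TokenStream`/`TokenTree` API, but the shape of the reconstruction is the same:

```rust
#[derive(Clone, Debug)]
enum Delim { Paren, Brace, Bracket }

#[derive(Clone, Debug)]
enum FlatToken {
    // An ordinary token, represented here as a plain string.
    Token(String),
    // Explicit markers pushed into the flat buffer instead of recursing.
    OpenDelim(Delim),
    CloseDelim(Delim),
}

#[derive(Clone, Debug)]
enum Tree {
    Token(String),
    // A reconstructed `TokenTree::Delimited`-style group.
    Delimited(Delim, Vec<Tree>),
}

/// Rebuild the nested structure from the flat buffer once capturing is done.
fn reconstruct(flat: &[FlatToken]) -> Vec<Tree> {
    // The bottom entry collects top-level trees; inner entries are open groups.
    let mut stack: Vec<(Option<Delim>, Vec<Tree>)> = vec![(None, Vec::new())];
    for tok in flat {
        match tok {
            FlatToken::Token(s) => stack.last_mut().unwrap().1.push(Tree::Token(s.clone())),
            FlatToken::OpenDelim(d) => stack.push((Some(d.clone()), Vec::new())),
            FlatToken::CloseDelim(_) => {
                let (delim, children) = stack.pop().expect("unbalanced delimiters");
                let delim = delim.expect("close without matching open");
                stack.last_mut().unwrap().1.push(Tree::Delimited(delim, children));
            }
        }
    }
    stack.pop().unwrap().1
}

fn main() {
    // `a ( b )` captured as a flat buffer, then rebuilt into a nested tree.
    let flat = vec![
        FlatToken::Token("a".into()),
        FlatToken::OpenDelim(Delim::Paren),
        FlatToken::Token("b".into()),
        FlatToken::CloseDelim(Delim::Paren),
    ];
    println!("{:?}", reconstruct(&flat));
}
```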

The reconstructed `TokenStream` is not created immediately - instead, it is
produced on-demand by a closure (wrapped in a new `LazyTokenStream` type). This
closure stores a clone of the original `TokenCursor`, plus a record of the
number of calls to `next()/next_desugared()`. This is sufficient to reconstruct
the tokenstream seen by the callback without storing any additional state. If
the tokenstream is never used (e.g. when a captured `macro_rules!` argument is
never passed to a proc macro), we never actually create a `TokenStream`.
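
As a rough illustration of the lazy construction (not rustc's actual
`LazyTokenStream`/`TokenCursor` API - `CapturedCursor` and `LazyTokens` below
are invented stand-ins), the recorded state is just "where the cursor was"
plus "how many calls the captured region consumed":

```rust
#[derive(Clone)]
struct CapturedCursor {
    // In the real parser this would be a clone of `TokenCursor`; here it is
    // just the tokens it would yield, to keep the sketch self-contained.
    tokens: Vec<String>,
}

struct LazyTokens {
    // State recorded while parsing: the cursor as it was when capturing
    // started, plus how many `next()` calls the captured region consumed.
    start_cursor: CapturedCursor,
    num_calls: usize,
}

impl LazyTokens {
    /// Replay the recorded calls to rebuild the captured tokens.
    /// Nothing is built unless (and until) someone asks for the tokens.
    fn create_tokens(&self) -> Vec<String> {
        self.start_cursor.tokens.iter().take(self.num_calls).cloned().collect()
    }
}

fn main() {
    let lazy = LazyTokens {
        start_cursor: CapturedCursor {
            tokens: vec!["fn".into(), "foo".into(), "(".into(), ")".into(), "{".into(), "}".into()],
        },
        num_calls: 4,
    };
    // If the captured tokens are never needed (e.g. a `macro_rules!` argument
    // that never reaches a proc macro), `create_tokens` is simply never called.
    println!("{:?}", lazy.create_tokens());
}
```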

This implementation has a number of advantages over the previous one:

* It is significantly simpler, with no edge cases around capturing the
  start/end of a delimited group.

* It can be easily extended to allow replacing tokens at an arbitrary
  'depth' by just using `Vec::splice` at the proper position (see the
  sketch after this list). This is important for PR #76130, which
  requires us to track information about attributes along with tokens.

* The lazy approach to `TokenStream` construction allows us to easily
  parse an AST struct, and then decide after the fact whether we need a
  `TokenStream`. This will be useful when we start collecting tokens for
  `Attribute` - we can discard the `LazyTokenStream` if the parsed
  attribute doesn't need tokens (e.g. is a builtin attribute).
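
To make the `Vec::splice` point concrete, here is a toy sketch on a plain
`Vec<String>` standing in for the flat token buffer; the tokens and the
`0..4` range are invented purely for the example:

```rust
fn main() {
    let mut flat: Vec<String> =
        ["#", "[", "cfg_attr", "]", "fn", "f", "(", ")", "{", "}"]
            .iter().map(|s| s.to_string()).collect();

    // Replace the first four tokens (the attribute) with a different form.
    // However deeply nested they would be in the tree view, in the flat
    // buffer they are just a contiguous range, so one `splice` suffices.
    let replacement = ["#", "[", "inline", "]"].iter().map(|s| s.to_string());
    let _removed: Vec<String> = flat.splice(0..4, replacement).collect();

    println!("{:?}", flat);
}
```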

The performance impact seems to be negligible (see
https://github.com/rust-lang/rust/pull/77250#issuecomment-703960604). There is a
small slowdown on a few benchmarks, but it only rises above 1% for incremental
builds, where it represents a larger fraction of the much smaller instruction
count. There is a ~1% speedup on a few other incremental benchmarks - my guess is
that the speedups and slowdowns will usually cancel out in practice.
Diffstat (limited to 'compiler/rustc_mir/src/transform/coverage/graph.rs')
0 files changed, 0 insertions, 0 deletions