| | | |
|---|---|---|
| author | Aaron Hill <aa1ronham@gmail.com> | 2020-09-26 21:56:29 -0400 |
| committer | Aaron Hill <aa1ronham@gmail.com> | 2020-10-19 13:59:18 -0400 |
| commit | 593fdd3d45d7565e34dc429788fa81ca2e25a2d4 (patch) | |
| tree | 2d3ae4c7a1cb800273757906d1464db7333ee977 /compiler/rustc_parse/src/parser/attr.rs | |
| parent | cb2462c53f2cc3f140c0f1ea0976261cab968a34 (diff) | |
Rewrite `collect_tokens` implementations to use a flattened buffer
Instead of trying to collect tokens at each depth, we 'flatten' the stream as we go along, pushing open/close delimiters to our buffer just like regular tokens. Once capturing is complete, we reconstruct a nested `TokenTree::Delimited` structure, producing a normal `TokenStream`.

The reconstructed `TokenStream` is not created immediately - instead, it is produced on-demand by a closure (wrapped in a new `LazyTokenStream` type). This closure stores a clone of the original `TokenCursor`, plus a record of the number of calls to `next()/next_desugared()`. This is sufficient to reconstruct the tokenstream seen by the callback without storing any additional state. If the tokenstream is never used (e.g. when a captured `macro_rules!` argument is never passed to a proc macro), we never actually create a `TokenStream`.

This implementation has a number of advantages over the previous one:

* It is significantly simpler, with no edge cases around capturing the start/end of a delimited group.
* It can be easily extended to allow replacing tokens at an arbitrary 'depth' by just using `Vec::splice` at the proper position. This is important for PR #76130, which requires us to track information about attributes along with tokens.
* The lazy approach to `TokenStream` construction allows us to easily parse an AST struct, and then decide after the fact whether we need a `TokenStream`. This will be useful when we start collecting tokens for `Attribute` - we can discard the `LazyTokenStream` if the parsed attribute doesn't need tokens (e.g. is a builtin attribute).

The performance impact seems to be negligible (see https://github.com/rust-lang/rust/pull/77250#issuecomment-703960604). There is a small slowdown on a few benchmarks, but it only rises above 1% for incremental builds, where it represents a larger fraction of the much smaller instruction count. There is a ~1% speedup on a few other incremental benchmarks - my guess is that the speedups and slowdowns will usually cancel out in practice.
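The flattening idea is easiest to see in miniature. The following is a minimal, self-contained sketch, not the rustc implementation: the `FlatToken`, `TokenTree`, and `make_token_stream` names here are stand-ins. It shows how a flat buffer that records delimiters like ordinary tokens can be rebuilt into a nested stream on demand, and why replacing tokens at any 'depth' reduces to a `Vec::splice` on the flat buffer.

```rust
// A minimal sketch (assumed names, not the rustc types) of the flattened
// buffer: every token, including the open/close delimiter of a group, is
// pushed to a flat Vec as parsing advances; nesting is rebuilt on demand.

#[derive(Clone, Debug)]
enum FlatToken {
    Token(String),   // an ordinary token, represented here by its text
    OpenDelim(char), // '(', '[' or '{'
    CloseDelim(char), // ')', ']' or '}'
}

#[derive(Debug)]
enum TokenTree {
    Token(String),
    Delimited(char, Vec<TokenTree>),
}

/// Rebuild a nested token stream from the flat buffer, matching delimiters
/// with an explicit stack instead of recursing at each depth.
fn make_token_stream(flat: &[FlatToken]) -> Vec<TokenTree> {
    let mut stack: Vec<(char, Vec<TokenTree>)> = Vec::new();
    let mut current: Vec<TokenTree> = Vec::new();
    for tok in flat {
        match tok {
            FlatToken::Token(text) => current.push(TokenTree::Token(text.clone())),
            FlatToken::OpenDelim(d) => {
                // Start a new frame; tokens now accumulate inside the group.
                stack.push((*d, std::mem::take(&mut current)));
            }
            FlatToken::CloseDelim(_) => {
                // Close the innermost group and attach it to its parent.
                let (delim, parent) = stack.pop().expect("unbalanced delimiters");
                let inner = std::mem::replace(&mut current, parent);
                current.push(TokenTree::Delimited(delim, inner));
            }
        }
    }
    assert!(stack.is_empty(), "unbalanced delimiters");
    current
}

fn main() {
    // `foo ( bar )` captured as a flat buffer.
    let flat = vec![
        FlatToken::Token("foo".into()),
        FlatToken::OpenDelim('('),
        FlatToken::Token("bar".into()),
        FlatToken::CloseDelim(')'),
    ];

    // Replacing tokens at any "depth" reduces to `Vec::splice` on the flat
    // buffer, because nesting only exists after reconstruction.
    let mut spliced = flat.clone();
    spliced.splice(1..4, [FlatToken::Token("baz".into())]);

    // The "lazy" part: reconstruction is deferred behind a closure and only
    // runs if the token stream is actually needed.
    let lazy = move || make_token_stream(&flat);
    println!("{:?}", lazy());
    println!("{:?}", make_token_stream(&spliced));
}
```

In the actual patch the flat buffer is not even built eagerly: the closure inside `LazyTokenStream` captures a clone of the `TokenCursor` plus the recorded call count and replays the cursor only when (and if) the stream is requested.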
Diffstat (limited to 'compiler/rustc_parse/src/parser/attr.rs')
| -rw-r--r-- | compiler/rustc_parse/src/parser/attr.rs | 15 |
1 file changed, 14 insertions, 1 deletion
```diff
diff --git a/compiler/rustc_parse/src/parser/attr.rs b/compiler/rustc_parse/src/parser/attr.rs
index 98f94098bfc..73439643d69 100644
--- a/compiler/rustc_parse/src/parser/attr.rs
+++ b/compiler/rustc_parse/src/parser/attr.rs
@@ -4,7 +4,7 @@ use rustc_ast::attr;
 use rustc_ast::token::{self, Nonterminal};
 use rustc_ast_pretty::pprust;
 use rustc_errors::{error_code, PResult};
-use rustc_span::Span;
+use rustc_span::{sym, Span};
 use tracing::debug;
@@ -302,3 +302,16 @@ impl<'a> Parser<'a> {
         Err(self.struct_span_err(self.token.span, &msg))
     }
 }
+
+pub fn maybe_needs_tokens(attrs: &[ast::Attribute]) -> bool {
+    attrs.iter().any(|attr| {
+        if let Some(ident) = attr.ident() {
+            ident.name == sym::derive
+            // This might apply a custom attribute/derive
+            || ident.name == sym::cfg_attr
+            || !rustc_feature::is_builtin_attr_name(ident.name)
+        } else {
+            true
+        }
+    })
+}
```
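As a usage illustration of the helper this diff adds, here is a standalone sketch with simplified, hypothetical types (a plain `Attribute` struct and a toy `is_builtin_attr_name`). It mirrors the logic of the patch rather than the real rustc API: tokens must be kept whenever an attribute could end up invoking a proc macro.

```rust
// Standalone sketch of the `maybe_needs_tokens` check, with stand-in types.

/// Stand-in for `ast::Attribute`: `ident` is `None` when the attribute path
/// is not a single identifier (e.g. `#[rustfmt::skip]`), which is never builtin.
struct Attribute {
    ident: Option<String>,
}

/// Toy stand-in for `rustc_feature::is_builtin_attr_name`.
fn is_builtin_attr_name(name: &str) -> bool {
    matches!(name, "allow" | "deny" | "inline" | "cfg" | "derive" | "cfg_attr")
}

fn maybe_needs_tokens(attrs: &[Attribute]) -> bool {
    attrs.iter().any(|attr| {
        if let Some(name) = attr.ident.as_deref() {
            name == "derive"
                // `cfg_attr` might expand to a custom attribute or derive.
                || name == "cfg_attr"
                || !is_builtin_attr_name(name)
        } else {
            // Multi-segment attribute paths always count as possibly needing tokens.
            true
        }
    })
}

fn main() {
    let builtin_only = [Attribute { ident: Some("inline".into()) }];
    let with_derive = [Attribute { ident: Some("derive".into()) }];
    assert!(!maybe_needs_tokens(&builtin_only));
    assert!(maybe_needs_tokens(&with_derive));
    println!("checks passed");
}
```

Note the deliberate asymmetry: `derive` and `cfg_attr` are builtin attributes, yet they can expand into custom derives or attributes, so they force token retention even though the builtin-name check alone would let them through.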
