path: root/compiler/rustc_query_system/src/query
Age  Commit message  Author  Lines
2022-03-02  rename ErrorReported -> ErrorGuaranteed  mark  -6/+8
2022-02-27  Auto merge of #94084 - Mark-Simulacrum:drop-sharded, r=cjgillot  bors  -184/+149
Avoid query cache sharding code in single-threaded mode

In non-parallel compilers, sharding just adds needless overhead at compilation time (since there is statically only one shard anyway). This amounts to roughly a ~10 second reduction in bootstrap time, with overall neutral (some wins, some losses) performance results.

Parallel compiler performance should be largely unaffected by this PR; sharding is kept there.
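The shape of that change can be sketched as follows. This is a minimal sketch with hypothetical names and a hypothetical `parallel_compiler` cfg, not the actual `rustc_data_structures::sharded` code: when the shard count is statically 1, shard selection degenerates to always picking shard 0, so the single-threaded build pays no sharding cost.

```rust
use std::sync::Mutex;

// Hypothetical cfg: in a non-parallel build the count collapses to 1.
#[cfg(parallel_compiler)]
const SHARDS: usize = 32;
#[cfg(not(parallel_compiler))]
const SHARDS: usize = 1;

pub struct Sharded<T> {
    shards: Vec<Mutex<T>>,
}

impl<T: Default> Sharded<T> {
    pub fn new() -> Self {
        Sharded {
            shards: (0..SHARDS).map(|_| Mutex::new(T::default())).collect(),
        }
    }

    pub fn shard_count(&self) -> usize {
        self.shards.len()
    }

    // With SHARDS == 1 this always picks shard 0; the hash only
    // matters in a parallel build.
    pub fn get_shard_by_hash(&self, hash: u64) -> &Mutex<T> {
        &self.shards[hash as usize % SHARDS]
    }
}
```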
2022-02-23  rustc_errors: let `DiagnosticBuilder::emit` return a "guarantee of emission".  Eduard-Mihai Burtescu  -6/+6
2022-02-20  Refactor Sharded out of non-parallel active query map  Mark Rousskov  -27/+51
2022-02-20  Avoid sharding query caches entirely in single-threaded mode  Mark Rousskov  -20/+59
2022-02-20  Inline QueryStateShard into QueryState  Mark Rousskov  -17/+7
2022-02-20  Delete QueryLookup  Mark Rousskov  -34/+15
This was largely just caching the shard value at this point, which is not particularly useful -- at the use sites the key was being hashed nearby anyway.
2022-02-20  Move Sharded maps into each QueryCache impl  Mark Rousskov  -113/+44
2022-02-17  Remove SimpleDefKind  Mark Rousskov  -36/+10
2022-02-08  Switch QueryJobId to a single global counter  Mark Rousskov  -163/+91
This replaces the per-shard counters with a single global counter, simplifying the JobId struct down to just a u64 and removing the need to pipe a DepKind generic through a bunch of code. The performance implications on non-parallel compilers are likely minimal (this switches to `Cell<u64>` as the backing storage instead of a plain `u64`, but the latter was already inside a `RefCell`, so it's not really a significant divergence).

On parallel compilers, the cost of a single global u64 counter may be more significant: in theory it adds a serialization point. On the other hand, the counter could be given a thread-local component (or some similar structure) if that becomes worrisome. The new design is sufficiently simpler that it warrants the potential for slight changes down the line if/when parallel compilation becomes more of a default.

A u64 counter, instead of u32 (the old per-shard width), is chosen to avoid possibly overflowing it and causing problems; it is effectively impossible to overflow a u64 counter in this context.
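A minimal sketch of the single-counter design described above (names and storage are hypothetical, not the actual rustc_query_system code): in a non-parallel build a thread-local `Cell<u64>` suffices; a parallel build would swap in an `AtomicU64` or add a thread-local component.

```rust
use std::cell::Cell;

// Hypothetical stand-in for the simplified job id: just a u64,
// with no DepKind generic threaded through.
#[derive(Copy, Clone, PartialEq, Eq, Debug)]
pub struct QueryJobId(pub u64);

thread_local! {
    // Single counter instead of one counter per shard.
    static NEXT_JOB_ID: Cell<u64> = Cell::new(1);
}

pub fn next_job_id() -> QueryJobId {
    NEXT_JOB_ID.with(|counter| {
        let id = counter.get();
        // A u64 cannot realistically overflow here, but fail loudly if it does.
        counter.set(id.checked_add(1).expect("QueryJobId counter overflowed"));
        QueryJobId(id)
    })
}
```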
2022-01-08  Auto merge of #91919 - Aaron1011:query-recursive-read, r=michaelwoerister  bors  -2/+7
Don't perform any new queries while reading a query result on disk

In addition to being very confusing, this can cause us to add dep node edges between two queries that would not otherwise have an edge. We now panic if any new dep node edges are created during the deserialization of a query result. This requires serializing the full `AdtDef` to disk, instead of just serializing the `DefId` and invoking the `adt_def` query during deserialization. I'll probably split this up into several smaller PRs for perf runs.
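The "panic on new dep edges during deserialization" guard can be sketched with a thread-local flag (hypothetical names; the real check lives in the dep-graph code, not here): decoding runs inside a scope that sets the flag, and the edge-recording path asserts it is unset.

```rust
use std::cell::Cell;

thread_local! {
    // Set while a query result is being decoded from the on-disk cache.
    static DESERIALIZING: Cell<bool> = Cell::new(false);
}

// Wrap the decoding of a cached query result; any attempt to record a
// new dep edge inside `f` can then be detected and turned into a panic.
pub fn with_query_deserialization<R>(f: impl FnOnce() -> R) -> R {
    DESERIALIZING.with(|flag| {
        let prev = flag.replace(true);
        let result = f();
        flag.set(prev);
        result
    })
}

// Called from the (hypothetical) dep-edge recording path.
pub fn assert_no_new_dep_edges() {
    DESERIALIZING.with(|flag| {
        assert!(
            !flag.get(),
            "new dep node edge created during query result deserialization"
        );
    });
}
```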
2021-12-23  Address review comments  Aaron Hill  -8/+7
2021-12-23  Some cleanup  Aaron Hill  -5/+4
2021-12-23  Ban deps only during query loading from disk  Aaron Hill  -3/+9
2021-12-23  Error if we try to read dep during deserialization  Aaron Hill  -3/+4
2021-12-21  Add `#[rustc_clean(loaded_from_disk)]` to assert loading of query result  Aaron Hill  -0/+4
Currently, you can use `#[rustc_clean]` to assert that a particular query (technically, a `DepNode`) is green or red. However, a green `DepNode` does not mean that the query result was actually deserialized from disk - we might have never re-run a query that needed the result. Some incremental tests are written as regression tests for ICEs that occurred during query result decoding. Using `#[rustc_clean(loaded_from_disk="typeck")]`, you can now assert that the result of a particular query (e.g. `typeck`) was actually loaded from disk, in addition to being green.
2021-11-22  Manually outline error on incremental_verify_ich  Mark Rousskov  -24/+73
This reduces codegen for rustc_query_impl by 169k lines of LLVM IR, representing a 1.2% improvement.
2021-10-28  Enable verification for 1/32th of queries loaded from disk  Mark Rousskov  -1/+14
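The 1-in-32 sampling can be sketched as follows. This is purely illustrative: the counter-based sampler below just shows the shape; the actual compiler presumably keys the sample off something deterministic (such as the result fingerprint) rather than a per-thread load counter.

```rust
use std::cell::Cell;

thread_local! {
    // Hypothetical: count disk-cache loads on this thread.
    static LOADS: Cell<u64> = Cell::new(0);
}

// Returns true for roughly one load in 32; a positive result would
// trigger re-hashing the decoded value and comparing fingerprints.
pub fn should_verify_this_load() -> bool {
    LOADS.with(|c| {
        let n = c.get();
        c.set(n + 1);
        n % 32 == 0
    })
}
```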
2021-10-23  Do not require QueryCtxt for cache_on_disk.  Camille GILLOT  -1/+1
2021-10-23  Build the query vtable directly.  Camille GILLOT  -58/+18
2021-10-21  Do not depend on the stored value when trying to cache on disk.  Camille GILLOT  -5/+5
2021-10-20  Address review.  Camille GILLOT  -1/+1
2021-10-20  Compute query vtable manually.  Camille GILLOT  -27/+29
2021-10-20  Build jump table at runtime.  Camille GILLOT  -55/+16
2021-10-20  Invoke callbacks from rustc_middle.  Camille GILLOT  -7/+1
2021-10-20  Merge two query callbacks arrays.  Camille GILLOT  -9/+9
2021-10-20  Make hash_result an Option.  Camille GILLOT  -18/+10
2021-10-16  Adopt let_else across the compiler  est31  -7/+3
This performs a substitution of code following the pattern:

    let <id> = if let <pat> = ... { identity } else { ... : ! };

simplifying it to:

    let <pat> = ... else { ... : ! };

by adopting the let_else feature.
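A concrete instance of that substitution (my own illustrative example; at the time of this commit `let_else` was an unstable feature, since stabilized in Rust 1.65):

```rust
// Before: bind through an if-let whose else arm diverges.
fn first_char_if_let(s: &str) -> char {
    let c = if let Some(c) = s.chars().next() { c } else { return '?' };
    c
}

// After: let-else binds the pattern directly, with the same
// diverging else arm.
fn first_char_let_else(s: &str) -> char {
    let Some(c) = s.chars().next() else { return '?' };
    c
}
```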
2021-10-11  Remove built-in cache_hit tracking  Mark Rousskov  -22/+1
This was already only enabled in debug_assertions builds. Generally, most use cases that would use this could instead use the -Zself-profile flag, which also tracks cache hits (in all builds), so the extra cfgs and such are not really necessary. This is largely just a small cleanup, primarily intended to make other changes easier by avoiding the need to deal with this field.
2021-10-06  Query the fingerprint style during key reconstruction  Mark Rousskov  -2/+2
Keys can be reconstructed from fingerprints that are not DefPathHash, but then we cannot extract a DefId from them.
2021-10-03  Access StableHashingContext in rustc_query_system.  Camille GILLOT  -6/+5
2021-09-11  Auto merge of #78780 - cjgillot:req, r=Mark-Simulacrum  bors  -282/+189
Refactor query forcing

The control flow in those functions was very complex, with several layers of continuations. I tried to simplify the implementation while keeping essentially the same logic. Now, all code paths go through `try_execute_query` for the actual query execution. Communication with the `dep_graph` and the live caches are the only differences between query getting/ensuring/forcing.
2021-09-01  Remove redundant `Span` in `QueryJobInfo`  Noah Lev  -11/+8
Previously, `QueryJobInfo` was composed of two parts: a `QueryInfo` and a `QueryJob`. However, both `QueryInfo` and `QueryJob` have a `span` field holding the same value, so the `span` was recorded twice. Now, `QueryJobInfo` is composed of a `QueryStackFrame` (the other field of `QueryInfo`) and a `QueryJob`, so the `span` is only recorded once.
2021-08-27  Note that trait aliases cannot be recursive  Noah Lev  -9/+16
2021-08-27  Note that type aliases cannot be recursive  Noah Lev  -1/+42
2021-08-22  Use variable.  Camille GILLOT  -12/+5
2021-08-22  Unify `with_task` functions.  Camille GILLOT  -17/+7
Remove with_eval_always_task.
2021-08-22  Remove force_query_with_job.  Camille GILLOT  -88/+53
2021-08-22  Split try_execute_query.  Camille GILLOT  -16/+31
2021-08-22  Decouple JobOwner from cache.  Camille GILLOT  -73/+66
2021-08-22  Complete job outside of force_query_with_job.  Camille GILLOT  -14/+11
2021-08-22  Do not compute the dep_node twice.  Camille GILLOT  -33/+21
2021-08-22  Make all query forcing go through try_execute_query.  Camille GILLOT  -40/+31
try_execute_query is now able to centralize the path for query get/ensure/force. try_execute_query now takes the dep_node as a parameter, so it can accommodate `force`. This dep_node is an Option to avoid computing it in the `get` fast path. try_execute_query now returns both the result and the dep_node_index to allow the caller to handle the dep graph. The caller is responsible for marking the dependency.
2021-08-22  Remove try_mark_green_and_read.  Camille GILLOT  -3/+6
2021-08-22  Move assertion inwards.  Camille GILLOT  -14/+0
`with_task_impl` is only called from `with_eval_always_task` and `with_task`. The former is only used in query invocation, while the latter is also used to start the `tcx` and to trigger codegen. This move should not significantly change the number of calls to this assertion.
2021-08-22  Simplify control flow.  Camille GILLOT  -33/+28
2021-08-21  Improve errors for recursive type aliases  Noah Lev  -4/+8
2021-08-22  Only clone key when needed.  Camille GILLOT  -4/+6
2021-08-22  Move dep_graph checking into try_load_from_disk_and_cache_in_memory.  Camille GILLOT  -23/+12
2021-08-12  Prevent double panic when handling incremental fingerprint mismatch  Aaron Hill  -6/+27
When an incremental fingerprint mismatch occurs, we debug-print our `DepNode` and query result. Unfortunately, the debug printing process may cause us to run additional queries, which can result in a re-entrant fingerprint mismatch error. To avoid a double panic, this commit adds a thread-local variable to detect re-entrant calls.
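The re-entrancy detection can be sketched with a thread-local flag (hypothetical names, not the actual rustc_query_system code): the report-building code runs inside a scope that records whether a report is already in progress, so a re-entrant call can bail out with a plain message instead of panicking while already panicking.

```rust
use std::cell::Cell;

thread_local! {
    // True while a fingerprint-mismatch report is being built on
    // this thread.
    static INSIDE_MISMATCH_REPORT: Cell<bool> = Cell::new(false);
}

// Runs `body`, passing it whether a mismatch report was already in
// progress; debug-printing inside `body` may run queries and
// re-enter this function.
pub fn with_mismatch_guard<R>(body: impl FnOnce(bool) -> R) -> R {
    INSIDE_MISMATCH_REPORT.with(|flag| {
        let already_reporting = flag.replace(true);
        let result = body(already_reporting);
        flag.set(already_reporting);
        result
    })
}
```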