Fix backtraces with `-C panic=abort` on linux; emit unwind tables by default
The Linux backtrace unwinder relies on unwind tables to work properly; generating and printing a backtrace is done by, for example, the default panic hook.
Begin emitting unwind tables by default again with `-C panic=abort` (see history below) so that backtraces work.
Closes https://github.com/rust-lang/rust/issues/81902 which is **regression-from-stable-to-stable**
Closes https://github.com/rust-lang/rust/issues/94815
### History
Backtraces with `-C panic=abort` used to work in Rust 1.22 but broke in Rust 1.23, because in 1.23 we stopped emitting unwind tables with `-C panic=abort` (see https://github.com/rust-lang/rust/pull/45031 and https://github.com/rust-lang/rust/issues/81902#issuecomment-3046487084).
In 1.45 a workaround in the form of `-C force-unwind-tables=yes` was added (see https://github.com/rust-lang/rust/pull/69984).
`-C panic=abort` was added in [Rust 1.10](https://blog.rust-lang.org/2016/07/07/Rust-1.10/#what-s-in-1-10-stable), motivated by binary size and compile time. But given how confusing the resulting behavior has turned out to be, it is better to make the binary size optimization opt-in with `-C force-unwind-tables=no` rather than the default, since the current default breaks backtraces.
Besides, if binary size is a primary concern, there are many other tricks that can be used that have a higher impact.
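For illustration, a minimal reproducer (the file name and invocation here are illustrative):
```rust
// Build: rustc -C panic=abort main.rs
// Run:   RUST_BACKTRACE=1 ./main
// With unwind tables emitted (the new default), the default panic hook
// can print a usable backtrace before the process aborts.
fn main() {
    panic!("expected a backtrace here");
}
```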
# Release Note Entry Draft:
## Compatibility Notes
* [Fix backtraces with `-C panic=abort` on Linux by generating unwind tables by default](https://github.com/rust-lang/rust/pull/143613). Build with `-C force-unwind-tables=no` to keep omitting unwind tables.
try-job: aarch64-apple
try-job: armhf-gnu
try-job: aarch64-msvc-1
|
|
The Linux backtrace unwinder relies on unwind tables to work properly;
generating and printing a backtrace is done by, for example, the
default panic hook.
Begin emitting unwind tables by default again with `-C panic=abort` (see
history below) so that backtraces work.
History
=======
Backtraces with `-C panic=abort` used to work in Rust 1.22 but broke in
Rust 1.23, because in 1.23 we stopped emitting unwind tables with `-C
panic=abort` (see 24cc38e3b00).
In 1.45 (see cda994633ee) a workaround in the form
of `-C force-unwind-tables=yes` was added.
`-C panic=abort` was added in [Rust
1.10](https://blog.rust-lang.org/2016/07/07/Rust-1.10/#what-s-in-1-10-stable)
and the motivation was binary size and compile time. But given how
confusing the resulting behavior has turned out to be, it is better to
make the binary size optimization opt-in with `-C force-unwind-tables=no`
rather than the default, since the current default breaks backtraces.
Besides, if binary size is a primary concern, there are many other
tricks that can be used that have a higher impact.
|
|
debuginfo: add an unstable flag to write split DWARF to an explicit directory
Bazel requires knowledge of outputs from actions at analysis time, including file or directory names. To work around the lack of a predictable output name for dwo files, we group the dwo files in a subdirectory of `--out-dir` as a post-processing step before returning control to Bazel. Unfortunately, some debugging workflows rely on directly opening the dwo file rather than loading the merged dwp file, and our trick of moving the files breaks those users. We can't just hardlink the file or copy it, because with remote build execution we wouldn't end up with the un-moved file copied back to the developer's workstation. As a fix, we add this unstable flag that causes dwo files to be written to a build-system-controllable location, which lets Bazel hoover up the dwo files while the objects still record the correct path for them.
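A sketch of the intended behavior (the path logic and names are illustrative, not the actual implementation):
```rust
use std::path::{Path, PathBuf};

// With the new flag set, dwo files land in a directory the build system
// controls; without it, they stay next to the object files as before.
fn dwo_output_path(split_dwarf_out_dir: Option<&Path>, obj: &Path) -> PathBuf {
    let dwo = obj.with_extension("dwo");
    match split_dwarf_out_dir {
        Some(dir) => dir.join(dwo.file_name().expect("object file has a name")),
        None => dwo,
    }
}
```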
r? `@davidtwco`
|
|
compiler: remove AbiAlign inside TargetDataLayout
AbiAlign is a thin wrapper around Align, extant mostly because we used to track a separate quasi-notion of alignment that was never a real notion of alignment and removing all of it at once was too churny. This PR maintains AbiAlign usage in public API and most of the compiler, but direct access of these fields for TargetDataLayout is now in terms of Align only.
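An illustrative sketch of the access change (the alignment field name is hypothetical, not an actual `TargetDataLayout` field):
```rust
pub struct Align { pow2: u8 }

// Old: a thin wrapper that only really carried the ABI alignment.
pub struct AbiAlign { pub abi: Align }

pub struct TargetDataLayoutOld { pub i32_align: AbiAlign }
pub struct TargetDataLayoutNew { pub i32_align: Align }

fn old_access(dl: &TargetDataLayoutOld) -> &Align { &dl.i32_align.abi }
fn new_access(dl: &TargetDataLayoutNew) -> &Align { &dl.i32_align }
```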
|
|
TypeTree support in autodiff
# TypeTrees for Autodiff
## What are TypeTrees?
Memory layout descriptors for Enzyme. They tell Enzyme exactly how types are structured in memory so it can compute derivatives efficiently.
## Structure
```rust
struct TypeTree(Vec<Type>);

struct Type {
    offset: isize,   // byte offset (-1 = everywhere)
    size: usize,     // size in bytes
    kind: Kind,      // Float, Integer, Pointer, etc.
    child: TypeTree, // nested structure
}
```
## Example: `fn compute(x: &f32, data: &[f32]) -> f32`
**Input 0: `x: &f32`**
```rust
TypeTree(vec![Type {
    offset: -1, size: 8, kind: Pointer,
    child: TypeTree(vec![Type {
        offset: -1, size: 4, kind: Float,
        child: TypeTree::new()
    }])
}])
```
**Input 1: `data: &[f32]`**
```rust
TypeTree(vec![Type {
    offset: -1, size: 8, kind: Pointer,
    child: TypeTree(vec![Type {
        offset: -1, size: 4, kind: Float, // -1 = all elements
        child: TypeTree::new()
    }])
}])
```
**Output: `f32`**
```rust
TypeTree(vec![Type {
    offset: -1, size: 4, kind: Float,
    child: TypeTree::new()
}])
```
## Why Needed?
- Enzyme can't deduce complex type layouts from LLVM IR
- Prevents slow memory pattern analysis
- Enables correct derivative computation for nested structures
- Tells Enzyme which bytes are differentiable vs metadata
## What Enzyme Does With This Information:
Without TypeTrees (current state):
```llvm
; Enzyme sees generic LLVM IR:
define float @distance(ptr %p1, ptr %p2) {
; Has to guess what these pointers point to
; Slow analysis of all memory operations
; May miss optimization opportunities
}
```
With TypeTrees (our implementation):
```llvm
define "enzyme_type"="{[]:Float@float}" float ``@distance(``
ptr "enzyme_type"="{[]:Pointer}" %p1,
ptr "enzyme_type"="{[]:Pointer}" %p2
) {
; Enzyme knows exact type layout
; Can generate efficient derivative code directly
}
```
# TypeTrees - Offset and -1 Explained
## Type Structure
```rust
struct Type {
    offset: isize,   // WHERE this type starts
    size: usize,     // HOW BIG this type is
    kind: Kind,      // WHAT KIND of data (Float, Int, Pointer)
    child: TypeTree, // WHAT'S INSIDE (for pointers/containers)
}
```
## Offset Values
### Regular Offset (0, 4, 8, etc.)
**Specific byte position within a structure**
```rust
struct Point {
    x: f32,  // offset 0, size 4
    y: f32,  // offset 4, size 4
    id: i32, // offset 8, size 4
}
```
TypeTree for `&Point` (internal representation):
```rust
TypeTree(vec![
    Type { offset: 0, size: 4, kind: Float, child: TypeTree::new() },   // x at byte 0
    Type { offset: 4, size: 4, kind: Float, child: TypeTree::new() },   // y at byte 4
    Type { offset: 8, size: 4, kind: Integer, child: TypeTree::new() }, // id at byte 8
])
```
Generates LLVM:
```llvm
"enzyme_type"="{[]:Float@float}"
```
### Offset -1 (Special: "Everywhere")
**Means "this pattern repeats for ALL elements"**
#### Example 1: Array `[f32; 100]`
```rust
TypeTree(vec![Type {
    offset: -1,             // ALL positions
    size: 4,                // each f32 is 4 bytes
    kind: Float,            // every element is float
    child: TypeTree::new(),
}])
```
Instead of listing 100 separate `Type` entries with offsets `0, 4, 8, 12, ..., 396`.
#### Example 2: Slice `&[i32]`
```rust
// Pointer to slice data
TypeTree(vec![Type {
    offset: -1, size: 8, kind: Pointer,
    child: TypeTree(vec![Type {
        offset: -1,             // ALL slice elements
        size: 4,                // each i32 is 4 bytes
        kind: Integer,
        child: TypeTree::new()
    }])
}])
```
#### Example 3: Mixed Structure
```rust
struct Container {
header: i64, // offset 0
data: [f32; 1000], // offset 8, but elements use -1
}
```
```rust
TypeTree(vec![
    Type { offset: 0, size: 8, kind: Integer, child: TypeTree::new() }, // header
    Type {
        offset: 8, size: 4000, kind: Pointer,
        child: TypeTree(vec![Type {
            offset: -1, size: 4, kind: Float, // ALL array elements
            child: TypeTree::new()
        }])
    }
])
```
|
|
This maintains AbiAlign usage in public API and most of the compiler,
but direct access of these fields is now in terms of Align only.
|
|
Bazel requires knowledge of outputs from actions at analysis time,
including file or directory names. To work around the lack of a
predictable output name for dwo files, we group the dwo files in a
subdirectory of --out-dir as a post-processing step before returning
control to Bazel. Unfortunately, some debugging workflows rely on
directly opening the dwo file rather than loading the merged dwp file,
and our trick of moving the files breaks those users. We can't just
hardlink the file or copy it, because with remote build execution we
wouldn't end up with the un-moved file copied back to the developer's
workstation. As a fix, we add this unstable flag that causes dwo files
to be written to a build-system-controllable location, which lets
Bazel hoover up the dwo files while the objects still record the
correct path for them.
|
|
r=Urgau,davidtwco
Extends AArch64 branch protection support to include GCS
Extends existing support for AArch64 branch protection to include support for [Guarded Control Stacks](https://community.arm.com/arm-community-blogs/b/architectures-and-processors-blog/posts/arm-a-profile-architecture-2022#guarded-control-stack-gcs:~:text=Extraction%20or%20tracking.-,Guarded%20Control%20Stack%20(GCS),-With%20the%202022).
|
|
|
|
|
|
This helps us avoid the hardcoded lists elsewhere.
|
|
Signed-off-by: Karan Janthe <karanjanthe@gmail.com>
|
|
|
|
- Adds option to rustc config to enable GCS
- Passes `guarded-control-stack` flag to llvm if enabled
|
|
This schema is helpful for people writing custom target spec JSON. It
can provide autocomplete in the editor, and also serves as documentation
when there are documentation comments on the structs, as `schemars` will
put them in the schema.
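A minimal sketch of the mechanism (the struct and field here are illustrative, not the real target spec types; assumes `schemars` and `serde_json` as dependencies):
```rust
use schemars::{schema_for, JsonSchema};

#[derive(JsonSchema)]
struct TargetSpecJson {
    /// The LLVM data layout string. Doc comments like this one end up
    /// as `description` entries in the generated schema.
    data_layout: Option<String>,
}

fn main() {
    let schema = schema_for!(TargetSpecJson);
    println!("{}", serde_json::to_string_pretty(&schema).unwrap());
}
```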
|
|
|
|
r=rcvalle
Sanitizers target modifiers
Depends on bool flag fix: https://github.com/rust-lang/rust/pull/138483.
Some sanitizers need to be target modifiers, and some do not. For now, we should mark all sanitizers as target modifiers except these: AddressSanitizer, LeakSanitizer.
For kCFI, the helper flag `-Zsanitizer-cfi-normalize-integers` should also be a target modifier.
Many test errors were caused by sanitizer flags that were inconsistent with the std deps. Tests are fixed with `-C unsafe-allow-abi-mismatch`.
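A hedged sketch of what marking a flag as a target modifier amounts to (types and message are illustrative, not rustc internals):
```rust
#[derive(Clone, Debug, PartialEq)]
struct SanitizerModifiers {
    kcfi: bool,
    cfi_normalize_integers: bool,
}

// Each crate records its target-modifier values; linking crates whose
// values differ is an error unless the mismatch is explicitly allowed.
fn check_modifiers(
    local: &SanitizerModifiers,
    dep: &SanitizerModifiers,
    dep_name: &str,
    unsafe_allow_abi_mismatch: bool,
) -> Result<(), String> {
    if local != dep && !unsafe_allow_abi_mismatch {
        return Err(format!(
            "crate `{dep_name}` was compiled with {dep:?}, but the current \
             crate uses {local:?}; pass `-C unsafe-allow-abi-mismatch` to override"
        ));
    }
    Ok(())
}
```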
|
|
Signed-off-by: Jonathan Brouwer <jonathantbrouwer@gmail.com>
|
|
|
|
|
|
|
|
Lint buffering currently relies on a giant enum `BuiltinLintDiag`
containing all the lints that might potentially get buffered. In
addition to being an unwieldy enum in a central crate, this also makes
`rustc_lint_defs` a build bottleneck: it depends on various types from
various crates (with a steady pressure to add more), and many crates
depend on it.
Having all of these variants in a separate crate also prevents detecting
when a variant becomes unused, which we can do with a dedicated type
defined and used in the same crate.
Refactor this to use a dyn trait, to allow using `LintDiagnostic` types
directly.
This requires boxing, but all of this is already on the slow path
(emitting an error).
Because the existing `BuiltinLintDiag` requires some additional types in
order to decorate some variants, which are only available later in
`rustc_lint`, use an enum `DecorateDiagCompat` to handle both the `dyn
LintDiagnostic` case and the `BuiltinLintDiag` case.
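A self-contained sketch of the resulting shape (all names are simplified stand-ins for the rustc types):
```rust
// Stand-in for a diagnostic under construction.
struct Diag { message: String }

// Stand-in for `LintDiagnostic`: a buffered lint knows how to decorate
// the diagnostic when it is finally emitted.
trait LintDiagnostic {
    fn decorate(self: Box<Self>, diag: &mut Diag);
}

// Stand-in for the old central enum.
enum BuiltinLintDiag { UnusedImport(String) }

// Bridges both worlds: a boxed trait object for the new path, or a
// variant of the old enum. Boxing is acceptable because emitting a
// lint is already the slow path.
enum DecorateDiagCompat {
    Dyn(Box<dyn LintDiagnostic + Send>),
    Builtin(BuiltinLintDiag),
}
```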
|
|
modifiers with custom consistency check function
|
|
|
|
This is intended to be used for Linux kernel RETPOLINE builds.
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
|
|
|
|
Deduplicate -L search paths
For each `-L` path passed to the compiler, we eagerly scan the whole directory. If it has a lot of files, that results in a lot of allocations, so it is wasteful to repeat the scan when some `-L` paths are duplicated (which can happen, e.g., in the situation in the linked issue).
This PR both deduplicates the args, and also teaches rustdoc not to pass duplicated args to merged doctests.
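The deduplication itself is simple; roughly (illustrative, not the actual rustc code):
```rust
use std::collections::HashSet;
use std::path::PathBuf;

// Keep only the first occurrence of each `-L` path, so each directory
// is scanned at most once.
fn dedup_search_paths(paths: Vec<PathBuf>) -> Vec<PathBuf> {
    let mut seen = HashSet::new();
    paths.into_iter().filter(|p| seen.insert(p.clone())).collect()
}
```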
Fixes: https://github.com/rust-lang/rust/issues/145375
|
|
strip prefix of temporary file names when they exceed the filesystem name length limit
When doing LTO, rustc generates file names by concatenating many pieces of information.
In the case of this testcase, it concatenates the crate name and the Rust file name, plus some hash and the extension. In some other cases it concatenates even more information, reducing the maximum effective crate name to about 110 chars on Linux filesystems, where the filename max length is 255.
This commit ensures that the temporary file names are limited in size, while still reasonably ensuring uniqueness (by hashing the stripped part).
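A sketch of the truncate-and-hash scheme (assumes ASCII file names; the helper and constants are illustrative):
```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Keep `name` within `max_len` bytes (e.g. 255 on most Linux
// filesystems): truncate, then append a hash of the full original name
// so distinct long names remain distinct.
fn limit_file_name_length(name: &str, max_len: usize) -> String {
    if name.len() <= max_len {
        return name.to_owned();
    }
    let mut hasher = DefaultHasher::new();
    name.hash(&mut hasher);
    let digest = format!("{:016x}", hasher.finish()); // 16 hex chars
    let keep = max_len - digest.len() - 1; // leave room for the '-'
    format!("{}-{}", &name[..keep], digest)
}
```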
Fix: rust-lang/rust#49914
|
|
|
|
|
|
strip prefix of temporary file names when they exceed the filesystem
name length limit
When doing LTO, rustc generates file names by concatenating many pieces
of information.
In the case of this testcase, it concatenates the crate name and the
Rust file name, plus some hash and the extension.
In some other cases it concatenates even more information, reducing the
maximum effective crate name to about 110 chars on Linux filesystems,
where the filename max length is 255.
This commit ensures that the temporary file names are limited in size,
while still reasonably ensuring uniqueness (by hashing the stripped
part).
|
|
|
|
|
|
This flag turned out to be less useful than anticipated, and interferes with
work towards expansion support.
|
|
Rollup of 7 pull requests
Successful merges:
- rust-lang/rust#144072 (update `Atomic*::from_ptr` and `Atomic*::as_ptr` docs)
- rust-lang/rust#144151 (`tests/ui/issues/`: The Issues Strike Back [1/N])
- rust-lang/rust#144300 (Clippy fixes for miropt-test-tools)
- rust-lang/rust#144399 (Add a ratchet for moving all standard library tests to separate packages)
- rust-lang/rust#144472 (str: Mark unstable `round_char_boundary` feature functions as const)
- rust-lang/rust#144503 (Various refactors to the codegen coordinator code (part 3))
- rust-lang/rust#144530 (coverage: Infer `instances_used` from `pgo_func_name_var_map`)
r? `@ghost`
`@rustbot` modify labels: rollup
|
|
Various refactors to the codegen coordinator code (part 3)
Continuing from https://github.com/rust-lang/rust/pull/144062, this removes an option without any known users, uses the `object` crate instead of LLVM for getting the LTO bitcode, and improves the coordinator channel handling.
|
|
Some `let chains` clean-up
Not sure if this kind of clean-up is welcome given its size, but I decided to try one out.
r? compiler
|
|
|
|
Nobody seems to actually use this, and it adds some extra
complexity to the already rather complex codegen coordinator code.
It is also not supported by any backend other than the LLVM backend.
|
|
|
|
Use serde for target spec json deserialize
The previous manual parsing of `serde_json::Value` was a lot of complicated code and extremely error-prone. It was full of janky behavior like sometimes ignoring type errors, sometimes erroring for type errors, sometimes warning for type errors, and sometimes just ICEing for type errors (the icing on the top).
Additionally, many of the error messages about allowed values were out of date because they were in a completely different place than the FromStr impls. Overall, the system caused confusion for users.
I also found the old deserialization code annoying to read. Whenever a `key!` invocation was found, one first had to look for the right macro arm, and go-to-definition could not help.
This PR replaces all this manual parsing with a 2-step process involving serde.
First, the string is parsed into a `TargetSpecJson` struct. This struct is a 1:1 representation of the spec JSON. It already parses all the enums and is very simple to read and write.
Then, the fields from this struct are copied into the actual `Target`. There are a few reasons for this two-step process instead of deserializing directly into a `Target`:
1. There are a few transformations performed between the two formats
2. The default logic is implemented this way. Otherwise all the default field values would have to be spelled out again, which is suboptimal. With this logic, they fall out naturally, because everything in the JSON struct is an `Option`.
Overall, the mapping is pretty simple, with the vast majority of fields just doing a 1:1 mapping that is captured by two macros. I have deliberately avoided making the macros generic to keep them simple.
All the `FromStr` impls now have the error message right inside them, which increases the chance of it being up to date. Some "`from_str`" impls were turned into proper `FromStr` impls to support this.
The new code is much less involved, delegating all the JSON parsing logic to serde, without any manual type matching.
This change introduces a few breaking changes for consumers. While it is possible to use this format on stable, it is very much subject to change, so breaking changes are expected. The hope is also that, because of the stricter behavior, breaking changes are easier to deal with, as they come with clearer error messages.
1. Invalid types now always error, everywhere. Previously, they would sometimes error and sometimes just be ignored (which meant the user's JSON was still broken, just silently!)
2. This now makes use of `deny_unknown_fields` instead of just warning on unused fields, which was done previously. Serde doesn't make it easy to get such warning behavior, which was the primary reason that this now changed. But I think error behavior is very reasonable too. If someone has random stale fields in their JSON, it is likely because these fields did something at some point but no longer do, and the user likely wants to be informed of this so they can figure out what to do.
This is also relevant for the future. If we remove a field but someone has it set, it probably makes sense for them to take a look whether they need this and should look for alternatives, or whether they can just delete it. Overall, the JSON is made more explicit.
This is the only expected breakage, but there could also be small breakage from small mistakes. All targets roundtrip though, so it can't be anything too major.
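A condensed sketch of the two-step shape (field names and defaults are illustrative; assumes `serde` and `serde_json` as dependencies):
```rust
use serde::Deserialize;

// Step 1: a strict 1:1 mirror of the spec JSON. Every field is an
// `Option`, and unknown fields are hard errors.
#[derive(Deserialize)]
#[serde(deny_unknown_fields)]
struct TargetSpecJson {
    llvm_target: Option<String>,
    cpu: Option<String>,
}

struct Target {
    llvm_target: String,
    cpu: String,
}

// Step 2: copy into the real `Target`, applying defaults wherever the
// JSON left a field unset.
fn parse_target(json: &str) -> Result<Target, serde_json::Error> {
    let spec: TargetSpecJson = serde_json::from_str(json)?;
    Ok(Target {
        llvm_target: spec.llvm_target.unwrap_or_else(|| "unknown".into()),
        cpu: spec.cpu.unwrap_or_else(|| "generic".into()),
    })
}
```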
fixes rust-lang/rust#144153
|
|
The previous manual parsing of `serde_json::Value` was a lot of
complicated code and extremely error-prone. It was full of janky
behavior like sometimes ignoring type errors, sometimes erroring for
type errors, sometimes warning for type errors, and sometimes just
ICEing for type errors (the icing on the top).
Additionally, many of the error messages about allowed values were out
of date because they were in a completely different place than the
FromStr impls. Overall, the system caused confusion for users.
I also found the old deserialization code annoying to read. Whenever a
`key!` invocation was found, one first had to look for the right macro
arm, and go-to-definition could not help.
This PR replaces all this manual parsing with a 2-step process involving
serde.
First, the string is parsed into a `TargetSpecJson` struct. This struct
is a 1:1 representation of the spec JSON. It already parses all the
enums and is very simple to read and write.
Then, the fields from this struct are copied into the actual `Target`.
There are a few reasons for this two-step process instead of
deserializing directly into a `Target`:
1. There are a few transformations performed between the two formats
2. The default logic is implemented this way. Otherwise all the default
field values would have to be spelled out again, which is
suboptimal. With this logic, they fall out naturally, because
everything in the JSON struct is an `Option`.
Overall, the mapping is pretty simple, with the vast majority of fields
just doing a 1:1 mapping that is captured by two macros. I have
deliberately avoided making the macros generic to keep them simple.
All the `FromStr` impls now have the error message right inside them,
which increases the chance of it being up to date. Some "`from_str`"
impls were turned into proper `FromStr` impls to support this.
The new code is much less involved, delegating all the JSON parsing
logic to serde, without any manual type matching.
This change introduces a few breaking changes for consumers. While it is
possible to use this format on stable, it is very much subject to
change, so breaking changes are expected. The hope is also that,
because of the stricter behavior, breaking changes are easier to deal with,
as they come with clearer error messages.
1. Invalid types now always error, everywhere. Previously, they would
sometimes error and sometimes just be ignored (which meant the user's
JSON was still broken, just silently!)
2. This now makes use of `deny_unknown_fields` instead of just warning
on unused fields, which was done previously. Serde doesn't make it
easy to get such warning behavior, which was the primary reason that
this now changed. But I think error behavior is very reasonable too.
If someone has random stale fields in their JSON, it is likely
because these fields did something at some point but no longer do,
and the user likely wants to be informed of this so they can figure
out what to do.
This is also relevant for the future. If we remove a field but
someone has it set, it probably makes sense for them to take a look
whether they need this and should look for alternatives, or whether
they can just delete it. Overall, the JSON is made more explicit.
This is the only expected breakage, but there could also be small
breakage from small mistakes. All targets roundtrip though, so it can't
be anything too major.
|
|
code generation
|
|
Rollup of 11 pull requests
Successful merges:
- rust-lang/rust#142300 (Disable `tests/run-make/mte-ffi` because no CI runners have MTE extensions enabled)
- rust-lang/rust#143271 (Store the type of each GVN value)
- rust-lang/rust#143293 (fix `-Zsanitizer=kcfi` on `#[naked]` functions)
- rust-lang/rust#143719 (Emit warning when there is no space between `-o` and arg)
- rust-lang/rust#143846 (pass --gc-sections if -Zexport-executable-symbols is enabled and improve tests)
- rust-lang/rust#143891 (Port `#[coverage]` to the new attribute system)
- rust-lang/rust#143967 (constify `Option` methods)
- rust-lang/rust#144008 (Fix false positive double negations with macro invocation)
- rust-lang/rust#144010 (Boostrap: add warning on `optimize = false`)
- rust-lang/rust#144049 (rustc-dev-guide subtree update)
- rust-lang/rust#144056 (Copy GCC sources into the build directory even outside CI)
r? `@ghost`
`@rustbot` modify labels: rollup
|
|
Emit warning when there is no space between `-o` and arg
Closes rust-lang/rust#142812
`getopt` doesn't seem to have an API to check this, so we have to check the args manually.
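A sketch of the manual check (the heuristic and message are illustrative):
```rust
// Warn when an argument like `-ofoo` glues the value to `-o`, which is
// easy to produce by mistyping a long flag such as `--out-dir`.
fn warn_on_glued_output_arg(args: &[String]) {
    for arg in args {
        if arg.starts_with("-o") && !arg.starts_with("--") && arg.len() > 2 {
            eprintln!("warning: no space between `-o` and its argument: `{arg}`");
        }
    }
}
```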
r? compiler
|
|
`-Zhigher-ranked-assumptions`: Consider WF of coroutine witness when proving outlives assumptions
### TL;DR
This PR introduces an unstable flag `-Zhigher-ranked-assumptions` which tests out a new algorithm for dealing with some of the higher-ranked outlives problems that come from auto trait bounds on coroutines. See:
* rust-lang/rust#110338
While it doesn't fix all of the issues, it certainly fixed many of them, so I'd like to get this landed so people can test the flag on their own code.
### Background
Consider, for example:
```rust
use std::future::Future;
trait Client {
type Connecting<'a>: Future + Send
where
Self: 'a;
fn connect(&self) -> Self::Connecting<'_>;
}
fn call_connect<C>(c: C) -> impl Future + Send
where
C: Client + Send + Sync,
{
async move { c.connect().await }
}
```
Due to the fact that we erase the lifetimes in a coroutine, we can think of the interior type of the async block as something like: `exists<'r, 's> { C, &'r C, C::Connecting<'s> }`. The first field is the `c` we capture, the second is the auto-ref that we perform on the call to `.connect()`, and the third is the resulting future we're awaiting at the first and only await point. Note that every region is uniquified differently in the interior types.
For the async block to be `Send`, we must prove that both of the interior types are `Send`. First, we have an `exists<'r, 's>` binder, which needs to be instantiated universally since we treat the regions in this binder as *unknown*[^exist]. This gives us two types: `{ &'!r C, C::Connecting<'!s> }`. Proving `&'!r C: Send` is easy due to a [`Send`](https://doc.rust-lang.org/nightly/std/marker/trait.Send.html#impl-Send-for-%26T) impl for references.
Proving `C::Connecting<'!s>: Send` can only be done via the item bound, which then requires `C: '!s` to hold (due to the `where Self: 'a` on the associated type definition). Unfortunately, we don't know that `C: '!s` since we stripped away any relationship between the interior type and the param `C`. This leads to a bogus borrow checker error today!
### Approach
Coroutine interiors are well-formed by virtue of being borrow-checked; as long as their callers invoke the parent functions in a well-formed way, the substitutions should also be well-formed. Therefore, in our example above, we should be able to deduce the assumption that `C: '!s` holds from the well-formedness of the interior type `C::Connecting<'!s>`.
This PR introduces the notion of *coroutine assumptions*, which are the outlives assumptions that we can assume hold due to the well-formedness of a coroutine's interior types. These are computed alongside the coroutine types in the `CoroutineWitnessTypes` struct. When we instantiate the binder when proving an auto trait for a coroutine, we instantiate the `CoroutineWitnessTypes` and stash these newly instantiated assumptions in the region storage in the `InferCtxt`. Later on in lexical region resolution or MIR borrowck, we use these registered assumptions to discharge any placeholder outlives obligations that we would otherwise not be able to prove.
### How well does it work?
I've added a ton of tests of different reported situations that users have shared on issues like rust-lang/rust#110338, and an (anecdotally) large number of those examples end up working straight out of the box! Some limitations are described below.
### How badly does it not work?
The behavior today is quite rudimentary, since we currently discharge the placeholder assumptions pretty early in region resolution. This manifests itself as some limitations on the code that we accept.
For example, `tests/ui/async-await/higher-ranked-auto-trait-11.rs` continues to fail. In that test, we must prove that a placeholder is equal to a universal for a param-env candidate to hold when proving an auto trait, e.g. `'!1 = 'a` is required to prove `T: Trait<'!1>` in a param-env that has `T: Trait<'a>`. Unfortunately, at that point in the MIR body, we only know that the placeholder is equal to some body-local existential NLL var `'?2`, which only gets equated to the universal `'a` when being stored into the return local later on in MIR borrowck.
This could be fixed by integrating these assumptions into the type outlives machinery in a more first-class way, and delaying things to the end of MIR typeck when we know the full relationship between existential and universal NLL vars. Doing this integration is quite difficult today.
`tests/ui/async-await/higher-ranked-auto-trait-11.rs` fails because we don't compute the full transitive outlives relations between placeholders. In that test, we have in our region assumptions that some `'!1 = '!2` and `'!2 = '!3`, but we must prove `'!1 = '!3`.
This can be fixed by computing the set of coroutine outlives assumptions in a more transitive way, or as I mentioned above, integrating these assumptions into the type outlives machinery in a more first-class way, since it's already responsible for the transitive outlives assumptions of universals.
### Moving forward
I'm still quite happy with this implementation, and I'd like to land it for testing. I may work on overhauling both the way we compute these coroutine assumptions and also how we deal with the assumptions during (lexical/nll) region checking. But for now, I'd like to give users a chance to try out this new `-Zhigher-ranked-assumptions` flag to uncover more shortcomings.
[^exist]: Instantiating this binder with infer regions would be incomplete, since we'd be asking for *some* instantiation of the interior types, not proving something for *all* instantiations of the interior types.
|
|
Signed-off-by: xizheyin <xizheyin@smail.nju.edu.cn>
|
|
|
|
|
|
|