Include id in Thread's Debug implementation
Since Rust 1.19.0, `id` is a stable method, so there is no reason not to include it in the `Debug` implementation.
|
|
Add some tests
Closes #52977
It seems that there are no tests for this issue, so I opened this PR.
Off-topic: I noticed [this test](https://github.com/rust-lang/rust/blob/master/src/test/ui/existential_types/nested_existential_types.rs)'s indentation is off; may I include a commit to fix it here, or should I open a separate PR?
r? @oli-obk
|
|
Add `Default` to `std::alloc::System`
`System` is a unit struct, so it can be constructed without any additional information and its `Default` implementation is a no-op. However, in generic code, a `T: Default` bound may appear, as in
```rust
#[derive(Default)]
struct Foo<A> {
allocator: A
}
```
Does this need a feature gate?
Should I also add `PartialEq/Eq/PartialOrd/Ord/Hash`?
|
|
fix: Make incremental artifact deletion more robust
Should fix the intermittent errors reported in #57958
cc #48614
|
|
Generalize diagnostic for `x = y` where `bool` is the expected type
Extracted out of https://github.com/rust-lang/rust/pull/59288.
Currently we special case a diagnostic for `if x = y { ...` since the expected type is `bool` in this case and we instead suggest `if x == y`. This PR generalizes this such that given an expression of form `x = y` (`ExprKind::Assign(..)`) where the expected type is `bool`, we emit a suggestion `x == y`.
r? @oli-obk
Let's do a perf run to make sure this was not the source of regressions in #59288.
|
|
Renames `EvalContext` to `InterpretCx`
This PR renames `EvalContext` to `InterpretCx` in `src/librustc_mir`.
This PR is related to #54395.
|
|
Reject integer suffix when tuple indexing
Fix #59418.
r? @varkor
|
|
[CI] record docker image info for reuse
This writes an extra `dist/image-$image.txt` which contains the S3 URL
of the cached image and the `sha256` digest of the docker entry point.
This will be uploaded with the rest of the deployed artifacts in the
Travis `after_success` script.
cc rust-lang/rustup.rs#1724
r? @alexcrichton
|
|
Refactor tuple comparison tests
|
|
Make `ptr::eq` documentation mention fat-pointer behavior
Resolves #59214
|
|
add rustfix-able suggestions to trim_{left,right} deprecations
Fixes #53802 (technically already fixed by #58002, but that issue is about these methods).
|
|
adjust MaybeUninit API to discussions
- `uninitialized` -> `uninit`
- `into_initialized` -> `assume_init`
- `read_initialized` -> `read`
- `set` -> `write`
|
|
Make ASCII case conversions more than 4× faster
Reformatted output of `./x.py bench src/libcore --test-args ascii` below. The `libcore` benchmark calls `[u8]::make_ascii_lowercase`. `lookup` has code (effectively) identical to that before this PR, and ~~`branchless`~~ `mask_shifted_bool_match_range` after this PR.
~~See [code comments](https://github.com/rust-lang/rust/pull/59283/commits/ce933f77c865a15670855ac5941fe200752b739f#diff-01076f91a26400b2db49663d787c2576R3796) in `u8::to_ascii_uppercase` in `src/libcore/num/mod.rs` for an explanation of the branchless algorithm.~~
**Update:** the algorithm was simplified while keeping the performance. See the `branchless` vs. `mask_shifted_bool_match_range` benchmarks.
Credits to @raphlinus for the idea in https://twitter.com/raphlinus/status/1107654782544736261, which extends this algorithm to “fake SIMD” on `u32` to convert four bytes at a time. The `fake_simd_u32` benchmark implements this with [`let (before, aligned, after) = bytes.align_to_mut::<u32>()`](https://doc.rust-lang.org/std/primitive.slice.html#method.align_to_mut). Note however that this is buggy when addition carries/overflows into the next byte (which does not happen if the input is known to be ASCII).
This could be fixed (to optimize `[u8]::make_ascii_lowercase` and `[u8]::make_ascii_uppercase` in `src/libcore/slice/mod.rs`) either with some more bitwise trickery that I didn’t quite figure out, or by using “real” SIMD intrinsics for byte-wise addition. I did not pursue this however because the current (incorrect) fake SIMD algorithm is only marginally faster than the one-byte-at-a-time branchless algorithm. This is because LLVM auto-vectorizes the latter, as can be seen on https://rust.godbolt.org/z/anKtbR.
Benchmark results on Linux x64 with Intel i7-7700K: (updated from https://github.com/rust-lang/rust/pull/59283#issuecomment-474146863)
```text
6830 bytes string:
alloc_only ... bench: 112 ns/iter (+/- 0) = 62410 MB/s
black_box_read_each_byte ... bench: 1,733 ns/iter (+/- 8) = 4033 MB/s
lookup_table ... bench: 1,766 ns/iter (+/- 11) = 3958 MB/s
branch_and_subtract ... bench: 417 ns/iter (+/- 1) = 16762 MB/s
branch_and_mask ... bench: 401 ns/iter (+/- 1) = 17431 MB/s
branchless ... bench: 365 ns/iter (+/- 0) = 19150 MB/s
libcore ... bench: 367 ns/iter (+/- 1) = 19046 MB/s
fake_simd_u32 ... bench: 361 ns/iter (+/- 2) = 19362 MB/s
fake_simd_u64 ... bench: 361 ns/iter (+/- 1) = 19362 MB/s
mask_mult_bool_branchy_lookup_table ... bench: 6,309 ns/iter (+/- 19) = 1107 MB/s
mask_mult_bool_lookup_table ... bench: 4,183 ns/iter (+/- 29) = 1671 MB/s
mask_mult_bool_match_range ... bench: 339 ns/iter (+/- 0) = 20619 MB/s
mask_shifted_bool_match_range ... bench: 339 ns/iter (+/- 1) = 20619 MB/s
32 bytes string:
alloc_only ... bench: 15 ns/iter (+/- 0) = 2133 MB/s
black_box_read_each_byte ... bench: 29 ns/iter (+/- 0) = 1103 MB/s
lookup_table ... bench: 24 ns/iter (+/- 4) = 1333 MB/s
branch_and_subtract ... bench: 16 ns/iter (+/- 0) = 2000 MB/s
branch_and_mask ... bench: 16 ns/iter (+/- 0) = 2000 MB/s
branchless ... bench: 16 ns/iter (+/- 0) = 2000 MB/s
libcore ... bench: 15 ns/iter (+/- 0) = 2133 MB/s
fake_simd_u32 ... bench: 17 ns/iter (+/- 0) = 1882 MB/s
fake_simd_u64 ... bench: 16 ns/iter (+/- 0) = 2000 MB/s
mask_mult_bool_branchy_lookup_table ... bench: 42 ns/iter (+/- 0) = 761 MB/s
mask_mult_bool_lookup_table ... bench: 35 ns/iter (+/- 0) = 914 MB/s
mask_mult_bool_match_range ... bench: 16 ns/iter (+/- 0) = 2000 MB/s
mask_shifted_bool_match_range ... bench: 16 ns/iter (+/- 0) = 2000 MB/s
7 bytes string:
alloc_only ... bench: 14 ns/iter (+/- 0) = 500 MB/s
black_box_read_each_byte ... bench: 22 ns/iter (+/- 0) = 318 MB/s
lookup_table ... bench: 16 ns/iter (+/- 0) = 437 MB/s
branch_and_subtract ... bench: 16 ns/iter (+/- 0) = 437 MB/s
branch_and_mask ... bench: 16 ns/iter (+/- 0) = 437 MB/s
branchless ... bench: 19 ns/iter (+/- 0) = 368 MB/s
libcore ... bench: 20 ns/iter (+/- 0) = 350 MB/s
fake_simd_u32 ... bench: 18 ns/iter (+/- 0) = 388 MB/s
fake_simd_u64 ... bench: 21 ns/iter (+/- 0) = 333 MB/s
mask_mult_bool_branchy_lookup_table ... bench: 20 ns/iter (+/- 0) = 350 MB/s
mask_mult_bool_lookup_table ... bench: 19 ns/iter (+/- 0) = 368 MB/s
mask_mult_bool_match_range ... bench: 19 ns/iter (+/- 0) = 368 MB/s
mask_shifted_bool_match_range ... bench: 19 ns/iter (+/- 0) = 368 MB/s
```
|
|
Add suggestion to use `&*var` when `&str: From<String>` is expected
Fix #53879.
|
|
librustc_interface => 2018
r? @oli-obk
This will likely produce an ICE for some reason... so super-WIP.
|
|
librustc_driver => 2018
Transitions `librustc_driver` to Rust 2018; cc #58099
r? @Centril
|
|
syntax: Remove warning for unnecessary path disambiguators
`rustfmt` is now stable and it removes unnecessary turbofishes, so removing the warning as discussed in https://github.com/rust-lang/rust/pull/43540 (where it was introduced).
One less hardcoded warning.
Closes https://github.com/rust-lang/rust/issues/58055
r? @nikomatsakis
|
|
Make some lints incremental
Blocked on https://github.com/rust-lang/rust/pull/57253
r? @michaelwoerister
|
|
|
The existing `KeywordIdents` lint blindly scans the token stream of a
macro invocation or macro definition. It does not attempt to parse the
input, which means it cannot distinguish between occurrences of `dyn` that
are truly instances of it as an identifier (e.g. `let dyn = 3;`)
versus occurrences that follow its usage as a contextual keyword (e.g.
the type `Box<dyn Trait>`).
In an ideal world the lint would parse the token stream in order to
distinguish such occurrences; but in general we cannot do this,
because a `macro_rules!` definition does not specify what parsing
contexts the macro being defined is allowed to be used within.
So rather than put a lot of work into attempting to come up with a
more precise but still incomplete solution, I am just taking the
shortcut of not linting any instance of `dyn` under a macro. This prevents
`rustfix` from injecting bugs into legal 2015 edition code.
|
|
Refactor InferenceFudger (née RegionFudger)
- Rename `RegionFudger` (and related methods) to `InferenceFudger`.
- Take integer and float inference variables into account.
- Refactor `types_created_since_snapshot` and `vars_created_since_snapshot` with the [new version of ena](https://github.com/rust-lang-nursery/ena/pull/21).
- Some other refactoring in the area.
r? @eddyb
|
|