| Age | Commit message | Author | Lines |
|
|
|
Use `std::mem::{size_of, size_of_val, align_of, align_of_val}` from the
prelude instead of importing or qualifying them.
These functions were added to all preludes in Rust 1.80.
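For illustration, a minimal snippet of what this enables, assuming Rust 1.80 or later as stated above (no `use std::mem::…` needed):
```
fn main() {
    // On Rust 1.80+ these names come from the prelude, no import required.
    assert_eq!(size_of::<u32>(), 4);
    assert_eq!(align_of::<u16>(), 2);
    let x: u64 = 0;
    assert_eq!(size_of_val(&x), 8);
    assert_eq!(align_of_val(&x), align_of::<u64>());
}
```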
|
|
|
|
This does need `EMCC_CFLAGS="-s MAXIMUM_MEMORY=2GB"` to avoid several OOMs.
|
|
|
|
|
|
Refactor the code in the `convert_while_ascii` helper function to make
it more suitable for auto-vectorization and to process the full ASCII
prefix of the string. The generic case-conversion logic is only
invoked starting from the first non-ASCII character.
The runtime of microbenchmarks with ASCII-only inputs improves by between
1.5x for short inputs and 4x for long inputs on x86_64 and aarch64.
The new implementation also encapsulates all unsafe code inside the
`convert_while_ascii` function.
Fixes #123712
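For illustration, a simplified sketch of the approach described above; this is not the actual library code, and the names and signatures here are hypothetical:
```
// Convert bytes while they are ASCII and report how far we got, so the
// caller can fall back to the generic case-conversion path for the tail.
fn convert_while_ascii(bytes: &[u8], convert: fn(&u8) -> u8) -> (Vec<u8>, usize) {
    let mut out = Vec::with_capacity(bytes.len());
    let mut i = 0;
    while i < bytes.len() {
        let b = bytes[i];
        if !b.is_ascii() {
            break; // first non-ASCII byte: stop, caller handles the rest
        }
        out.push(convert(&b));
        i += 1;
    }
    (out, i)
}

fn to_lowercase_sketch(s: &str) -> String {
    let (prefix, converted) = convert_while_ascii(s.as_bytes(), u8::to_ascii_lowercase);
    // The converted prefix was pure ASCII, so it is valid UTF-8.
    let mut result = String::from_utf8(prefix).unwrap();
    // Generic Unicode case conversion only for the non-ASCII tail.
    result.extend(s[converted..].chars().flat_map(char::to_lowercase));
    result
}

fn main() {
    assert_eq!(to_lowercase_sketch("Hello, Wörld!"), "hello, wörld!");
}
```
The point of the sketch is the control flow only: a fast ASCII prefix pass plus a generic Unicode fallback; the real implementation does the prefix conversion with vectorization-friendly unsafe code.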
|
|
|
|
|
|
|
|
The previous commit updated `rustfmt.toml` appropriately. This commit is
the outcome of running `x fmt --all` with the new formatting options.
|
|
|
|
|
|
Detects redundant imports that can be eliminated.
For #117772: in order to facilitate review and modification, the checking code and
the code that removes redundant imports are split into two PRs.
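For illustration, a hypothetical example of the kind of import such a check flags; the items are already available through the std prelude:
```
// Both imports are redundant: `Option` and `Vec` come from the std prelude,
// so the check described above can flag them for removal.
use std::option::Option;
use std::vec::Vec;

fn main() {
    let v: Vec<Option<u8>> = Vec::new();
    assert!(v.is_empty());
}
```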
|
|
The first pass runs until it finds two equal elements; the second pass runs afterwards only if the first one found any. This avoids any memory writes until we find an item we actually want to remove.
This leads to significant performance gains if all `Vec` items are kept: about -40% runtime on my benchmark with unique integers.
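A simplified, safe-code sketch of that two-pass idea, specialized to `dedup` on `Vec<T: PartialEq>` (illustrative only; the real `dedup_by` implementation uses unsafe code and differs in detail):
```
fn dedup_two_pass<T: PartialEq>(v: &mut Vec<T>) {
    // Pass 1: scan read-only until the first pair of equal neighbours.
    let mut first_dup = None;
    for i in 1..v.len() {
        if v[i] == v[i - 1] {
            first_dup = Some(i);
            break;
        }
    }
    // If every element is unique, no memory was written at all.
    let Some(mut write) = first_dup else { return };
    // Pass 2: shift the remaining unique elements into place.
    for read in write + 1..v.len() {
        if v[read] != v[write - 1] {
            v.swap(write, read);
            write += 1;
        }
    }
    v.truncate(write);
}
```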
Results of benchmarks before implementation (including new benchmark where nothing needs to be removed):
* vec::bench_dedup_all_100 74.00ns/iter +/- 13.00ns
* vec::bench_dedup_all_1000 572.00ns/iter +/- 272.00ns
* vec::bench_dedup_all_100000 64.42µs/iter +/- 19.47µs
* __vec::bench_dedup_none_100 67.00ns/iter +/- 17.00ns__
* __vec::bench_dedup_none_1000 662.00ns/iter +/- 86.00ns__
* __vec::bench_dedup_none_10000 9.16µs/iter +/- 2.71µs__
* __vec::bench_dedup_none_100000 91.25µs/iter +/- 1.82µs__
* vec::bench_dedup_random_100 105.00ns/iter +/- 11.00ns
* vec::bench_dedup_random_1000 781.00ns/iter +/- 10.00ns
* vec::bench_dedup_random_10000 9.00µs/iter +/- 5.62µs
* vec::bench_dedup_random_100000 449.81µs/iter +/- 74.99µs
* vec::bench_dedup_slice_truncate_100 105.00ns/iter +/- 16.00ns
* vec::bench_dedup_slice_truncate_1000 2.65µs/iter +/- 481.00ns
* vec::bench_dedup_slice_truncate_10000 18.33µs/iter +/- 5.23µs
* vec::bench_dedup_slice_truncate_100000 501.12µs/iter +/- 46.97µs
Results after implementation:
* vec::bench_dedup_all_100 75.00ns/iter +/- 9.00ns
* vec::bench_dedup_all_1000 494.00ns/iter +/- 117.00ns
* vec::bench_dedup_all_100000 58.13µs/iter +/- 8.78µs
* __vec::bench_dedup_none_100 52.00ns/iter +/- 22.00ns__
* __vec::bench_dedup_none_1000 417.00ns/iter +/- 116.00ns__
* __vec::bench_dedup_none_10000 4.11µs/iter +/- 546.00ns__
* __vec::bench_dedup_none_100000 40.47µs/iter +/- 5.36µs__
* vec::bench_dedup_random_100 77.00ns/iter +/- 15.00ns
* vec::bench_dedup_random_1000 681.00ns/iter +/- 86.00ns
* vec::bench_dedup_random_10000 11.66µs/iter +/- 2.22µs
* vec::bench_dedup_random_100000 469.35µs/iter +/- 20.53µs
* vec::bench_dedup_slice_truncate_100 100.00ns/iter +/- 5.00ns
* vec::bench_dedup_slice_truncate_1000 2.55µs/iter +/- 224.00ns
* vec::bench_dedup_slice_truncate_10000 18.95µs/iter +/- 2.59µs
* vec::bench_dedup_slice_truncate_100000 492.85µs/iter +/- 72.84µs
Resolves #77772
|
|
These benches target more specific cases than the old ones.
They also make better use of `black_box`.
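For illustration, a hypothetical bench showing the `black_box` pattern such benchmarks typically rely on (nightly-only because of the `test` crate):
```
#![feature(test)]
extern crate test;

use std::hint::black_box;
use test::Bencher;

// Hide the input from the optimizer and keep the result alive so the
// call to dedup cannot be optimized away.
#[bench]
fn bench_dedup_example(b: &mut Bencher) {
    let src: Vec<u32> = (0..1000).map(|i| i / 2).collect();
    b.iter(|| {
        let mut v = black_box(src.clone());
        v.dedup();
        black_box(v)
    });
}
```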
|
|
|
|
|
|
Implement more methods for `vec_deque::IntoIter`
This implements a couple of `Iterator` methods on `vec_deque::IntoIter` (`(try_)fold`, `(try_)rfold`, `advance_(back_)by`, `next_chunk`, `count` and `last`) so that they can be more efficient than their default implementations. This also lets many other `Iterator` methods that use these under the hood take advantage of the manual implementations. `vec::IntoIter` has similar implementations for many of these methods. This PR does not yet implement `TrustedRandomAccess` and friends, as I'm not very familiar with the required safety guarantees.
r? `@the8472` (since you also took over my last PR)
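For a rough intuition of why manual implementations can win here, a sketch of a slice-based fold over a `VecDeque` (a free function for illustration, not the actual `IntoIter` impl):
```
use std::collections::VecDeque;

// A VecDeque's storage is at most two contiguous slices, so a fold can run
// as two tight slice loops instead of ring-buffer index arithmetic per item.
fn fold_by_slices<T, B>(dq: &VecDeque<T>, init: B, mut f: impl FnMut(B, &T) -> B) -> B {
    let (front, back) = dq.as_slices();
    let acc = front.iter().fold(init, &mut f);
    back.iter().fold(acc, &mut f)
}

fn main() {
    let mut dq: VecDeque<u32> = (0..10).collect();
    dq.rotate_left(3); // likely produces a wrapped, two-slice layout
    let sum = fold_by_slices(&dq, 0u32, |acc, &x| acc + x);
    assert_eq!(sum, 45);
}
```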
|
|
|
|
|
|
|
|
|
|
|
|
|
|
```
x86_64v1, default
test vec::bench_next_chunk ... bench: 696 ns/iter (+/- 22)
x86_64v1, pr
test vec::bench_next_chunk ... bench: 309 ns/iter (+/- 4)
znver2, default
test vec::bench_next_chunk ... bench: 17,272 ns/iter (+/- 117)
znver2, pr
test vec::bench_next_chunk ... bench: 211 ns/iter (+/- 3)
```
The znver2 default impl seems to be slow due to inlining decisions. It goes through `core::array::iter_next_chunk`
which has a deeper call tree.
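For context, `Iterator::next_chunk` is the nightly-only method (feature `iter_next_chunk`) whose `vec::IntoIter` specialization is benchmarked above; a minimal usage example:
```
#![feature(iter_next_chunk)]

fn main() {
    let mut it = vec![1u32, 2, 3, 4, 5].into_iter();
    // Pull the next four items out as an array in one call.
    let chunk: [u32; 4] = it.next_chunk().unwrap();
    assert_eq!(chunk, [1, 2, 3, 4]);
    assert_eq!(it.next(), Some(5));
}
```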
|
|
|
|
|
|
|
|
These debug assertions are all implemented to run only at runtime, using
`const_eval_select`, and in the error path they call
`intrinsics::abort` instead of raising a normal debug assertion, to
minimize their impact on code size when enabled.
Of all these changes, the bounds checks for unchecked indexing are
expected to be the most impactful (case in point, they found a problem in
rustc).
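A simplified, stable-Rust illustration of the pattern; the real library code selects the runtime path with `const_eval_select` and aborts via `intrinsics::abort`, which this sketch approximates with `cfg!(debug_assertions)` and `std::process::abort`:
```
#[inline]
fn debug_assert_in_bounds(index: usize, len: usize) {
    if cfg!(debug_assertions) && index >= len {
        // Abort instead of panicking: no formatting or unwinding machinery,
        // so the error path adds very little code size.
        std::process::abort();
    }
}

fn main() {
    debug_assert_in_bounds(2, 5); // fine
    // debug_assert_in_bounds(7, 5); // would abort in a debug build
}
```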
|
|
Add `into_iter().filter().collect()` as a comparison point since it was reported to be faster than `retain`.
Remove the clone inside the benchmark loop to reduce allocator noise.
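The two forms being compared, shown on a small input for illustration:
```
fn main() {
    let v: Vec<i32> = (0..10).collect();

    // `retain` filters in place.
    let mut kept_in_place = v.clone();
    kept_in_place.retain(|&x| x % 2 == 0);

    // The reported alternative consumes the vector and rebuilds it.
    let recollected: Vec<i32> = v.into_iter().filter(|&x| x % 2 == 0).collect();

    assert_eq!(kept_in_place, recollected);
}
```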
|
|
|
|
VecDeque: improve performance for From<[T; N]>
Create `VecDeque` directly from the array instead of inserting items one-by-one.
Benchmark
```
./x.py bench library/alloc --test-args vec_deque::bench_from_array_1000
```
* Before
```
test vec_deque::bench_from_array_1000 ... bench: 3,991 ns/iter (+/- 717)
```
* After
```
test vec_deque::bench_from_array_1000 ... bench: 268 ns/iter (+/- 37)
```
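Usage of the conversion this speeds up, for reference:
```
use std::collections::VecDeque;

fn main() {
    // `From<[T; N]>` builds the VecDeque directly from the array rather
    // than pushing items one at a time.
    let dq = VecDeque::from([1, 2, 3, 4]);
    assert_eq!(dq.len(), 4);
    assert_eq!(dq.front(), Some(&1));
    assert_eq!(dq.back(), Some(&4));
}
```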
|
|
|
|
|
|
|
|
|
|
```
# old
test vec::bench_flat_map_collect ... bench: 2,244,024 ns/iter (+/- 18,903)
# new
test vec::bench_flat_map_collect ... bench: 172,863 ns/iter (+/- 2,141)
```
|
|
|
|
|
|
The unsoundness is not in Peekable per se; rather, it is due to the
interaction between Peekable being able to hold an extra item
and vec::IntoIter's clone implementation shortening the allocation.
An alternative solution would be to change IntoIter's clone implementation
to keep enough spare capacity available.
|
|
Many of the Vec benchmarks assert what values should be produced by the
benchmarked code. In some cases, these asserts dominate the runtime of
the benchmarks they are in, causing the benchmarks to understate the
impact of an optimization or regression.
|
|
|
|
|
|
|
|
Move LEN so it is read in order.
|
|
|
|
|
|
liballoc benches: use the imported `Bencher` path.
`test` is already in scope, so there is no need to use the full path.
|
|
Avoid useless sift_down when std::collections::binary_heap::PeekMut is never mutably dereferenced
If `deref_mut` is never called then it's not possible for the element to be mutated without internal mutability, meaning there's no need to call `sift_down`.
This could be a small improvement in cases where you want to mutate the biggest element of the heap only if it satisfies a certain predicate that needs only read access to the element.
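For illustration, the usage pattern this helps (values chosen for the example):
```
use std::collections::BinaryHeap;

fn main() {
    let mut heap = BinaryHeap::from(vec![3, 11, 7]);
    if let Some(mut top) = heap.peek_mut() {
        // Read-only inspection through Deref.
        if *top > 10 {
            *top -= 10; // DerefMut used: heap order is restored when `top` drops
        }
        // If the branch is not taken, deref_mut never ran, so with this
        // change dropping `top` skips the needless sift_down.
    }
    assert_eq!(heap.peek(), Some(&7));
}
```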
|