Remove needless unsafety from BTreeMap::drain_filter
Remove one piece of unsafe code from the iteration performed by the iterator that `BTreeMap::drain_filter` returns.
- Changes an explicitly unspecified part of the API: if the user-supplied predicate (or BTreeMap's own code) panics and the caller then uses the iterator again, we no longer offer the same key/value pair to the predicate; instead, the iterator pretends it has finished. Note that Miri does not find UB in the test case added here, either with the unsafe code or without it.
- Makes the code a little easier on the eyes.
- Makes the code a little harder on the CPU:
```
benchcmp c0 c2 --threshold 3
name c0 ns/iter c2 ns/iter diff ns/iter diff % speedup
btree::set::clone_100_and_drain_all 2,794 2,900 106 3.79% x 0.96
btree::set::clone_100_and_drain_half 2,604 2,964 360 13.82% x 0.88
btree::set::clone_10k_and_drain_half 287,770 322,755 34,985 12.16% x 0.89
```
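For the record, a minimal sketch of the changed behavior; it assumes a nightly toolchain with `feature(btree_drain_filter)`:
```rust
#![feature(btree_drain_filter)]
use std::collections::BTreeMap;
use std::panic::{catch_unwind, AssertUnwindSafe};

fn main() {
    let mut map: BTreeMap<i32, i32> = (0..4).map(|i| (i, i)).collect();
    // Drain everything, but panic when the predicate sees key 2.
    let mut iter = map.drain_filter(|k, _v| if *k == 2 { panic!() } else { true });
    assert_eq!(iter.next(), Some((0, 0)));
    assert_eq!(iter.next(), Some((1, 1)));
    // The predicate panics on key 2; catch the panic so we can keep going.
    assert!(catch_unwind(AssertUnwindSafe(|| iter.next())).is_err());
    // We no longer offer key 2 to the predicate again: the iterator
    // now pretends to have finished.
    assert_eq!(iter.next(), None);
}
```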
r? @Amanieu
---
Update BTreeMap::new() doc
Updates the documentation according to [this comment](https://github.com/rust-lang/rust/pull/72876/files/0c5c644c91edf6ed949cfa5ffc524f43369df604#r433232581) on #72876
---
Mention that BTreeMap::new() doesn't allocate
I think it would be nice to mention this, so you don't have to dig through the source to look at the definition of `new()`.
---
This adds new optional methods on `Extend`: `extend_one` adds a single
element to the collection, and `extend_reserve` pre-allocates space for
the predicted number of incoming elements. These are used in `Iterator`
for `partition` and `unzip`, as they shuffle elements one at a time into
their respective collections; a sketch follows below.
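A minimal sketch of how a collection can override these hooks; `CharBag` is a hypothetical type for illustration, and the overrides require a nightly toolchain with `feature(extend_one)`:
```rust
#![feature(extend_one)]

// Hypothetical collection: a thin wrapper over Vec<char>.
struct CharBag(Vec<char>);

impl Extend<char> for CharBag {
    fn extend<I: IntoIterator<Item = char>>(&mut self, iter: I) {
        self.0.extend(iter);
    }
    // Add a single element without building a one-element iterator.
    fn extend_one(&mut self, item: char) {
        self.0.push(item);
    }
    // Pre-allocate for the predicted number of incoming elements.
    fn extend_reserve(&mut self, additional: usize) {
        self.0.reserve(additional);
    }
}

fn main() {
    let mut bag = CharBag(Vec::new());
    bag.extend_reserve(3);
    for c in "abc".chars() {
        bag.extend_one(c);
    }
    assert_eq!(bag.0, ['a', 'b', 'c']);
}
```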
---
Use min_specialization in liballoc
- Remove a type parameter from `[A]RcFromIter`.
- Remove an implementation of `[A]RcFromIter` that didn't actually
specialize anything.
- Remove unused implementation of `IsZero` for `Option<&mut T>`.
- Change specializations of `[A]RcEqIdent` to use a marker trait version
of `Eq`.
- Remove `BTreeClone`. I couldn't find a way to make this work with
`min_specialization`.
- Add `rustc_unsafe_specialization_marker` to `Copy` and `TrustedLen`.
After this, libcore is the only standard library crate still using `feature(specialization)`.
cc #31844
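For context, a minimal sketch of the kind of impl that `min_specialization` still permits (a hypothetical trait, nightly only):
```rust
#![feature(min_specialization)]

trait Describe {
    fn describe(&self) -> String;
}

// Blanket impl with a `default` method that may be specialized.
impl<T> Describe for T {
    default fn describe(&self) -> String {
        "some value".to_string()
    }
}

// A fully concrete specializing impl: the restricted form that
// min_specialization accepts.
impl Describe for u8 {
    fn describe(&self) -> String {
        format!("the byte {}", self)
    }
}

fn main() {
    assert_eq!(1u8.describe(), "the byte 1");
    assert_eq!('x'.describe(), "some value");
}
```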
---
Make BTreeMap::new and BTreeSet::new const
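A short sketch of what this enables: calling the constructors in a const context. At the time this sat behind `feature(const_btree_new)` on nightly; it has since been stabilized.
```rust
use std::collections::{BTreeMap, BTreeSet};

// Both constructors are const and allocate nothing, so an empty map or
// set can live in a static.
static EMPTY_MAP: BTreeMap<u32, &str> = BTreeMap::new();
static EMPTY_SET: BTreeSet<u32> = BTreeSet::new();

fn main() {
    assert!(EMPTY_MAP.is_empty());
    assert!(EMPTY_SET.is_empty());
}
```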
---
BTreeMap iter intertwined
3 commits:
1. Introduced benchmarks for `BTreeMap::iter()`. Benchmarks named `iter_20` were of the whole iteration process, so I renamed them. Also the benchmarks of `range` that I wrote earlier weren't very good. I included an (awkwardly named) one that compares `iter()` to `range(..)` on the same set, because the contrast is surprising:
```
name ns/iter
btree::map::range_unbounded_unbounded 28,176
btree::map::range_unbounded_vs_iter 89,369
```
Both dig up the same pair of leaf edges. `range(..)` also checks that some keys are correctly ordered; the only extra thing `iter()` does is copy the map's length.
2. Slightly refactoring the code into a shape I find more readable (not in chronological order of discovery) boosts performance:
```
>cargo-benchcmp.exe benchcmp a1 a2 --threshold 5
name a1 ns/iter a2 ns/iter diff ns/iter diff % speedup
btree::map::find_rand_100 18 17 -1 -5.56% x 1.06
btree::map::first_and_last_10k 64 71 7 10.94% x 0.90
btree::map::iter_0 2,939 2,209 -730 -24.84% x 1.33
btree::map::iter_1 6,845 2,696 -4,149 -60.61% x 2.54
btree::map::iter_100 8,556 3,672 -4,884 -57.08% x 2.33
btree::map::iter_10k 9,292 5,884 -3,408 -36.68% x 1.58
btree::map::iter_1m 10,268 6,510 -3,758 -36.60% x 1.58
btree::map::iteration_mut_100000 478,575 453,050 -25,525 -5.33% x 1.06
btree::map::range_unbounded_unbounded 28,176 36,169 7,993 28.37% x 0.78
btree::map::range_unbounded_vs_iter 89,369 38,290 -51,079 -57.16% x 2.33
btree::set::clone_100_and_remove_all 4,801 4,245 -556 -11.58% x 1.13
btree::set::clone_10k_and_remove_all 529,450 496,030 -33,420 -6.31% x 1.07
```
But you can tell from the `range_unbounded_*` lines that, despite an unwarranted, vengeful attack on the `range_unbounded_unbounded` benchmark, this change still doesn't allow `iter()` to catch up with `range(..)`.
3. I guess that `range(..)` copes so well because it intertwines the leftmost and rightmost descents towards the leaf edges, doing the two root node accesses close together, perhaps exploiting a CPU's internal pipelining? So the third commit distils a version of `range_search` (which we can't use directly because of the `Ord` bound), and we get another boost (a toy sketch of the intertwining idea follows after the numbers):
```
cargo-benchcmp.exe benchcmp a2 a3 --threshold 5
name a2 ns/iter a3 ns/iter diff ns/iter diff % speedup
btree::map::first_and_last_100 40 43 3 7.50% x 0.93
btree::map::first_and_last_10k 71 64 -7 -9.86% x 1.11
btree::map::iter_0 2,209 1,719 -490 -22.18% x 1.29
btree::map::iter_1 2,696 2,205 -491 -18.21% x 1.22
btree::map::iter_100 3,672 2,943 -729 -19.85% x 1.25
btree::map::iter_10k 5,884 3,929 -1,955 -33.23% x 1.50
btree::map::iter_1m 6,510 5,532 -978 -15.02% x 1.18
btree::map::iteration_mut_100000 453,050 476,667 23,617 5.21% x 0.95
btree::map::range_included_excluded 405,075 371,297 -33,778 -8.34% x 1.09
btree::map::range_included_included 427,577 397,440 -30,137 -7.05% x 1.08
btree::map::range_unbounded_unbounded 36,169 28,175 -7,994 -22.10% x 1.28
btree::map::range_unbounded_vs_iter 38,290 30,838 -7,452 -19.46% x 1.24
```
But I think this is just fake news from the microbenchmarking media. `iter()` is still trying to catch up with `range(..)`. And we can sure do without another function. So I would skip this 3rd commit.
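To illustrate the intertwining idea from the third commit, here is a toy, self-contained analogue; the types are made up for illustration and bear no relation to the actual btree internals:
```rust
// Toy tree, standing in for the real node structure.
enum Tree {
    Leaf(i32),
    Node(Box<Tree>, Box<Tree>),
}

// Descend to the leftmost and rightmost leaves in one intertwined loop,
// so the two node accesses per level happen back to back, the way
// `range_search` pairs up its two descents.
fn leftmost_and_rightmost(root: &Tree) -> (i32, i32) {
    let (mut left, mut right) = (root, root);
    loop {
        if let Tree::Node(l, _) = left {
            left = l;
        }
        if let Tree::Node(_, r) = right {
            right = r;
        }
        if let (Tree::Leaf(lv), Tree::Leaf(rv)) = (left, right) {
            return (*lv, *rv);
        }
    }
}

fn main() {
    let tree = Tree::Node(
        Box::new(Tree::Leaf(1)),
        Box::new(Tree::Node(Box::new(Tree::Leaf(2)), Box::new(Tree::Leaf(3)))),
    );
    assert_eq!(leftmost_and_rightmost(&tree), (1, 3));
}
```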
r? @Mark-Simulacrum
---
It looks like they were copied from the `or_insert` docs. This change
makes the example more like the `hash_map::VacantEntry::insert` docs.
---
remove Unique::from for shared pointer types
r? @SimonSapin
---
stabilize BTreeMap::remove_entry
This PR stabilizes `BTreeMap::remove_entry` as implemented in https://github.com/rust-lang/rust/pull/68378.
Closes https://github.com/rust-lang/rust/issues/66714
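The stabilized method in action, mirroring its doc example:
```rust
use std::collections::BTreeMap;

fn main() {
    let mut map = BTreeMap::new();
    map.insert(1, "a");
    // Unlike `remove`, `remove_entry` hands back the stored key as well.
    assert_eq!(map.remove_entry(&1), Some((1, "a")));
    assert_eq!(map.remove_entry(&1), None);
}
```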
---
The unsafe code can be justified within `range_search`, as it makes sure not to
overlap the returned references, but from the caller's perspective it's an
entirely safe algorithm, and there's no need for the caller to know about the
duplication.
---
big-O notation: parentheses for function calls, explicit multiplication
I saw `O(n m log n)` in the docs and found that really hard to parse. In particular, I don't think we should use blank space as syntax for *both* multiplication and function calls, that is just confusing.
This PR makes both multiplication and function calls explicit using Rust-like syntax, so the above becomes `O(n * m * log(n))`. If you prefer, I can also leave one of them implicit, but I believe explicit is better here.
While I was at it I also added backticks consistently.
---
Add or_insert_with_key to Entry of HashMap/BTreeMap
Going along with `or_insert_with`, `or_insert_with_key` provides the `Entry`'s key to the lambda, avoiding the need either to clone the key or to reimplement the body of this method from scratch each time.
This is useful when the initial value for a map entry is derived from the key. For example, the introductory Rust book has an example Cacher struct that takes an expensive-to-compute lambda and then can, given an argument to the lambda, produce either the cached result or execute the lambda.
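A quick sketch of the method in use (unstable behind `feature(or_insert_with_key)` as of this PR):
```rust
use std::collections::BTreeMap;

fn main() {
    let mut cache: BTreeMap<String, usize> = BTreeMap::new();
    // The closure borrows the key, so deriving the value needs no clone.
    let len = *cache
        .entry("hello".to_string())
        .or_insert_with_key(|key| key.len());
    assert_eq!(len, 5);
    // A later lookup hits the cached value; the closure doesn't run again.
    assert_eq!(cache["hello"], 5);
}
```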
---
I'm fairly new to Rust, so any optimizations, corrections to types, better names, better documentation, or whatever else would be appreciated. I'd like to thank Arnavion on freenode for helping me to implement a very similar method when I found that `or_insert_with_key` was unavailable.
As a somewhat-related note, this implements https://github.com/rust-lang/rfcs/issues/1202 from 2015, so if this pull request is accepted, that should be closed.
---
Rearrange BTreeMap::into_iter to match range_mut.
r? @Mark-Simulacrum
I wondered why you catered for the optional root differently in `into_iter` than in `range_mut`.
---
Follow up on BTreeMap comments
r? @Amanieu (for the first commit)
---
Remove the Ord bound that was plaguing drain_filter
Now that #70795 has made it superfluous. Also removes lifetime specifiers that I believe are superfluous.
---
Keep track of position when deleting from a BTreeMap
This improves the performance of drain_filter and is needed for future Cursor support for BTreeMap.
cc @ssomers
r? @Mark-Simulacrum
---
This commit changes some usages of `mem::forget` into `mem::ManuallyDrop`
in some `Vec`, `VecDeque`, `BTreeMap` and `Box` methods.
Before the commit, the generated IR for some of the methods was
longer, and even after optimization, some unwinding artifacts were
still present.
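A sketch of the pattern in isolation; `vec_into_raw_parts` is a hypothetical helper, not the actual code this commit touches:
```rust
use std::mem::ManuallyDrop;

// Wrapping the value in ManuallyDrop up front, instead of calling
// mem::forget at the end, lets the compiler see immediately that no
// drop (and hence no unwinding) code is needed.
fn vec_into_raw_parts<T>(v: Vec<T>) -> (*mut T, usize, usize) {
    let mut v = ManuallyDrop::new(v);
    (v.as_mut_ptr(), v.len(), v.capacity())
}

fn main() {
    let (ptr, len, cap) = vec_into_raw_parts(vec![1, 2, 3]);
    // Rebuild the Vec so the allocation is freed; sound here because the
    // parts come straight from a real Vec.
    let v = unsafe { Vec::from_raw_parts(ptr, len, cap) };
    assert_eq!(v, [1, 2, 3]);
}
```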