path: root/src/liballoc
Age | Commit message | Author | Lines
2020-05-27 | Rollup merge of #72533 - Diggsey:db-fix-arc-ub2, r=dtolnay | Dylan DPC | -9/+26

Resolve UB in Arc/Weak interaction (2)

Use raw pointers to avoid making any assertions about the data field. Follow up from #72479, see that PR for more detail on the motivation. @RalfJung I was able to avoid a lot of the changes to `Weak`, by making a helper type (`WeakInner`) - because of auto-deref and because the fields have the same name, the rest of the code continues to compile.
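The core idea is to reach the reference-count fields through a raw pointer without ever creating a reference to the whole inner allocation, which would implicitly assert that the `data` field is valid. A minimal sketch of that pattern, with illustrative names rather than the actual liballoc internals:

```rust
use std::cell::Cell;
use std::ptr;

// Illustrative stand-in for ArcInner<T>; not the real liballoc layout.
struct Inner<T> {
    strong: Cell<usize>,
    weak: Cell<usize>,
    data: T, // may no longer be valid when only weak references remain
}

// Borrows only the counter fields, projecting through the raw pointer with
// `addr_of!` so no `&Inner<T>` (and thus no claim about `data`) is created.
unsafe fn counters<'a, T>(ptr: *const Inner<T>) -> (&'a Cell<usize>, &'a Cell<usize>) {
    (
        &*ptr::addr_of!((*ptr).strong),
        &*ptr::addr_of!((*ptr).weak),
    )
}
```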
2020-05-25 | Fix UB in Arc | Diggory Blake | -9/+26
Use raw pointers to avoid making any assertions about the data field.
2020-05-20 | Adjust the zero check in `RawVec::grow`. | Nicholas Nethercote | -4/+3

This was supposed to land as part of #72227. (I wish `git push` would abort when you have uncommitted changes.)
2020-05-19 | Auto merge of #72227 - nnethercote:tiny-vecs-are-dumb, r=Amanieu | bors | -5/+141

Tiny Vecs are dumb.

Currently, if you repeatedly push to an empty vector, the capacity growth sequence is 0, 1, 2, 4, 8, 16, etc. This commit changes the relevant code (the "amortized" growth strategy) to skip 1 and 2, instead using 0, 4, 8, 16, etc. (You can still get a capacity of 1 or 2 using the "exact" growth strategy, e.g. via `reserve_exact()`.)

This idea (along with the phrase "tiny Vecs are dumb") comes from the "doubling" growth strategy that was removed from `RawVec` in #72013. That strategy was barely ever used -- only when a `VecDeque` was grown, oddly enough -- which is why it was removed in #72013. (Fun fact: until just a few days ago, I thought the "doubling" strategy was used for the repeated-push case. In other words, this commit makes `Vec`s behave the way I always thought they behaved.)

This change reduces the number of allocations done by rustc itself by 10% or more. It speeds up rustc, and will also speed up any other Rust program that uses `Vec`s a lot. In theory, the change could increase memory usage, but in practice it doesn't. It would be an unusual program where very small `Vec`s having a capacity of 4 rather than 1 or 2 would make a difference. You'd need a *lot* of very small `Vec`s, and/or some very small `Vec`s with very large elements.

r? @Amanieu
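A quick way to observe the amortized growth sequence from user code (a sketch; the exact capacities are an implementation detail and can vary by element size and Rust version):

```rust
fn main() {
    let mut v: Vec<u32> = Vec::new();
    let mut caps = vec![v.capacity()];
    for i in 0..32 {
        v.push(i);
        // Record the capacity each time it changes.
        if v.capacity() != *caps.last().unwrap() {
            caps.push(v.capacity());
        }
    }
    // Old amortized strategy:        [0, 1, 2, 4, 8, 16, 32]
    // After this change (expected):  [0, 4, 8, 16, 32]
    println!("{:?}", caps);
}
```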
2020-05-19 | Auto merge of #71447 - cuviper:unsized_cow, r=dtolnay | bors | -0/+103

impl From<Cow> for Box, Rc, and Arc

These forward `Borrowed`/`Owned` values to existing `From` impls.

- `Box<T>` is a fundamental type, so it would be a breaking change to add a blanket impl. Therefore, `From<Cow>` is only implemented for `[T]`, `str`, `CStr`, `OsStr`, and `Path`.
- For `Rc<T>` and `Arc<T>`, `From<Cow>` is implemented for everything that implements `From` for the borrowed and owned types separately.
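A small usage sketch of the new conversions, shown here for `str` (one of the covered types):

```rust
use std::borrow::Cow;
use std::rc::Rc;
use std::sync::Arc;

fn main() {
    let borrowed: Cow<'_, str> = Cow::Borrowed("hello");
    let owned: Cow<'_, str> = Cow::Owned(String::from("world"));

    // Borrowed values go through the `From<&str>` impls,
    // owned values through the `From<String>` impls.
    let b: Box<str> = Box::from(borrowed.clone());
    let r: Rc<str> = Rc::from(owned);
    let a: Arc<str> = Arc::from(borrowed);

    assert_eq!(&*b, "hello");
    assert_eq!(&*r, "world");
    assert_eq!(&*a, "hello");
}
```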
2020-05-18 | Tiny Vecs are dumb. | Nicholas Nethercote | -5/+141

Currently, if you repeatedly push to an empty vector, the capacity growth sequence is 0, 1, 2, 4, 8, 16, etc. This commit changes the relevant code (the "amortized" growth strategy) to skip 1 and 2 in most cases, instead using 0, 4, 8, 16, etc. (You can still get a capacity of 1 or 2 using the "exact" growth strategy, e.g. via `reserve_exact()`.)

This idea (along with the phrase "tiny Vecs are dumb") comes from the "doubling" growth strategy that was removed from `RawVec` in #72013. That strategy was barely ever used -- only when a `VecDeque` was grown, oddly enough -- which is why it was removed in #72013. (Fun fact: until just a few days ago, I thought the "doubling" strategy was used for the repeated push case. In other words, this commit makes `Vec`s behave the way I always thought they behaved.)

This change reduces the number of allocations done by rustc itself by 10% or more. It speeds up rustc, and will also speed up any other Rust program that uses `Vec`s a lot.
2020-05-17 | Auto merge of #72204 - RalfJung:abort, r=Mark-Simulacrum | bors | -0/+9

make abort intrinsic safe, and correct its documentation

Turns out `std::process::abort` is not the same as the intrinsic, the comment was just wrong. Quoting from the unix implementation:

```
// On Unix-like platforms, libc::abort will unregister signal handlers
// including the SIGABRT handler, preventing the abort from being blocked, and
// fclose streams, with the side effect of flushing them so libc buffered
// output will be printed. Additionally the shell will generally print a more
// understandable error message like "Abort trap" rather than "Illegal
// instruction" that intrinsics::abort would cause, as intrinsics::abort is
// implemented as an illegal instruction.
```
2020-05-17 | make abort intrinsic safe, and correct its documentation | Ralf Jung | -0/+9
2020-05-14 | Auto merge of #71321 - matthewjasper:alloc-min-spec, r=sfackler | bors | -120/+42

Use min_specialization in liballoc

- Remove a type parameter from `[A]RcFromIter`.
- Remove an implementation of `[A]RcFromIter` that didn't actually specialize anything.
- Remove unused implementation of `IsZero` for `Option<&mut T>`.
- Change specializations of `[A]RcEqIdent` to use a marker trait version of `Eq`.
- Remove `BTreeClone`. I couldn't find a way to make this work with `min_specialization`.
- Add `rustc_unsafe_specialization_marker` to `Copy` and `TrustedLen`.

After this, libcore is the only standard library crate using `feature(specialization)`.

cc #31844
2020-05-14 | Fix Arc::decr_strong_count doc test | Tomasz Miąsko | -3/+5
2020-05-13 | Auto merge of #72013 - nnethercote:make-RawVec-grow-mostly-non-generic, r=Amanieu | bors | -149/+95

Make `RawVec::grow` mostly non-generic.

`cargo-llvm-lines` shows that, in various benchmarks, `RawVec::grow` is instantiated 10s or 100s of times and accounts for 1-8% of lines of generated LLVM IR. This commit moves most of `RawVec::grow` into a separate function that isn't parameterized by `T`, which means it doesn't need to be instantiated many times. This reduces compile time significantly.

r? @ghost
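The underlying pattern -- hoist the part of a generic function that only needs a `Layout` into a shared, non-generic helper -- looks roughly like this. A sketch with illustrative names, not the actual `RawVec` code; it ignores zero-sized types and overflow handling for brevity:

```rust
use std::alloc::{self, Layout};

struct RawBuf<T> {
    ptr: *mut T,
    cap: usize,
}

impl<T> RawBuf<T> {
    // Generic wrapper: only the Layout computation depends on T.
    fn grow_amortized(&mut self, additional: usize) {
        let new_cap = (self.cap * 2).max(self.cap + additional).max(4);
        let new_layout = Layout::array::<T>(new_cap).expect("capacity overflow");
        let old_layout = Layout::array::<T>(self.cap).unwrap();
        self.ptr = finish_grow(self.ptr.cast(), old_layout, new_layout).cast();
        self.cap = new_cap;
    }
}

// Not parameterized by T, so it is compiled once instead of once per element
// type -- which is where the LLVM IR (and compile time) savings come from.
fn finish_grow(ptr: *mut u8, old_layout: Layout, new_layout: Layout) -> *mut u8 {
    unsafe {
        let new_ptr = if old_layout.size() == 0 {
            alloc::alloc(new_layout)
        } else {
            alloc::realloc(ptr, old_layout, new_layout.size())
        };
        if new_ptr.is_null() {
            alloc::handle_alloc_error(new_layout);
        }
        new_ptr
    }
}
```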
2020-05-12 | Rollup merge of #71737 - RalfJung:miri-test-threads, r=shepmaster | Dylan DPC | -9/+10
Miri: run liballoc tests with threads Miri now supports threads, so we can run these tests. :)
2020-05-12 | Split `RawVec::grow` up. | Nicholas Nethercote | -50/+79
The amortized case is much more common than the exact case, and it is typically instantiated many times. Also, we can put a chunk of the code into a function that isn't generic over T, which reduces the amount of LLVM IR generated quite a lot, improving compile times.
2020-05-12 | Remove `RawVec::double`. | Nicholas Nethercote | -88/+23
It's only used once, for `VecDeque`, and can easily be replaced by something else. The commit changes `grow_if_necessary` to `grow` to avoid some small regressions caused by changed inlining. The commit also removes `Strategy::Double`, and streamlines the remaining variants of `Strategy`. It's a compile time win on some benchmarks because the many instantations of `RawVec::grow` are a little smaller.
2020-05-12 | Remove `RawVec::double_in_place`. | Nicholas Nethercote | -18/+0
It's unused.
2020-05-11 | fix test_weak_count_locked for Miri | Ralf Jung | -1/+5
2020-05-09 | Rollup merge of #71839 - LG3696:master, r=cramertj | Dylan DPC | -2/+5
Make BTreeMap::new and BTreeSet::new const
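With a `const` constructor, an empty map can be built in a `const` or `static` without lazy initialization. A small sketch assuming the const-ness described above (at the time of the commit this may still have been feature-gated):

```rust
use std::collections::BTreeMap;

// An empty BTreeMap performs no allocation, so it can be constructed in a
// const context and placed directly in a static.
static ROUTES: BTreeMap<&'static str, u32> = BTreeMap::new();

fn main() {
    assert!(ROUTES.is_empty());
}
```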
2020-05-07 | Rollup merge of #70733 - yoshuawuyts:arc-increment-refcount, r=Mark-Simulacrum | Dylan DPC | -5/+80

Add Arc::{incr,decr}_strong_count

This adds two `unsafe` methods to `Arc`: `incr_strong_count` and `decr_strong_count`. A suggestion to add methods to change the strong count in `Arc` came up during review in https://github.com/rust-lang/rust/pull/68700#discussion_r396169064, and from asking a few people this seemed generally useful to have.

References:
- [Motivation from #68700](https://github.com/rust-lang/rust/pull/68700#discussion_r396169064)
- [Real world example in an executor](https://docs.rs/extreme/666.666.666666/src/extreme/lib.rs.html#13)
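A sketch of the intended use -- keeping a raw `Arc` pointer sound across an extra `from_raw`. Shown with the names the methods were later stabilized under (`increment_strong_count`/`decrement_strong_count`); the PR itself used `incr`/`decr`:

```rust
use std::sync::Arc;

fn main() {
    let a = Arc::new(5);
    let ptr = Arc::into_raw(a); // strong count stays at 1; ownership moves into the raw pointer

    unsafe {
        // Bump the count so that reconstructing two Arcs from the same pointer is sound.
        Arc::increment_strong_count(ptr);
        let a1 = Arc::from_raw(ptr);
        let a2 = Arc::from_raw(ptr);
        assert_eq!(Arc::strong_count(&a1), 2);
        drop(a1);
        drop(a2); // count reaches zero here and the allocation is freed
    }
}
```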
2020-05-07 | Add Arc::{incr,decr}_strong_count | Yoshua Wuyts | -5/+80
2020-05-06 | grammar: dealing-with | Josh Soref | -1/+1
2020-05-06 | grammar: simplify to avoid that | Josh Soref | -1/+1
2020-05-06 | grammar: stray comma | Josh Soref | -1/+1
2020-05-06 | grammar: noun not verb | Josh Soref | -2/+2
2020-05-06 | grammar: subject-verb not subject-verb-verb | Josh Soref | -1/+1
2020-05-06 | grammar: disambiguate space-character | Josh Soref | -1/+1
2020-05-06 | grammar: count-agreement default ... is | Josh Soref | -1/+1
2020-05-06 | grammar: which vs that | Josh Soref | -4/+4
2020-05-06 | Rollup merge of #71510 - ssomers:btreemap_iter_intertwined, r=Mark-Simulacrum | Dylan DPC | -75/+119

Btreemap iter intertwined

3 commits:

1. Introduced benchmarks for `BTreeMap::iter()`. Benchmarks named `iter_20` were of the whole iteration process, so I renamed them. Also the benchmarks of `range` that I wrote earlier weren't very good. I included an (awkwardly named) one that compares `iter()` to `range(..)` on the same set, because the contrast is surprising:

```
name                                     ns/iter
btree::map::range_unbounded_unbounded    28,176
btree::map::range_unbounded_vs_iter      89,369
```

Both dig up the same pair of leaf edges. `range(..)` also checks that some keys are correctly ordered, the only thing `iter()` does more is to copy the map's length.

2. Slightly refactoring the code to what I find more readable (not in chronological order of discovery), boosts performance:

```
>cargo-benchcmp.exe benchcmp a1 a2 --threshold 5
name                                     a1 ns/iter  a2 ns/iter  diff ns/iter  diff %   speedup
btree::map::find_rand_100                18          17          -1            -5.56%   x 1.06
btree::map::first_and_last_10k           64          71          7             10.94%   x 0.90
btree::map::iter_0                       2,939       2,209       -730          -24.84%  x 1.33
btree::map::iter_1                       6,845       2,696       -4,149        -60.61%  x 2.54
btree::map::iter_100                     8,556       3,672       -4,884        -57.08%  x 2.33
btree::map::iter_10k                     9,292       5,884       -3,408        -36.68%  x 1.58
btree::map::iter_1m                      10,268      6,510       -3,758        -36.60%  x 1.58
btree::map::iteration_mut_100000         478,575     453,050     -25,525       -5.33%   x 1.06
btree::map::range_unbounded_unbounded    28,176      36,169      7,993         28.37%   x 0.78
btree::map::range_unbounded_vs_iter      89,369      38,290      -51,079       -57.16%  x 2.33
btree::set::clone_100_and_remove_all     4,801       4,245       -556          -11.58%  x 1.13
btree::set::clone_10k_and_remove_all     529,450     496,030     -33,420       -6.31%   x 1.07
```

But you can tell from the `range_unbounded_*` lines that, despite an unwarranted, vengeful attack on the range_unbounded_unbounded benchmark, this change still doesn't allow `iter()` to catch up with `range(..)`.

3. I guess that `range(..)` copes so well because it intertwines the leftmost and rightmost descend towards leaf edges, doing the two root node accesses close together, perhaps exploiting a CPU's internal pipelining? So the third commit distils a version of `range_search` (which we can't use directly because of the `Ord` bound), and we get another boost:

```
cargo-benchcmp.exe benchcmp a2 a3 --threshold 5
name                                     a2 ns/iter  a3 ns/iter  diff ns/iter  diff %   speedup
btree::map::first_and_last_100           40          43          3             7.50%    x 0.93
btree::map::first_and_last_10k           71          64          -7            -9.86%   x 1.11
btree::map::iter_0                       2,209       1,719       -490          -22.18%  x 1.29
btree::map::iter_1                       2,696       2,205       -491          -18.21%  x 1.22
btree::map::iter_100                     3,672       2,943       -729          -19.85%  x 1.25
btree::map::iter_10k                     5,884       3,929       -1,955        -33.23%  x 1.50
btree::map::iter_1m                      6,510       5,532       -978          -15.02%  x 1.18
btree::map::iteration_mut_100000         453,050     476,667     23,617        5.21%    x 0.95
btree::map::range_included_excluded      405,075     371,297     -33,778       -8.34%   x 1.09
btree::map::range_included_included      427,577     397,440     -30,137       -7.05%   x 1.08
btree::map::range_unbounded_unbounded    36,169      28,175      -7,994        -22.10%  x 1.28
btree::map::range_unbounded_vs_iter      38,290      30,838      -7,452        -19.46%  x 1.24
```

But I think this is just fake news from the microbenchmarking media. `iter()` is still trying to catch up with `range(..)`. And we can sure do without another function. So I would skip this 3rd commit.

r? @Mark-Simulacrum
2020-05-05 | Rollup merge of #71892 - integer32llc:btreemap-entry-vacant-docs, r=jonas-schievink | Dylan DPC | -6/+5

Update btree_map::VacantEntry::insert docs to actually call insert

It looks like they were copied from the `or_insert` docs. This change makes the example more like the hash_map::VacantEntry::insert docs.
2020-05-04 | Update btree_map::VacantEntry::insert docs to actually call insert | Carol (Nichols || Goulding) | -6/+5
It looks like they were copied from the `or_insert` docs. This change makes the example more like the hash_map::VacantEntry::insert docs.
2020-05-04 | whoops | main() | -1/+1
2020-05-04 | Add remove_current_as_list to LinkedList's CursorMut | main() | -0/+25

The `remove_current` method only returns the inner `T` and deallocates the list node. This is unnecessary for move operations, where the element is going to be linked back into this (or even a different) `LinkedList`. The `remove_current_as_list` method avoids this by returning the unlinked list node as a new single-element `LinkedList` structure.
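A sketch of how the new method might be used to move a node between lists without deallocating it. This API sits behind the unstable `linked_list_cursors` feature, so the example assumes a nightly toolchain:

```rust
#![feature(linked_list_cursors)]
use std::collections::LinkedList;

fn main() {
    let mut list: LinkedList<i32> = (1..=3).collect();
    let mut other: LinkedList<i32> = LinkedList::new();

    {
        let mut cursor = list.cursor_front_mut();
        cursor.move_next(); // now pointing at 2
        // Unlink the node holding 2; it comes back as a one-element list.
        if let Some(mut node) = cursor.remove_current_as_list() {
            // `append` relinks the node rather than moving the element out of it.
            other.append(&mut node);
        }
    }

    assert_eq!(list.into_iter().collect::<Vec<_>>(), vec![1, 3]);
    assert_eq!(other.into_iter().collect::<Vec<_>>(), vec![2]);
}
```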
2020-05-03 | Make BTreeMap::new and BTreeSet::new const | Luca Gladiator | -2/+5
2020-05-01 | liballoc tests: Miri supports threads now | Ralf Jung | -8/+5
2020-04-30 | Rollup merge of #71148 - bluss:vec-drop-raw-slice, r=RalfJung | Tyler Mandry | -4/+10

Vec drop and truncate: drop using raw slice *mut [T]

By creating a *mut [T] directly (without going through &mut [T]), avoid questions of validity of the contents of the slice. Consider the following risky code:

```rust
unsafe {
    let mut v = Vec::<bool>::with_capacity(16);
    v.set_len(16);
}
```

The intention is that with this change, we avoid one of the soundness questions about the above snippet, because Vec::drop no longer produces a mutable slice of the vector's contents.

r? @RalfJung
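The pattern itself is small. A sketch of dropping `len` initialized elements through a raw slice pointer, never materializing an `&mut [T]` (illustrative, not the exact liballoc code):

```rust
use std::ptr;

/// Drops `len` elements starting at `ptr` in place.
///
/// Safety: the first `len` elements must be initialized and must not be used again.
unsafe fn drop_elements<T>(ptr: *mut T, len: usize) {
    // Build a `*mut [T]` directly; no `&mut [T]` is ever created, so nothing is
    // asserted about the slice contents beyond what `drop_in_place` requires.
    let slice: *mut [T] = ptr::slice_from_raw_parts_mut(ptr, len);
    ptr::drop_in_place(slice);
}
```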
2020-04-30 | rename-unique: Rename Unique::empty() to Unique::dangling() | cohenarthur | -5/+5

rename-unique: Change calls and doc in raw_vec.rs
rename-unique: Change empty() -> dangling() in const-ptr-unique-rpass.rs
2020-04-28 | Vec IntoIter: Drop using raw slice | Ulrik Sverdrup | -2/+8
Update Vec drop with a comment to explain why we want to use a raw slice, and extend this pattern to also include the Vec's IntoIter.
2020-04-27 | Rollup merge of #71589 - RalfJung:unique-no-shr, r=SimonSapin | Dylan DPC | -3/+3
remove Unique::from for shared pointer types r? @SimonSapin
2020-04-26 | Rollup merge of #71421 - elichai:2020-04-boxed-slice, r=sfackler | Dylan DPC | -0/+10

Add a function to turn Box<T> into Box<[T]>

Hi, I think this is very useful, as currently it's not possible in safe Rust to do this without re-allocating. An alternative implementation of the same function can be:

```rust
pub fn into_boxed_slice<T>(boxed: Box<T>) -> Box<[T]> {
    unsafe {
        let slice = slice::from_raw_parts_mut(Box::into_raw(boxed), 1);
        Box::from_raw(slice)
    }
}
```

The only thing that makes me a little uncomfortable is this line:

> The alignment of array types is greater or equal to the alignment of its element type

from https://rust-lang.github.io/unsafe-code-guidelines/layout/arrays-and-slices.html

But then I see:

> The alignment of &T, &mut T, *const T and *mut T are the same, and are at least the word size.
> The alignment of &[T] is the word size.

from https://rust-lang.github.io/unsafe-code-guidelines/layout/pointers.html#representation

So I do believe this is valid (FWIW it also passes in miri https://play.rust-lang.org/?gist=c002b99364ee6b29862aeb3565a91c19)
2020-04-26 | remove Unique::from for shared pointer types | Ralf Jung | -3/+3
2020-04-26 | Use min_specialization in liballoc | Matthew Jasper | -120/+42

- Remove a type parameter from `[A]RcFromIter`.
- Remove an implementation of `[A]RcFromIter` that didn't actually specialize anything.
- Remove unused implementation of `IsZero` for `Option<&mut T>`.
- Change specializations of `[A]RcEqIdent` to use a marker trait version of `Eq`.
- Remove `BTreeClone`. I couldn't find a way to make this work with `min_specialization`.
- Add `rustc_unsafe_specialization_marker` to `Copy` and `TrustedLen`.
2020-04-26 | Add a function to turn Box<T> into Box<[T]> (into_boxed_slice) | Elichai Turkel | -0/+10
2020-04-26 | Rollup merge of #71575 - jplatte:patch-4, r=Mark-Simulacrum | Dylan DPC | -1/+1
Fix stable(since) attribute for BTreeMap::remove_entry Stabilized in #70712. Maybe checking that the since attributes are added correctly should be automated through tidy? This is the third PR I'm opening that fixes a stable(since) attribute for something meant to be stabilized in 1.43 / 1.44 initially but then only stabilized in 1.45. (the other two are #71571, #71574)
2020-04-26 | Fix stable(since) attribute for BTreeMap::remove_entry | Jonas Platte | -1/+1
2020-04-26 | fix more clippy warnings | Matthias Krüger | -2/+2
clippy::{redundant_pattern_matching, clone_on_copy, iter_cloned_collect, option_as_ref_deref, match_ref_pats}
2020-04-25 | Auto merge of #71556 - Dylan-DPC:rollup-9ll4shr, r=Dylan-DPC | bors | -33/+59

Rollup of 7 pull requests

Successful merges:

- #69041 (proc_macro: Stabilize `Span::resolved_at` and `Span::located_at`)
- #69813 (Implement BitOr and BitOrAssign for the NonZero integer types)
- #70712 (stabilize BTreeMap::remove_entry)
- #71168 (Deprecate `{Box,Rc,Arc}::into_raw_non_null`)
- #71544 (Replace filter_map().next() calls with find_map())
- #71545 (Fix comment in docstring example for Error::kind)
- #71548 (Add missing Send and Sync impls for linked list Cursor and CursorMut.)

Failed merges:

r? @ghost
2020-04-25 | Rollup merge of #71548 - crlf0710:cursor_bounds, r=Amanieu | Dylan DPC | -0/+12

Add missing Send and Sync impls for linked list Cursor and CursorMut.

Someone pointed out these to me, and I think it's indeed reasonable to add those impls.

r? @Amanieu
2020-04-25 | Rollup merge of #71168 - SimonSapin:into_raw_non_null, r=Amanieu | Dylan DPC | -31/+46
Deprecate `{Box,Rc,Arc}::into_raw_non_null` Per ongoing FCP at https://github.com/rust-lang/rust/issues/47336#issuecomment-586589016 See also https://github.com/rust-lang/rust/issues/47336#issuecomment-614054164
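For code migrating off the deprecated method, an equivalent standalone helper is straightforward. A sketch of one possible replacement for the `Box` case (illustrative; not the wording of the deprecation notice itself):

```rust
use std::ptr::NonNull;

// Equivalent of the deprecated Box::into_raw_non_null: give up ownership of
// the box and keep the (guaranteed non-null) pointer to its contents.
fn box_into_non_null<T>(b: Box<T>) -> NonNull<T> {
    NonNull::from(Box::leak(b))
}

fn main() {
    let p = box_into_non_null(Box::new(42));
    // The caller is now responsible for eventually freeing the allocation.
    unsafe { drop(Box::from_raw(p.as_ptr())) };
}
```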
2020-04-25 | Rollup merge of #70712 - :stabilize-remove-entry, r=Amanieu | Dylan DPC | -2/+1
stabilize BTreeMap::remove_entry This PR stabilizes `BTreeMap::remove_entry` as implemented in https://github.com/rust-lang/rust/pull/68378. Closes https://github.com/rust-lang/rust/issues/66714
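The now-stable method returns both the key and the value; a quick usage sketch:

```rust
use std::collections::BTreeMap;

fn main() {
    let mut map = BTreeMap::new();
    map.insert("a", 1);

    // Unlike `remove`, `remove_entry` gives back the stored key as well.
    assert_eq!(map.remove_entry("a"), Some(("a", 1)));
    assert_eq!(map.remove_entry("a"), None);
}
```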
2020-04-25 | Use the correct bound for `Cursor` `Send` | Charles Lew | -1/+1
Co-Authored-By: Amanieu d'Antras <amanieu@gmail.com>