path: root/library/alloc/src

Age | Commit message | Author | Lines
2021-05-09 | PR feedback | Scott McMurray | -2/+2
2021-05-07 | BTree: no longer copy keys and values before dropping them | Stein Somers | -45/+95
2021-05-06 | Perf Experiment: Wait, what if I just skip the trait alias | Scott McMurray | -4/+4
2021-05-06 | Bootstrapping preparation for the library | Scott McMurray | -3/+4
Since just `ops::Try` will need to change meaning.
2021-05-06 | Rollup merge of #84328 - Folyd:stablize_map_into_keys_values, r=m-ou-se | Dylan DPC | -16/+14
Stabilize {HashMap,BTreeMap}::into_{keys,values} I would propose to stabilize `{HashMap,BTreeMap}::into_{keys,values}` (a.k.a. `map_into_keys_values`). Closes #75294.
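For reference, a minimal sketch of the stabilized API, shown on `BTreeMap` (the array-based `From` constructor assumes a toolchain of 1.56 or later):

```rust
use std::collections::BTreeMap;

fn main() {
    let map = BTreeMap::from([(1, "a"), (2, "b")]);

    // into_keys() consumes the map, yielding keys in sorted order.
    let keys: Vec<i32> = map.clone().into_keys().collect();
    assert_eq!(keys, [1, 2]);

    // into_values() likewise consumes the map, yielding the values.
    let values: Vec<&str> = map.into_values().collect();
    assert_eq!(values, ["a", "b"]);
}
```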
2021-05-05 | alloc: Add unstable Cfg feature `no-global_oom_handling` | John Ericson | -27/+322
For certain sorts of systems programming, it's deemed essential that all allocation failures be explicitly handled where they occur. For example, see Linus Torvalds's opinion in [1]. Merely not calling the global panic handlers, or always calling `try_reserve` first (for vectors), is not deemed good enough, because the mere presence of the global OOM handlers burdens static analysis.

One option for these projects would be to skip `alloc` entirely, rolling their own allocation abstractions. But that would, in my opinion, be a real shame. `alloc` already has a few `try_*` methods, and we could easily add more. Features like custom allocator support also demonstrate an existing desire to support diverse use-cases with the same abstractions.

A natural way to add such a feature flag would be a Cargo feature, but there are currently uncertainties around how std library crates' Cargo features may or may not be stable, so to avoid any risk of stabilizing by mistake we are going with a lower-level "raw cfg" token, which cannot be interacted with via Cargo alone. Note also that since there is no notion of "default cfg tokens" outside of Cargo features, we have to invert the condition from `global_oom_handling` to `not(no_global_oom_handling)`. This breaks the monotonicity that would be important for a Cargo feature (i.e. turning on more features should never break compatibility), but it doesn't matter for raw cfg tokens, which are not intended to be "constraint-solved" by Cargo or anything else.

To support this use-case we add the new cfg and put the global OOM handler infrastructure, and everything that depends on it, behind it. By default nothing is changed, but users concerned about global handling can make sure it is disabled and be confident that all OOM handling is local and explicit.

For this first iteration, non-flat collections are outright disabled. `Vec` and `String` don't yet have `try_*` allocation methods, but are kept anyway since they can be OOM-safely created "from parts", and we hope to add those `try_` methods in the future.

[1]: https://lore.kernel.org/lkml/CAHk-=wh_sNLoz84AUUzuqXEsYH35u=8HV3vK-jbRbJ_B-JjGrg@mail.gmail.com/
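The explicit-handling style this entry argues for is what the `try_*` methods provide; a minimal sketch using `Vec::try_reserve` (which was stabilized later, in Rust 1.57):

```rust
use std::collections::TryReserveError;

// Build a vector while surfacing allocation failure to the caller
// instead of invoking the global OOM handler.
fn collect_checked(n: usize) -> Result<Vec<u64>, TryReserveError> {
    let mut v = Vec::new();
    v.try_reserve(n)?; // returns Err(_) instead of aborting on OOM
    v.extend(0..n as u64);
    Ok(v)
}

fn main() {
    let v = collect_checked(4).expect("small allocation should succeed");
    assert_eq!(v, [0, 1, 2, 3]);
}
```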
2021-05-05 | Bump map_into_keys_values stable version to 1.54.0. | Mara Bos | -14/+14
2021-05-03 | Fix stability attributes of byte-to-string specialization | LingMan | -2/+2
2021-05-03 | Auto merge of #84842 - blkerby:null_lowercase, r=joshtriplett | bors | -1/+1
Replace 'NULL' with 'null' This replaces occurrences of "NULL" with "null" in docs, comments, and compiler error/lint messages. This is for the sake of consistency, as the lowercase "null" is already the dominant form in Rust. The all-caps NULL looks like the C macro (or SQL keyword), which seems out of place in a Rust context, given that NULL does not exist in the Rust language or standard library (instead having [`ptr::null()`](https://doc.rust-lang.org/stable/std/ptr/fn.null.html)).
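A small illustration of the Rust-native spelling the docs now use:

```rust
use std::ptr;

fn main() {
    // The Rust spelling is `ptr::null()`; there is no NULL macro.
    let p: *const i32 = ptr::null();
    assert!(p.is_null());

    // References are never null; only raw pointers can be.
    let x = 7;
    let q: *const i32 = &x;
    assert!(!q.is_null());
}
```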
2021-05-02 | Change 'NULL' to 'null' | Brent Kerby | -1/+1
2021-05-02 | Auto merge of #82576 - gilescope:to_string, r=Amanieu | bors | -0/+41
i8 and u8 `to_string()` specialisation (far less asm). Take 2. Around 1/6th of the assembly compared to the version without specialisation. https://godbolt.org/z/bzz8Mq (partially fixes #73533)
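Callers see no API change; the specialised path is reached through the ordinary `ToString` calls:

```rust
fn main() {
    // These now go through a dedicated u8/i8 fast path instead of
    // the generic integer formatting machinery.
    assert_eq!(200u8.to_string(), "200");
    assert_eq!((-128i8).to_string(), "-128");
    assert_eq!(u8::MAX.to_string(), "255");
}
```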
2021-04-28 | Minor grammar tweaks for readability | Ben-Lichtman | -4/+4
2021-04-28 | Stabilize vec_extend_from_within | Amanieu d'Antras | -4/+1
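A minimal sketch of the now-stable method, which copies a range of a vector onto its own end:

```rust
fn main() {
    let mut v = vec![0, 1, 2, 3, 4];

    // Append a copy of the tail starting at index 2 (requires T: Clone).
    v.extend_from_within(2..);
    assert_eq!(v, [0, 1, 2, 3, 4, 2, 3, 4]);

    // Ranges from the front work too.
    v.extend_from_within(..2);
    assert_eq!(v, [0, 1, 2, 3, 4, 2, 3, 4, 0, 1]);
}
```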
2021-04-26 | Auto merge of #84174 - camsteffen:slice-diag, r=Mark-Simulacrum | bors | -1/+0
Remove slice diagnostic item ...because it is unusually placed on an impl and is redundant with a lang item. Depends on rust-lang/rust-clippy#7074 (next clippy sync). ~I expect clippy tests to fail in the meantime.~ Nope, tests passed... CC `@flip1995`
2021-04-25 | get rid of min_const_fn references in library/ and rustdoc | Ralf Jung | -10/+4
2021-04-24 | Auto merge of #84310 - RalfJung:const-fn-feature-flags, r=oli-obk | bors | -1/+2
further split up const_fn feature flag This continues the work on splitting up `const_fn` into separate feature flags: * `const_fn_trait_bound` for `const fn` with trait bounds * `const_fn_unsize` for unsizing coercions in `const fn` (looks like only `dyn` unsizing is still guarded here) I don't know if there are even any things left that `const_fn` guards... at least libcore and liballoc do not need it any more. `@oli-obk` are you currently able to do reviews?
2021-04-24 | Rollup merge of #84453 - notriddle:waker-from-docs, r=cramertj | Yuki Okushi | -0/+6
Document From implementations for Waker and RawWaker CC #51430
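The documented `From` impl converts an `Arc` of any `Wake` implementor into a `Waker`; a minimal self-contained sketch (the `Flag` type is illustrative, not from the PR):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::task::{Wake, Waker};

// A trivial waker that just records that it was woken.
struct Flag(AtomicBool);

impl Wake for Flag {
    fn wake(self: Arc<Self>) {
        self.0.store(true, Ordering::SeqCst);
    }
}

fn main() {
    let flag = Arc::new(Flag(AtomicBool::new(false)));

    // `From<Arc<W>> for Waker` is the impl being documented here.
    let waker = Waker::from(flag.clone());
    waker.wake();

    assert!(flag.0.load(Ordering::SeqCst));
}
```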
2021-04-24 | Rollup merge of #84248 - calebsander:refactor/vec-functions, r=Amanieu | Yuki Okushi | -2/+1
Remove duplicated fn(Box<[T]>) -> Vec<T> `<[T]>::into_vec()` does the same thing as `Vec::from::<Box<[T]>>()`, so they can be implemented in terms of each other. This was the previous implementation of `Vec::from()`, but was changed in #78461. I'm not sure what the rationale was for that change, but it seems preferable to maintain a single implementation.
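The two equivalent conversions the PR deduplicates, as seen from user code:

```rust
fn main() {
    let boxed: Box<[i32]> = vec![1, 2, 3].into_boxed_slice();

    // These are the duplicated pair the PR unifies: both reuse the
    // boxed slice's allocation, so neither copies the elements.
    let v1: Vec<i32> = boxed.clone().into_vec();
    let v2: Vec<i32> = Vec::from(boxed);

    assert_eq!(v1, v2);
    assert_eq!(v1, [1, 2, 3]);
}
```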
2021-04-22 | Document From implementations for Waker and RawWaker | Michael Howell | -0/+6
2021-04-22 | Improve BinaryHeap::retain. | Mara Bos | -32/+53
It now doesn't fully rebuild the heap, but only the parts that are necessary.
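Usage is unchanged by this optimization. Note that `BinaryHeap::retain` itself was still unstable at the time of this commit; it has since been stabilized (Rust 1.66), so the following sketch compiles on a recent stable toolchain:

```rust
use std::collections::BinaryHeap;

fn main() {
    let mut heap: BinaryHeap<i32> = (1..=10).collect();

    // Keep only even elements; after this commit, only the subtrees
    // disturbed by removals are re-heapified, not the whole heap.
    heap.retain(|&x| x % 2 == 0);

    assert_eq!(heap.len(), 5);
    assert_eq!(heap.into_sorted_vec(), [2, 4, 6, 8, 10]);
}
```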
2021-04-21 | Remove duplicated fn(Box<[T]>) -> Vec<T> | Caleb Sander | -2/+1
2021-04-21 | Rollup merge of #84013 - CDirkx:fmt, r=m-ou-se | Mara Bos | -4/+4
Replace all `fmt.pad` with `debug_struct` This replaces any occurrence of: - `f.pad("X")` with `f.debug_struct("X").finish()` - `f.pad("X { .. }")` with `f.debug_struct("X").finish_non_exhaustive()` This is in line with existing formatting code such as https://github.com/rust-lang/rust/blob/125505306744a0a5bb01d62337260a95d9ff8d57/library/std/src/sync/mpsc/mod.rs#L1470-L1475
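The replacement pattern, sketched on a hypothetical `Sender` type (not one of the actual types touched by the PR):

```rust
use std::fmt;

struct Sender;

impl fmt::Debug for Sender {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Instead of f.pad("Sender { .. }"):
        f.debug_struct("Sender").finish_non_exhaustive()
    }
}

fn main() {
    // finish_non_exhaustive() renders the "{ .. }" suffix itself.
    assert_eq!(format!("{:?}", Sender), "Sender { .. }");
}
```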
2021-04-21 | Fix `alloc::test::test_show` | Christiaan Dirkx | -4/+4
2021-04-19 | Stablize {HashMap,BTreeMap}::into_{keys,values} | Folyd | -16/+14
2021-04-18 | separate feature flag for unsizing casts in const fn | Ralf Jung | -1/+1
2021-04-18 | move 'trait bounds on const fn' to separate feature gate | Ralf Jung | -0/+1
2021-04-18 | Slightly change wording and fix typo in vec/mod.rs | Waffle Lapkin | -2/+2
2021-04-16 | Rollup merge of #84145 - vojtechkral:vecdeque-binary-search, r=m-ou-se | Dylan DPC | -3/+69
Address comments for vecdeque_binary_search #78021
2021-04-16 | Auto merge of #84220 - gpluscb:weak_doc, r=jyn514 | bors | -2/+2
Correct outdated documentation for rc::Weak This was overlooked in ~~#50357~~ #51901
2021-04-15 | VecDeque: Improve doc comments in binary search fns | Vojtech Kral | -5/+23
Co-authored-by: Mara Bos <m-ou.se@m-ou.se>
2021-04-15 | VecDeque: Add partition_point() #78021 | Vojtech Kral | -0/+45
2021-04-15 | VecDeque: binary_search_by(): return right away if hit found at back.first() #78021 | Vojtech Kral | -1/+4
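A usage sketch of the `VecDeque` search methods added in this series (stabilized later, in Rust 1.54); they mirror the slice APIs of the same names:

```rust
use std::collections::VecDeque;

fn main() {
    let deque: VecDeque<i32> = VecDeque::from(vec![1, 3, 5, 7, 9]);

    // binary_search works on a sorted deque, like on slices.
    assert_eq!(deque.binary_search(&5), Ok(2));
    assert_eq!(deque.binary_search(&4), Err(2)); // insertion point

    // partition_point returns the index of the first element
    // for which the predicate is false.
    assert_eq!(deque.partition_point(|&x| x < 7), 3);
}
```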
2021-04-15 | Correct outdated rc::Weak::default documentation | MarRue | -2/+2
2021-04-15 | Merge same condition branch in vec spec_extend | Ivan Tham | -4/+2
2021-04-13 | Remove slice diagnostic item | Cameron Steffen | -1/+0
2021-04-13 | Auto merge of #84135 - rust-lang:GuillaumeGomez-patch-1, r=kennytm | bors | -1/+1
Improve code example for length comparison Small fix/improvement: it's much safer to check that you're under the length of an array rather than checking that you're equal to it. That's even more true if you update the length of the array while iterating.
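The point of the fix, sketched: a `<` bound stays correct even when the length changes mid-loop, whereas an `==` stop condition can be stepped over once an element is removed:

```rust
fn main() {
    let mut v = vec![1, 2, 3, 4, 5];
    let mut i = 0;

    // `i < v.len()` is re-checked each round against the *current*
    // length, so removing elements inside the loop is safe. Using
    // `i == v.len()` as the stop test could skip past the end after
    // a removal shrinks the vector.
    while i < v.len() {
        if v[i] % 2 == 0 {
            v.remove(i); // don't advance: the next element shifted down
        } else {
            i += 1;
        }
    }
    assert_eq!(v, [1, 3, 5]);
}
```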
2021-04-12 | Improve code example for length comparison | Guillaume Gomez | -1/+1
2021-04-12 | Stabilize BTree{Map,Set}::retain | Jubilee Young | -4/+2
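A minimal sketch of the stabilized method on `BTreeMap` (the `BTreeSet` variant is analogous):

```rust
use std::collections::BTreeMap;

fn main() {
    let mut map: BTreeMap<i32, i32> = (0..8).map(|x| (x, x * 10)).collect();

    // Keep only entries with even keys; entries are visited in key order.
    map.retain(|&k, _| k % 2 == 0);

    assert_eq!(
        map.into_iter().collect::<Vec<_>>(),
        [(0, 0), (2, 20), (4, 40), (6, 60)]
    );
}
```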
2021-04-10 | fix incorrect from_raw_in doctest | Ralf Jung | -1/+1
2021-04-08 | add TrustedRandomAccess specialization to vec::extend | The8472 | -25/+59
This should do roughly the same as the TrustedLen specialization but result in less IR by using __iterator_get_unchecked instead of iterator.for_each.
2021-04-07 | Rollup merge of #83476 - mystor:rc_mutate_strong_count, r=m-ou-se | Dylan DPC | -0/+67
Add strong_count mutation methods to Rc The corresponding methods were stabilized on `Arc` in #79285 (tracking: #71983). This patch implements and stabilizes identical methods on the `Rc` types as well.
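A sketch of the now-mirrored API on `Rc`; both methods are `unsafe` because they manipulate the reference count directly:

```rust
use std::rc::Rc;

fn main() {
    let five = Rc::new(5);
    assert_eq!(Rc::strong_count(&five), 1);

    // Safety: the pointer comes from a live Rc, and the count is
    // decremented again below, so no reference is leaked or lost.
    unsafe {
        Rc::increment_strong_count(Rc::as_ptr(&five));
        assert_eq!(Rc::strong_count(&five), 2);

        Rc::decrement_strong_count(Rc::as_ptr(&five));
        assert_eq!(Rc::strong_count(&five), 1);
    }
}
```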
2021-04-04 | Auto merge of #83530 - Mark-Simulacrum:bootstrap-bump, r=Mark-Simulacrum | bors | -3/+1
Bump bootstrap to 1.52 beta This includes the standard bump, but also a workaround for new cargo behavior around clearing out the doc directory when the rustdoc version changes.
2021-04-04 | Bump cfgs | Mark Rousskov | -3/+1
2021-04-04 | Rollup merge of #82726 - ssomers:btree_node_rearange, r=Mark-Simulacrum | Dylan DPC | -167/+165
BTree: move blocks around in node.rs Without changing any names or implementation, reorder some members:
- Move down the ones defined long ago on the demised `struct Root`, to below the definition of their current host `struct NodeRef`.
- Move up some defined on `struct NodeRef` that are interspersed with those defined on `struct Handle`.
- Move up the `correct_…` methods squeezed between the two flavours of `push`.
- Move the unchecked static downcasts (`cast_to_…`) after the upcasts (`forget_`) and the (weirdly named) dynamic downcasts (`force`).
r? `@Mark-Simulacrum`
2021-04-04 | Auto merge of #83267 - ssomers:btree_prune_range_search_overlap, r=Mark-Simulacrum | bors | -27/+43
BTree: no longer search arrays twice to check Ord A possible addition to / partial replacement of #83147: no longer linearly search the upper bound of a range in the initial portion of the keys we already know are below the lower bound.
- Should be faster: fewer key comparisons at the cost of some instructions dealing with offsets.
- Makes code a little more complicated.
- No longer detects ill-defined `Ord` implementations, but that wasn't a publicised feature, and was quite incomplete, and was only done in the `range` and `range_mut` methods.
r? `@Mark-Simulacrum`
2021-04-02 | Rollup merge of #83629 - the8472:fix-inplace-panic-on-drop, r=m-ou-se | Dylan DPC | -11/+20
Fix double-drop in `Vec::from_iter(vec.into_iter())` specialization when items drop during panic This fixes the double-drop but it leaves a behavioral difference compared to the default implementation intact: In the default implementation the source and the destination vec are separate objects, so they get dropped separately. Here they share an allocation and the latter only exists as a pointer into the former. So if dropping the former panics then this fix will leak more items than the default implementation would. Is this acceptable or should the specialization also mimic the default implementation's drops-during-panic behavior? Fixes #83618 `@rustbot` label T-libs-impl
2021-03-31 | panic early when TrustedLen indicates a length > usize::MAX | The8472 | -8/+30
2021-03-30 | Auto merge of #83357 - saethlin:vec-reserve-inlining, r=dtolnay | bors | -1/+17
Reduce the impact of Vec::reserve calls that do not cause any allocation

I think a lot of callers expect `Vec::reserve` to be nearly free when no resizing is required, but unfortunately that isn't the case. LLVM makes remarkably poor inlining choices (along the path from `Vec::reserve` to `RawVec::grow_amortized`), so depending on the surrounding context you either get a huge blob of `RawVec`'s resizing logic inlined into some seemingly-unrelated function, or not enough inlining happens and/or the actual check in `needs_to_grow` ends up behind a function call. My goal is to make the codegen for `Vec::reserve` match the mental model that callers seem to have: it's reliably just a `sub cmp ja` if there is already sufficient capacity.

This patch has the following impact on the serde_json benchmarks (https://github.com/serde-rs/json-benchmark/tree/ca3efde8a5b75ff59271539b67452911860248c7, run with `cargo +stage1 run --release -- -n 1024`).

Before:
```
                                    DOM                  STRUCT
======= serde_json ======= parse|stringify ===== parse|stringify ====
data/canada.json           340 MB/s  490 MB/s    630 MB/s  370 MB/s
data/citm_catalog.json     460 MB/s  540 MB/s   1010 MB/s  550 MB/s
data/twitter.json          330 MB/s  840 MB/s    640 MB/s  630 MB/s

======= json-rust ======== parse|stringify ===== parse|stringify ====
data/canada.json           580 MB/s  990 MB/s
data/citm_catalog.json     720 MB/s  660 MB/s
data/twitter.json          570 MB/s  960 MB/s
```

After:
```
                                    DOM                  STRUCT
======= serde_json ======= parse|stringify ===== parse|stringify ====
data/canada.json           330 MB/s  510 MB/s    610 MB/s  380 MB/s
data/citm_catalog.json     450 MB/s  640 MB/s    970 MB/s  830 MB/s
data/twitter.json          330 MB/s  880 MB/s    670 MB/s  960 MB/s

======= json-rust ======== parse|stringify ===== parse|stringify ====
data/canada.json           560 MB/s 1130 MB/s
data/citm_catalog.json     710 MB/s  880 MB/s
data/twitter.json          530 MB/s 1230 MB/s
```

That's approximately a one-third increase in throughput on two of the benchmarks, and no effect on one (the benchmark suite has sufficient jitter that I could pick a run where there are no regressions, so I'm not convinced they're meaningful here). This also produces perf increases on the order of 3-5% in a few other microbenchmarks that I'm tracking. It might be useful to see if this has a cascading effect on inlining choices in some large codebases.

Compiling this simple program demonstrates the change in codegen that causes the perf impact:

```rust
fn main() {
    reserve(&mut Vec::new());
}

#[inline(never)]
fn reserve(v: &mut Vec<u8>) {
    v.reserve(1234);
}
```

Before:
```asm
00000000000069b0 <scratch::reserve>:
    69b0: 53                      push   %rbx
    69b1: 48 83 ec 30             sub    $0x30,%rsp
    69b5: 48 8b 47 08             mov    0x8(%rdi),%rax
    69b9: 48 8b 4f 10             mov    0x10(%rdi),%rcx
    69bd: 48 89 c2                mov    %rax,%rdx
    69c0: 48 29 ca                sub    %rcx,%rdx
    69c3: 48 81 fa d1 04 00 00    cmp    $0x4d1,%rdx
    69ca: 77 73                   ja     6a3f <scratch::reserve+0x8f>
    69cc: 48 81 c1 d2 04 00 00    add    $0x4d2,%rcx
    69d3: 72 75                   jb     6a4a <scratch::reserve+0x9a>
    69d5: 48 89 fb                mov    %rdi,%rbx
    69d8: 48 8d 14 00             lea    (%rax,%rax,1),%rdx
    69dc: 48 39 ca                cmp    %rcx,%rdx
    69df: 48 0f 47 ca             cmova  %rdx,%rcx
    69e3: 48 83 f9 08             cmp    $0x8,%rcx
    69e7: be 08 00 00 00          mov    $0x8,%esi
    69ec: 48 0f 47 f1             cmova  %rcx,%rsi
    69f0: 48 85 c0                test   %rax,%rax
    69f3: 74 17                   je     6a0c <scratch::reserve+0x5c>
    69f5: 48 8b 0b                mov    (%rbx),%rcx
    69f8: 48 89 0c 24             mov    %rcx,(%rsp)
    69fc: 48 89 44 24 08          mov    %rax,0x8(%rsp)
    6a01: 48 c7 44 24 10 01 00    movq   $0x1,0x10(%rsp)
    6a08: 00 00
    6a0a: eb 08                   jmp    6a14 <scratch::reserve+0x64>
    6a0c: 48 c7 04 24 00 00 00    movq   $0x0,(%rsp)
    6a13: 00
    6a14: 48 8d 7c 24 18          lea    0x18(%rsp),%rdi
    6a19: 48 89 e1                mov    %rsp,%rcx
    6a1c: ba 01 00 00 00          mov    $0x1,%edx
    6a21: e8 9a fe ff ff          call   68c0 <alloc::raw_vec::finish_grow>
    6a26: 48 8b 7c 24 20          mov    0x20(%rsp),%rdi
    6a2b: 48 8b 74 24 28          mov    0x28(%rsp),%rsi
    6a30: 48 83 7c 24 18 01       cmpq   $0x1,0x18(%rsp)
    6a36: 74 0d                   je     6a45 <scratch::reserve+0x95>
    6a38: 48 89 3b                mov    %rdi,(%rbx)
    6a3b: 48 89 73 08             mov    %rsi,0x8(%rbx)
    6a3f: 48 83 c4 30             add    $0x30,%rsp
    6a43: 5b                      pop    %rbx
    6a44: c3                      ret
    6a45: 48 85 f6                test   %rsi,%rsi
    6a48: 75 08                   jne    6a52 <scratch::reserve+0xa2>
    6a4a: ff 15 38 c4 03 00       call   *0x3c438(%rip)   # 42e88 <_GLOBAL_OFFSET_TABLE_+0x490>
    6a50: 0f 0b                   ud2
    6a52: ff 15 f0 c4 03 00       call   *0x3c4f0(%rip)   # 42f48 <_GLOBAL_OFFSET_TABLE_+0x550>
    6a58: 0f 0b                   ud2
    6a5a: 66 0f 1f 44 00 00       nopw   0x0(%rax,%rax,1)
```

After:
```asm
0000000000006910 <scratch::reserve>:
    6910: 48 8b 47 08             mov    0x8(%rdi),%rax
    6914: 48 8b 77 10             mov    0x10(%rdi),%rsi
    6918: 48 29 f0                sub    %rsi,%rax
    691b: 48 3d d1 04 00 00       cmp    $0x4d1,%rax
    6921: 77 05                   ja     6928 <scratch::reserve+0x18>
    6923: e9 e8 fe ff ff          jmp    6810 <alloc::raw_vec::RawVec<T,A>::reserve::do_reserve_and_handle>
    6928: c3                      ret
    6929: 0f 1f 80 00 00 00 00    nopl   0x0(%rax)
```
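From user code, the fast path discussed above is observable only as "reserve with spare capacity does not reallocate"; a small sketch:

```rust
fn main() {
    let mut v: Vec<u8> = Vec::with_capacity(2048);

    // Capacity is already sufficient, so this takes the cheap
    // early-return path: no call into RawVec's growth logic.
    let cap_before = v.capacity();
    v.reserve(1234);
    assert_eq!(v.capacity(), cap_before);

    // Only when the request exceeds spare capacity does growth happen.
    v.reserve(4096);
    assert!(v.capacity() >= 4096);
}
```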
2021-03-30 | Rollup merge of #82331 - frol:feat/std-binary-heap-as-slice, r=Amanieu | Dylan DPC | -0/+21
alloc: Added `as_slice` method to `BinaryHeap` collection I initially asked whether this would be a useful addition on https://internals.rust-lang.org/t/should-i-add-as-slice-method-to-binaryheap/13816, and since there were no objections, I went ahead with this PR. > There is [`BinaryHeap::into_vec`](https://doc.rust-lang.org/std/collections/struct.BinaryHeap.html#method.into_vec), but it consumes the value. I wonder if there is an API design limitation that should be taken into account. Implementation-wise, the inner buffer is just a `Vec`, so it is trivial to expose `as_slice` from it. Please guide me through if I need to add tests or something else. UPD: Tracking issue #83659
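For context, a sketch of the read-access surface that was stable at the time of this PR; the new `as_slice` itself landed behind a feature gate (tracked in #83659), so this sketch sticks to `iter` and the `into_vec` method quoted above:

```rust
use std::collections::BinaryHeap;

fn main() {
    let heap = BinaryHeap::from(vec![3, 1, 4, 1, 5]);

    // Non-consuming read access: iterate in arbitrary (heap) order.
    let mut seen: Vec<i32> = heap.iter().copied().collect();
    seen.sort();
    assert_eq!(seen, [1, 1, 3, 4, 5]);

    // into_vec exposes the same inner buffer, but consumes the heap;
    // element order is the internal heap order, not sorted order.
    let raw = heap.into_vec();
    assert_eq!(raw.len(), 5);
}
```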
2021-03-29 | Updated the tracking issue # | Vlad Frolov | -1/+1