path: root/library/alloc/src
Age  Commit message  (Author, -/+ lines)
2024-06-05  Rollup merge of #123168 - joshtriplett:size-of-prelude, r=Amanieu  (Jubilee, -4/+0)
Add `size_of`, `size_of_val`, `align_of`, and `align_of_val` to the prelude

(Note: the PR still needs to be updated to add `align_of` and `align_of_val`, and to remove the second commit with the myriad changes made to appease the lint.)

Many, many projects use `size_of` to get the size of a type. However, it's often just as easy to hardcode a size (e.g. `8` instead of `size_of::<u64>()`). Minimizing friction in the use of `size_of` helps ensure that people use it and makes code more self-documenting.

The name `size_of` is unambiguous: the name alone, without any prefix or path, is self-explanatory and cannot be mistaken for any other functionality. Adding it to the prelude cannot produce any name conflicts, as any local definition will silently shadow the one from the prelude. Thus, we don't need to wait for a new edition prelude to add it.
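As a quick illustration of what this prelude change enables, the sketch below calls `size_of`, `size_of_val`, and `align_of` without importing `std::mem`; it assumes a toolchain where the prelude addition has already landed.

```rust
// No `use std::mem::size_of;` needed once these functions are in the prelude.
fn main() {
    assert_eq!(size_of::<u64>(), 8);

    let values = [1u32, 2, 3];
    assert_eq!(size_of_val(&values), 12);

    println!("u64 alignment on this target: {}", align_of::<u64>());
}
```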
2024-06-04  Make deleting on LinkedList aware of the allocator  (Andrei Damian, -2/+41)
2024-06-04  update tracking issue for `const_binary_heap_new_in`  (coekjan, -1/+1)
2024-06-01  stabilize `const_binary_heap_constructor` & create an unstable feature `const_binary_heap_new_in` for `BinaryHeap::new_in`  (coekjan, -2/+5)
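A minimal sketch of what the stabilized constructor allows, assuming a toolchain where `BinaryHeap::new` is usable in const contexts (`BinaryHeap::new_in` stays behind the unstable `const_binary_heap_new_in` feature mentioned above):

```rust
use std::collections::BinaryHeap;

// `BinaryHeap::new` being a `const fn` lets an empty heap be built at compile time.
const EMPTY: BinaryHeap<u32> = BinaryHeap::new();

fn main() {
    let mut heap = EMPTY; // the const expression is instantiated afresh here
    heap.push(3);
    heap.push(7);
    assert_eq!(heap.pop(), Some(7));
}
```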
2024-05-26  Rollup merge of #125561 - Cyborus04:stabilize-slice-flatten, r=scottmcm  (Matthias Krüger, -3/+1)
Stabilize `slice_flatten`
2024-05-26  Stabilize `slice_flatten`  (Cyborus, -3/+1)
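For reference, a small sketch of the stabilized functionality; note that on stable toolchains the methods ended up spelled `as_flattened`/`as_flattened_mut` (the final name is an assumption beyond what these entries state).

```rust
fn main() {
    let grid: [[u32; 3]; 2] = [[1, 2, 3], [4, 5, 6]];
    // View the nested arrays as one contiguous slice without copying.
    let flat: &[u32] = grid.as_flattened();
    assert_eq!(flat, &[1, 2, 3, 4, 5, 6]);
}
```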
2024-05-25  Rollup merge of #123803 - Sp00ph:shrink_to_fix, r=Mark-Simulacrum  (Matthias Krüger, -1/+65)
Fix `VecDeque::shrink_to` UB when `handle_alloc_error` unwinds. Fixes #123369.

For `VecDeque` it's relatively simple to restore the buffer into a consistent state, so this PR does just that. Note that with its current implementation, `shrink_to` may change the internal arrangement of elements in the buffer, so e.g. `[D, <uninit>, A, B, C]` will become `[<uninit>, A, B, C, D]`, and `[<uninit>, <uninit>, A, B, C]` may become `[B, C, <uninit>, <uninit>, A]` if `shrink_to` unwinds. This shouldn't be an issue, though, as we don't make any guarantees about the stability of the internal buffer arrangement (and this case is impossible to hit on stable anyway).

This PR also includes a test with code adapted from #123369 which fails without the new `shrink_to` code. Does this suffice, or do we maybe need more exhaustive tests like in #108475?

cc `@Amanieu`

`@rustbot` label +T-libs
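A small sketch of the user-facing contract this fix protects: whatever `shrink_to` does to the ring buffer internally, the logical contents and their order are unchanged (the assertions below rely only on documented `VecDeque` behavior).

```rust
use std::collections::VecDeque;

fn main() {
    let mut buf: VecDeque<u32> = VecDeque::with_capacity(16);
    buf.extend([1, 2, 3]);

    buf.shrink_to(4);

    // Capacity stays at least as large as both the length and the requested minimum.
    assert!(buf.capacity() >= 4);
    // The logical order of elements is unaffected by any internal rearrangement.
    assert!(buf.iter().eq(&[1, 2, 3]));
}
```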
2024-05-20  Add the impls for Box<[T]>: IntoIterator  (Michael Goulet, -0/+86)
Co-authored-by: ltdk <usr@ltdk.xyz>
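A short sketch of what the new impls allow, assuming a toolchain where they have landed: a boxed slice can be consumed by value in a `for` loop, yielding owned elements instead of references.

```rust
fn main() {
    let boxed: Box<[String]> = vec!["a".to_string(), "b".to_string()].into_boxed_slice();

    // With `Box<[T]>: IntoIterator`, the loop takes ownership of the box
    // and yields `String`s directly, so no clone is needed.
    let mut owned: Vec<String> = Vec::new();
    for s in boxed {
        owned.push(s);
    }
    assert_eq!(owned, ["a", "b"]);
}
```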
2024-05-20  Rollup merge of #125283 - zachs18:arc-default-shared, r=dtolnay  (Matthias Krüger, -31/+56)
Use a single static for all default slice Arcs.

Also adds debug_asserts in Drop for Weak/Arc that the shared static is not being "dropped"/"deallocated".

As per https://github.com/rust-lang/rust/pull/124640#pullrequestreview-2064962003

r? dtolnay
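To make the observable effect concrete, here is a small sketch (assuming a toolchain with the `Default` impls and this shared-static optimization): empty defaults may or may not share one allocation, so code should not rely on pointer identity either way.

```rust
use std::sync::Arc;

fn main() {
    let a: Arc<str> = Arc::default();
    let b: Arc<str> = Arc::default();

    assert_eq!(&*a, "");
    assert_eq!(&*b, "");

    // With the shared static, both handles may point at the same allocation;
    // either answer is allowed, so don't depend on it.
    println!("defaults share an allocation: {}", Arc::ptr_eq(&a, &b));
}
```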
2024-05-20  Rollup merge of #125093 - zachs18:rc-into-raw-with-allocator-only, r=Mark-Simulacrum  (Matthias Krüger, -10/+107)
Add `fn into_raw_with_allocator` to Rc/Arc/Weak.

Split out from #119761

Add `fn into_raw_with_allocator` for `Rc`/`rc::Weak`[^1]/`Arc`/`sync::Weak`.

* Pairs with `from_raw_in` (which already exists on all 4 types).
* Name matches `Box::into_raw_with_allocator`.
* Associated fns on `Rc`/`Arc`, methods on the `Weak`s.

Future PR/ACP: As a follow-on to this PR, I plan to make a PR/ACP later to move `into_raw(_parts)` from `Container<_, A: Allocator>` to only `Container<_, Global>` (where `Container` = `Vec`/`Box`/`Rc`/`rc::Weak`/`Arc`/`sync::Weak`), so that users of non-`Global` allocators have to explicitly handle the allocator when using `into_raw`-like APIs. The current behaviors of stdlib containers are inconsistent with respect to what happens to the allocator when `into_raw` is called (which does not return the allocator):

| Type | `into_raw` currently callable with | Behavior of `into_raw` |
| --- | --- | --- |
| `Box` | any allocator | allocator is [dropped](https://doc.rust-lang.org/nightly/src/alloc/boxed.rs.html#1060) |
| `Vec` | any allocator | allocator is [forgotten](https://doc.rust-lang.org/nightly/src/alloc/vec/mod.rs.html#884) |
| `Arc`/`Rc`/`Weak` | any allocator | allocator is forgotten: [(Arc)](https://doc.rust-lang.org/src/alloc/sync.rs.html#1487) [(sync::Weak)](https://doc.rust-lang.org/src/alloc/sync.rs.html#2726) [(Rc)](https://doc.rust-lang.org/src/alloc/rc.rs.html#1352) [(rc::Weak)](https://doc.rust-lang.org/src/alloc/rc.rs.html#2993) |

In my opinion, neither implicitly dropping nor implicitly forgetting the allocator is ideal; dropping it could immediately invalidate the returned pointer, and forgetting it could unintentionally leak memory. My (to-be) proposed solution is to just forbid calling `into_raw(_parts)` on containers with non-`Global` allocators, and require calling `into_raw_with_allocator` (or `Vec::into_raw_parts_with_alloc`).

[^1]: Technically, `rc::Weak::into_raw_with_allocator` is not newly added, as it was modified and renamed from `rc::Weak::into_raw_and_alloc`.
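A sketch of the round trip this API enables, on nightly with the unstable `allocator_api` feature (the exact feature gating is an assumption; the entry above only names the functions):

```rust
#![feature(allocator_api)]

use std::alloc::Global;
use std::rc::Rc;

fn main() {
    let rc: Rc<u32, Global> = Rc::new_in(7, Global);

    // Unlike `into_raw`, this hands back the allocator instead of forgetting it.
    let (ptr, alloc) = Rc::into_raw_with_allocator(rc);

    // SAFETY: `ptr` came from `into_raw_with_allocator` with this same allocator.
    let rc = unsafe { Rc::from_raw_in(ptr, alloc) };
    assert_eq!(*rc, 7);
}
```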
2024-05-20  Auto merge of #123878 - jwong101:inplacecollect, r=jhpratt  (bors, -2/+58)
optimize inplace collection of Vec

This PR has the following changes:

1. Use `usize::unchecked_mul` in https://github.com/rust-lang/rust/blob/79424056b05eaa9563d16dfab9b9a0c8f033f220/library/alloc/src/vec/in_place_collect.rs#L262, as LLVM does not know that the operation can't wrap, since that's the size of the original allocation.

Given the following:

```rust
pub struct Foo([usize; 3]);

pub fn unwrap_copy(v: Vec<Foo>) -> Vec<[usize; 3]> {
    v.into_iter().map(|f| f.0).collect()
}
```

Before this commit:

```llvm
define void @unwrap_copy(ptr noalias nocapture noundef writeonly sret([24 x i8]) align 8 dereferenceable(24) %_0, ptr noalias nocapture noundef readonly align 8 dereferenceable(24) %iter) {
start:
  %me.sroa.0.0.copyload.i = load i64, ptr %iter, align 8
  %me.sroa.4.0.self.sroa_idx.i = getelementptr inbounds i8, ptr %iter, i64 8
  %me.sroa.4.0.copyload.i = load ptr, ptr %me.sroa.4.0.self.sroa_idx.i, align 8
  %me.sroa.5.0.self.sroa_idx.i = getelementptr inbounds i8, ptr %iter, i64 16
  %me.sroa.5.0.copyload.i = load i64, ptr %me.sroa.5.0.self.sroa_idx.i, align 8
  %_19.i.idx = mul nsw i64 %me.sroa.5.0.copyload.i, 24
  %0 = udiv i64 %_19.i.idx, 24

  ; Unnecessary calculation
  %_16.i.i = mul i64 %me.sroa.0.0.copyload.i, 24
  %dst_cap.i.i = udiv i64 %_16.i.i, 24

  store i64 %dst_cap.i.i, ptr %_0, align 8
  %1 = getelementptr inbounds i8, ptr %_0, i64 8
  store ptr %me.sroa.4.0.copyload.i, ptr %1, align 8
  %2 = getelementptr inbounds i8, ptr %_0, i64 16
  store i64 %0, ptr %2, align 8
  ret void
}
```

After:

```llvm
define void @unwrap_copy(ptr noalias nocapture noundef writeonly sret([24 x i8]) align 8 dereferenceable(24) %_0, ptr noalias nocapture noundef readonly align 8 dereferenceable(24) %iter) {
start:
  %me.sroa.0.0.copyload.i = load i64, ptr %iter, align 8
  %me.sroa.4.0.self.sroa_idx.i = getelementptr inbounds i8, ptr %iter, i64 8
  %me.sroa.4.0.copyload.i = load ptr, ptr %me.sroa.4.0.self.sroa_idx.i, align 8
  %me.sroa.5.0.self.sroa_idx.i = getelementptr inbounds i8, ptr %iter, i64 16
  %me.sroa.5.0.copyload.i = load i64, ptr %me.sroa.5.0.self.sroa_idx.i, align 8
  %_19.i.idx = mul nsw i64 %me.sroa.5.0.copyload.i, 24
  %0 = udiv i64 %_19.i.idx, 24
  store i64 %me.sroa.0.0.copyload.i, ptr %_0, align 8
  %1 = getelementptr inbounds i8, ptr %_0, i64 8
  store ptr %me.sroa.4.0.copyload.i, ptr %1, align 8
  %2 = getelementptr inbounds i8, ptr %_0, i64 16
  store i64 %0, ptr %2, align 8, !alias.scope !9, !noalias !14
  ret void
}
```

Note that there is still one more `mul`/`udiv` pair that I couldn't get rid of. The root cause is the same issue as https://github.com/rust-lang/rust/issues/121239: the `nuw` gets stripped off of `ptr::sub_ptr`.

2. `Iterator::try_fold` gets called on the underlying iterator in `SpecInPlaceCollect::collect_in_place` whenever it does not implement `TrustedRandomAccess`. For types that impl `Drop`, LLVM currently can't tell that the drop can never occur when using the default `Iterator::try_fold` implementation.

For example, given the following code from #120493:

```rust
#[repr(transparent)]
struct WrappedClone {
    inner: String,
}

#[no_mangle]
pub fn unwrap_clone(list: Vec<WrappedClone>) -> Vec<String> {
    list.into_iter().map(|s| s.inner).collect()
}
```

The asm for the `unwrap_clone` method is currently:

```asm
unwrap_clone:
        push rbp
        push r15
        push r14
        push r13
        push r12
        push rbx
        push rax
        mov rbx, rdi
        mov r12, qword ptr [rsi]
        mov rdi, qword ptr [rsi + 8]
        mov rax, qword ptr [rsi + 16]
        movabs rsi, -6148914691236517205
        mov r14, r12
        test rax, rax
        je .LBB0_10
        lea rcx, [rax + 2*rax]
        lea r14, [r12 + 8*rcx]
        shl rax, 3
        lea rax, [rax + 2*rax]
        xor ecx, ecx
.LBB0_2:
        cmp qword ptr [r12 + rcx], 0
        je .LBB0_4
        add rcx, 24
        cmp rax, rcx
        jne .LBB0_2
        jmp .LBB0_10
.LBB0_4:
        lea rdx, [rax - 24]
        lea r14, [r12 + rcx]
        cmp rdx, rcx
        je .LBB0_10
        mov qword ptr [rsp], rdi
        sub rax, rcx
        add rax, -24
        mul rsi
        mov r15, rdx
        lea rbp, [r12 + rcx]
        add rbp, 32
        shr r15, 4
        mov r13, qword ptr [rip + __rust_dealloc@GOTPCREL]
        jmp .LBB0_6
.LBB0_8:
        add rbp, 24
        dec r15
        je .LBB0_9
.LBB0_6:
        mov rsi, qword ptr [rbp]
        test rsi, rsi
        je .LBB0_8
        mov rdi, qword ptr [rbp - 8]
        mov edx, 1
        call r13
        jmp .LBB0_8
.LBB0_9:
        mov rdi, qword ptr [rsp]
        movabs rsi, -6148914691236517205
.LBB0_10:
        sub r14, r12
        mov rax, r14
        mul rsi
        shr rdx, 4
        mov qword ptr [rbx], r12
        mov qword ptr [rbx + 8], rdi
        mov qword ptr [rbx + 16], rdx
        mov rax, rbx
        add rsp, 8
        pop rbx
        pop r12
        pop r13
        pop r14
        pop r15
        pop rbp
        ret
```

After this PR:

```asm
unwrap_clone:
        mov rax, rdi
        movups xmm0, xmmword ptr [rsi]
        mov rcx, qword ptr [rsi + 16]
        movups xmmword ptr [rdi], xmm0
        mov qword ptr [rdi + 16], rcx
        ret
```

Fixes https://github.com/rust-lang/rust/issues/120493
2024-05-19  Fix typo in assert message  (Zachary S, -1/+1)
2024-05-19  cfg-out unused code under no_global_oom_handling  (Zachary S, -0/+1)
2024-05-19  fmt  (Zachary S, -5/+6)
2024-05-19  Fix stacked borrows violation  (Zachary S, -1/+5)
2024-05-19  Use a single static for all default slice Arcs.  (Zachary S, -29/+48)
Also adds debug_asserts in Drop for Weak/Arc that the shared static is not being "dropped"/"deallocated".
2024-05-19  Auto merge of #124640 - Billy-Sheppard:master, r=dtolnay  (bors, -0/+113)
Fix #124275: Implemented Default for `Arc<str>`

With added implementations:

```
GOOD   Arc<CStr>
BROKEN Arc<OsStr> // removed
GOOD   Rc<str>
GOOD   Rc<CStr>
BROKEN Rc<OsStr> // removed
GOOD   Rc<[T]>
GOOD   Arc<[T]>
```

For discussion, see https://github.com/rust-lang/rust/pull/124367#issuecomment-2091940137.

Key pain points currently:

> I've had a guess at the best locations/feature attrs for them but they might not be correct.

> However I'm unclear how to get the OsStr impl to compile; which file should they go in to avoid the error below? Is it possible, perhaps with some special std rust lib magic?
2024-05-19  Auto merge of #99969 - calebsander:feature/collect-box-str, r=dtolnay  (bors, -4/+54)
alloc: implement FromIterator for Box<str>

`Box<[T]>` implements `FromIterator<T>` using `Vec<T>` + `into_boxed_slice()`. Add analogous `FromIterator` implementations for `Box<str>` matching the current implementations for `String`. Remove the `Global` allocator requirement for `FromIterator<Box<str>>` too.

ACP: https://github.com/rust-lang/libs-team/issues/196
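A brief sketch of what the added impls make possible (assuming a toolchain where they have landed): iterators of characters or string pieces can be collected straight into a `Box<str>`, just as they can into a `String`.

```rust
fn main() {
    // Collect chars directly into a boxed str.
    let from_chars: Box<str> = "hello world".chars().filter(|c| !c.is_whitespace()).collect();
    assert_eq!(&*from_chars, "helloworld");

    // Collect string slices as well, mirroring the `String` impls.
    let from_strs: Box<str> = ["foo", "bar"].into_iter().collect();
    assert_eq!(&*from_strs, "foobar");
}
```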
2024-05-18  use `Result::into_ok` on infallible result.  (Joshua Wong, -4/+3)
2024-05-18  specialize `Iterator::fold` for `vec::IntoIter`  (Joshua Wong, -2/+27)
LLVM currently adds a redundant check for the returned option, in addition to the `self.ptr != self.end` check when using the default `Iterator::fold` method that calls `vec::IntoIter::next` in a loop.
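For context, a sketch of the kind of code this specialization targets; nothing here is new API, it is just a fold over `vec::IntoIter` that the specialized implementation can compile down to a tighter loop.

```rust
// Summing lengths by value goes through `vec::IntoIter::fold`,
// which the specialization can turn into a plain pointer-bump loop.
fn total_len(names: Vec<String>) -> usize {
    names.into_iter().fold(0, |acc, s| acc + s.len())
}

fn main() {
    let names = vec!["ada".to_string(), "grace".to_string()];
    assert_eq!(total_len(names), 8);
}
```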
2024-05-18  optimize in_place_collect with vec::IntoIter::try_fold  (Joshua Wong, -0/+29)
`Iterator::try_fold` gets called on the underlying iterator in `SpecInPlaceCollect::collect_in_place` whenever it does not implement `TrustedRandomAccess`. For types that impl `Drop`, LLVM currently can't tell that the drop can never occur when using the default `Iterator::try_fold` implementation.

For example, the asm for the `unwrap_clone` method is currently:

```
unwrap_clone:
        push rbp
        push r15
        push r14
        push r13
        push r12
        push rbx
        push rax
        mov rbx, rdi
        mov r12, qword ptr [rsi]
        mov rdi, qword ptr [rsi + 8]
        mov rax, qword ptr [rsi + 16]
        movabs rsi, -6148914691236517205
        mov r14, r12
        test rax, rax
        je .LBB0_10
        lea rcx, [rax + 2*rax]
        lea r14, [r12 + 8*rcx]
        shl rax, 3
        lea rax, [rax + 2*rax]
        xor ecx, ecx
.LBB0_2:
        cmp qword ptr [r12 + rcx], 0
        je .LBB0_4
        add rcx, 24
        cmp rax, rcx
        jne .LBB0_2
        jmp .LBB0_10
.LBB0_4:
        lea rdx, [rax - 24]
        lea r14, [r12 + rcx]
        cmp rdx, rcx
        je .LBB0_10
        mov qword ptr [rsp], rdi
        sub rax, rcx
        add rax, -24
        mul rsi
        mov r15, rdx
        lea rbp, [r12 + rcx]
        add rbp, 32
        shr r15, 4
        mov r13, qword ptr [rip + __rust_dealloc@GOTPCREL]
        jmp .LBB0_6
.LBB0_8:
        add rbp, 24
        dec r15
        je .LBB0_9
.LBB0_6:
        mov rsi, qword ptr [rbp]
        test rsi, rsi
        je .LBB0_8
        mov rdi, qword ptr [rbp - 8]
        mov edx, 1
        call r13
        jmp .LBB0_8
.LBB0_9:
        mov rdi, qword ptr [rsp]
        movabs rsi, -6148914691236517205
.LBB0_10:
        sub r14, r12
        mov rax, r14
        mul rsi
        shr rdx, 4
        mov qword ptr [rbx], r12
        mov qword ptr [rbx + 8], rdi
        mov qword ptr [rbx + 16], rdx
        mov rax, rbx
        add rsp, 8
        pop rbx
        pop r12
        pop r13
        pop r14
        pop r15
        pop rbp
        ret
```

After this PR:

```
unwrap_clone:
        mov rax, rdi
        movups xmm0, xmmword ptr [rsi]
        mov rcx, qword ptr [rsi + 16]
        movups xmmword ptr [rdi], xmm0
        mov qword ptr [rdi + 16], rcx
        ret
```

Fixes #120493
2024-05-18  optimize in-place collection of `Vec`  (Joshua Wong, -3/+6)
LLVM does not know that the multiplication never overflows, which causes it to generate unnecessary instructions. Use `usize::unchecked_mul`, so that it can fold the `dst_cap` calculation when `size_of::<I::SRC>() == size_of::<T>()`.

Running:

```
rustc -C llvm-args=-x86-asm-syntax=intel -O src/lib.rs --emit asm
```

```rust
pub struct Foo([usize; 3]);

pub fn unwrap_copy(v: Vec<Foo>) -> Vec<[usize; 3]> {
    v.into_iter().map(|f| f.0).collect()
}
```

Before this commit:

```
define void @unwrap_copy(ptr noalias nocapture noundef writeonly sret([24 x i8]) align 8 dereferenceable(24) %_0, ptr noalias nocapture noundef readonly align 8 dereferenceable(24) %iter) {
start:
  %me.sroa.0.0.copyload.i = load i64, ptr %iter, align 8
  %me.sroa.4.0.self.sroa_idx.i = getelementptr inbounds i8, ptr %iter, i64 8
  %me.sroa.4.0.copyload.i = load ptr, ptr %me.sroa.4.0.self.sroa_idx.i, align 8
  %me.sroa.5.0.self.sroa_idx.i = getelementptr inbounds i8, ptr %iter, i64 16
  %me.sroa.5.0.copyload.i = load i64, ptr %me.sroa.5.0.self.sroa_idx.i, align 8
  %_19.i.idx = mul nsw i64 %me.sroa.5.0.copyload.i, 24
  %0 = udiv i64 %_19.i.idx, 24
  %_16.i.i = mul i64 %me.sroa.0.0.copyload.i, 24
  %dst_cap.i.i = udiv i64 %_16.i.i, 24
  store i64 %dst_cap.i.i, ptr %_0, align 8
  %1 = getelementptr inbounds i8, ptr %_0, i64 8
  store ptr %me.sroa.4.0.copyload.i, ptr %1, align 8
  %2 = getelementptr inbounds i8, ptr %_0, i64 16
  store i64 %0, ptr %2, align 8
  ret void
}
```

After:

```
define void @unwrap_copy(ptr noalias nocapture noundef writeonly sret([24 x i8]) align 8 dereferenceable(24) %_0, ptr noalias nocapture noundef readonly align 8 dereferenceable(24) %iter) {
start:
  %me.sroa.0.0.copyload.i = load i64, ptr %iter, align 8
  %me.sroa.4.0.self.sroa_idx.i = getelementptr inbounds i8, ptr %iter, i64 8
  %me.sroa.4.0.copyload.i = load ptr, ptr %me.sroa.4.0.self.sroa_idx.i, align 8
  %me.sroa.5.0.self.sroa_idx.i = getelementptr inbounds i8, ptr %iter, i64 16
  %me.sroa.5.0.copyload.i = load i64, ptr %me.sroa.5.0.self.sroa_idx.i, align 8
  %_19.i.idx = mul nsw i64 %me.sroa.5.0.copyload.i, 24
  %0 = udiv i64 %_19.i.idx, 24
  store i64 %me.sroa.0.0.copyload.i, ptr %_0, align 8
  %1 = getelementptr inbounds i8, ptr %_0, i64 8
  store ptr %me.sroa.4.0.copyload.i, ptr %1, align 8
  %2 = getelementptr inbounds i8, ptr %_0, i64 16
  store i64 %0, ptr %2, align 8, !alias.scope !9, !noalias !14
  ret void
}
```

Note that there is still one more `mul`/`udiv` pair that I couldn't get rid of. The root cause is the same issue as #121239: the `nuw` gets stripped off of `ptr::sub_ptr`.
2024-05-18  Clarify how String::leak and into_boxed_str differ  (Jon Gjengset, -5/+7)
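To spell out the distinction this doc change is about (based on the documented behavior of both methods): `into_boxed_str` shrinks the allocation to fit and returns an owned `Box<str>`, while `leak` gives up ownership for a `&'static mut str` and does not shrink, so any excess capacity is leaked along with the contents.

```rust
fn main() {
    let mut s = String::with_capacity(100);
    s.push_str("hi");

    // Shrinks to fit, then converts; ownership moves into the Box.
    let boxed: Box<str> = s.clone().into_boxed_str();
    assert_eq!(&*boxed, "hi");

    // Leaks the buffer as-is (excess capacity included) and returns
    // a mutable reference with 'static lifetime.
    let leaked: &'static mut str = s.leak();
    leaked.make_ascii_uppercase();
    assert_eq!(leaked, "HI");
}
```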
2024-05-16  Access alloc field directly in Arc/Rc::into_raw_with_allocator.  (Zachary S, -4/+4)
... since fn allocator doesn't exist yet.
2024-05-16  Fix linkchecker doc errors  (Lukas Bergdoll, -3/+3)
Also includes small doc fixes.
2024-05-16  Move BufGuard impl outside of function  (Lukas Bergdoll, -11/+10)
2024-05-16  Replace sort implementations  (Lukas Bergdoll, -117/+114)
- `slice::sort` -> driftsort https://github.com/Voultapher/sort-research-rs/blob/main/writeup/driftsort_introduction/text.md
- `slice::sort_unstable` -> ipnsort https://github.com/Voultapher/sort-research-rs/blob/main/writeup/ipnsort_introduction/text.md

This replaces the sort implementations with tailor-made ones that strike a balance of run-time, compile-time, and binary size, yielding run-time and compile-time improvements. Binary size regresses for `slice::sort` while improving for `slice::sort_unstable`. All this while upholding the existing soft and hard safety guarantees, and even extending the soft guarantees: strict weak ordering violations are detected with a high chance and reported to users via a panic.

In addition, the implementation of `select_nth_unstable` is also adapted, as it uses `slice::sort_unstable` internals.
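The user-facing API is unchanged; the sketch below only illustrates the extended soft guarantee mentioned above, namely that a comparator violating strict weak ordering may now be detected and reported with a panic instead of silently producing a bogus order (whether a given run actually panics is not guaranteed).

```rust
use std::cmp::Ordering;

fn main() {
    // Well-behaved comparator: sorts as before, just with the new implementation.
    let mut v: Vec<u32> = (0..64).rev().collect();
    v.sort_unstable();
    assert!(v.windows(2).all(|w| w[0] <= w[1]));

    // A comparator that never returns Less is not a strict weak ordering;
    // the new implementation may catch this and panic rather than hand back
    // an arbitrarily shuffled result.
    let result = std::panic::catch_unwind(|| {
        let mut v: Vec<u32> = (0..64).rev().collect();
        v.sort_unstable_by(|_, _| Ordering::Greater);
        v
    });
    println!("violation detected: {}", result.is_err());
}
```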
2024-05-13  Add fn into_raw_with_allocator to Rc/Arc/Weak.  (Zachary S, -10/+107)
2024-05-13  Add `size_of`, `size_of_val`, `align_of`, and `align_of_val` to the prelude  (Josh Triplett, -4/+0)
Many, many projects use `size_of` to get the size of a type. However, it's often just as easy to hardcode a size (e.g. `8` instead of `size_of::<u64>()`). Minimizing friction in the use of `size_of` helps ensure that people use it and makes code more self-documenting.

The name `size_of` is unambiguous: the name alone, without any prefix or path, is self-explanatory and cannot be mistaken for any other functionality. Adding it to the prelude cannot produce any name conflicts, as any local definition will silently shadow the one from the prelude. Thus, we don't need to wait for a new edition prelude to add it.

Add `size_of_val`, `align_of`, and `align_of_val` as well, with similar justification: widely useful, self-explanatory, not mistakable for anything else, and they won't produce conflicts.
2024-05-12  Use shared statics for the ArcInner for Arc<str, CStr>::default, and for Arc<[T]>::default where alignof(T) <= 16.  (Zachary S, -14/+51)
2024-05-12  Add note about possible allocation-sharing to Arc/Rc<str/[T]/CStr>::default.  (Zachary S, -0/+12)
2024-05-12  added Default impls  (Billy Sheppard, -0/+64)
- reorganised attrs
- removed OsStr impls
- added backticks
2024-05-12  Auto merge of #125012 - RalfJung:format-error, r=Mark-Simulacrum,workingjubilee  (bors, -1/+3)
io::Write::write_fmt: panic if the formatter fails when the stream does not fail

Follow-up to https://github.com/rust-lang/rust/pull/124954
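A sketch of the behavior change described above (assuming a toolchain that includes it): if a `Display` impl reports `fmt::Error` even though the underlying `io::Write` never failed, `write!` now panics instead of surfacing a made-up I/O error.

```rust
use std::fmt;
use std::io::Write;

struct AlwaysFails;

impl fmt::Display for AlwaysFails {
    fn fmt(&self, _f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Misbehaving impl: reports failure although it never wrote anything
        // and the underlying stream is fine.
        Err(fmt::Error)
    }
}

fn main() {
    let mut sink: Vec<u8> = Vec::new(); // an io::Write that cannot fail
    // With the change above, this panics instead of returning a bogus io::Error.
    let _ = write!(sink, "{}", AlwaysFails);
}
```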
2024-05-11  Rollup merge of #124981 - zachs18:rc-allocator-generalize-1, r=Mark-Simulacrum  (Matthias Krüger, -32/+33)
Relax allocator requirements on some Rc/Arc APIs.

Split out from #119761

* Remove `A: Clone` bound from `Rc::assume_init`(s), `Rc::downcast`, and `Rc::downcast_unchecked` (`Arc` methods were already relaxed by #120445)
* Make `From<Rc<[T; N]>> for Rc<[T]>` allocator-aware (`Arc`'s already is).
* Remove `A: Clone` from `Rc/Arc::unwrap_or_clone`

Internal changes:

* Made `Arc::internal_into_inner_with_allocator` method into `Arc::into_inner_with_allocator` associated fn.
* Add private `Rc::into_inner_with_allocator` (to match Arc), so other fns don't have to juggle `ManuallyDrop`.
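One of the touched APIs, `Rc::unwrap_or_clone`, is already available on stable with the default allocator; a small sketch of what it does (the relaxation itself only matters for custom-allocator users on nightly):

```rust
use std::rc::Rc;

fn main() {
    let a = Rc::new(String::from("hello"));
    let b = Rc::clone(&a);

    // Two strong references: the inner String has to be cloned out.
    let s1 = Rc::unwrap_or_clone(a);
    // Only one reference left: the String is moved out without cloning.
    let s2 = Rc::unwrap_or_clone(b);

    assert_eq!(s1, "hello");
    assert_eq!(s2, "hello");
}
```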
2024-05-11  io::Write::write_fmt: panic if the formatter fails when the stream does not fail  (Ralf Jung, -1/+3)
2024-05-10  Relax A: Clone requirement on Rc/Arc::unwrap_or_clone.  (Zachary S, -0/+4)
2024-05-10  Relax allocator requirements on some Rc APIs.  (Zachary S, -32/+29)
* Remove A: Clone bound from Rc::assume_init, Rc::downcast, and Rc::downcast_unchecked.
* Make From<Rc<[T; N]>> for Rc<[T]> allocator-aware.

Internal changes:

* Made Arc::internal_into_inner_with_allocator method into Arc::into_inner_with_allocator associated fn.
* Add private Rc::into_inner_with_allocator (to match Arc), so other fns don't have to juggle ManuallyDrop.
2024-05-10  Add fn allocator method to rc/sync::Weak. Relax Rc<T>/Arc<T>::allocator to allow unsized T.  (Zachary S, -20/+36)
2024-05-09  Document proper usage of `fmt::Error` and `fmt()`'s `Result`.  (Kevin Reid, -1/+1)
Documentation of these properties previously existed in a lone paragraph in the `fmt` module's documentation: <https://doc.rust-lang.org/1.78.0/std/fmt/index.html#formatting-traits>

However, users looking to implement a formatting trait won't necessarily look there. Therefore, let's add the critical information (that formatting per se is infallible) to all the involved items.
2024-05-08  fix #124714 str.to_lowercase sigma handling  (Marcondiro, -4/+6)
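A sketch of the Unicode rule involved (written with escapes for clarity): capital sigma lowercases to the final form `ς` (U+03C2) at the end of a word and to `σ` (U+03C3) elsewhere, which is the casing logic this fix adjusts.

```rust
fn main() {
    // "ΟΔΟΣ" -> "οδος" with a final sigma as the last character.
    let lower = "\u{039F}\u{0394}\u{039F}\u{03A3}".to_lowercase();
    assert_eq!(lower, "\u{03BF}\u{03B4}\u{03BF}\u{03C2}");

    // A sigma at the start of a word lowercases to the ordinary form: "ΣΟΣ" -> "σος".
    let mixed = "\u{03A3}\u{039F}\u{03A3}".to_lowercase();
    assert_eq!(mixed, "\u{03C3}\u{03BF}\u{03C2}");
}
```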
2024-05-07  Move `test_shrink_to_unwind` to its own file.  (Markus Everling, -55/+1)
This way, no other test can be tripped up by `test_shrink_to_unwind` changing the alloc error hook.
2024-05-07  Fix `VecDeque::shrink_to` UB when `handle_alloc_error` unwinds.  (Markus Everling, -2/+120)
Luckily it's comparatively simple to just restore the `VecDeque` into a valid state on unwinds.
2024-05-05  alloc: implement FromIterator for Box<str>  (Caleb Sander, -4/+54)
Box<[T]> implements FromIterator<T> using Vec<T> + into_boxed_slice(). Add analogous FromIterator implementations for Box<str> matching the current implementations for String. Remove the Global allocator requirement for FromIterator<Box<str>> too.
2024-05-05  Rollup merge of #124749 - RossSmyth:stable_range, r=davidtwco  (Guillaume Gomez, -1/+1)
Stabilize exclusive_range_pattern (v2)

This PR is identical to #124459, which was approved and merged but then removed from master by a force-push due to a [CI bug](https://rust-lang.zulipchat.com/#narrow/stream/242791-t-infra/topic/ci.20broken.3F).

r? ghost

Original PR description:

---

Stabilization report: https://github.com/rust-lang/rust/issues/37854#issuecomment-1842398130
FCP: https://github.com/rust-lang/rust/issues/37854#issuecomment-1872520294

Stabilization was blocked by a lint that was merged here: #118879

Documentation PR is here: rust-lang/reference#1484

`@rustbot` label +F-exclusive_range_pattern +T-lang
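A short sketch of the stabilized feature (assuming a toolchain where the stabilization has shipped): half-open ranges can now be used directly in patterns.

```rust
fn classify(n: u32) -> &'static str {
    match n {
        0..10 => "single digit",
        10..100 => "double digits",
        _ => "three or more digits",
    }
}

fn main() {
    assert_eq!(classify(7), "single digit");
    assert_eq!(classify(42), "double digits");
    assert_eq!(classify(1000), "three or more digits");
}
```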
2024-05-03  Rollup merge of #124593 - GKFX:cstr-literals-in-api-docs, r=workingjubilee  (Matthias Krüger, -9/+5)
Describe and use CStr literals in CStr and CString docs

Mention CStr literals in the description of both types, and use them in some of the code samples for CStr. This is intended to make C string literals more discoverable.

Additionally, I don't think the orange "This example is not tested" warnings are very encouraging, so I have made the examples on `CStr` build.
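The literals being documented look like this (C string literals are part of the language as of Rust 1.77; the example just restates their basic use):

```rust
use std::ffi::CStr;

fn main() {
    // A C string literal produces a &'static CStr with a trailing NUL.
    let greeting: &'static CStr = c"hello";
    assert_eq!(greeting.to_bytes_with_nul(), b"hello\0");
    assert_eq!(greeting.to_str(), Ok("hello"));
}
```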
2024-05-03  Rollup merge of #124441 - bravequickcleverfibreyarn:string.rs, r=Amanieu  (Matthias Krüger, -1/+1)
String.truncate comment microfix (greater or equal)

String.truncate calls Vec.truncate in turn, and that one's documentation states "is greater or equal to". Besides, it matches common sense.
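The doc wording being fixed describes this behavior; a tiny sketch confirming it (standard `String::truncate` semantics, nothing new):

```rust
fn main() {
    let mut s = String::from("abc");

    // `new_len` greater than or equal to the current length: no effect.
    s.truncate(3);
    assert_eq!(s, "abc");
    s.truncate(10);
    assert_eq!(s, "abc");

    // Shorter `new_len`: the string is cut down.
    s.truncate(2);
    assert_eq!(s, "ab");
}
```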
2024-05-03  Rollup merge of #123480 - Nadrieril:impl-all-derefpures, r=compiler-errors  (Matthias Krüger, -1/+4)
deref patterns: impl `DerefPure` for more std types

Context: [deref patterns](https://github.com/rust-lang/rust/issues/87121). The requirements of `DerefPure` aren't precise yet, but these types unambiguously satisfy them.

Interestingly, a hypothetical `impl DerefMut for Cow` that does a `Clone` would *not* be eligible for `DerefPure` if we allow mixing deref patterns with normal patterns. If the following is exhaustive, then the `DerefMut` would cause UB:

```rust
match &mut Cow::Borrowed(&()) {
    Cow::Owned(_) => ...,       // Doesn't match
    deref!(_x) if false => ..., // Causes the variant to switch to `Owned`
    Cow::Borrowed(_) => ...,    // Doesn't match
    // We reach unreachable
}
```
2024-05-02  Stabilize exclusive_range  (Ross Smyth, -1/+1)
2024-05-01  Step bootstrap cfgs  (Mark Rousskov, -3/+1)
2024-05-01  Replace version placeholders for 1.79  (Mark Rousskov, -1/+1)