| author | bors <bors@rust-lang.org> | 2018-04-13 10:33:51 +0000 |
|---|---|---|
| committer | bors <bors@rust-lang.org> | 2018-04-13 10:33:51 +0000 |
| commit | 99d4886ead646da864cff7963f540a44acd4af05 (patch) | |
| tree | 8c17370e0f443b182d30bba73bbd48cc6a768001 /src/liballoc | |
| parent | f9f9050f507cd14839f868917b5b4fc370eed54b (diff) | |
| parent | c5ffdd787d134c06735a1dc4457515a63bbce5f5 (diff) | |
Auto merge of #49669 - SimonSapin:global-alloc, r=alexcrichton
Add GlobalAlloc trait + tweaks for initial stabilization
This is the outcome of discussion at the Rust All Hands in Berlin. The high-level goal is to stabilize, sooner rather than later, the ability to [change the global allocator](https://github.com/rust-lang/rust/issues/27389), as well as a way to allocate memory without abusing `Vec::with_capacity` + `mem::forget`.
Since we’re not ready to settle every detail of the `Alloc` trait for the purpose of collections that are generic over the allocator type (for example the possibility of a separate trait for deallocation only, and what that would look like exactly), we propose introducing separately **a new `GlobalAlloc` trait**, for use with the `#[global_allocator]` attribute.
We also propose a number of changes to existing APIs. They are batched in this one PR in order to minimize disruption to Nightly users.
The plan for initial stabilization is detailed in the tracking issue https://github.com/rust-lang/rust/issues/49668.
CC @rust-lang/libs, @glandium
## Immediate breaking changes to unstable features
* For pointers to allocated memory, change the pointee type from `u8` to `Opaque`, a new public [extern type](https://github.com/rust-lang/rust/issues/43467). Since extern types are not `Sized`, `<*mut _>::offset` cannot be used without first casting to another pointer type. (We hope that extern types can also be stabilized soon.)
* In the `Alloc` trait, change these pointers to `ptr::NonNull` and change the `AllocErr` type to a zero-size struct. This makes the `Result<ptr::NonNull<Opaque>, AllocErr>` return type pointer-sized. (A short sketch of the reshaped calls follows this list.)
* Instead of a new `Layout`, `realloc` takes only a new size (in addition to the pointer and old `Layout`). Changing the alignment is not supported with `realloc`.
* Change the return type of `Layout::from_size_align` from `Option<Self>` to `Result<Self, LayoutErr>`, with `LayoutErr` a new opaque struct.
* A `static` item registered as the global allocator with the `#[global_allocator]` attribute **must now implement the new `GlobalAlloc` trait** instead of `Alloc`.
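For illustration, here is a minimal sketch of calls against the reshaped (still unstable) `Alloc` API after these changes. It assumes a nightly toolchain with `#![feature(allocator_api)]`, and `demo` is just an illustrative name, not part of this PR:

```rust
#![feature(allocator_api)]

use std::alloc::{Alloc, Global, Layout, Opaque};
use std::ptr::{self, NonNull};

unsafe fn demo() {
    // Layout construction now returns Result<Layout, LayoutErr> instead of Option.
    let layout = Layout::from_size_align(64, 8).unwrap();

    // Alloc::alloc now returns Result<NonNull<Opaque>, AllocErr>.
    let p: NonNull<Opaque> = Global.alloc(layout.clone()).unwrap_or_else(|_| Global.oom());

    // Opaque is an unsized extern type, so cast before doing pointer arithmetic.
    let bytes = p.as_ptr() as *mut u8;
    ptr::write_bytes(bytes, 0, layout.size());

    // realloc takes the old Layout plus only a new size; the alignment cannot change.
    let grown = Global.realloc(p, layout, 128).unwrap_or_else(|_| Global.oom());

    // Deallocate with the layout the block currently has.
    Global.dealloc(grown, Layout::from_size_align(128, 8).unwrap());
}

fn main() {
    unsafe { demo() }
}
```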
## Eventually-breaking changes to unstable features, with a deprecation period
* Rename the respective `heap` modules to `alloc` in the `core`, `alloc`, and `std` crates. (Yes, this does mean that `::alloc::alloc::Alloc::alloc` is a valid path to a trait method if you have `extern crate alloc;`.)
* Rename the `Heap` type to `Global`, since it is the entry point for what’s registered with `#[global_allocator]`.
Old names remain available for now, as deprecated `pub use` reexports.
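As a quick sketch of how the deprecation window looks from user code (assuming the deprecated re-exports are in place as described above, on nightly with `#![feature(allocator_api)]`):

```rust
#![feature(allocator_api)]
#![allow(deprecated)]

// New and old names resolve to the same item during the deprecation period.
use std::alloc::{Global, Heap};

fn main() {
    // `Heap` is now just a deprecated alias for `Global`.
    let _same: Heap = Global;
}
```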
## Backward-compatible changes
* Add a new [extern type](https://github.com/rust-lang/rust/issues/43467) `Opaque`, for use in pointers to allocated memory.
* Add a new `GlobalAlloc` trait shown below. Unlike `Alloc`, it uses bare `*mut Opaque` without `NonNull` or `Result`. NULL in return values indicates an error (of unspecified nature). This is easier to implement on top of `malloc`-like APIs.
* Add impls of `GlobalAlloc` for both the `Global` and `System` types, in addition to existing impls of `Alloc`. This enables calling `GlobalAlloc` methods on the stable channel before `Alloc` is stable. Implementing two traits with identical method names can make some calls ambiguous, but most code is expected to have no more than one of the two traits in scope. Erroneous code like `use std::alloc::Global; #[global_allocator] static A: Global = Global;` (where `Global` is defined to call itself, causing infinite recursion) is not statically prevented by the type system, but we count on it being hard enough to do accidentally and easy enough to diagnose.
```rust
extern {
    pub type Opaque;
}

pub unsafe trait GlobalAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut Opaque;
    unsafe fn dealloc(&self, ptr: *mut Opaque, layout: Layout);
    unsafe fn alloc_zeroed(&self, layout: Layout) -> *mut Opaque {
        // Default impl: self.alloc() and ptr::write_bytes()
    }
    unsafe fn realloc(&self, ptr: *mut Opaque, old_layout: Layout, new_size: usize) -> *mut Opaque {
        // Default impl: self.alloc() and ptr::copy_nonoverlapping() and self.dealloc()
    }
    fn oom(&self) -> ! {
        // intrinsics::abort
    }
    // More methods with default impls may be added in the future
}
```
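For context, a minimal sketch (not part of this PR) of what a `malloc`-backed `#[global_allocator]` could look like with this trait. `MyMallocator` is an illustrative name, the `libc` crate is an assumed dependency, and the feature gates are those of nightly at the time of this change:

```rust
#![feature(global_allocator, allocator_api)]

extern crate libc;

use std::alloc::{GlobalAlloc, Layout, Opaque};

struct MyMallocator;

unsafe impl GlobalAlloc for MyMallocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut Opaque {
        // A null return signals failure; no Result or NonNull at this level.
        // (A real implementation must also honor layout.align(); this sketch
        // assumes malloc's default alignment is sufficient.)
        libc::malloc(layout.size()) as *mut Opaque
    }

    unsafe fn dealloc(&self, ptr: *mut Opaque, _layout: Layout) {
        libc::free(ptr as *mut libc::c_void)
    }
    // alloc_zeroed, realloc, and oom fall back to the default implementations.
}

#[global_allocator]
static A: MyMallocator = MyMallocator;

fn main() {
    // All heap allocation in the program now goes through MyMallocator.
    let v = vec![1u8, 2, 3];
    assert_eq!(v.len(), 3);
}
```

Because `GlobalAlloc` deals in bare `*mut Opaque` and null-on-error, such an implementation is a thin cast layer over `malloc`/`free`, which is the “easier to implement on top of `malloc`-like APIs” property mentioned above.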
## Bikeshed
The tracking issue https://github.com/rust-lang/rust/issues/49668 lists some open questions. If consensus is reached before this PR is merged, changes can be integrated.
Diffstat (limited to 'src/liballoc')
| Mode | File | Lines changed |
|---|---|---|
| -rw-r--r-- | src/liballoc/alloc.rs | 215 |
| -rw-r--r-- | src/liballoc/arc.rs | 23 |
| -rw-r--r-- | src/liballoc/btree/node.rs | 23 |
| -rw-r--r-- | src/liballoc/heap.rs | 285 |
| -rw-r--r-- | src/liballoc/lib.rs | 22 |
| -rw-r--r-- | src/liballoc/raw_vec.rs | 90 |
| -rw-r--r-- | src/liballoc/rc.rs | 25 |
| -rw-r--r-- | src/liballoc/tests/heap.rs | 7 |
| -rw-r--r-- | src/liballoc/tests/string.rs | 12 |
| -rw-r--r-- | src/liballoc/tests/vec.rs | 16 |
| -rw-r--r-- | src/liballoc/tests/vec_deque.rs | 12 |
11 files changed, 385 insertions, 345 deletions
Per-file summary of the diff:

* `src/liballoc/alloc.rs` (new file, 215 lines): the new `alloc` module. It re-exports `core::alloc::*`, declares the `__rust_alloc` / `__rust_dealloc` / `__rust_realloc` / `__rust_alloc_zeroed` / `__rust_oom` entry points (with separate `stage0` signatures), defines the `Global` type plus the deprecated `Heap` alias and constant, and implements both `GlobalAlloc` and `Alloc` for `Global`, the latter delegating to the former via `NonNull::new(...).ok_or(AllocErr)`. The `exchange_malloc` and `box_free` lang items and the `allocate_zeroed` / `alloc_owned_small` tests move here from `heap.rs`.
* `src/liballoc/heap.rs` (285): reduced to a deprecated compatibility shim. It re-exports `Layout`, `AllocErr`, `CannotReallocInPlace`, and `Opaque`, keeps an `Excess` struct, and defines a legacy `Alloc` trait (kept for older `#[global_allocator]` implementations during bootstrap) with a blanket `impl<T: core::alloc::Alloc> Alloc for T` that adapts between `*mut u8` and `NonNull<Opaque>`. The old `__rust_*` extern declarations, the `Heap` struct, the lang items, and the tests are removed.
* `src/liballoc/lib.rs` (22): the crate docs now point at the `alloc` module; `feature(libc)` and `feature(nonnull_cast)` are added; the deprecated `allocator` module and a deprecated `heap` module become re-exports of `alloc` (the old `heap.rs` file is kept only for `stage0`).
* `src/liballoc/arc.rs` (23) and `src/liballoc/rc.rs` (25): switch from `Heap` to `Global`, deallocate via `self.ptr.as_opaque()` with `Layout::for_value(self.ptr.as_ref())`, store the slice-construction guard pointer as `NonNull`, and call `oom()` without an error value.
* `src/liballoc/btree/node.rs` (23): node deallocation goes through `Global.dealloc(node.as_opaque(), Layout::new::<...>())` instead of casting node pointers to `*mut u8` for `Heap`.
* `src/liballoc/raw_vec.rs` (90): the default allocator parameter becomes `Global`; pointers are handled as `NonNull`; `realloc` and `grow_in_place` take a new size instead of a new `Layout`; `Layout::array` errors are mapped to `CapacityOverflow`; `AllocErr` is matched as a unit struct; the `BoundedAlloc` test allocator returns `NonNull<Opaque>`.
* `src/liballoc/tests/heap.rs` (7): uses `std::alloc::{Global, Alloc, Layout}` and checks alignment through `ptr.as_ptr()`.
* `src/liballoc/tests/string.rs` (12), `src/liballoc/tests/vec.rs` (16), and `src/liballoc/tests/vec_deque.rs` (12): `Err(AllocErr(_))` patterns become `Err(AllocErr)` now that `AllocErr` is a zero-size struct.
