82 files changed, 968 insertions, 636 deletions
diff --git a/.travis.yml b/.travis.yml index dc955bc2f2b..10545b90974 100644 --- a/.travis.yml +++ b/.travis.yml @@ -20,8 +20,7 @@ sudo: false before_script: - ./configure --enable-ccache script: - - make tidy - - make rustc-stage1 -j4 + - make tidy check -j4 env: - CXX=/usr/bin/g++-4.7 diff --git a/RELEASES.md b/RELEASES.md index db1c7380a78..0c495595511 100644 --- a/RELEASES.md +++ b/RELEASES.md @@ -16,6 +16,12 @@ Highlights jobs). It's not enabled by default, but will be "in the near future". It can be activated with the `-C codegen-units=N` flag to `rustc`. +* This is the first release with [experimental support for linking + with the MSVC linker and lib C on Windows (instead of using the GNU + variants via MinGW)][win]. It is yet recommended only for the most + intrepid Rusticians. +* Benchmark compilations are showing a 30% improvement in + bootstrapping over 1.1. Breaking Changes ---------------- @@ -31,6 +37,10 @@ Breaking Changes * [The `#[packed]` attribute is no longer silently accepted by the compiler][packed]. This attribute did nothing and code that mentioned it likely did not work as intended. +* Associated type defaults are [now behind the + `associated_type_defaults` feature gate][ad]. In 1.1 associated type + defaults *did not work*, but could be mentioned syntactically. As + such this breakage has minimal impact. Language -------- @@ -46,12 +56,11 @@ Libraries `LinkedList`, `VecDeque`, `EnumSet`, `BinaryHeap`, `VecMap`, `BTreeSet` and `BTreeMap`. [RFC][extend-rfc]. * The [`iter::once`] function returns an iterator that yields a single - element. -* The [`iter::empty`] function returns an iterator that yields no + element, and [`iter::empty`] returns an iterator that yields no elements. * The [`matches`] and [`rmatches`] methods on `str` return iterators over substring matches. -* [`Cell`] and [`RefCell`] both implement [`Eq`]. +* [`Cell`] and [`RefCell`] both implement `Eq`. * A number of methods for wrapping arithmetic are added to the integral types, [`wrapping_div`], [`wrapping_rem`], [`wrapping_neg`], [`wrapping_shl`], [`wrapping_shr`]. These are in @@ -144,6 +153,8 @@ Misc [dst]: https://github.com/rust-lang/rfcs/blob/master/text/0982-dst-coercion.md [parcodegen]: https://github.com/rust-lang/rust/pull/26018 [packed]: https://github.com/rust-lang/rust/pull/25541 +[ad]: https://github.com/rust-lang/rust/pull/27382 +[win]: https://github.com/rust-lang/rust/pull/25350 Version 1.1.0 (June 2015) ========================= diff --git a/src/doc/reference.md b/src/doc/reference.md index 9528871a774..5988d62bd79 100644 --- a/src/doc/reference.md +++ b/src/doc/reference.md @@ -1199,8 +1199,8 @@ An example of an `enum` item and its use: ``` enum Animal { - Dog, - Cat + Dog, + Cat, } let mut a: Animal = Animal::Dog; diff --git a/src/doc/tarpl/README.md b/src/doc/tarpl/README.md index 0b627737138..e4a46827f46 100644 --- a/src/doc/tarpl/README.md +++ b/src/doc/tarpl/README.md @@ -2,38 +2,33 @@ # NOTE: This is a draft document, and may contain serious errors -So you've played around with Rust a bit. You've written a few simple programs and -you think you grok the basics. Maybe you've even read through -*[The Rust Programming Language][trpl]*. Now you want to get neck-deep in all the +So you've played around with Rust a bit. You've written a few simple programs +and you think you grok the basics. Maybe you've even read through *[The Rust +Programming Language][trpl]* (TRPL). Now you want to get neck-deep in all the nitty-gritty details of the language. 
You want to know those weird corner-cases. -You want to know what the heck `unsafe` really means, and how to properly use it. -This is the book for you. +You want to know what the heck `unsafe` really means, and how to properly use +it. This is the book for you. -To be clear, this book goes into *serious* detail. We're going to dig into +To be clear, this book goes into serious detail. We're going to dig into exception-safety and pointer aliasing. We're going to talk about memory models. We're even going to do some type-theory. This is stuff that you -absolutely *don't* need to know to write fast and safe Rust programs. +absolutely don't need to know to write fast and safe Rust programs. You could probably close this book *right now* and still have a productive and happy career in Rust. -However if you intend to write unsafe code -- or just *really* want to dig into -the guts of the language -- this book contains *invaluable* information. +However if you intend to write unsafe code -- or just really want to dig into +the guts of the language -- this book contains invaluable information. -Unlike *The Rust Programming Language* we *will* be assuming considerable prior -knowledge. In particular, you should be comfortable with: +Unlike TRPL we will be assuming considerable prior knowledge. In particular, you +should be comfortable with basic systems programming and basic Rust. If you +don't feel comfortable with these topics, you should consider [reading +TRPL][trpl], though we will not be assuming that you have. You can skip +straight to this book if you want; just know that we won't be explaining +everything from the ground up. -* Basic Systems Programming: - * Pointers - * [The stack and heap][] - * The memory hierarchy (caches) - * Threads - -* [Basic Rust][] - -Due to the nature of advanced Rust programming, we will be spending a lot of time -talking about *safety* and *guarantees*. In particular, a significant portion of -the book will be dedicated to correctly writing and understanding Unsafe Rust. +Due to the nature of advanced Rust programming, we will be spending a lot of +time talking about *safety* and *guarantees*. In particular, a significant +portion of the book will be dedicated to correctly writing and understanding +Unsafe Rust. [trpl]: ../book/ -[The stack and heap]: ../book/the-stack-and-the-heap.html -[Basic Rust]: ../book/syntax-and-semantics.html diff --git a/src/doc/tarpl/SUMMARY.md b/src/doc/tarpl/SUMMARY.md index aeab8fc7276..7d4ef9c2514 100644 --- a/src/doc/tarpl/SUMMARY.md +++ b/src/doc/tarpl/SUMMARY.md @@ -10,7 +10,7 @@ * [Ownership](ownership.md) * [References](references.md) * [Lifetimes](lifetimes.md) - * [Limits of lifetimes](lifetime-mismatch.md) + * [Limits of Lifetimes](lifetime-mismatch.md) * [Lifetime Elision](lifetime-elision.md) * [Unbounded Lifetimes](unbounded-lifetimes.md) * [Higher-Rank Trait Bounds](hrtb.md) diff --git a/src/doc/tarpl/atomics.md b/src/doc/tarpl/atomics.md index 87378da7c52..2d567e39f8f 100644 --- a/src/doc/tarpl/atomics.md +++ b/src/doc/tarpl/atomics.md @@ -17,7 +17,7 @@ face. The C11 memory model is fundamentally about trying to bridge the gap between the semantics we want, the optimizations compilers want, and the inconsistent chaos our hardware wants. *We* would like to just write programs and have them do -exactly what we said but, you know, *fast*. Wouldn't that be great? +exactly what we said but, you know, fast. Wouldn't that be great? 
@@ -35,20 +35,20 @@ y = 3; x = 2; ``` -The compiler may conclude that it would *really* be best if your program did +The compiler may conclude that it would be best if your program did ```rust,ignore x = 2; y = 3; ``` -This has inverted the order of events *and* completely eliminated one event. +This has inverted the order of events and completely eliminated one event. From a single-threaded perspective this is completely unobservable: after all the statements have executed we are in exactly the same state. But if our -program is multi-threaded, we may have been relying on `x` to *actually* be -assigned to 1 before `y` was assigned. We would *really* like the compiler to be +program is multi-threaded, we may have been relying on `x` to actually be +assigned to 1 before `y` was assigned. We would like the compiler to be able to make these kinds of optimizations, because they can seriously improve -performance. On the other hand, we'd really like to be able to depend on our +performance. On the other hand, we'd also like to be able to depend on our program *doing the thing we said*. @@ -57,15 +57,15 @@ program *doing the thing we said*. # Hardware Reordering On the other hand, even if the compiler totally understood what we wanted and -respected our wishes, our *hardware* might instead get us in trouble. Trouble +respected our wishes, our hardware might instead get us in trouble. Trouble comes from CPUs in the form of memory hierarchies. There is indeed a global shared memory space somewhere in your hardware, but from the perspective of each CPU core it is *so very far away* and *so very slow*. Each CPU would rather work -with its local cache of the data and only go through all the *anguish* of -talking to shared memory *only* when it doesn't actually have that memory in +with its local cache of the data and only go through all the anguish of +talking to shared memory only when it doesn't actually have that memory in cache. -After all, that's the whole *point* of the cache, right? If every read from the +After all, that's the whole point of the cache, right? If every read from the cache had to run back to shared memory to double check that it hadn't changed, what would the point be? The end result is that the hardware doesn't guarantee that events that occur in the same order on *one* thread, occur in the same @@ -99,13 +99,13 @@ provides weak ordering guarantees. This has two consequences for concurrent programming: * Asking for stronger guarantees on strongly-ordered hardware may be cheap or - even *free* because they already provide strong guarantees unconditionally. + even free because they already provide strong guarantees unconditionally. Weaker guarantees may only yield performance wins on weakly-ordered hardware. -* Asking for guarantees that are *too* weak on strongly-ordered hardware is +* Asking for guarantees that are too weak on strongly-ordered hardware is more likely to *happen* to work, even though your program is strictly - incorrect. If possible, concurrent algorithms should be tested on weakly- - ordered hardware. + incorrect. If possible, concurrent algorithms should be tested on + weakly-ordered hardware. @@ -115,10 +115,10 @@ programming: The C11 memory model attempts to bridge the gap by allowing us to talk about the *causality* of our program. Generally, this is by establishing a *happens -before* relationships between parts of the program and the threads that are +before* relationship between parts of the program and the threads that are running them. 
This gives the hardware and compiler room to optimize the program more aggressively where a strict happens-before relationship isn't established, -but forces them to be more careful where one *is* established. The way we +but forces them to be more careful where one is established. The way we communicate these relationships are through *data accesses* and *atomic accesses*. @@ -130,8 +130,10 @@ propagate the changes made in data accesses to other threads as lazily and inconsistently as it wants. Mostly critically, data accesses are how data races happen. Data accesses are very friendly to the hardware and compiler, but as we've seen they offer *awful* semantics to try to write synchronized code with. -Actually, that's too weak. *It is literally impossible to write correct -synchronized code using only data accesses*. +Actually, that's too weak. + +**It is literally impossible to write correct synchronized code using only data +accesses.** Atomic accesses are how we tell the hardware and compiler that our program is multi-threaded. Each atomic access can be marked with an *ordering* that @@ -141,7 +143,10 @@ they *can't* do. For the compiler, this largely revolves around re-ordering of instructions. For the hardware, this largely revolves around how writes are propagated to other threads. The set of orderings Rust exposes are: -* Sequentially Consistent (SeqCst) Release Acquire Relaxed +* Sequentially Consistent (SeqCst) +* Release +* Acquire +* Relaxed (Note: We explicitly do not expose the C11 *consume* ordering) @@ -154,13 +159,13 @@ synchronize" Sequentially Consistent is the most powerful of all, implying the restrictions of all other orderings. Intuitively, a sequentially consistent operation -*cannot* be reordered: all accesses on one thread that happen before and after a -SeqCst access *stay* before and after it. A data-race-free program that uses +cannot be reordered: all accesses on one thread that happen before and after a +SeqCst access stay before and after it. A data-race-free program that uses only sequentially consistent atomics and data accesses has the very nice property that there is a single global execution of the program's instructions that all threads agree on. This execution is also particularly nice to reason about: it's just an interleaving of each thread's individual executions. This -*does not* hold if you start using the weaker atomic orderings. +does not hold if you start using the weaker atomic orderings. The relative developer-friendliness of sequential consistency doesn't come for free. Even on strongly-ordered platforms sequential consistency involves @@ -170,8 +175,8 @@ In practice, sequential consistency is rarely necessary for program correctness. However sequential consistency is definitely the right choice if you're not confident about the other memory orders. Having your program run a bit slower than it needs to is certainly better than it running incorrectly! It's also -*mechanically* trivial to downgrade atomic operations to have a weaker -consistency later on. Just change `SeqCst` to e.g. `Relaxed` and you're done! Of +mechanically trivial to downgrade atomic operations to have a weaker +consistency later on. Just change `SeqCst` to `Relaxed` and you're done! Of course, proving that this transformation is *correct* is a whole other matter. @@ -183,15 +188,15 @@ Acquire and Release are largely intended to be paired. 
Their names hint at their use case: they're perfectly suited for acquiring and releasing locks, and ensuring that critical sections don't overlap. -Intuitively, an acquire access ensures that every access after it *stays* after +Intuitively, an acquire access ensures that every access after it stays after it. However operations that occur before an acquire are free to be reordered to occur after it. Similarly, a release access ensures that every access before it -*stays* before it. However operations that occur after a release are free to be +stays before it. However operations that occur after a release are free to be reordered to occur before it. When thread A releases a location in memory and then thread B subsequently acquires *the same* location in memory, causality is established. Every write -that happened *before* A's release will be observed by B *after* its release. +that happened before A's release will be observed by B after its release. However no causality is established with any other threads. Similarly, no causality is established if A and B access *different* locations in memory. @@ -230,7 +235,7 @@ weakly-ordered platforms. # Relaxed Relaxed accesses are the absolute weakest. They can be freely re-ordered and -provide no happens-before relationship. Still, relaxed operations *are* still +provide no happens-before relationship. Still, relaxed operations are still atomic. That is, they don't count as data accesses and any read-modify-write operations done to them occur atomically. Relaxed operations are appropriate for things that you definitely want to happen, but don't particularly otherwise care diff --git a/src/doc/tarpl/borrow-splitting.md b/src/doc/tarpl/borrow-splitting.md index 123e2baf8fa..cc5bc8a602d 100644 --- a/src/doc/tarpl/borrow-splitting.md +++ b/src/doc/tarpl/borrow-splitting.md @@ -2,7 +2,7 @@ The mutual exclusion property of mutable references can be very limiting when working with a composite structure. The borrow checker understands some basic -stuff, but will fall over pretty easily. It *does* understand structs +stuff, but will fall over pretty easily. It does understand structs sufficiently to know that it's possible to borrow disjoint fields of a struct simultaneously. So this works today: @@ -27,19 +27,27 @@ However borrowck doesn't understand arrays or slices in any way, so this doesn't work: ```rust,ignore -let x = [1, 2, 3]; +let mut x = [1, 2, 3]; let a = &mut x[0]; let b = &mut x[1]; println!("{} {}", a, b); ``` ```text -<anon>:3:18: 3:22 error: cannot borrow immutable indexed content `x[..]` as mutable -<anon>:3 let a = &mut x[0]; - ^~~~ -<anon>:4:18: 4:22 error: cannot borrow immutable indexed content `x[..]` as mutable -<anon>:4 let b = &mut x[1]; - ^~~~ +<anon>:4:14: 4:18 error: cannot borrow `x[..]` as mutable more than once at a time +<anon>:4 let b = &mut x[1]; + ^~~~ +<anon>:3:14: 3:18 note: previous borrow of `x[..]` occurs here; the mutable borrow prevents subsequent moves, borrows, or modification of `x[..]` until the borrow ends +<anon>:3 let a = &mut x[0]; + ^~~~ +<anon>:6:2: 6:2 note: previous borrow ends here +<anon>:1 fn main() { +<anon>:2 let mut x = [1, 2, 3]; +<anon>:3 let a = &mut x[0]; +<anon>:4 let b = &mut x[1]; +<anon>:5 println!("{} {}", a, b); +<anon>:6 } + ^ error: aborting due to 2 previous errors ``` @@ -50,7 +58,7 @@ to the same value. In order to "teach" borrowck that what we're doing is ok, we need to drop down to unsafe code. 
For instance, mutable slices expose a `split_at_mut` function -that consumes the slice and returns *two* mutable slices. One for everything to +that consumes the slice and returns two mutable slices. One for everything to the left of the index, and one for everything to the right. Intuitively we know this is safe because the slices don't overlap, and therefore alias. However the implementation requires some unsafety: @@ -93,10 +101,10 @@ completely incompatible with this API, as it would produce multiple mutable references to the same object! However it actually *does* work, exactly because iterators are one-shot objects. -Everything an IterMut yields will be yielded *at most* once, so we don't -*actually* ever yield multiple mutable references to the same piece of data. +Everything an IterMut yields will be yielded at most once, so we don't +actually ever yield multiple mutable references to the same piece of data. -Perhaps surprisingly, mutable iterators *don't* require unsafe code to be +Perhaps surprisingly, mutable iterators don't require unsafe code to be implemented for many types! For instance here's a singly linked list: diff --git a/src/doc/tarpl/casts.md b/src/doc/tarpl/casts.md index cb12ffe8d21..5f07709cf45 100644 --- a/src/doc/tarpl/casts.md +++ b/src/doc/tarpl/casts.md @@ -1,13 +1,13 @@ % Casts Casts are a superset of coercions: every coercion can be explicitly -invoked via a cast. However some conversions *require* a cast. +invoked via a cast. However some conversions require a cast. While coercions are pervasive and largely harmless, these "true casts" are rare and potentially dangerous. As such, casts must be explicitly invoked using the `as` keyword: `expr as Type`. True casts generally revolve around raw pointers and the primitive numeric -types. Even though they're dangerous, these casts are *infallible* at runtime. +types. Even though they're dangerous, these casts are infallible at runtime. If a cast triggers some subtle corner case no indication will be given that this occurred. The cast will simply succeed. That said, casts must be valid at the type level, or else they will be prevented statically. For instance, diff --git a/src/doc/tarpl/checked-uninit.md b/src/doc/tarpl/checked-uninit.md index 706016a480c..f7c4482a4ab 100644 --- a/src/doc/tarpl/checked-uninit.md +++ b/src/doc/tarpl/checked-uninit.md @@ -80,7 +80,7 @@ loop { // because it relies on actual values. if true { // But it does understand that it will only be taken once because - // we *do* unconditionally break out of it. Therefore `x` doesn't + // we unconditionally break out of it. Therefore `x` doesn't // need to be marked as mutable. x = 0; break; diff --git a/src/doc/tarpl/concurrency.md b/src/doc/tarpl/concurrency.md index 95973b35d4f..9dcbecdd5b3 100644 --- a/src/doc/tarpl/concurrency.md +++ b/src/doc/tarpl/concurrency.md @@ -2,12 +2,12 @@ Rust as a language doesn't *really* have an opinion on how to do concurrency or parallelism. The standard library exposes OS threads and blocking sys-calls -because *everyone* has those, and they're uniform enough that you can provide +because everyone has those, and they're uniform enough that you can provide an abstraction over them in a relatively uncontroversial way. Message passing, green threads, and async APIs are all diverse enough that any abstraction over them tends to involve trade-offs that we weren't willing to commit to for 1.0. 
However the way Rust models concurrency makes it relatively easy design your own -concurrency paradigm as a library and have *everyone else's* code Just Work +concurrency paradigm as a library and have everyone else's code Just Work with yours. Just require the right lifetimes and Send and Sync where appropriate -and you're off to the races. Or rather, off to the... not... having... races. \ No newline at end of file +and you're off to the races. Or rather, off to the... not... having... races. diff --git a/src/doc/tarpl/constructors.md b/src/doc/tarpl/constructors.md index 023dea08444..97817cd1f90 100644 --- a/src/doc/tarpl/constructors.md +++ b/src/doc/tarpl/constructors.md @@ -37,14 +37,14 @@ blindly memcopied to somewhere else in memory. This means pure on-the-stack-but- still-movable intrusive linked lists are simply not happening in Rust (safely). Assignment and copy constructors similarly don't exist because move semantics -are the *only* semantics in Rust. At most `x = y` just moves the bits of y into -the x variable. Rust *does* provide two facilities for providing C++'s copy- +are the only semantics in Rust. At most `x = y` just moves the bits of y into +the x variable. Rust does provide two facilities for providing C++'s copy- oriented semantics: `Copy` and `Clone`. Clone is our moral equivalent of a copy constructor, but it's never implicitly invoked. You have to explicitly call `clone` on an element you want to be cloned. Copy is a special case of Clone where the implementation is just "copy the bits". Copy types *are* implicitly cloned whenever they're moved, but because of the definition of Copy this just -means *not* treating the old copy as uninitialized -- a no-op. +means not treating the old copy as uninitialized -- a no-op. While Rust provides a `Default` trait for specifying the moral equivalent of a default constructor, it's incredibly rare for this trait to be used. This is diff --git a/src/doc/tarpl/conversions.md b/src/doc/tarpl/conversions.md index 2309c45c6a8..b099a789ec3 100644 --- a/src/doc/tarpl/conversions.md +++ b/src/doc/tarpl/conversions.md @@ -8,7 +8,7 @@ a different type. Because Rust encourages encoding important properties in the type system, these problems are incredibly pervasive. As such, Rust consequently gives you several ways to solve them. -First we'll look at the ways that *Safe Rust* gives you to reinterpret values. +First we'll look at the ways that Safe Rust gives you to reinterpret values. The most trivial way to do this is to just destructure a value into its constituent parts and then build a new type out of them. e.g. diff --git a/src/doc/tarpl/data.md b/src/doc/tarpl/data.md index 88d169c3709..d0a796b7f0b 100644 --- a/src/doc/tarpl/data.md +++ b/src/doc/tarpl/data.md @@ -1,5 +1,5 @@ % Data Representation in Rust -Low-level programming cares a lot about data layout. It's a big deal. It also pervasively -influences the rest of the language, so we're going to start by digging into how data is -represented in Rust. +Low-level programming cares a lot about data layout. It's a big deal. It also +pervasively influences the rest of the language, so we're going to start by +digging into how data is represented in Rust. 
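As a quick, hedged aside on the layout questions the `data.md` hunk above says the book will dig into: the struct below mirrors the `A` example from the `repr-rust.md` hunk later in this diff, and simply prints whatever size and alignment the compiler happens to choose (field order and padding are not guaranteed without a `#[repr]` attribute). This is an illustrative sketch, not part of the patch.

```rust
use std::mem::{align_of, size_of};

// Fields with 1-, 4-, and 2-byte alignment. How they are ordered and
// padded is up to the compiler unless a #[repr] attribute says otherwise.
#[allow(dead_code)]
struct A {
    a: u8,
    b: u32,
    c: u16,
}

fn main() {
    // Report the layout the compiler actually picked for this build.
    println!("size_of::<A>()  = {}", size_of::<A>());
    println!("align_of::<A>() = {}", align_of::<A>());
}
```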
diff --git a/src/doc/tarpl/destructors.md b/src/doc/tarpl/destructors.md index 34c8b2b8624..568f7c07f59 100644 --- a/src/doc/tarpl/destructors.md +++ b/src/doc/tarpl/destructors.md @@ -7,16 +7,19 @@ What the language *does* provide is full-blown automatic destructors through the fn drop(&mut self); ``` -This method gives the type time to somehow finish what it was doing. **After -`drop` is run, Rust will recursively try to drop all of the fields of `self`**. +This method gives the type time to somehow finish what it was doing. + +**After `drop` is run, Rust will recursively try to drop all of the fields +of `self`.** + This is a convenience feature so that you don't have to write "destructor boilerplate" to drop children. If a struct has no special logic for being dropped other than dropping its children, then it means `Drop` doesn't need to be implemented at all! -**There is no stable way to prevent this behaviour in Rust 1.0. +**There is no stable way to prevent this behaviour in Rust 1.0.** -Note that taking `&mut self` means that even if you *could* suppress recursive +Note that taking `&mut self` means that even if you could suppress recursive Drop, Rust will prevent you from e.g. moving fields out of self. For most types, this is totally fine. @@ -90,7 +93,7 @@ After we deallocate the `box`'s ptr in SuperBox's destructor, Rust will happily proceed to tell the box to Drop itself and everything will blow up with use-after-frees and double-frees. -Note that the recursive drop behaviour applies to *all* structs and enums +Note that the recursive drop behaviour applies to all structs and enums regardless of whether they implement Drop. Therefore something like ```rust @@ -114,7 +117,7 @@ enum Link { } ``` -will have its inner Box field dropped *if and only if* an instance stores the +will have its inner Box field dropped if and only if an instance stores the Next variant. In general this works really nice because you don't need to worry about @@ -165,7 +168,7 @@ impl<T> Drop for SuperBox<T> { ``` However this has fairly odd semantics: you're saying that a field that *should* -always be Some may be None, just because that happens in the destructor. Of +always be Some *may* be None, just because that happens in the destructor. Of course this conversely makes a lot of sense: you can call arbitrary methods on self during the destructor, and this should prevent you from ever doing so after deinitializing the field. Not that it will prevent you from producing any other diff --git a/src/doc/tarpl/drop-flags.md b/src/doc/tarpl/drop-flags.md index f95ccc00329..1e81c97479b 100644 --- a/src/doc/tarpl/drop-flags.md +++ b/src/doc/tarpl/drop-flags.md @@ -10,7 +10,7 @@ How can it do this with conditional initialization? Note that this is not a problem that all assignments need worry about. In particular, assigning through a dereference unconditionally drops, and assigning -in a `let` unconditionally *doesn't* drop: +in a `let` unconditionally doesn't drop: ``` let mut x = Box::new(0); // let makes a fresh variable, so never need to drop @@ -23,11 +23,11 @@ one of its subfields. It turns out that Rust actually tracks whether a type should be dropped or not *at runtime*. As a variable becomes initialized and uninitialized, a *drop flag* -for that variable is toggled. When a variable *might* need to be dropped, this -flag is evaluated to determine if it *should* be dropped. +for that variable is toggled. When a variable might need to be dropped, this +flag is evaluated to determine if it should be dropped. 
-Of course, it is *often* the case that a value's initialization state can be -*statically* known at every point in the program. If this is the case, then the +Of course, it is often the case that a value's initialization state can be +statically known at every point in the program. If this is the case, then the compiler can theoretically generate more efficient code! For instance, straight- line code has such *static drop semantics*: @@ -40,8 +40,8 @@ y = x; // y was init; Drop y, overwrite it, and make x uninit! // x goes out of scope; x was uninit; do nothing. ``` -And even branched code where all branches have the same behaviour with respect -to initialization: +Similarly, branched code where all branches have the same behaviour with respect +to initialization has static drop semantics: ```rust # let condition = true; @@ -65,7 +65,7 @@ if condition { x = Box::new(0); // x was uninit; just overwrite. println!("{}", x); } - // x goes out of scope; x *might* be uninit; + // x goes out of scope; x might be uninit; // check the flag! ``` @@ -81,7 +81,7 @@ if condition { As of Rust 1.0, the drop flags are actually not-so-secretly stashed in a hidden field of any type that implements Drop. Rust sets the drop flag by overwriting -the *entire* value with a particular bit pattern. This is pretty obviously Not +the entire value with a particular bit pattern. This is pretty obviously Not The Fastest and causes a bunch of trouble with optimizing code. It's legacy from a time when you could do much more complex conditional initialization. @@ -92,4 +92,4 @@ as it requires fairly substantial changes to the compiler. Regardless, Rust programs don't need to worry about uninitialized values on the stack for correctness. Although they might care for performance. Thankfully, Rust makes it easy to take control here! Uninitialized values are there, and -you can work with them in Safe Rust, but you're *never* in danger. +you can work with them in Safe Rust, but you're never in danger. diff --git a/src/doc/tarpl/dropck.md b/src/doc/tarpl/dropck.md index c75bf8b1179..df09d1a1744 100644 --- a/src/doc/tarpl/dropck.md +++ b/src/doc/tarpl/dropck.md @@ -30,7 +30,7 @@ let (x, y) = (vec![], vec![]); ``` Does either value strictly outlive the other? The answer is in fact *no*, -neither value strictly outlives the other. Of course, one of x or y will be +neither value strictly outlives the other. Of course, one of x or y will be dropped before the other, but the actual order is not specified. Tuples aren't special in this regard; composite structures just don't guarantee their destruction order as of Rust 1.0. @@ -100,11 +100,11 @@ fn main() { <anon>:15 } ``` -Implementing Drop lets the Inspector execute some arbitrary code *during* its +Implementing Drop lets the Inspector execute some arbitrary code during its death. This means it can potentially observe that types that are supposed to live as long as it does actually were destroyed first. -Interestingly, only *generic* types need to worry about this. If they aren't +Interestingly, only generic types need to worry about this. If they aren't generic, then the only lifetimes they can harbor are `'static`, which will truly live *forever*. This is why this problem is referred to as *sound generic drop*. Sound generic drop is enforced by the *drop checker*. As of this writing, some @@ -116,12 +116,12 @@ section: strictly outlive it.** This rule is sufficient but not necessary to satisfy the drop checker. 
That is, -if your type obeys this rule then it's *definitely* sound to drop. However +if your type obeys this rule then it's definitely sound to drop. However there are special cases where you can fail to satisfy this, but still successfully pass the borrow checker. These are the precise rules that are currently up in the air. It turns out that when writing unsafe code, we generally don't need to worry at all about doing the right thing for the drop checker. However there -is *one* special case that you need to worry about, which we will look at in +is one special case that you need to worry about, which we will look at in the next section. diff --git a/src/doc/tarpl/exception-safety.md b/src/doc/tarpl/exception-safety.md index a43eec4f37e..74f7831a72a 100644 --- a/src/doc/tarpl/exception-safety.md +++ b/src/doc/tarpl/exception-safety.md @@ -1,8 +1,8 @@ % Exception Safety -Although programs should use unwinding sparingly, there's *a lot* of code that +Although programs should use unwinding sparingly, there's a lot of code that *can* panic. If you unwrap a None, index out of bounds, or divide by 0, your -program *will* panic. On debug builds, *every* arithmetic operation can panic +program will panic. On debug builds, every arithmetic operation can panic if it overflows. Unless you are very careful and tightly control what code runs, pretty much everything can unwind, and you need to be ready for it. @@ -22,7 +22,7 @@ unsound states must be careful that a panic does not cause that state to be used. Generally this means ensuring that only non-panicking code is run while these states exist, or making a guard that cleans up the state in the case of a panic. This does not necessarily mean that the state a panic witnesses is a -fully *coherent* state. We need only guarantee that it's a *safe* state. +fully coherent state. We need only guarantee that it's a *safe* state. Most Unsafe code is leaf-like, and therefore fairly easy to make exception-safe. It controls all the code that runs, and most of that code can't panic. However @@ -58,17 +58,16 @@ impl<T: Clone> Vec<T> { We bypass `push` in order to avoid redundant capacity and `len` checks on the Vec that we definitely know has capacity. The logic is totally correct, except there's a subtle problem with our code: it's not exception-safe! `set_len`, -`offset`, and `write` are all fine, but *clone* is the panic bomb we over- -looked. +`offset`, and `write` are all fine; `clone` is the panic bomb we over-looked. Clone is completely out of our control, and is totally free to panic. If it does, our function will exit early with the length of the Vec set too large. If the Vec is looked at or dropped, uninitialized memory will be read! The fix in this case is fairly simple. If we want to guarantee that the values -we *did* clone are dropped we can set the len *in* the loop. If we just want to -guarantee that uninitialized memory can't be observed, we can set the len -*after* the loop. +we *did* clone are dropped, we can set the `len` every loop iteration. If we +just want to guarantee that uninitialized memory can't be observed, we can set +the `len` after the loop. @@ -89,7 +88,7 @@ bubble_up(heap, index): A literal transcription of this code to Rust is totally fine, but has an annoying performance characteristic: the `self` element is swapped over and over again -uselessly. We would *rather* have the following: +uselessly. 
We would rather have the following: ```text bubble_up(heap, index): @@ -128,7 +127,7 @@ actually touched the state of the heap yet. Once we do start messing with the heap, we're working with only data and functions that we trust, so there's no concern of panics. -Perhaps you're not happy with this design. Surely, it's cheating! And we have +Perhaps you're not happy with this design. Surely it's cheating! And we have to do the complex heap traversal *twice*! Alright, let's bite the bullet. Let's intermix untrusted and unsafe code *for reals*. diff --git a/src/doc/tarpl/exotic-sizes.md b/src/doc/tarpl/exotic-sizes.md index d75d12e716e..0b653a7ad3a 100644 --- a/src/doc/tarpl/exotic-sizes.md +++ b/src/doc/tarpl/exotic-sizes.md @@ -48,7 +48,7 @@ a variable position based on its alignment][dst-issue].** # Zero Sized Types (ZSTs) -Rust actually allows types to be specified that occupy *no* space: +Rust actually allows types to be specified that occupy no space: ```rust struct Foo; // No fields = no size @@ -124,7 +124,7 @@ let res: Result<u32, Void> = Ok(0); let Ok(num) = res; ``` -But neither of these tricks work today, so all Void types get you today is +But neither of these tricks work today, so all Void types get you is the ability to be confident that certain situations are statically impossible. One final subtle detail about empty types is that raw pointers to them are diff --git a/src/doc/tarpl/hrtb.md b/src/doc/tarpl/hrtb.md index 3cc06f21df0..8692832e2c7 100644 --- a/src/doc/tarpl/hrtb.md +++ b/src/doc/tarpl/hrtb.md @@ -55,7 +55,7 @@ fn main() { How on earth are we supposed to express the lifetimes on `F`'s trait bound? We need to provide some lifetime there, but the lifetime we care about can't be named until we enter the body of `call`! Also, that isn't some fixed lifetime; -call works with *any* lifetime `&self` happens to have at that point. +`call` works with *any* lifetime `&self` happens to have at that point. This job requires The Magic of Higher-Rank Trait Bounds (HRTBs). The way we desugar this is as follows: diff --git a/src/doc/tarpl/leaking.md b/src/doc/tarpl/leaking.md index 343de99f08a..1aa78e112ea 100644 --- a/src/doc/tarpl/leaking.md +++ b/src/doc/tarpl/leaking.md @@ -21,21 +21,21 @@ uselessly, holding on to its precious resources until the program terminates (at which point all those resources would have been reclaimed by the OS anyway). We may consider a more restricted form of leak: failing to drop a value that is -unreachable. Rust also doesn't prevent this. In fact Rust has a *function for +unreachable. Rust also doesn't prevent this. In fact Rust *has a function for doing this*: `mem::forget`. This function consumes the value it is passed *and then doesn't run its destructor*. In the past `mem::forget` was marked as unsafe as a sort of lint against using it, since failing to call a destructor is generally not a well-behaved thing to do (though useful for some special unsafe code). However this was generally -determined to be an untenable stance to take: there are *many* ways to fail to +determined to be an untenable stance to take: there are many ways to fail to call a destructor in safe code. The most famous example is creating a cycle of reference-counted pointers using interior mutability. It is reasonable for safe code to assume that destructor leaks do not happen, as any program that leaks destructors is probably wrong. However *unsafe* code -cannot rely on destructors to be run to be *safe*. 
For most types this doesn't -matter: if you leak the destructor then the type is *by definition* +cannot rely on destructors to be run in order to be safe. For most types this +doesn't matter: if you leak the destructor then the type is by definition inaccessible, so it doesn't matter, right? For instance, if you leak a `Box<u8>` then you waste some memory but that's hardly going to violate memory-safety. @@ -64,7 +64,7 @@ uninitialized data! We could backshift all the elements in the Vec every time we remove a value, but this would have pretty catastrophic performance consequences. -Instead, we would like Drain to *fix* the Vec's backing storage when it is +Instead, we would like Drain to fix the Vec's backing storage when it is dropped. It should run itself to completion, backshift any elements that weren't removed (drain supports subranges), and then fix Vec's `len`. It's even unwinding-safe! Easy! @@ -97,13 +97,13 @@ consistent state gives us Undefined Behaviour in safe code (making the API unsound). So what can we do? Well, we can pick a trivially consistent state: set the Vec's -len to be 0 when we *start* the iteration, and fix it up if necessary in the +len to be 0 when we start the iteration, and fix it up if necessary in the destructor. That way, if everything executes like normal we get the desired behaviour with minimal overhead. But if someone has the *audacity* to mem::forget us in the middle of the iteration, all that does is *leak even more* -(and possibly leave the Vec in an *unexpected* but consistent state). Since -we've accepted that mem::forget is safe, this is definitely safe. We call leaks -causing more leaks a *leak amplification*. +(and possibly leave the Vec in an unexpected but otherwise consistent state). +Since we've accepted that mem::forget is safe, this is definitely safe. We call +leaks causing more leaks a *leak amplification*. @@ -167,16 +167,16 @@ impl<T> Drop for Rc<T> { } ``` -This code contains an implicit and subtle assumption: ref_count can fit in a +This code contains an implicit and subtle assumption: `ref_count` can fit in a `usize`, because there can't be more than `usize::MAX` Rcs in memory. However -this itself assumes that the ref_count accurately reflects the number of Rcs -in memory, which we know is false with mem::forget. Using mem::forget we can -overflow the ref_count, and then get it down to 0 with outstanding Rcs. Then we -can happily use-after-free the inner data. Bad Bad Not Good. +this itself assumes that the `ref_count` accurately reflects the number of Rcs +in memory, which we know is false with `mem::forget`. Using `mem::forget` we can +overflow the `ref_count`, and then get it down to 0 with outstanding Rcs. Then +we can happily use-after-free the inner data. Bad Bad Not Good. -This can be solved by *saturating* the ref_count, which is sound because -decreasing the refcount by `n` still requires `n` Rcs simultaneously living -in memory. +This can be solved by just checking the `ref_count` and doing *something*. The +standard library's stance is to just abort, because your program has become +horribly degenerate. Also *oh my gosh* it's such a ridiculous corner case. @@ -237,7 +237,7 @@ In principle, this totally works! Rust's ownership system perfectly ensures it! let mut data = Box::new(0); { let guard = thread::scoped(|| { - // This is at best a data race. At worst, it's *also* a use-after-free. + // This is at best a data race. At worst, it's also a use-after-free. 
*data += 1; }); // Because the guard is forgotten, expiring the loan without blocking this diff --git a/src/doc/tarpl/lifetime-mismatch.md b/src/doc/tarpl/lifetime-mismatch.md index 93ecb51c010..8b01616ee0d 100644 --- a/src/doc/tarpl/lifetime-mismatch.md +++ b/src/doc/tarpl/lifetime-mismatch.md @@ -18,7 +18,7 @@ fn main() { ``` One might expect it to compile. We call `mutate_and_share`, which mutably borrows -`foo` *temporarily*, but then returns *only* a shared reference. Therefore we +`foo` temporarily, but then returns only a shared reference. Therefore we would expect `foo.share()` to succeed as `foo` shouldn't be mutably borrowed. However when we try to compile it: @@ -69,7 +69,7 @@ due to the lifetime of `loan` and mutate_and_share's signature. Then when we try to call `share`, and it sees we're trying to alias that `&'c mut foo` and blows up in our face! -This program is clearly correct according to the reference semantics we *actually* +This program is clearly correct according to the reference semantics we actually care about, but the lifetime system is too coarse-grained to handle that. @@ -78,4 +78,4 @@ TODO: other common problems? SEME regions stuff, mostly? -[ex2]: lifetimes.html#example-2:-aliasing-a-mutable-reference \ No newline at end of file +[ex2]: lifetimes.html#example-2:-aliasing-a-mutable-reference diff --git a/src/doc/tarpl/lifetimes.md b/src/doc/tarpl/lifetimes.md index 37d03573361..f211841ec0c 100644 --- a/src/doc/tarpl/lifetimes.md +++ b/src/doc/tarpl/lifetimes.md @@ -6,11 +6,11 @@ and anything that contains a reference, is tagged with a lifetime specifying the scope it's valid for. Within a function body, Rust generally doesn't let you explicitly name the -lifetimes involved. This is because it's generally not really *necessary* +lifetimes involved. This is because it's generally not really necessary to talk about lifetimes in a local context; Rust has all the information and can work out everything as optimally as possible. Many anonymous scopes and temporaries that you would otherwise have to write are often introduced to -make your code *just work*. +make your code Just Work. However once you cross the function boundary, you need to start talking about lifetimes. Lifetimes are denoted with an apostrophe: `'a`, `'static`. To dip @@ -42,7 +42,7 @@ likely desugar to the following: 'a: { let x: i32 = 0; 'b: { - // lifetime used is 'b because that's *good enough*. + // lifetime used is 'b because that's good enough. let y: &'b i32 = &'b x; 'c: { // ditto on 'c @@ -107,8 +107,9 @@ fn as_str<'a>(data: &'a u32) -> &'a str { This signature of `as_str` takes a reference to a u32 with *some* lifetime, and promises that it can produce a reference to a str that can live *just as long*. Already we can see why this signature might be trouble. That basically implies -that we're going to *find* a str somewhere in the scope the scope the reference -to the u32 originated in, or somewhere *even* earlier. That's a *bit* of a big ask. +that we're going to find a str somewhere in the scope the reference +to the u32 originated in, or somewhere *even earlier*. That's a bit of a big +ask. We then proceed to compute the string `s`, and return a reference to it. Since the contract of our function says the reference must outlive `'a`, that's the @@ -135,7 +136,7 @@ fn main() { 'd: { // An anonymous scope is introduced because the borrow does not // need to last for the whole scope x is valid for. 
The return - // of as_str must find a str somewhere *before* this function + // of as_str must find a str somewhere before this function // call. Obviously not happening. println!("{}", as_str::<'d>(&'d x)); } @@ -195,21 +196,21 @@ println!("{}", x); The problem here is is bit more subtle and interesting. We want Rust to reject this program for the following reason: We have a live shared reference `x` -to a descendent of `data` when try to take a *mutable* reference to `data` -when we call `push`. This would create an aliased mutable reference, which would +to a descendent of `data` when we try to take a mutable reference to `data` +to `push`. This would create an aliased mutable reference, which would violate the *second* rule of references. However this is *not at all* how Rust reasons that this program is bad. Rust doesn't understand that `x` is a reference to a subpath of `data`. It doesn't understand Vec at all. What it *does* see is that `x` has to live for `'b` to be printed. The signature of `Index::index` subsequently demands that the -reference we take to *data* has to survive for `'b`. When we try to call `push`, +reference we take to `data` has to survive for `'b`. When we try to call `push`, it then sees us try to make an `&'c mut data`. Rust knows that `'c` is contained within `'b`, and rejects our program because the `&'b data` must still be live! -Here we see that the lifetime system is *much* more coarse than the reference +Here we see that the lifetime system is much more coarse than the reference semantics we're actually interested in preserving. For the most part, *that's totally ok*, because it keeps us from spending all day explaining our program -to the compiler. However it does mean that several programs that are *totally* +to the compiler. However it does mean that several programs that are totally correct with respect to Rust's *true* semantics are rejected because lifetimes are too dumb. diff --git a/src/doc/tarpl/meet-safe-and-unsafe.md b/src/doc/tarpl/meet-safe-and-unsafe.md index a5e3136c54a..15e49c747b8 100644 --- a/src/doc/tarpl/meet-safe-and-unsafe.md +++ b/src/doc/tarpl/meet-safe-and-unsafe.md @@ -29,7 +29,7 @@ Rust, you will never have to worry about type-safety or memory-safety. You will never endure a null or dangling pointer, or any of that Undefined Behaviour nonsense. -*That's totally awesome*. +*That's totally awesome.* The standard library also gives you enough utilities out-of-the-box that you'll be able to write awesome high-performance applications and libraries in pure @@ -41,7 +41,7 @@ low-level abstraction not exposed by the standard library. Maybe you're need to do something the type-system doesn't understand and just *frob some dang bits*. Maybe you need Unsafe Rust. -Unsafe Rust is exactly like Safe Rust with *all* the same rules and semantics. +Unsafe Rust is exactly like Safe Rust with all the same rules and semantics. However Unsafe Rust lets you do some *extra* things that are Definitely Not Safe. The only things that are different in Unsafe Rust are that you can: diff --git a/src/doc/tarpl/ownership.md b/src/doc/tarpl/ownership.md index f79cd92479f..e80c64c3543 100644 --- a/src/doc/tarpl/ownership.md +++ b/src/doc/tarpl/ownership.md @@ -12,7 +12,7 @@ language? Regardless of your feelings on GC, it is pretty clearly a *massive* boon to making code safe. 
You never have to worry about things going away *too soon* -(although whether you still *wanted* to be pointing at that thing is a different +(although whether you still wanted to be pointing at that thing is a different issue...). This is a pervasive problem that C and C++ programs need to deal with. Consider this simple mistake that all of us who have used a non-GC'd language have made at one point: diff --git a/src/doc/tarpl/phantom-data.md b/src/doc/tarpl/phantom-data.md index 034f3178429..0d7ec7f1617 100644 --- a/src/doc/tarpl/phantom-data.md +++ b/src/doc/tarpl/phantom-data.md @@ -14,11 +14,11 @@ struct Iter<'a, T: 'a> { However because `'a` is unused within the struct's body, it's *unbounded*. Because of the troubles this has historically caused, unbounded lifetimes and -types are *illegal* in struct definitions. Therefore we must somehow refer +types are *forbidden* in struct definitions. Therefore we must somehow refer to these types in the body. Correctly doing this is necessary to have correct variance and drop checking. -We do this using *PhantomData*, which is a special marker type. PhantomData +We do this using `PhantomData`, which is a special marker type. `PhantomData` consumes no space, but simulates a field of the given type for the purpose of static analysis. This was deemed to be less error-prone than explicitly telling the type-system the kind of variance that you want, while also providing other @@ -57,7 +57,7 @@ Good to go! Nope. The drop checker will generously determine that Vec<T> does not own any values -of type T. This will in turn make it conclude that it does *not* need to worry +of type T. This will in turn make it conclude that it doesn't need to worry about Vec dropping any T's in its destructor for determining drop check soundness. This will in turn allow people to create unsoundness using Vec's destructor. diff --git a/src/doc/tarpl/poisoning.md b/src/doc/tarpl/poisoning.md index 6fb16f28e34..70de91af61f 100644 --- a/src/doc/tarpl/poisoning.md +++ b/src/doc/tarpl/poisoning.md @@ -20,7 +20,7 @@ standard library's Mutex type. A Mutex will poison itself if one of its MutexGuards (the thing it returns when a lock is obtained) is dropped during a panic. Any future attempts to lock the Mutex will return an `Err` or panic. -Mutex poisons not for *true* safety in the sense that Rust normally cares about. It +Mutex poisons not for true safety in the sense that Rust normally cares about. It poisons as a safety-guard against blindly using the data that comes out of a Mutex that has witnessed a panic while locked. The data in such a Mutex was likely in the middle of being modified, and as such may be in an inconsistent or incomplete state. diff --git a/src/doc/tarpl/races.md b/src/doc/tarpl/races.md index 2ad62c14a80..3b47502ebfe 100644 --- a/src/doc/tarpl/races.md +++ b/src/doc/tarpl/races.md @@ -12,11 +12,13 @@ it's impossible to alias a mutable reference, so it's impossible to perform a data race. Interior mutability makes this more complicated, which is largely why we have the Send and Sync traits (see below). -However Rust *does not* prevent general race conditions. This is -pretty fundamentally impossible, and probably honestly undesirable. Your hardware -is racy, your OS is racy, the other programs on your computer are racy, and the -world this all runs in is racy. Any system that could genuinely claim to prevent -*all* race conditions would be pretty awful to use, if not just incorrect. 
+**However Rust does not prevent general race conditions.** + +This is pretty fundamentally impossible, and probably honestly undesirable. Your +hardware is racy, your OS is racy, the other programs on your computer are racy, +and the world this all runs in is racy. Any system that could genuinely claim to +prevent *all* race conditions would be pretty awful to use, if not just +incorrect. So it's perfectly "fine" for a Safe Rust program to get deadlocked or do something incredibly stupid with incorrect synchronization. Obviously such a @@ -25,7 +27,7 @@ race condition can't violate memory safety in a Rust program on its own. Only in conjunction with some other unsafe code can a race condition actually violate memory safety. For instance: -```rust,norun +```rust,no_run use std::thread; use std::sync::atomic::{AtomicUsize, Ordering}; use std::sync::Arc; @@ -46,7 +48,7 @@ thread::spawn(move || { }); // Index with the value loaded from the atomic. This is safe because we -// read the atomic memory only once, and then pass a *copy* of that value +// read the atomic memory only once, and then pass a copy of that value // to the Vec's indexing implementation. This indexing will be correctly // bounds checked, and there's no chance of the value getting changed // in the middle. However our program may panic if the thread we spawned @@ -56,7 +58,7 @@ thread::spawn(move || { println!("{}", data[idx.load(Ordering::SeqCst)]); ``` -```rust,norun +```rust,no_run use std::thread; use std::sync::atomic::{AtomicUsize, Ordering}; use std::sync::Arc; @@ -75,7 +77,7 @@ thread::spawn(move || { if idx.load(Ordering::SeqCst) < data.len() { unsafe { - // Incorrectly loading the idx *after* we did the bounds check. + // Incorrectly loading the idx after we did the bounds check. // It could have changed. This is a race condition, *and dangerous* // because we decided to do `get_unchecked`, which is `unsafe`. println!("{}", data.get_unchecked(idx.load(Ordering::SeqCst))); diff --git a/src/doc/tarpl/repr-rust.md b/src/doc/tarpl/repr-rust.md index 639d64adc18..c8a372be767 100644 --- a/src/doc/tarpl/repr-rust.md +++ b/src/doc/tarpl/repr-rust.md @@ -31,8 +31,8 @@ type's size is a multiple of its alignment. For instance: ```rust struct A { a: u8, - c: u32, - b: u16, + b: u32, + c: u16, } ``` @@ -70,7 +70,7 @@ struct B { Rust *does* guarantee that two instances of A have their data laid out in exactly the same way. However Rust *does not* guarantee that an instance of A has the same field ordering or padding as an instance of B (in practice there's -no *particular* reason why they wouldn't, other than that its not currently +no particular reason why they wouldn't, other than that its not currently guaranteed). With A and B as written, this is basically nonsensical, but several other @@ -88,9 +88,9 @@ struct Foo<T, U> { ``` Now consider the monomorphizations of `Foo<u32, u16>` and `Foo<u16, u32>`. If -Rust lays out the fields in the order specified, we expect it to *pad* the -values in the struct to satisfy their *alignment* requirements. So if Rust -didn't reorder fields, we would expect Rust to produce the following: +Rust lays out the fields in the order specified, we expect it to pad the +values in the struct to satisfy their alignment requirements. So if Rust +didn't reorder fields, we would expect it to produce the following: ```rust,ignore struct Foo<u16, u32> { @@ -112,7 +112,7 @@ The latter case quite simply wastes space. 
An optimal use of space therefore requires different monomorphizations to have *different field orderings*. **Note: this is a hypothetical optimization that is not yet implemented in Rust -**1.0 +1.0** Enums make this consideration even more complicated. Naively, an enum such as: @@ -128,8 +128,8 @@ would be laid out as: ```rust struct FooRepr { - data: u64, // this is *really* either a u64, u32, or u8 based on `tag` - tag: u8, // 0 = A, 1 = B, 2 = C + data: u64, // this is either a u64, u32, or u8 based on `tag` + tag: u8, // 0 = A, 1 = B, 2 = C } ``` diff --git a/src/doc/tarpl/safe-unsafe-meaning.md b/src/doc/tarpl/safe-unsafe-meaning.md index 909308397d7..2f15b7050e3 100644 --- a/src/doc/tarpl/safe-unsafe-meaning.md +++ b/src/doc/tarpl/safe-unsafe-meaning.md @@ -5,7 +5,7 @@ So what's the relationship between Safe and Unsafe Rust? How do they interact? Rust models the separation between Safe and Unsafe Rust with the `unsafe` keyword, which can be thought as a sort of *foreign function interface* (FFI) between Safe and Unsafe Rust. This is the magic behind why we can say Safe Rust -is a safe language: all the scary unsafe bits are relegated *exclusively* to FFI +is a safe language: all the scary unsafe bits are relegated exclusively to FFI *just like every other safe language*. However because one language is a subset of the other, the two can be cleanly @@ -61,13 +61,13 @@ The need for unsafe traits boils down to the fundamental property of safe code: **No matter how completely awful Safe code is, it can't cause Undefined Behaviour.** -This means that Unsafe, **the royal vanguard of Undefined Behaviour**, has to be -*super paranoid* about generic safe code. Unsafe is free to trust *specific* safe -code (or else you would degenerate into infinite spirals of paranoid despair). -It is generally regarded as ok to trust the standard library to be correct, as -`std` is effectively an extension of the language (and you *really* just have -to trust the language). If `std` fails to uphold the guarantees it declares, -then it's basically a language bug. +This means that Unsafe Rust, **the royal vanguard of Undefined Behaviour**, has to be +*super paranoid* about generic safe code. To be clear, Unsafe Rust is totally free to trust +specific safe code. Anything else would degenerate into infinite spirals of +paranoid despair. In particular it's generally regarded as ok to trust the standard library +to be correct. `std` is effectively an extension of the language, and you +really just have to trust the language. If `std` fails to uphold the +guarantees it declares, then it's basically a language bug. That said, it would be best to minimize *needlessly* relying on properties of concrete safe code. Bugs happen! Of course, I must reinforce that this is only @@ -75,36 +75,36 @@ a concern for Unsafe code. Safe code can blindly trust anyone and everyone as far as basic memory-safety is concerned. On the other hand, safe traits are free to declare arbitrary contracts, but because -implementing them is Safe, Unsafe can't trust those contracts to actually +implementing them is safe, unsafe code can't trust those contracts to actually be upheld. This is different from the concrete case because *anyone* can randomly implement the interface. There is something fundamentally different -about trusting a *particular* piece of code to be correct, and trusting *all the +about trusting a particular piece of code to be correct, and trusting *all the code that will ever be written* to be correct. 
For instance Rust has `PartialOrd` and `Ord` traits to try to differentiate between types which can "just" be compared, and those that actually implement a -*total* ordering. Pretty much every API that wants to work with data that can be -compared *really* wants Ord data. For instance, a sorted map like BTreeMap +total ordering. Pretty much every API that wants to work with data that can be +compared wants Ord data. For instance, a sorted map like BTreeMap *doesn't even make sense* for partially ordered types. If you claim to implement Ord for a type, but don't actually provide a proper total ordering, BTreeMap will get *really confused* and start making a total mess of itself. Data that is inserted may be impossible to find! But that's okay. BTreeMap is safe, so it guarantees that even if you give it a -*completely* garbage Ord implementation, it will still do something *safe*. You -won't start reading uninitialized memory or unallocated memory. In fact, BTreeMap +completely garbage Ord implementation, it will still do something *safe*. You +won't start reading uninitialized or unallocated memory. In fact, BTreeMap manages to not actually lose any of your data. When the map is dropped, all the destructors will be successfully called! Hooray! -However BTreeMap is implemented using a modest spoonful of Unsafe (most collections -are). That means that it is not necessarily *trivially true* that a bad Ord -implementation will make BTreeMap behave safely. Unsafe must be sure not to rely -on Ord *where safety is at stake*. Ord is provided by Safe, and safety is not -Safe's responsibility to uphold. +However BTreeMap is implemented using a modest spoonful of Unsafe Rust (most collections +are). That means that it's not necessarily *trivially true* that a bad Ord +implementation will make BTreeMap behave safely. BTreeMap must be sure not to rely +on Ord *where safety is at stake*. Ord is provided by safe code, and safety is not +safe code's responsibility to uphold. -But wouldn't it be grand if there was some way for Unsafe to trust *some* trait +But wouldn't it be grand if there was some way for Unsafe to trust some trait contracts *somewhere*? This is the problem that unsafe traits tackle: by marking -*the trait itself* as unsafe *to implement*, Unsafe can trust the implementation +*the trait itself* as unsafe to implement, unsafe code can trust the implementation to uphold the trait's contract. Although the trait implementation may be incorrect in arbitrary other ways. @@ -126,7 +126,7 @@ But it's probably not the implementation you want. Rust has traditionally avoided making traits unsafe because it makes Unsafe pervasive, which is not desirable. Send and Sync are unsafe is because thread -safety is a *fundamental property* that Unsafe cannot possibly hope to defend +safety is a *fundamental property* that unsafe code cannot possibly hope to defend against in the same way it would defend against a bad Ord implementation. The only way to possibly defend against thread-unsafety would be to *not use threading at all*. Making every load and store atomic isn't even sufficient, @@ -135,10 +135,10 @@ in memory. For instance, the pointer and capacity of a Vec must be in sync. Even concurrent paradigms that are traditionally regarded as Totally Safe like message passing implicitly rely on some notion of thread safety -- are you -really message-passing if you pass a *pointer*? 
Send and Sync therefore require -some *fundamental* level of trust that Safe code can't provide, so they must be +really message-passing if you pass a pointer? Send and Sync therefore require +some fundamental level of trust that Safe code can't provide, so they must be unsafe to implement. To help obviate the pervasive unsafety that this would -introduce, Send (resp. Sync) is *automatically* derived for all types composed only +introduce, Send (resp. Sync) is automatically derived for all types composed only of Send (resp. Sync) values. 99% of types are Send and Sync, and 99% of those never actually say it (the remaining 1% is overwhelmingly synchronization primitives). diff --git a/src/doc/tarpl/send-and-sync.md b/src/doc/tarpl/send-and-sync.md index 5b00709a1bf..334d5c9dd55 100644 --- a/src/doc/tarpl/send-and-sync.md +++ b/src/doc/tarpl/send-and-sync.md @@ -5,29 +5,28 @@ multiply alias a location in memory while mutating it. Unless these types use synchronization to manage this access, they are absolutely not thread safe. Rust captures this with through the `Send` and `Sync` traits. -* A type is Send if it is safe to send it to another thread. A type is Sync if -* it is safe to share between threads (`&T` is Send). +* A type is Send if it is safe to send it to another thread. +* A type is Sync if it is safe to share between threads (`&T` is Send). -Send and Sync are *very* fundamental to Rust's concurrency story. As such, a +Send and Sync are fundamental to Rust's concurrency story. As such, a substantial amount of special tooling exists to make them work right. First and -foremost, they're *unsafe traits*. This means that they are unsafe *to -implement*, and other unsafe code can *trust* that they are correctly +foremost, they're [unsafe traits][]. This means that they are unsafe to +implement, and other unsafe code can trust that they are correctly implemented. Since they're *marker traits* (they have no associated items like methods), correctly implemented simply means that they have the intrinsic properties an implementor should have. Incorrectly implementing Send or Sync can cause Undefined Behaviour. -Send and Sync are also what Rust calls *opt-in builtin traits*. This means that, -unlike every other trait, they are *automatically* derived: if a type is -composed entirely of Send or Sync types, then it is Send or Sync. Almost all -primitives are Send and Sync, and as a consequence pretty much all types you'll -ever interact with are Send and Sync. +Send and Sync are also automatically derived traits. This means that, unlike +every other trait, if a type is composed entirely of Send or Sync types, then it +is Send or Sync. Almost all primitives are Send and Sync, and as a consequence +pretty much all types you'll ever interact with are Send and Sync. Major exceptions include: -* raw pointers are neither Send nor Sync (because they have no safety guards) -* `UnsafeCell` isn't Sync (and therefore `Cell` and `RefCell` aren't) `Rc` isn't -* Send or Sync (because the refcount is shared and unsynchronized) +* raw pointers are neither Send nor Sync (because they have no safety guards). +* `UnsafeCell` isn't Sync (and therefore `Cell` and `RefCell` aren't). +* `Rc` isn't Send or Sync (because the refcount is shared and unsynchronized). `Rc` and `UnsafeCell` are very fundamentally not thread-safe: they enable unsynchronized shared mutable state. However raw pointers are, strictly @@ -37,13 +36,12 @@ sense, one could argue that it would be "fine" for them to be marked as thread safe.
However it's important that they aren't thread safe to prevent types that -*contain them* from being automatically marked as thread safe. These types have +contain them from being automatically marked as thread safe. These types have non-trivial untracked ownership, and it's unlikely that their author was necessarily thinking hard about thread safety. In the case of Rc, we have a nice -example of a type that contains a `*mut` that is *definitely* not thread safe. +example of a type that contains a `*mut` that is definitely not thread safe. -Types that aren't automatically derived can *opt-in* to Send and Sync by simply -implementing them: +Types that aren't automatically derived can simply implement them if desired: ```rust struct MyBox(*mut u8); @@ -52,12 +50,13 @@ unsafe impl Send for MyBox {} unsafe impl Sync for MyBox {} ``` -In the *incredibly rare* case that a type is *inappropriately* automatically -derived to be Send or Sync, then one can also *unimplement* Send and Sync: +In the *incredibly rare* case that a type is inappropriately automatically +derived to be Send or Sync, then one can also unimplement Send and Sync: ```rust #![feature(optin_builtin_traits)] +// I have some magic semantics for some synchronization primitive! struct SpecialThreadToken(u8); impl !Send for SpecialThreadToken {} @@ -71,9 +70,11 @@ possible cause trouble by being incorrectly Send or Sync. Most uses of raw pointers should be encapsulated behind a sufficient abstraction that Send and Sync can be derived. For instance all of Rust's standard collections are Send and Sync (when they contain Send and Sync types) in spite -of their pervasive use raw pointers to manage allocations and complex ownership. +of their pervasive use of raw pointers to manage allocations and complex ownership. Similarly, most iterators into these collections are Send and Sync because they largely behave like an `&` or `&mut` into the collection. TODO: better explain what can or can't be Send or Sync. Sufficient to appeal only to data races? + +[unsafe traits]: safe-unsafe-meaning.html diff --git a/src/doc/tarpl/subtyping.md b/src/doc/tarpl/subtyping.md index 767a0aca542..3c57297f323 100644 --- a/src/doc/tarpl/subtyping.md +++ b/src/doc/tarpl/subtyping.md @@ -1,14 +1,14 @@ % Subtyping and Variance Although Rust doesn't have any notion of structural inheritance, it *does* -include subtyping. In Rust, subtyping derives entirely from *lifetimes*. Since +include subtyping. In Rust, subtyping derives entirely from lifetimes. Since lifetimes are scopes, we can partially order them based on the *contains* (outlives) relationship. We can even express this as a generic bound. -Subtyping on lifetimes in terms of that relationship: if `'a: 'b` ("a contains +Subtyping on lifetimes is in terms of that relationship: if `'a: 'b` ("a contains b" or "a outlives b"), then `'a` is a subtype of `'b`. This is a large source of confusion, because it seems intuitively backwards to many: the bigger scope is a -*sub type* of the smaller scope. +*subtype* of the smaller scope. This does in fact make sense, though. The intuitive reason for this is that if you expect an `&'a u8`, then it's totally fine for me to hand you an `&'static @@ -72,7 +72,7 @@ to be able to pass `&&'static str` where an `&&'a str` is expected. The additional level of indirection does not change the desire to be able to pass longer lived things where shorted lived things are expected. -However this logic *does not* apply to `&mut`. 
To see why `&mut` should +However this logic doesn't apply to `&mut`. To see why `&mut` should be invariant over T, consider the following code: ```rust,ignore @@ -109,7 +109,7 @@ between `'a` and T is that `'a` is a property of the reference itself, while T is something the reference is borrowing. If you change T's type, then the source still remembers the original type. However if you change the lifetime's type, no one but the reference knows this information, so it's fine. -Put another way, `&'a mut T` owns `'a`, but only *borrows* T. +Put another way: `&'a mut T` owns `'a`, but only *borrows* T. `Box` and `Vec` are interesting cases because they're variant, but you can definitely store values in them! This is where Rust gets really clever: it's @@ -118,7 +118,7 @@ in them *via a mutable reference*! The mutable reference makes the whole type invariant, and therefore prevents you from smuggling a short-lived type into them. -Being variant *does* allows `Box` and `Vec` to be weakened when shared +Being variant allows `Box` and `Vec` to be weakened when shared immutably. So you can pass a `&Box<&'static str>` where a `&Box<&'a str>` is expected. @@ -126,7 +126,7 @@ However what should happen when passing *by-value* is less obvious. It turns out that, yes, you can use subtyping when passing by-value. That is, this works: ```rust -fn get_box<'a>(str: &'a u8) -> Box<&'a str> { +fn get_box<'a>(str: &'a str) -> Box<&'a str> { // string literals are `&'static str`s Box::new("hello") } @@ -150,7 +150,7 @@ signature: fn foo(&'a str) -> usize; ``` -This signature claims that it can handle any `&str` that lives *at least* as +This signature claims that it can handle any `&str` that lives at least as long as `'a`. Now if this signature was variant over `&'a str`, that would mean @@ -159,10 +159,12 @@ fn foo(&'static str) -> usize; ``` could be provided in its place, as it would be a subtype. However this function -has a *stronger* requirement: it says that it can *only* handle `&'static str`s, -and nothing else. Therefore functions are not variant over their arguments. +has a stronger requirement: it says that it can only handle `&'static str`s, +and nothing else. Giving `&'a str`s to it would be unsound, as it's free to +assume that what it's given lives forever. Therefore functions are not variant +over their arguments. -To see why `Fn(T) -> U` should be *variant* over U, consider the following +To see why `Fn(T) -> U` should be variant over U, consider the following function signature: ```rust,ignore @@ -177,7 +179,7 @@ therefore completely reasonable to provide fn foo(usize) -> &'static str; ``` -in its place. Therefore functions *are* variant over their return type. +in its place. Therefore functions are variant over their return type. `*const` has the exact same semantics as `&`, so variance follows. `*mut` on the other hand can dereference to an `&mut` whether shared or not, so it is marked diff --git a/src/doc/tarpl/unwinding.md b/src/doc/tarpl/unwinding.md index 59494d86474..3ad95dde39d 100644 --- a/src/doc/tarpl/unwinding.md +++ b/src/doc/tarpl/unwinding.md @@ -31,12 +31,12 @@ panics can only be caught by the parent thread. This means catching a panic requires spinning up an entire OS thread! This unfortunately stands in conflict to Rust's philosophy of zero-cost abstractions. -There is an *unstable* API called `catch_panic` that enables catching a panic +There is an unstable API called `catch_panic` that enables catching a panic without spawning a thread. 
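As a sketch of what catching a panic in the same thread looks like, here is the `std::panic::catch_unwind` form that this functionality eventually stabilized under. To be clear, that is not the `catch_panic` API referenced above; the exact module, name and signature differ in the snapshot of Rust described here, so treat this purely as an illustration of the shape.

```rust
use std::panic;

fn main() {
    // Run a closure and turn any panic inside it into an Err, without
    // spawning a thread.
    let result = panic::catch_unwind(|| {
        let v = vec![1, 2, 3];
        v[99] // out-of-bounds index panics
    });

    assert!(result.is_err());
    println!("the panic was caught and the program keeps going");
}
```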
Still, we would encourage you to only do this sparingly. In particular, Rust's current unwinding implementation is heavily optimized for the "doesn't unwind" case. If a program doesn't unwind, there should be no runtime cost for the program being *ready* to unwind. As a -consequence, *actually* unwinding will be more expensive than in e.g. Java. +consequence, actually unwinding will be more expensive than in e.g. Java. Don't build your programs to unwind under normal circumstances. Ideally, you should only panic for programming errors or *extreme* problems. diff --git a/src/doc/tarpl/vec-alloc.md b/src/doc/tarpl/vec-alloc.md index 93efbbbdf89..fc7feba2356 100644 --- a/src/doc/tarpl/vec-alloc.md +++ b/src/doc/tarpl/vec-alloc.md @@ -60,7 +60,7 @@ of memory at once (e.g. half the theoretical address space). As such it's like the standard library as much as possible, so we'll just kill the whole program. -We said we don't want to use intrinsics, so doing *exactly* what `std` does is +We said we don't want to use intrinsics, so doing exactly what `std` does is out. Instead, we'll call `std::process::exit` with some random number. ```rust @@ -84,7 +84,7 @@ But Rust's only supported allocator API is so low level that we'll need to do a fair bit of extra work. We also need to guard against some special conditions that can occur with really large allocations or empty allocations. -In particular, `ptr::offset` will cause us *a lot* of trouble, because it has +In particular, `ptr::offset` will cause us a lot of trouble, because it has the semantics of LLVM's GEP inbounds instruction. If you're fortunate enough to not have dealt with this instruction, here's the basic story with GEP: alias analysis, alias analysis, alias analysis. It's super important to an optimizing @@ -102,7 +102,7 @@ As a simple example, consider the following fragment of code: If the compiler can prove that `x` and `y` point to different locations in memory, the two operations can in theory be executed in parallel (by e.g. loading them into different registers and working on them independently). -However in *general* the compiler can't do this because if x and y point to +However the compiler can't do this in general because if x and y point to the same location in memory, the operations need to be done to the same value, and they can't just be merged afterwards. @@ -118,7 +118,7 @@ possible. So that's what GEP's about, how can it cause us trouble? The first problem is that we index into arrays with unsigned integers, but -GEP (and as a consequence `ptr::offset`) takes a *signed integer*. This means +GEP (and as a consequence `ptr::offset`) takes a signed integer. This means that half of the seemingly valid indices into an array will overflow GEP and actually go in the wrong direction! As such we must limit all allocations to `isize::MAX` elements. This actually means we only need to worry about @@ -138,7 +138,7 @@ However since this is a tutorial, we're not going to be particularly optimal here, and just unconditionally check, rather than use clever platform-specific `cfg`s. -The other corner-case we need to worry about is *empty* allocations. There will +The other corner-case we need to worry about is empty allocations. There will be two kinds of empty allocations we need to worry about: `cap = 0` for all T, and `cap > 0` for zero-sized types. @@ -165,9 +165,9 @@ protected from being allocated anyway (a whole 4k, on many platforms). However what about for positive-sized types? That one's a bit trickier. 
In principle, you can argue that offsetting by 0 gives LLVM no information: either -there's an element before the address, or after it, but it can't know which. +there's an element before the address or after it, but it can't know which. However we've chosen to conservatively assume that it may do bad things. As -such we *will* guard against this case explicitly. +such we will guard against this case explicitly. *Phew* diff --git a/src/doc/tarpl/vec-drain.md b/src/doc/tarpl/vec-drain.md index 3be295f1adc..4521bbdd05e 100644 --- a/src/doc/tarpl/vec-drain.md +++ b/src/doc/tarpl/vec-drain.md @@ -130,7 +130,7 @@ impl<'a, T> Drop for Drain<'a, T> { impl<T> Vec<T> { pub fn drain(&mut self) -> Drain<T> { // this is a mem::forget safety thing. If Drain is forgotten, we just - // leak the whole Vec's contents. Also we need to do this *eventually* + // leak the whole Vec's contents. Also we need to do this eventually // anyway, so why not do it now? self.len = 0; diff --git a/src/doc/tarpl/vec-insert-remove.md b/src/doc/tarpl/vec-insert-remove.md index 6f88a77b32a..0a37170c52c 100644 --- a/src/doc/tarpl/vec-insert-remove.md +++ b/src/doc/tarpl/vec-insert-remove.md @@ -10,7 +10,7 @@ handling the case where the source and destination overlap (which will definitely happen here). If we insert at index `i`, we want to shift the `[i .. len]` to `[i+1 .. len+1]` -using the *old* len. +using the old len. ```rust,ignore pub fn insert(&mut self, index: usize, elem: T) { diff --git a/src/doc/tarpl/vec-into-iter.md b/src/doc/tarpl/vec-into-iter.md index a9c1917feb9..ebb0a79bb65 100644 --- a/src/doc/tarpl/vec-into-iter.md +++ b/src/doc/tarpl/vec-into-iter.md @@ -21,8 +21,8 @@ read out the value pointed to at that end and move the pointer over by one. When the two pointers are equal, we know we're done. Note that the order of read and offset are reversed for `next` and `next_back` -For `next_back` the pointer is always *after* the element it wants to read next, -while for `next` the pointer is always *at* the element it wants to read next. +For `next_back` the pointer is always after the element it wants to read next, +while for `next` the pointer is always at the element it wants to read next. To see why this is, consider the case where every element but one has been yielded. @@ -124,7 +124,7 @@ impl<T> DoubleEndedIterator for IntoIter<T> { ``` Because IntoIter takes ownership of its allocation, it needs to implement Drop -to free it. However it *also* wants to implement Drop to drop any elements it +to free it. However it also wants to implement Drop to drop any elements it contains that weren't yielded. diff --git a/src/doc/tarpl/vec-push-pop.md b/src/doc/tarpl/vec-push-pop.md index 2ef15e324b6..b518e8aa48f 100644 --- a/src/doc/tarpl/vec-push-pop.md +++ b/src/doc/tarpl/vec-push-pop.md @@ -32,14 +32,14 @@ pub fn push(&mut self, elem: T) { Easy! How about `pop`? Although this time the index we want to access is initialized, Rust won't just let us dereference the location of memory to move -the value out, because that *would* leave the memory uninitialized! For this we +the value out, because that would leave the memory uninitialized! For this we need `ptr::read`, which just copies out the bits from the target address and intrprets it as a value of type T. This will leave the memory at this address -*logically* uninitialized, even though there is in fact a perfectly good instance +logically uninitialized, even though there is in fact a perfectly good instance of T there. 
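As a standalone sketch (away from our Vec) of what `ptr::read` actually does, consider moving a `String` out of its memory by hand:

```rust
use std::mem;
use std::ptr;

fn main() {
    let s = String::from("hello");

    // Copy the bits of `s` out of its memory. Both `s` and `t` now describe
    // the same heap allocation; the memory behind `s` is logically
    // uninitialized even though a perfectly good String still sits there.
    let t = unsafe { ptr::read(&s) };

    // If both were dropped we would free that allocation twice, so tell the
    // compiler to forget one of them.
    mem::forget(s);

    println!("{}", t);
}
```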
For `pop`, if the old len is 1, we want to read out of the 0th index. So we -should offset by the *new* len. +should offset by the new len. ```rust,ignore pub fn pop(&mut self) -> Option<T> { diff --git a/src/doc/tarpl/vec-zsts.md b/src/doc/tarpl/vec-zsts.md index 931aed33ef5..72e8a34488b 100644 --- a/src/doc/tarpl/vec-zsts.md +++ b/src/doc/tarpl/vec-zsts.md @@ -2,7 +2,7 @@ It's time. We're going to fight the spectre that is zero-sized types. Safe Rust *never* needs to care about this, but Vec is very intensive on raw pointers and -raw allocations, which are exactly the *only* two things that care about +raw allocations, which are exactly the two things that care about zero-sized types. We need to be careful of two things: * The raw allocator API has undefined behaviour if you pass in 0 for an @@ -22,7 +22,7 @@ So if the allocator API doesn't support zero-sized allocations, what on earth do we store as our allocation? Why, `heap::EMPTY` of course! Almost every operation with a ZST is a no-op since ZSTs have exactly one value, and therefore no state needs to be considered to store or load them. This actually extends to `ptr::read` and -`ptr::write`: they won't actually look at the pointer at all. As such we *never* need +`ptr::write`: they won't actually look at the pointer at all. As such we never need to change the pointer. Note however that our previous reliance on running out of memory before overflow is diff --git a/src/doc/trpl/SUMMARY.md b/src/doc/trpl/SUMMARY.md index 85f0019276e..24686e772e3 100644 --- a/src/doc/trpl/SUMMARY.md +++ b/src/doc/trpl/SUMMARY.md @@ -26,8 +26,7 @@ * [Primitive Types](primitive-types.md) * [Comments](comments.md) * [if](if.md) - * [for loops](for-loops.md) - * [while loops](while-loops.md) + * [Loops](loops.md) * [Ownership](ownership.md) * [References and Borrowing](references-and-borrowing.md) * [Lifetimes](lifetimes.md) diff --git a/src/doc/trpl/choosing-your-guarantees.md b/src/doc/trpl/choosing-your-guarantees.md index a7d9032c3c5..68812f342f1 100644 --- a/src/doc/trpl/choosing-your-guarantees.md +++ b/src/doc/trpl/choosing-your-guarantees.md @@ -308,6 +308,7 @@ scope. Both of these provide safe shared mutability across threads, however they are prone to deadlocks. Some level of additional protocol safety can be obtained via the type system. + #### Costs These use internal atomic-like types to maintain the locks, which are pretty costly (they can block diff --git a/src/doc/trpl/for-loops.md b/src/doc/trpl/for-loops.md deleted file mode 100644 index 2866cee3a1a..00000000000 --- a/src/doc/trpl/for-loops.md +++ /dev/null @@ -1,85 +0,0 @@ -% for Loops - -The `for` loop is used to loop a particular number of times. Rust’s `for` loops -work a bit differently than in other systems languages, however. Rust’s `for` -loop doesn’t look like this “C-style” `for` loop: - -```c -for (x = 0; x < 10; x++) { - printf( "%d\n", x ); -} -``` - -Instead, it looks like this: - -```rust -for x in 0..10 { - println!("{}", x); // x: i32 -} -``` - -In slightly more abstract terms, - -```ignore -for var in expression { - code -} -``` - -The expression is an [iterator][iterator]. The iterator gives back a series of -elements. Each element is one iteration of the loop. That value is then bound -to the name `var`, which is valid for the loop body. Once the body is over, the -next value is fetched from the iterator, and we loop another time. When there -are no more values, the `for` loop is over. 
- -[iterator]: iterators.html - -In our example, `0..10` is an expression that takes a start and an end position, -and gives an iterator over those values. The upper bound is exclusive, though, -so our loop will print `0` through `9`, not `10`. - -Rust does not have the “C-style” `for` loop on purpose. Manually controlling -each element of the loop is complicated and error prone, even for experienced C -developers. - -# Enumerate - -When you need to keep track of how many times you already looped, you can use the `.enumerate()` function. - -## On ranges: - -```rust -for (i,j) in (5..10).enumerate() { - println!("i = {} and j = {}", i, j); -} -``` - -Outputs: - -```text -i = 0 and j = 5 -i = 1 and j = 6 -i = 2 and j = 7 -i = 3 and j = 8 -i = 4 and j = 9 -``` - -Don't forget to add the parentheses around the range. - -## On iterators: - -```rust -# let lines = "hello\nworld".lines(); -for (linenumber, line) in lines.enumerate() { - println!("{}: {}", linenumber, line); -} -``` - -Outputs: - -```text -0: Content of line one -1: Content of line two -2: Content of line tree -3: Content of line four -``` diff --git a/src/doc/trpl/loops.md b/src/doc/trpl/loops.md new file mode 100644 index 00000000000..a91fb8dadaf --- /dev/null +++ b/src/doc/trpl/loops.md @@ -0,0 +1,209 @@ +% Loops + +Rust currently provides three approaches to performing some kind of iterative activity. They are: `loop`, `while` and `for`. Each approach has its own set of uses. + +## loop + +The infinite `loop` is the simplest form of loop available in Rust. Using the keyword `loop`, Rust provides a way to loop indefinitely until some terminating statement is reached. Rust's infinite `loop`s look like this: + +```rust,ignore +loop { + println!("Loop forever!"); +} +``` + +## while + +Rust also has a `while` loop. It looks like this: + +```rust +let mut x = 5; // mut x: i32 +let mut done = false; // mut done: bool + +while !done { + x += x - 3; + + println!("{}", x); + + if x % 5 == 0 { + done = true; + } +} +``` + +`while` loops are the correct choice when you’re not sure how many times +you need to loop. + +If you need an infinite loop, you may be tempted to write this: + +```rust,ignore +while true { +``` + +However, `loop` is far better suited to handle this case: + +```rust,ignore +loop { +``` + +Rust’s control-flow analysis treats this construct differently than a `while +true`, since we know that it will always loop. In general, the more information +we can give to the compiler, the better it can do with safety and code +generation, so you should always prefer `loop` when you plan to loop +infinitely. + +## for + +The `for` loop is used to loop a particular number of times. Rust’s `for` loops +work a bit differently than in other systems languages, however. Rust’s `for` +loop doesn’t look like this “C-style” `for` loop: + +```c +for (x = 0; x < 10; x++) { + printf( "%d\n", x ); +} +``` + +Instead, it looks like this: + +```rust +for x in 0..10 { + println!("{}", x); // x: i32 +} +``` + +In slightly more abstract terms, + +```ignore +for var in expression { + code +} +``` + +The expression is an [iterator][iterator]. The iterator gives back a series of +elements. Each element is one iteration of the loop. That value is then bound +to the name `var`, which is valid for the loop body. Once the body is over, the +next value is fetched from the iterator, and we loop another time. When there +are no more values, the `for` loop is over. 
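If it helps, that hand-off between the loop and the iterator can be written out by hand. This desugaring is only approximate, but it shows where each value comes from:

```rust
fn main() {
    // Roughly what `for x in 0..3 { ... }` does under the hood: keep asking
    // the iterator for the next value until it runs out.
    let mut iter = 0..3;
    while let Some(x) = iter.next() {
        println!("{}", x);
    }
}
```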
+ +[iterator]: iterators.html + +In our example, `0..10` is an expression that takes a start and an end position, +and gives an iterator over those values. The upper bound is exclusive, though, +so our loop will print `0` through `9`, not `10`. + +Rust does not have the “C-style” `for` loop on purpose. Manually controlling +each element of the loop is complicated and error prone, even for experienced C +developers. + +### Enumerate + +When you need to keep track of how many times you already looped, you can use the `.enumerate()` function. + +#### On ranges: + +```rust +for (i,j) in (5..10).enumerate() { + println!("i = {} and j = {}", i, j); +} +``` + +Outputs: + +```text +i = 0 and j = 5 +i = 1 and j = 6 +i = 2 and j = 7 +i = 3 and j = 8 +i = 4 and j = 9 +``` + +Don't forget to add the parentheses around the range. + +#### On iterators: + +```rust +# let lines = "hello\nworld".lines(); +for (linenumber, line) in lines.enumerate() { + println!("{}: {}", linenumber, line); +} +``` + +Outputs: + +```text +0: Content of line one +1: Content of line two +2: Content of line tree +3: Content of line four +``` + +## Ending iteration early + +Let’s take a look at that `while` loop we had earlier: + +```rust +let mut x = 5; +let mut done = false; + +while !done { + x += x - 3; + + println!("{}", x); + + if x % 5 == 0 { + done = true; + } +} +``` + +We had to keep a dedicated `mut` boolean variable binding, `done`, to know +when we should exit out of the loop. Rust has two keywords to help us with +modifying iteration: `break` and `continue`. + +In this case, we can write the loop in a better way with `break`: + +```rust +let mut x = 5; + +loop { + x += x - 3; + + println!("{}", x); + + if x % 5 == 0 { break; } +} +``` + +We now loop forever with `loop` and use `break` to break out early. Issuing an explicit `return` statement will also serve to terminate the loop early. + +`continue` is similar, but instead of ending the loop, goes to the next +iteration. This will only print the odd numbers: + +```rust +for x in 0..10 { + if x % 2 == 0 { continue; } + + println!("{}", x); +} +``` + +## Loop labels + +You may also encounter situations where you have nested loops and need to +specify which one your `break` or `continue` statement is for. Like most +other languages, by default a `break` or `continue` will apply to innermost +loop. In a sitation where you would like to a `break` or `continue` for one +of the outer loops, you can use labels to specify which loop the `break` or + `continue` statement applies to. This will only print when both `x` and `y` are + odd: + +```rust +'outer: for x in 0..10 { + 'inner: for y in 0..10 { + if x % 2 == 0 { continue 'outer; } // continues the loop over x + if y % 2 == 0 { continue 'inner; } // continues the loop over y + println!("x: {}, y: {}", x, y); + } +} +``` diff --git a/src/doc/trpl/the-stack-and-the-heap.md b/src/doc/trpl/the-stack-and-the-heap.md index ff81590cc03..cfab268a7c5 100644 --- a/src/doc/trpl/the-stack-and-the-heap.md +++ b/src/doc/trpl/the-stack-and-the-heap.md @@ -73,7 +73,7 @@ frame. But before we can show what happens when `foo()` is called, we need to visualize what’s going on with memory. Your operating system presents a view of memory to your program that’s pretty simple: a huge list of addresses, from 0 to a large number, representing how much RAM your computer has. For example, if -you have a gigabyte of RAM, your addresses go from `0` to `1,073,741,824`. That +you have a gigabyte of RAM, your addresses go from `0` to `1,073,741,823`. 
That number comes from 2<sup>30</sup>, the number of bytes in a gigabyte. This memory is kind of like a giant array: addresses start at zero and go @@ -551,7 +551,7 @@ is a great introduction. [wilson]: http://www.cs.northwestern.edu/~pdinda/icsclass/doc/dsa.pdf -## Semantic impact +## Semantic impact Stack-allocation impacts the Rust language itself, and thus the developer’s mental model. The LIFO semantics is what drives how the Rust language handles diff --git a/src/doc/trpl/while-loops.md b/src/doc/trpl/while-loops.md deleted file mode 100644 index 124ebc7d69d..00000000000 --- a/src/doc/trpl/while-loops.md +++ /dev/null @@ -1,111 +0,0 @@ -% while Loops - -Rust also has a `while` loop. It looks like this: - -```rust -let mut x = 5; // mut x: i32 -let mut done = false; // mut done: bool - -while !done { - x += x - 3; - - println!("{}", x); - - if x % 5 == 0 { - done = true; - } -} -``` - -`while` loops are the correct choice when you’re not sure how many times -you need to loop. - -If you need an infinite loop, you may be tempted to write this: - -```rust,ignore -while true { -``` - -However, Rust has a dedicated keyword, `loop`, to handle this case: - -```rust,ignore -loop { -``` - -Rust’s control-flow analysis treats this construct differently than a `while -true`, since we know that it will always loop. In general, the more information -we can give to the compiler, the better it can do with safety and code -generation, so you should always prefer `loop` when you plan to loop -infinitely. - -## Ending iteration early - -Let’s take a look at that `while` loop we had earlier: - -```rust -let mut x = 5; -let mut done = false; - -while !done { - x += x - 3; - - println!("{}", x); - - if x % 5 == 0 { - done = true; - } -} -``` - -We had to keep a dedicated `mut` boolean variable binding, `done`, to know -when we should exit out of the loop. Rust has two keywords to help us with -modifying iteration: `break` and `continue`. - -In this case, we can write the loop in a better way with `break`: - -```rust -let mut x = 5; - -loop { - x += x - 3; - - println!("{}", x); - - if x % 5 == 0 { break; } -} -``` - -We now loop forever with `loop` and use `break` to break out early. - -`continue` is similar, but instead of ending the loop, goes to the next -iteration. This will only print the odd numbers: - -```rust -for x in 0..10 { - if x % 2 == 0 { continue; } - - println!("{}", x); -} -``` - -You may also encounter situations where you have nested loops and need to -specify which one your `break` or `continue` statement is for. Like most -other languages, by default a `break` or `continue` will apply to innermost -loop. In a sitation where you would like to a `break` or `continue` for one -of the outer loops, you can use labels to specify which loop the `break` or - `continue` statement applies to. This will only print when both `x` and `y` are - odd: - -```rust -'outer: for x in 0..10 { - 'inner: for y in 0..10 { - if x % 2 == 0 { continue 'outer; } // continues the loop over x - if y % 2 == 0 { continue 'inner; } // continues the loop over y - println!("x: {}, y: {}", x, y); - } -} -``` - -Both `continue` and `break` are valid in both `while` loops and [`for` loops][for]. - -[for]: for-loops.html diff --git a/src/libcore/hash/mod.rs b/src/libcore/hash/mod.rs index 75b7208d66b..e35f380d06f 100644 --- a/src/libcore/hash/mod.rs +++ b/src/libcore/hash/mod.rs @@ -91,8 +91,7 @@ pub trait Hash { fn hash<H: Hasher>(&self, state: &mut H); /// Feeds a slice of this type into the state provided. 
- #[unstable(feature = "hash_slice", - reason = "module was recently redesigned")] + #[stable(feature = "hash_slice", since = "1.3.0")] fn hash_slice<H: Hasher>(data: &[Self], state: &mut H) where Self: Sized { for piece in data { piece.hash(state); @@ -113,29 +112,29 @@ pub trait Hasher { /// Write a single `u8` into this hasher #[inline] - #[unstable(feature = "hasher_write", reason = "module was recently redesigned")] + #[stable(feature = "hasher_write", since = "1.3.0")] fn write_u8(&mut self, i: u8) { self.write(&[i]) } /// Write a single `u16` into this hasher. #[inline] - #[unstable(feature = "hasher_write", reason = "module was recently redesigned")] + #[stable(feature = "hasher_write", since = "1.3.0")] fn write_u16(&mut self, i: u16) { self.write(&unsafe { mem::transmute::<_, [u8; 2]>(i) }) } /// Write a single `u32` into this hasher. #[inline] - #[unstable(feature = "hasher_write", reason = "module was recently redesigned")] + #[stable(feature = "hasher_write", since = "1.3.0")] fn write_u32(&mut self, i: u32) { self.write(&unsafe { mem::transmute::<_, [u8; 4]>(i) }) } /// Write a single `u64` into this hasher. #[inline] - #[unstable(feature = "hasher_write", reason = "module was recently redesigned")] + #[stable(feature = "hasher_write", since = "1.3.0")] fn write_u64(&mut self, i: u64) { self.write(&unsafe { mem::transmute::<_, [u8; 8]>(i) }) } /// Write a single `usize` into this hasher. #[inline] - #[unstable(feature = "hasher_write", reason = "module was recently redesigned")] + #[stable(feature = "hasher_write", since = "1.3.0")] fn write_usize(&mut self, i: usize) { if cfg!(target_pointer_width = "32") { self.write_u32(i as u32) @@ -146,23 +145,23 @@ pub trait Hasher { /// Write a single `i8` into this hasher. #[inline] - #[unstable(feature = "hasher_write", reason = "module was recently redesigned")] + #[stable(feature = "hasher_write", since = "1.3.0")] fn write_i8(&mut self, i: i8) { self.write_u8(i as u8) } /// Write a single `i16` into this hasher. #[inline] - #[unstable(feature = "hasher_write", reason = "module was recently redesigned")] + #[stable(feature = "hasher_write", since = "1.3.0")] fn write_i16(&mut self, i: i16) { self.write_u16(i as u16) } /// Write a single `i32` into this hasher. #[inline] - #[unstable(feature = "hasher_write", reason = "module was recently redesigned")] + #[stable(feature = "hasher_write", since = "1.3.0")] fn write_i32(&mut self, i: i32) { self.write_u32(i as u32) } /// Write a single `i64` into this hasher. #[inline] - #[unstable(feature = "hasher_write", reason = "module was recently redesigned")] + #[stable(feature = "hasher_write", since = "1.3.0")] fn write_i64(&mut self, i: i64) { self.write_u64(i as u64) } /// Write a single `isize` into this hasher. 
#[inline] - #[unstable(feature = "hasher_write", reason = "module was recently redesigned")] + #[stable(feature = "hasher_write", since = "1.3.0")] fn write_isize(&mut self, i: isize) { self.write_usize(i as usize) } } diff --git a/src/libcore/lib.rs b/src/libcore/lib.rs index ef2a33c37dd..238644c4a26 100644 --- a/src/libcore/lib.rs +++ b/src/libcore/lib.rs @@ -65,6 +65,7 @@ #![allow(raw_pointer_derive)] #![deny(missing_docs)] +#![feature(associated_type_defaults)] #![feature(intrinsics)] #![feature(lang_items)] #![feature(on_unimplemented)] @@ -157,21 +158,23 @@ pub mod fmt; // note: does not need to be public mod tuple; +// A curious inner-module that's not exported that contains the bindings of core +// so that compiler-expanded references to `core::$foo` can be resolved within +// core itself. +// +// Note that no crate-defined macros require this module due to the existence of +// the `$crate` meta variable, only those expansions defined in the compiler +// require this. This is because the compiler doesn't currently know that it's +// compiling the core library when it's compiling this library, so it expands +// all references to `::core::$foo` #[doc(hidden)] mod core { - pub use intrinsics; - pub use panicking; - pub use fmt; - pub use clone; - pub use cmp; - pub use hash; - pub use marker; - pub use option; - pub use iter; -} - -#[doc(hidden)] -mod std { - // range syntax - pub use ops; + pub use intrinsics; // derive(PartialOrd) + pub use fmt; // format_args! + pub use clone; // derive(Clone) + pub use cmp; // derive(Ord) + pub use hash; // derive(Hash) + pub use marker; // derive(Copy) + pub use option; // iterator protocol + pub use iter; // iterator protocol } diff --git a/src/libcore/mem.rs b/src/libcore/mem.rs index 271d83201b1..3b321d43b3d 100644 --- a/src/libcore/mem.rs +++ b/src/libcore/mem.rs @@ -253,21 +253,89 @@ pub unsafe fn dropped<T>() -> T { dropped_impl() } -/// Creates an uninitialized value. +/// Bypasses Rust's normal memory-initialization checks by pretending to +/// produce a value of type T, while doing nothing at all. /// -/// Care must be taken when using this function, if the type `T` has a destructor and the value -/// falls out of scope (due to unwinding or returning) before being initialized, then the -/// destructor will run on uninitialized data, likely leading to crashes. +/// **This is incredibly dangerous, and should not be done lightly. Deeply +/// consider initializing your memory with a default value instead.** /// -/// This is useful for FFI functions sometimes, but should generally be avoided. +/// This is useful for FFI functions and initializing arrays sometimes, +/// but should generally be avoided. +/// +/// # Undefined Behaviour +/// +/// It is Undefined Behaviour to read uninitialized memory. Even just an +/// uninitialized boolean. For instance, if you branch on the value of such +/// a boolean your program may take one, both, or neither of the branches. +/// +/// Note that this often also includes *writing* to the uninitialized value. +/// Rust believes the value is initialized, and will therefore try to Drop +/// the uninitialized value and its fields if you try to overwrite the memory +/// in a normal manner. The only way to safely initialize an arbitrary +/// uninitialized value is with one of the `ptr` functions: `write`, `copy`, or +/// `copy_nonoverlapping`. This isn't necessary if `T` is a primitive +/// or otherwise only contains types that don't implement Drop. 
+/// +/// If this value *does* need some kind of Drop, it must be initialized before +/// it goes out of scope (and therefore would be dropped). Note that this +/// includes a `panic` occurring and unwinding the stack suddenly. /// /// # Examples /// +/// Here's how to safely initialize an array of `Vec`s. +/// /// ``` /// use std::mem; +/// use std::ptr; /// -/// let x: i32 = unsafe { mem::uninitialized() }; +/// // Only declare the array. This safely leaves it +/// // uninitialized in a way that Rust will track for us. +/// // However we can't initialize it element-by-element +/// // safely, and we can't use the `[value; 1000]` +/// // constructor because it only works with `Copy` data. +/// let mut data: [Vec<u32>; 1000]; +/// +/// unsafe { +/// // So we need to do this to initialize it. +/// data = mem::uninitialized(); +/// +/// // DANGER ZONE: if anything panics or otherwise +/// // incorrectly reads the array here, we will have +/// // Undefined Behaviour. +/// +/// // It's ok to mutably iterate the data, since this +/// // doesn't involve reading it at all. +/// // (ptr and len are statically known for arrays) +/// for elem in &mut data[..] { +/// // *elem = Vec::new() would try to drop the +/// // uninitialized memory at `elem` -- bad! +/// // +/// // Vec::new doesn't allocate or do really +/// // anything. It's only safe to call here +/// // because we know it won't panic. +/// ptr::write(elem, Vec::new()); +/// } +/// +/// // SAFE ZONE: everything is initialized. +/// } +/// +/// println!("{:?}", &data[0]); /// ``` +/// +/// Hopefully this example emphasizes to you exactly how delicate +/// and dangerous doing this is. Note that the `vec!` macro +/// *does* let you initialize every element with a value that +/// is only `Clone`, so the following is equivalent and vastly +/// less dangerous, as long as you can live with an extra heap +/// allocation: +/// +/// ``` +/// let data: Vec<Vec<u32>> = vec![Vec::new(); 1000]; +/// println!("{:?}", &data[0]); +/// ``` +/// +/// For large arrays this is probably advisable +/// anyway to avoid blowing the stack. 
#[inline] #[stable(feature = "rust1", since = "1.0.0")] pub unsafe fn uninitialized<T>() -> T { diff --git a/src/libcore/ptr.rs b/src/libcore/ptr.rs index 6fed89547d4..116c1dfaa3e 100644 --- a/src/libcore/ptr.rs +++ b/src/libcore/ptr.rs @@ -20,7 +20,7 @@ use mem; use clone::Clone; use intrinsics; use ops::Deref; -use core::fmt; +use fmt; use option::Option::{self, Some, None}; use marker::{PhantomData, Send, Sized, Sync}; use nonzero::NonZero; diff --git a/src/libcore/str/pattern.rs b/src/libcore/str/pattern.rs index 707f7fcf2ab..2b3fc39fc8b 100644 --- a/src/libcore/str/pattern.rs +++ b/src/libcore/str/pattern.rs @@ -17,7 +17,7 @@ reason = "API not fully fleshed out and ready to be stabilized")] use prelude::*; -use core::cmp; +use cmp; use usize; // Pattern diff --git a/src/liblibc/lib.rs b/src/liblibc/lib.rs index f474841136e..8bd57a8cb1f 100644 --- a/src/liblibc/lib.rs +++ b/src/liblibc/lib.rs @@ -1002,7 +1002,7 @@ pub mod types { } pub mod posix01 { use types::common::c95::{c_void}; - use types::common::c99::{uint8_t, uint32_t, int32_t}; + use types::common::c99::{uint32_t, int32_t}; use types::os::arch::c95::{c_long, time_t}; use types::os::arch::posix88::{dev_t, gid_t, ino_t}; use types::os::arch::posix88::{mode_t, off_t}; @@ -1096,7 +1096,7 @@ pub mod types { } pub mod posix01 { use types::common::c95::{c_void}; - use types::common::c99::{uint8_t, uint32_t, int32_t}; + use types::common::c99::{uint32_t, int32_t}; use types::os::arch::c95::{c_long, time_t}; use types::os::arch::posix88::{dev_t, gid_t, ino_t}; use types::os::arch::posix88::{mode_t, off_t}; diff --git a/src/librustc_borrowck/diagnostics.rs b/src/librustc_borrowck/diagnostics.rs index 4f90a287cb9..24bbd4fcb7e 100644 --- a/src/librustc_borrowck/diagnostics.rs +++ b/src/librustc_borrowck/diagnostics.rs @@ -73,6 +73,28 @@ fn main() { To fix this, ensure that any declared variables are initialized before being used. +"##, + +E0384: r##" +This error occurs when an attempt is made to reassign an immutable variable. +For example: + +``` +fn main(){ + let x = 3; + x = 5; // error, reassignment of immutable variable +} +``` + +By default, variables in Rust are immutable. To fix this error, add the keyword +`mut` after the keyword `let` when declaring the variable. For example: + +``` +fn main(){ + let mut x = 3; + x = 5; +} +``` "## } @@ -80,7 +102,6 @@ used. register_diagnostics! { E0382, // use of partially/collaterally moved value E0383, // partial reinitialization of uninitialized structure - E0384, // reassignment of immutable variable E0385, // {} in an aliasable location E0386, // {} in an immutable container E0387, // {} in a captured outer variable in an `Fn` closure diff --git a/src/librustc_resolve/diagnostics.rs b/src/librustc_resolve/diagnostics.rs index 6db4e49e98f..2696ca378f7 100644 --- a/src/librustc_resolve/diagnostics.rs +++ b/src/librustc_resolve/diagnostics.rs @@ -583,9 +583,10 @@ Please verify you didn't misspell the import's name. "##, E0437: r##" -Trait impls can only implement associated types that are members of the trait in -question. This error indicates that you attempted to implement an associated -type whose name does not match the name of any associated type in the trait. +Trait implementations can only implement associated types that are members of +the trait in question. This error indicates that you attempted to implement +an associated type whose name does not match the name of any associated type +in the trait. 
Here is an example that demonstrates the error: @@ -607,10 +608,10 @@ impl Foo for i32 {} "##, E0438: r##" -Trait impls can only implement associated constants that are members of the -trait in question. This error indicates that you attempted to implement an -associated constant whose name does not match the name of any associated -constant in the trait. +Trait implementations can only implement associated constants that are +members of the trait in question. This error indicates that you +attempted to implement an associated constant whose name does not +match the name of any associated constant in the trait. Here is an example that demonstrates the error: diff --git a/src/librustc_trans/back/msvc/mod.rs b/src/librustc_trans/back/msvc/mod.rs index bc0573d8281..87d3fd1c0e8 100644 --- a/src/librustc_trans/back/msvc/mod.rs +++ b/src/librustc_trans/back/msvc/mod.rs @@ -206,7 +206,7 @@ pub fn link_exe_cmd(sess: &Session) -> Command { return max_key } - fn get_windows_sdk_path() -> Option<(PathBuf, usize)> { + fn get_windows_sdk_path() -> Option<(PathBuf, usize, Option<OsString>)> { let key = r"SOFTWARE\Microsoft\Microsoft SDKs\Windows"; let key = LOCAL_MACHINE.open(key.as_ref()); let (n, k) = match key.ok().as_ref().and_then(max_version) { @@ -217,19 +217,20 @@ pub fn link_exe_cmd(sess: &Session) -> Command { let major = parts.next().unwrap().parse::<usize>().unwrap(); let _minor = parts.next().unwrap().parse::<usize>().unwrap(); k.query_str("InstallationFolder").ok().map(|folder| { - (PathBuf::from(folder), major) + let ver = k.query_str("ProductVersion"); + (PathBuf::from(folder), major, ver.ok()) }) } fn get_windows_sdk_lib_path(sess: &Session) -> Option<PathBuf> { - let (mut path, major) = match get_windows_sdk_path() { + let (mut path, major, ver) = match get_windows_sdk_path() { Some(p) => p, None => return None, }; path.push("Lib"); if major <= 7 { // In Windows SDK 7.x, x86 libraries are directly in the Lib folder, - // x64 libraries are inside, and it's not necessary to link agains + // x64 libraries are inside, and it's not necessary to link against // the SDK 7.x when targeting ARM or other architectures. let x86 = match &sess.target.target.arch[..] { "x86" => true, @@ -237,8 +238,8 @@ pub fn link_exe_cmd(sess: &Session) -> Command { _ => return None, }; Some(if x86 {path} else {path.join("x64")}) - } else { - // Windows SDK 8.x installes libraries in a folder whose names + } else if major <= 8 { + // Windows SDK 8.x installs libraries in a folder whose names // depend on the version of the OS you're targeting. By default // choose the newest, which usually corresponds to the version of // the OS you've installed the SDK on. @@ -251,7 +252,25 @@ pub fn link_exe_cmd(sess: &Session) -> Command { }).map(|path| { path.join("um").join(extra) }) - } + } else if let Some(mut ver) = ver { + // Windows SDK 10 splits the libraries into architectures the same + // as Windows SDK 8.x, except for the addition of arm64. + // Additionally, the SDK 10 is split by Windows 10 build numbers + // rather than the OS version like the SDK 8.x does. + let extra = match windows_sdk_v10_subdir(sess) { + Some(e) => e, + None => return None, + }; + // To get the correct directory we need to get the Windows SDK 10 + // version, and so far it looks like the "ProductVersion" of the SDK + // corresponds to the folder name that the libraries are located in + // except that the folder contains an extra ".0". For now just + // append a ".0" to look for find the directory we're in. 
This logic + // will likely want to be refactored one day. + ver.push(".0"); + let p = path.join(ver).join("um").join(extra); + fs::metadata(&p).ok().map(|_| p) + } else { None } } fn windows_sdk_v8_subdir(sess: &Session) -> Option<&'static str> { @@ -263,6 +282,16 @@ pub fn link_exe_cmd(sess: &Session) -> Command { } } + fn windows_sdk_v10_subdir(sess: &Session) -> Option<&'static str> { + match &sess.target.target.arch[..] { + "x86" => Some("x86"), + "x86_64" => Some("x64"), + "arm" => Some("arm"), + "aarch64" => Some("arm64"), // FIXME - Check if aarch64 is correct + _ => return None, + } + } + fn ucrt_install_dir(vs_install_dir: &Path) -> Option<(PathBuf, String)> { let is_vs_14 = vs_install_dir.iter().filter_map(|p| p.to_str()).any(|s| { s == "Microsoft Visual Studio 14.0" diff --git a/src/librustc_typeck/diagnostics.rs b/src/librustc_typeck/diagnostics.rs index d94870c68bd..77256d5b34e 100644 --- a/src/librustc_typeck/diagnostics.rs +++ b/src/librustc_typeck/diagnostics.rs @@ -1227,16 +1227,22 @@ impl Bytes { ... } // error, same as above "##, E0117: r##" -You got this error because because you tried to implement a foreign -trait for a foreign type (with maybe a foreign type parameter). Erroneous -code example: +This error indicates a violation of one of Rust's orphan rules for trait +implementations. The rule prohibits any implementation of a foreign trait (a +trait defined in another crate) where + + - the type that is implementing the trait is foreign + - all of the parameters being passed to the trait (if there are any) are also + foreign. + +Here's one example of this error: ``` impl Drop for u32 {} ``` -The type, trait or the type parameter (or all of them) has to be defined -in your crate. Example: +To avoid this kind of error, ensure that at least one local type is referenced +by the `impl`: ``` pub struct Foo; // you define your type in your crate @@ -1245,14 +1251,6 @@ impl Drop for Foo { // and you can implement the trait on it! // code of trait implementation here } -trait Bar { // or define your trait in your crate - fn get(&self) -> usize; -} - -impl Bar for u32 { // and then you implement it on a foreign type - fn get(&self) -> usize { 0 } -} - impl From<Foo> for i32 { // or you use a type from your crate as // a type parameter fn from(i: Foo) -> i32 { @@ -1260,6 +1258,22 @@ impl From<Foo> for i32 { // or you use a type from your crate as } } ``` + +Alternatively, define a trait locally and implement that instead: + +``` +trait Bar { + fn get(&self) -> usize; +} + +impl Bar for u32 { + fn get(&self) -> usize { 0 } +} +``` + +For information on the design of the orphan rules, see [RFC 1023]. + +[RFC 1023]: https://github.com/rust-lang/rfcs/pull/1023 "##, E0119: r##" @@ -1889,6 +1903,71 @@ impl MyTrait for Foo { ``` "##, +E0210: r##" +This error indicates a violation of one of Rust's orphan rules for trait +implementations. The rule concerns the use of type parameters in an +implementation of a foreign trait (a trait defined in another crate), and +states that type parameters must be "covered" by a local type. To understand +what this means, it is perhaps easiest to consider a few examples. + +If `ForeignTrait` is a trait defined in some external crate `foo`, then the +following trait `impl` is an error: + +``` +extern crate foo; +use foo::ForeignTrait; + +impl<T> ForeignTrait for T { ... } // error +``` + +To work around this, it can be covered with a local type, `MyType`: + +``` +struct MyType<T>(T); +impl<T> ForeignTrait for MyType<T> { ... 
} // Ok +``` + +For another example of an error, suppose there's another trait defined in `foo` +named `ForeignTrait2` that takes two type parameters. Then this `impl` results +in the same rule violation: + +``` +struct MyType2; +impl<T> ForeignTrait2<T, MyType<T>> for MyType2 { ... } // error +``` + +The reason for this is that there are two appearances of type parameter `T` in +the `impl` header, both as parameters for `ForeignTrait2`. The first appearance +is uncovered, and so runs afoul of the orphan rule. + +Consider one more example: + +``` +impl<T> ForeignTrait2<MyType<T>, T> for MyType2 { ... } // Ok +``` + +This only differs from the previous `impl` in that the parameters `T` and +`MyType<T>` for `ForeignTrait2` have been swapped. This example does *not* +violate the orphan rule; it is permitted. + +To see why that last example was allowed, you need to understand the general +rule. Unfortunately this rule is a bit tricky to state. Consider an `impl`: + +``` +impl<P1, ..., Pm> ForeignTrait<T1, ..., Tn> for T0 { ... } +``` + +where `P1, ..., Pm` are the type parameters of the `impl` and `T0, ..., Tn` +are types. One of the types `T0, ..., Tn` must be a local type (this is another +orphan rule, see the explanation for E0117). Let `i` be the smallest integer +such that `Ti` is a local type. Then no type parameter can appear in any of the +`Tj` for `j < i`. + +For information on the design of the orphan rules, see [RFC 1023]. + +[RFC 1023]: https://github.com/rust-lang/rfcs/pull/1023 +"##, + E0211: r##" You used an intrinsic function which doesn't correspond to its definition. Erroneous code example: @@ -2335,7 +2414,6 @@ register_diagnostics! { // and only one is supported E0208, E0209, // builtin traits can only be implemented on structs or enums - E0210, // type parameter is not constrained by any local type E0212, // cannot extract an associated type from a higher-ranked trait bound E0213, // associated types are not accepted in this context E0214, // parenthesized parameters may only be used with a trait diff --git a/src/librustdoc/markdown.rs b/src/librustdoc/markdown.rs index bc6c797e5c5..a311b938e96 100644 --- a/src/librustdoc/markdown.rs +++ b/src/librustdoc/markdown.rs @@ -29,13 +29,14 @@ use test::{TestOptions, Collector}; /// Separate any lines at the start of the file that begin with `%`. fn extract_leading_metadata<'a>(s: &'a str) -> (Vec<&'a str>, &'a str) { let mut metadata = Vec::new(); + let mut count = 0; for line in s.lines() { if line.starts_with("%") { // remove %<whitespace> - metadata.push(line[1..].trim_left()) + metadata.push(line[1..].trim_left()); + count += line.len() + 1; } else { - let line_start_byte = s.find(line).unwrap(); - return (metadata, &s[line_start_byte..]); + return (metadata, &s[count..]); } } // if we're here, then all lines were metadata % lines. diff --git a/src/libstd/error.rs b/src/libstd/error.rs index b21b2edf2ec..4d08f08bb6e 100644 --- a/src/libstd/error.rs +++ b/src/libstd/error.rs @@ -168,7 +168,7 @@ impl Error for string::FromUtf16Error { // copied from any.rs impl Error + 'static { /// Returns true if the boxed type is the same as `T` - #[unstable(feature = "error_downcast", reason = "recently added")] + #[stable(feature = "error_downcast", since = "1.3.0")] #[inline] pub fn is<T: Error + 'static>(&self) -> bool { // Get TypeId of the type this function is instantiated with @@ -183,7 +183,7 @@ impl Error + 'static { /// Returns some reference to the boxed value if it is of type `T`, or /// `None` if it isn't. 
- #[unstable(feature = "error_downcast", reason = "recently added")] + #[stable(feature = "error_downcast", since = "1.3.0")] #[inline] pub fn downcast_ref<T: Error + 'static>(&self) -> Option<&T> { if self.is::<T>() { @@ -201,7 +201,7 @@ impl Error + 'static { /// Returns some mutable reference to the boxed value if it is of type `T`, or /// `None` if it isn't. - #[unstable(feature = "error_downcast", reason = "recently added")] + #[stable(feature = "error_downcast", since = "1.3.0")] #[inline] pub fn downcast_mut<T: Error + 'static>(&mut self) -> Option<&mut T> { if self.is::<T>() { @@ -220,21 +220,44 @@ impl Error + 'static { impl Error + 'static + Send { /// Forwards to the method defined on the type `Any`. - #[unstable(feature = "error_downcast", reason = "recently added")] + #[stable(feature = "error_downcast", since = "1.3.0")] #[inline] pub fn is<T: Error + 'static>(&self) -> bool { <Error + 'static>::is::<T>(self) } /// Forwards to the method defined on the type `Any`. - #[unstable(feature = "error_downcast", reason = "recently added")] + #[stable(feature = "error_downcast", since = "1.3.0")] #[inline] pub fn downcast_ref<T: Error + 'static>(&self) -> Option<&T> { <Error + 'static>::downcast_ref::<T>(self) } /// Forwards to the method defined on the type `Any`. - #[unstable(feature = "error_downcast", reason = "recently added")] + #[stable(feature = "error_downcast", since = "1.3.0")] + #[inline] + pub fn downcast_mut<T: Error + 'static>(&mut self) -> Option<&mut T> { + <Error + 'static>::downcast_mut::<T>(self) + } +} + +impl Error + 'static + Send + Sync { + /// Forwards to the method defined on the type `Any`. + #[stable(feature = "error_downcast", since = "1.3.0")] + #[inline] + pub fn is<T: Error + 'static>(&self) -> bool { + <Error + 'static>::is::<T>(self) + } + + /// Forwards to the method defined on the type `Any`. + #[stable(feature = "error_downcast", since = "1.3.0")] + #[inline] + pub fn downcast_ref<T: Error + 'static>(&self) -> Option<&T> { + <Error + 'static>::downcast_ref::<T>(self) + } + + /// Forwards to the method defined on the type `Any`. + #[stable(feature = "error_downcast", since = "1.3.0")] #[inline] pub fn downcast_mut<T: Error + 'static>(&mut self) -> Option<&mut T> { <Error + 'static>::downcast_mut::<T>(self) @@ -243,7 +266,7 @@ impl Error + 'static + Send { impl Error { #[inline] - #[unstable(feature = "error_downcast", reason = "recently added")] + #[stable(feature = "error_downcast", since = "1.3.0")] /// Attempt to downcast the box to a concrete type. pub fn downcast<T: Error + 'static>(self: Box<Self>) -> Result<Box<T>, Box<Error>> { if self.is::<T>() { @@ -264,9 +287,10 @@ impl Error { impl Error + Send { #[inline] - #[unstable(feature = "error_downcast", reason = "recently added")] + #[stable(feature = "error_downcast", since = "1.3.0")] /// Attempt to downcast the box to a concrete type. - pub fn downcast<T: Error + 'static>(self: Box<Self>) -> Result<Box<T>, Box<Error + Send>> { + pub fn downcast<T: Error + 'static>(self: Box<Self>) + -> Result<Box<T>, Box<Error + Send>> { let err: Box<Error> = self; <Error>::downcast(err).map_err(|s| unsafe { // reapply the Send marker @@ -274,3 +298,63 @@ impl Error + Send { }) } } + +impl Error + Send + Sync { + #[inline] + #[stable(feature = "error_downcast", since = "1.3.0")] + /// Attempt to downcast the box to a concrete type. 
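The `Send + Sync` impls added below round out the boxed form of the same API. A small sketch of how `Box<Error + Send + Sync>::downcast` is typically used; the `io::Error` and its message are only for illustration:

```
use std::error::Error;
use std::io;

fn main() {
    // Any concrete error can be erased into a boxed trait object...
    let boxed: Box<Error + Send + Sync> =
        Box::new(io::Error::new(io::ErrorKind::Other, "disk on fire"));

    // ...and `downcast` either returns the concrete box, or hands the
    // type-erased box back unchanged so it can still be reported.
    match boxed.downcast::<io::Error>() {
        Ok(io_err) => println!("recovered an io::Error: {}", io_err),
        Err(other) => println!("something else went wrong: {}", other),
    }
}
```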
+ pub fn downcast<T: Error + 'static>(self: Box<Self>) + -> Result<Box<T>, Box<Self>> { + let err: Box<Error> = self; + <Error>::downcast(err).map_err(|s| unsafe { + // reapply the Send+Sync marker + transmute::<Box<Error>, Box<Error + Send + Sync>>(s) + }) + } +} + +#[cfg(test)] +mod tests { + use prelude::v1::*; + use super::Error; + use fmt; + + #[derive(Debug, PartialEq)] + struct A; + #[derive(Debug, PartialEq)] + struct B; + + impl fmt::Display for A { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + write!(f, "A") + } + } + impl fmt::Display for B { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + write!(f, "B") + } + } + + impl Error for A { + fn description(&self) -> &str { "A-desc" } + } + impl Error for B { + fn description(&self) -> &str { "A-desc" } + } + + #[test] + fn downcasting() { + let mut a = A; + let mut a = &mut a as &mut (Error + 'static); + assert_eq!(a.downcast_ref::<A>(), Some(&A)); + assert_eq!(a.downcast_ref::<B>(), None); + assert_eq!(a.downcast_mut::<A>(), Some(&mut A)); + assert_eq!(a.downcast_mut::<B>(), None); + + let a: Box<Error> = Box::new(A); + match a.downcast::<B>() { + Ok(..) => panic!("expected error"), + Err(e) => assert_eq!(*e.downcast::<A>().unwrap(), A), + } + } +} diff --git a/src/libstd/io/error.rs b/src/libstd/io/error.rs index 3b48ff30960..e12e202148b 100644 --- a/src/libstd/io/error.rs +++ b/src/libstd/io/error.rs @@ -219,8 +219,7 @@ impl Error { /// /// If this `Error` was constructed via `new` then this function will /// return `Some`, otherwise it will return `None`. - #[unstable(feature = "io_error_inner", - reason = "recently added and requires UFCS to downcast")] + #[stable(feature = "io_error_inner", since = "1.3.0")] pub fn get_ref(&self) -> Option<&(error::Error+Send+Sync+'static)> { match self.repr { Repr::Os(..) => None, @@ -233,8 +232,7 @@ impl Error { /// /// If this `Error` was constructed via `new` then this function will /// return `Some`, otherwise it will return `None`. - #[unstable(feature = "io_error_inner", - reason = "recently added and requires UFCS to downcast")] + #[stable(feature = "io_error_inner", since = "1.3.0")] pub fn get_mut(&mut self) -> Option<&mut (error::Error+Send+Sync+'static)> { match self.repr { Repr::Os(..) => None, @@ -246,8 +244,7 @@ impl Error { /// /// If this `Error` was constructed via `new` then this function will /// return `Some`, otherwise it will return `None`. - #[unstable(feature = "io_error_inner", - reason = "recently added and requires UFCS to downcast")] + #[stable(feature = "io_error_inner", since = "1.3.0")] pub fn into_inner(self) -> Option<Box<error::Error+Send+Sync>> { match self.repr { Repr::Os(..) 
=> None, @@ -349,10 +346,10 @@ mod test { // we have to call all of these UFCS style right now since method // resolution won't implicitly drop the Send+Sync bounds let mut err = Error::new(ErrorKind::Other, TestError); - assert!(error::Error::is::<TestError>(err.get_ref().unwrap())); + assert!(err.get_ref().unwrap().is::<TestError>()); assert_eq!("asdf", err.get_ref().unwrap().description()); - assert!(error::Error::is::<TestError>(err.get_mut().unwrap())); + assert!(err.get_mut().unwrap().is::<TestError>()); let extracted = err.into_inner().unwrap(); - error::Error::downcast::<TestError>(extracted).unwrap(); + extracted.downcast::<TestError>().unwrap(); } } diff --git a/src/libstd/io/mod.rs b/src/libstd/io/mod.rs index f811aa1be4e..3d746aa450a 100644 --- a/src/libstd/io/mod.rs +++ b/src/libstd/io/mod.rs @@ -1060,8 +1060,9 @@ pub trait Seek { /// The behavior when seeking past the end of the stream is implementation /// defined. /// - /// This method returns the new position within the stream if the seek - /// operation completed successfully. + /// If the seek operation completed successfully, + /// this method returns the new position from the start of the stream. + /// That position can be used later with `SeekFrom::Start`. /// /// # Errors /// diff --git a/src/libstd/io/stdio.rs b/src/libstd/io/stdio.rs index d8b7c8a282c..d69e17cade4 100644 --- a/src/libstd/io/stdio.rs +++ b/src/libstd/io/stdio.rs @@ -255,7 +255,7 @@ impl Stdin { // in which case it will wait for the Enter key to be pressed before /// continuing #[stable(feature = "rust1", since = "1.0.0")] - pub fn read_line(&mut self, buf: &mut String) -> io::Result<usize> { + pub fn read_line(&self, buf: &mut String) -> io::Result<usize> { self.lock().read_line(buf) } } diff --git a/src/libstd/lib.rs b/src/libstd/lib.rs index 16491549705..2e796004aab 100644 --- a/src/libstd/lib.rs +++ b/src/libstd/lib.rs @@ -13,12 +13,11 @@ //! The Rust Standard Library is the foundation of portable Rust //! software, a set of minimal and battle-tested shared abstractions //! for the [broader Rust ecosystem](https://crates.io). It offers -//! core types (e.g. [`Vec`](vec/index.html) -//! and [`Option`](option/index.html)), library-defined [operations on -//! language primitives](#primitive) (e.g. [`u32`](u32/index.html) and -//! [`str`](str/index.html)), [standard macros](#macros), +//! core types, like [`Vec`](vec/index.html) +//! and [`Option`](option/index.html), library-defined [operations on +//! language primitives](#primitives), [standard macros](#macros), //! [I/O](io/index.html) and [multithreading](thread/index.html), among -//! [many other lovely +//! [many other //! things](#what-is-in-the-standard-library-documentation?). //! //! `std` is available to all Rust crates by default, just as if each @@ -65,8 +64,6 @@ //! //! # What is in the standard library documentation? //! -//! Lots of stuff. Well, broadly four things actually. -//! //! First of all, The Rust Standard Library is divided into a number //! of focused modules, [all listed further down this page](#modules). //! These modules are the bedrock upon which all of Rust is forged, @@ -89,7 +86,7 @@ //! //! So for example there is a [page for the primitive type //! `i32`](primitive.i32.html) that lists all the methods that can be -//! called on 32-bit integers (mega useful), and there is a [page for +//! called on 32-bit integers (very useful), and there is a [page for //! the module `std::i32`](i32/index.html) that documents the constant //! values `MIN` and `MAX` (rarely useful). 
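The reworded `Seek::seek` documentation above is easiest to see with a concrete stream; this sketch uses an in-memory `Cursor` purely for illustration:

```
use std::io::{Cursor, Read, Seek, SeekFrom};

fn main() {
    let mut cur = Cursor::new(b"hello world".to_vec());

    // Skip the first six bytes; `seek` reports the new absolute position.
    let pos = cur.seek(SeekFrom::Current(6)).unwrap();
    assert_eq!(pos, 6);

    let mut rest = String::new();
    cur.read_to_string(&mut rest).unwrap();
    assert_eq!(rest, "world");

    // The returned position can be replayed later with `SeekFrom::Start`.
    cur.seek(SeekFrom::Start(pos)).unwrap();
}
```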
//! @@ -99,9 +96,7 @@ //! [`String`](string/struct.String.html) and //! [`Vec`](vec/struct.Vec.html) are actually calls to methods on //! `str` and `[T]` respectively, via [deref -//! coercions](../book/deref-coercions.html). *Accepting that -//! primitive types are documented on their own pages will bring you a -//! deep inner wisdom. Embrace it now before proceeding.* +//! coercions](../book/deref-coercions.html). //! //! Third, the standard library defines [The Rust //! Prelude](prelude/index.html), a small collection of items - mostly @@ -416,27 +411,10 @@ pub mod __rand { // because rustdoc only looks for these modules at the crate level. include!("primitive_docs.rs"); -// A curious inner-module that's not exported that contains the binding -// 'std' so that macro-expanded references to std::error and such -// can be resolved within libstd. -#[doc(hidden)] +// The expansion of --test has a few references to `::std::$foo` so this module +// is necessary to get things to compile. +#[cfg(test)] mod std { - pub use sync; // used for select!() - pub use error; // used for try!() - pub use fmt; // used for any formatting strings - pub use option; // used for thread_local!{} - pub use rt; // used for panic!() - pub use vec; // used for vec![] - pub use cell; // used for tls! - pub use thread; // used for thread_local! - pub use marker; // used for tls! - - // The test runner calls ::std::env::args() but really wants realstd - #[cfg(test)] pub use realstd::env as env; - // The test runner requires std::slice::Vector, so re-export std::slice just for it. - // - // It is also used in vec![] - pub use slice; - - pub use boxed; // used for vec![] + pub use option; + pub use realstd::env; } diff --git a/src/libstd/process.rs b/src/libstd/process.rs index 269a1638b0a..74a66558627 100644 --- a/src/libstd/process.rs +++ b/src/libstd/process.rs @@ -505,7 +505,7 @@ impl Child { } /// Returns the OS-assigned process identifier associated with this child. - #[unstable(feature = "process_id", reason = "api recently added")] + #[stable(feature = "process_id", since = "1.3.0")] pub fn id(&self) -> u32 { self.handle.id() } @@ -799,7 +799,7 @@ mod tests { #[cfg(not(target_os="android"))] #[test] fn test_inherit_env() { - use std::env; + use env; let result = env_cmd().output().unwrap(); let output = String::from_utf8(result.stdout).unwrap(); diff --git a/src/libstd/sync/mpsc/mod.rs b/src/libstd/sync/mpsc/mod.rs index 1453c91fd4d..d80d858e7a9 100644 --- a/src/libstd/sync/mpsc/mod.rs +++ b/src/libstd/sync/mpsc/mod.rs @@ -1107,7 +1107,7 @@ impl error::Error for TryRecvError { mod tests { use prelude::v1::*; - use std::env; + use env; use super::*; use thread; @@ -1655,7 +1655,7 @@ mod tests { mod sync_tests { use prelude::v1::*; - use std::env; + use env; use thread; use super::*; diff --git a/src/libstd/thread/local.rs b/src/libstd/thread/local.rs index 11b375dcce2..9a6d68acb9f 100644 --- a/src/libstd/thread/local.rs +++ b/src/libstd/thread/local.rs @@ -107,14 +107,14 @@ pub struct LocalKey<T> { #[cfg(not(no_elf_tls))] macro_rules! 
thread_local { (static $name:ident: $t:ty = $init:expr) => ( - static $name: ::std::thread::LocalKey<$t> = + static $name: $crate::thread::LocalKey<$t> = __thread_local_inner!($t, $init, #[cfg_attr(all(any(target_os = "macos", target_os = "linux"), not(target_arch = "aarch64")), thread_local)]); ); (pub static $name:ident: $t:ty = $init:expr) => ( - pub static $name: ::std::thread::LocalKey<$t> = + pub static $name: $crate::thread::LocalKey<$t> = __thread_local_inner!($t, $init, #[cfg_attr(all(any(target_os = "macos", target_os = "linux"), not(target_arch = "aarch64")), @@ -128,11 +128,11 @@ macro_rules! thread_local { #[cfg(no_elf_tls)] macro_rules! thread_local { (static $name:ident: $t:ty = $init:expr) => ( - static $name: ::std::thread::LocalKey<$t> = + static $name: $crate::thread::LocalKey<$t> = __thread_local_inner!($t, $init, #[]); ); (pub static $name:ident: $t:ty = $init:expr) => ( - pub static $name: ::std::thread::LocalKey<$t> = + pub static $name: $crate::thread::LocalKey<$t> = __thread_local_inner!($t, $init, #[]); ); } @@ -145,11 +145,11 @@ macro_rules! thread_local { macro_rules! __thread_local_inner { ($t:ty, $init:expr, #[$($attr:meta),*]) => {{ $(#[$attr])* - static __KEY: ::std::thread::__LocalKeyInner<$t> = - ::std::thread::__LocalKeyInner::new(); + static __KEY: $crate::thread::__LocalKeyInner<$t> = + $crate::thread::__LocalKeyInner::new(); fn __init() -> $t { $init } - fn __getit() -> &'static ::std::thread::__LocalKeyInner<$t> { &__KEY } - ::std::thread::LocalKey::new(__getit, __init) + fn __getit() -> &'static $crate::thread::__LocalKeyInner<$t> { &__KEY } + $crate::thread::LocalKey::new(__getit, __init) }} } diff --git a/src/libstd/thread/scoped_tls.rs b/src/libstd/thread/scoped_tls.rs index 4fbfdec8e7e..cf2c5db8277 100644 --- a/src/libstd/thread/scoped_tls.rs +++ b/src/libstd/thread/scoped_tls.rs @@ -70,11 +70,11 @@ pub struct ScopedKey<T> { inner: fn() -> &'static imp::KeyInner<T> } #[allow_internal_unstable] macro_rules! scoped_thread_local { (static $name:ident: $t:ty) => ( - static $name: ::std::thread::ScopedKey<$t> = + static $name: $crate::thread::ScopedKey<$t> = __scoped_thread_local_inner!($t); ); (pub static $name:ident: $t:ty) => ( - pub static $name: ::std::thread::ScopedKey<$t> = + pub static $name: $crate::thread::ScopedKey<$t> = __scoped_thread_local_inner!($t); ); } @@ -87,10 +87,10 @@ macro_rules! scoped_thread_local { #[cfg(no_elf_tls)] macro_rules! __scoped_thread_local_inner { ($t:ty) => {{ - static _KEY: ::std::thread::__ScopedKeyInner<$t> = - ::std::thread::__ScopedKeyInner::new(); - fn _getit() -> &'static ::std::thread::__ScopedKeyInner<$t> { &_KEY } - ::std::thread::ScopedKey::new(_getit) + static _KEY: $crate::thread::__ScopedKeyInner<$t> = + $crate::thread::__ScopedKeyInner::new(); + fn _getit() -> &'static $crate::thread::__ScopedKeyInner<$t> { &_KEY } + $crate::thread::ScopedKey::new(_getit) }} } @@ -109,10 +109,10 @@ macro_rules! 
__scoped_thread_local_inner { target_os = "openbsd", target_arch = "aarch64")), thread_local)] - static _KEY: ::std::thread::__ScopedKeyInner<$t> = - ::std::thread::__ScopedKeyInner::new(); - fn _getit() -> &'static ::std::thread::__ScopedKeyInner<$t> { &_KEY } - ::std::thread::ScopedKey::new(_getit) + static _KEY: $crate::thread::__ScopedKeyInner<$t> = + $crate::thread::__ScopedKeyInner::new(); + fn _getit() -> &'static $crate::thread::__ScopedKeyInner<$t> { &_KEY } + $crate::thread::ScopedKey::new(_getit) }} } @@ -225,7 +225,7 @@ impl<T> ScopedKey<T> { no_elf_tls)))] #[doc(hidden)] mod imp { - use std::cell::Cell; + use cell::Cell; pub struct KeyInner<T> { inner: Cell<*mut T> } diff --git a/src/libsyntax/ext/deriving/generic/mod.rs b/src/libsyntax/ext/deriving/generic/mod.rs index e7d242ab703..8f9e0279b29 100644 --- a/src/libsyntax/ext/deriving/generic/mod.rs +++ b/src/libsyntax/ext/deriving/generic/mod.rs @@ -188,6 +188,7 @@ pub use self::SubstructureFields::*; use self::StructType::*; use std::cell::RefCell; +use std::collections::HashSet; use std::vec; use abi::Abi; @@ -549,10 +550,20 @@ impl<'a> TraitDef<'a> { .map(|ty_param| ty_param.ident.name) .collect(); + let mut processed_field_types = HashSet::new(); for field_ty in field_tys { let tys = find_type_parameters(&*field_ty, &ty_param_names); for ty in tys { + // if we have already handled this type, skip it + if let ast::TyPath(_, ref p) = ty.node { + if p.segments.len() == 1 + && ty_param_names.contains(&p.segments[0].identifier.name) + || processed_field_types.contains(&p.segments) { + continue; + }; + processed_field_types.insert(p.segments.clone()); + } let mut bounds: Vec<_> = self.additional_bounds.iter().map(|p| { cx.typarambound(p.to_path(cx, self.span, type_ident, generics)) }).collect(); diff --git a/src/libsyntax/feature_gate.rs b/src/libsyntax/feature_gate.rs index 53b57cdfaa1..945e457a77b 100644 --- a/src/libsyntax/feature_gate.rs +++ b/src/libsyntax/feature_gate.rs @@ -163,8 +163,12 @@ const KNOWN_FEATURES: &'static [(&'static str, &'static str, Status)] = &[ // Allows the definition recursive static items. ("static_recursion", "1.3.0", Active), -// Allows default type parameters to influence type inference. - ("default_type_parameter_fallback", "1.3.0", Active) + + // Allows default type parameters to influence type inference. + ("default_type_parameter_fallback", "1.3.0", Active), + + // Allows associated type defaults + ("associated_type_defaults", "1.2.0", Active), ]; // (changing above list without updating src/doc/reference.md makes @cmr sad) @@ -762,6 +766,10 @@ impl<'a, 'v> Visitor<'v> for PostExpansionVisitor<'a> { self.gate_feature("const_fn", ti.span, "const fn is unstable"); } } + ast::TypeTraitItem(_, Some(_)) => { + self.gate_feature("associated_type_defaults", ti.span, + "associated type defaults are unstable"); + } _ => {} } visit::walk_trait_item(self, ti); diff --git a/src/libsyntax/parse/obsolete.rs b/src/libsyntax/parse/obsolete.rs index 5a72477d4ac..bc355f70fb3 100644 --- a/src/libsyntax/parse/obsolete.rs +++ b/src/libsyntax/parse/obsolete.rs @@ -13,11 +13,8 @@ //! //! Obsolete syntax that becomes too hard to parse can be removed. -use ast::{Expr, ExprTup}; use codemap::Span; use parse::parser; -use parse::token; -use ptr::P; /// The specific types of unsupported syntax #[derive(Copy, Clone, PartialEq, Eq, Hash)] @@ -29,17 +26,12 @@ pub enum ObsoleteSyntax { pub trait ParserObsoleteMethods { /// Reports an obsolete syntax non-fatal error. 
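Stepping back to the `thread_local!` and `scoped_thread_local!` hunks above: replacing the hard-coded `::std::...` paths with `$crate` is the usual way to make a macro's expansion resolve correctly wherever it is expanded. A small sketch of the same pattern in an ordinary crate; `Greeting` and `static_greeting!` are invented for illustration:

```
// `$crate` names the crate that defines the macro, so the expansion works
// both inside that crate and in downstream crates that call the macro,
// without hard-coding the crate's external name.
#[macro_export]
macro_rules! static_greeting {
    ($name:ident) => {
        static $name: $crate::Greeting = $crate::Greeting { text: "hello" };
    };
}

pub struct Greeting {
    pub text: &'static str,
}

static_greeting!(GREETING);

fn main() {
    println!("{}", GREETING.text);
}
```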
fn obsolete(&mut self, sp: Span, kind: ObsoleteSyntax); - /// Reports an obsolete syntax non-fatal error, and returns - /// a placeholder expression - fn obsolete_expr(&mut self, sp: Span, kind: ObsoleteSyntax) -> P<Expr>; fn report(&mut self, sp: Span, kind: ObsoleteSyntax, kind_str: &str, desc: &str, error: bool); - fn is_obsolete_ident(&mut self, ident: &str) -> bool; - fn eat_obsolete_ident(&mut self, ident: &str) -> bool; } impl<'a> ParserObsoleteMethods for parser::Parser<'a> { @@ -61,13 +53,6 @@ impl<'a> ParserObsoleteMethods for parser::Parser<'a> { self.report(sp, kind, kind_str, desc, error); } - /// Reports an obsolete syntax non-fatal error, and returns - /// a placeholder expression - fn obsolete_expr(&mut self, sp: Span, kind: ObsoleteSyntax) -> P<Expr> { - self.obsolete(sp, kind); - self.mk_expr(sp.lo, sp.hi, ExprTup(vec![])) - } - fn report(&mut self, sp: Span, kind: ObsoleteSyntax, @@ -89,20 +74,4 @@ impl<'a> ParserObsoleteMethods for parser::Parser<'a> { self.obsolete_set.insert(kind); } } - - fn is_obsolete_ident(&mut self, ident: &str) -> bool { - match self.token { - token::Ident(sid, _) => sid.name == ident, - _ => false, - } - } - - fn eat_obsolete_ident(&mut self, ident: &str) -> bool { - if self.is_obsolete_ident(ident) { - panictry!(self.bump()); - true - } else { - false - } - } } diff --git a/src/libsyntax/parse/parser.rs b/src/libsyntax/parse/parser.rs index 11611c9adb0..e7ab9a73c0f 100644 --- a/src/libsyntax/parse/parser.rs +++ b/src/libsyntax/parse/parser.rs @@ -4610,7 +4610,7 @@ impl<'a> Parser<'a> { None }; - if try!(self.eat(&token::DotDot) ){ + if opt_trait.is_some() && try!(self.eat(&token::DotDot) ){ if generics.is_parameterized() { self.span_err(impl_span, "default trait implementations are not \ allowed to have generics"); diff --git a/src/snapshots.txt b/src/snapshots.txt index d317b5be4c1..31803fb18f6 100644 --- a/src/snapshots.txt +++ b/src/snapshots.txt @@ -1,5 +1,6 @@ S 2015-07-26 a5c12f4 bitrig-x86_64 8734eb41ffbe6ddc1120aa2910db4162ec9cf270 + freebsd-i386 2fee22adec101e2f952a5548fd1437ce1bd8d26f freebsd-x86_64 bc50b0f8d7f6d62f4f5ffa136f5387f5bf6524fd linux-i386 3459275cdf3896f678e225843fa56f0d9fdbabe8 linux-x86_64 e451e3bd6e5fcef71e41ae6f3da9fb1cf0e13a0c diff --git a/src/test/auxiliary/xcrate_associated_type_defaults.rs b/src/test/auxiliary/xcrate_associated_type_defaults.rs index a6b70bf974f..43852a4e793 100644 --- a/src/test/auxiliary/xcrate_associated_type_defaults.rs +++ b/src/test/auxiliary/xcrate_associated_type_defaults.rs @@ -8,6 +8,8 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. +#![feature(associated_type_defaults)] + pub trait Foo { type Input = usize; fn bar(&self, _: Self::Input) {} diff --git a/src/test/compile-fail/associated-types-overridden-default.rs b/src/test/compile-fail/associated-types-overridden-default.rs index eb519e79006..19f13f5fc2f 100644 --- a/src/test/compile-fail/associated-types-overridden-default.rs +++ b/src/test/compile-fail/associated-types-overridden-default.rs @@ -9,6 +9,7 @@ // except according to those terms. #![feature(associated_consts)] +#![feature(associated_type_defaults)] pub trait Tr { type Assoc = u8; diff --git a/src/test/compile-fail/feature-gate-assoc-type-defaults.rs b/src/test/compile-fail/feature-gate-assoc-type-defaults.rs new file mode 100644 index 00000000000..fc4871a712d --- /dev/null +++ b/src/test/compile-fail/feature-gate-assoc-type-defaults.rs @@ -0,0 +1,15 @@ +// Copyright 2015 The Rust Project Developers. 
See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or +// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license +// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +trait Foo { + type Bar = u8; //~ ERROR associated type defaults are unstable +} + +fn main() {} diff --git a/src/test/compile-fail/issue-23073.rs b/src/test/compile-fail/issue-23073.rs index 1286ba873be..2d219177a80 100644 --- a/src/test/compile-fail/issue-23073.rs +++ b/src/test/compile-fail/issue-23073.rs @@ -8,6 +8,8 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. +#![feature(associated_type_defaults)] + trait Foo { type T; } trait Bar { type Foo: Foo; diff --git a/src/test/compile-fail/issue-23595-1.rs b/src/test/compile-fail/issue-23595-1.rs index 749b261e387..a3422d859c6 100644 --- a/src/test/compile-fail/issue-23595-1.rs +++ b/src/test/compile-fail/issue-23595-1.rs @@ -8,6 +8,8 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. +#![feature(associated_type_defaults)] + use std::ops::{Index}; trait Hierarchy { diff --git a/src/test/compile-fail/issue-23595-2.rs b/src/test/compile-fail/issue-23595-2.rs index 78a3f42f1a6..6a3ce03fce5 100644 --- a/src/test/compile-fail/issue-23595-2.rs +++ b/src/test/compile-fail/issue-23595-2.rs @@ -8,6 +8,8 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. +#![feature(associated_type_defaults)] + pub struct C<AType: A> {a:AType} pub trait A { diff --git a/src/test/compile-fail/lint-missing-doc.rs b/src/test/compile-fail/lint-missing-doc.rs index 04db6c8c8f3..c98d7083743 100644 --- a/src/test/compile-fail/lint-missing-doc.rs +++ b/src/test/compile-fail/lint-missing-doc.rs @@ -12,6 +12,7 @@ // injected intrinsics by the compiler. #![deny(missing_docs)] #![allow(dead_code)] +#![feature(associated_type_defaults)] //! Some garbage docs for the crate here #![doc="More garbage"] diff --git a/src/test/parse-fail/empty-impl-semicolon.rs b/src/test/parse-fail/empty-impl-semicolon.rs index e356ab1debc..d9f8add8cfb 100644 --- a/src/test/parse-fail/empty-impl-semicolon.rs +++ b/src/test/parse-fail/empty-impl-semicolon.rs @@ -10,4 +10,4 @@ // compile-flags: -Z parse-only -impl Foo; //~ ERROR expected one of `(`, `+`, `..`, `::`, `<`, `for`, `where`, or `{`, found `;` +impl Foo; //~ ERROR expected one of `(`, `+`, `::`, `<`, `for`, `where`, or `{`, found `;` diff --git a/src/test/parse-fail/issue-27255.rs b/src/test/parse-fail/issue-27255.rs new file mode 100644 index 00000000000..a751c4af494 --- /dev/null +++ b/src/test/parse-fail/issue-27255.rs @@ -0,0 +1,15 @@ +// Copyright 2015 The Rust Project Developers. See the COPYRIGHT +// file at the top-level directory of this distribution and at +// http://rust-lang.org/COPYRIGHT. +// +// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or +// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license +// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your +// option. This file may not be copied, modified, or distributed +// except according to those terms. + +// compile-flags: -Z parse-only + +impl A .. 
{} //~ ERROR + +fn main() {} diff --git a/src/test/parse-fail/multitrait.rs b/src/test/parse-fail/multitrait.rs index a1c737609d1..2a8d6d99957 100644 --- a/src/test/parse-fail/multitrait.rs +++ b/src/test/parse-fail/multitrait.rs @@ -15,7 +15,7 @@ struct S { } impl Cmp, ToString for S { -//~^ ERROR: expected one of `(`, `+`, `..`, `::`, `<`, `for`, `where`, or `{`, found `,` +//~^ ERROR: expected one of `(`, `+`, `::`, `<`, `for`, `where`, or `{`, found `,` fn eq(&&other: S) { false } fn to_string(&self) -> String { "hi".to_string() } } diff --git a/src/test/parse-fail/trait-bounds-not-on-impl.rs b/src/test/parse-fail/trait-bounds-not-on-impl.rs index 7a20b00ce12..3bd8908d18b 100644 --- a/src/test/parse-fail/trait-bounds-not-on-impl.rs +++ b/src/test/parse-fail/trait-bounds-not-on-impl.rs @@ -17,7 +17,7 @@ struct Bar; impl Foo + Owned for Bar { //~^ ERROR not a trait -//~^^ ERROR expected one of `..`, `where`, or `{`, found `Bar` +//~^^ ERROR expected one of `where` or `{`, found `Bar` } fn main() { } diff --git a/src/test/run-pass/default-associated-types.rs b/src/test/run-pass/default-associated-types.rs index b3def429b9b..3e6c72c993a 100644 --- a/src/test/run-pass/default-associated-types.rs +++ b/src/test/run-pass/default-associated-types.rs @@ -8,6 +8,8 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. +#![feature(associated_type_defaults)] + trait Foo<T> { type Out = T; fn foo(&self) -> Self::Out; diff --git a/src/test/run-pass/issue-25339.rs b/src/test/run-pass/issue-25339.rs index af172000fdb..381df7c5d59 100644 --- a/src/test/run-pass/issue-25339.rs +++ b/src/test/run-pass/issue-25339.rs @@ -8,6 +8,8 @@ // option. This file may not be copied, modified, or distributed // except according to those terms. +#![feature(associated_type_defaults)] + use std::marker::PhantomData; pub trait Routing<I> { |
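Most of the test churn at the end of this patch comes from the new `associated_type_defaults` gate. A minimal sketch of what the gate now guards, mirroring the added feature-gate test; `Frame` and `Buffer` are illustrative names, and the feature attribute requires a nightly compiler:

```
#![feature(associated_type_defaults)]

trait Frame {
    // Without the feature attribute this default is now rejected with
    // "associated type defaults are unstable".
    type Buffer = Vec<u8>;

    fn width(&self) -> usize;
}

fn main() {}
```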
