author    bors <bors@rust-lang.org>  2015-02-15 07:53:07 +0000
committer bors <bors@rust-lang.org>  2015-02-15 07:53:07 +0000
commit    b6d91a2bdac45cd919497a24207fab843124d4ba (patch)
tree      d826a22591607d47cdbc31f7e72b94698b4d451e
parent    5be210c4188fb2f1a4fabc6baee5397ac6e6741e (diff)
parent    bdc730e403cc29ae8b29f94320df6fa22ff279f6 (diff)
download  rust-b6d91a2bdac45cd919497a24207fab843124d4ba.tar.gz
          rust-b6d91a2bdac45cd919497a24207fab843124d4ba.zip
Auto merge of #22126 - steveklabnik:gh21281, r=nikomatsakis
This is super black magic internals at the moment, but having it
somewhere semi-public seems good. The current versions weren't being
rendered, and they'll be useful for some people.

Fixes #21281

r? @nikomatsakis @kmcallister 
-rw-r--r--  src/librustc/middle/infer/README.md | 237
-rw-r--r--  src/librustc/middle/infer/doc.rs | 247
-rw-r--r--  src/librustc/middle/infer/higher_ranked/README.md | 403
-rw-r--r--  src/librustc/middle/infer/higher_ranked/doc.rs | 413
-rw-r--r--  src/librustc/middle/infer/mod.rs | 3
-rw-r--r--  src/librustc/middle/infer/region_inference/README.md | 364
-rw-r--r--  src/librustc/middle/infer/region_inference/doc.rs | 374
-rw-r--r--  src/librustc/middle/infer/region_inference/mod.rs | 1
-rw-r--r--  src/librustc/middle/traits/README.md (renamed from src/librustc/middle/traits/doc.rs) | 14
-rw-r--r--  src/librustc/middle/traits/mod.rs | 2
-rw-r--r--  src/librustc_borrowck/borrowck/README.md | 1212
-rw-r--r--  src/librustc_borrowck/borrowck/doc.rs | 1222
-rw-r--r--  src/librustc_borrowck/borrowck/mod.rs | 4
-rw-r--r--  src/librustc_trans/trans/cleanup.rs | 107
-rw-r--r--  src/librustc_trans/trans/datum.rs | 93
-rw-r--r--  src/librustc_trans/trans/doc.rs | 233
-rw-r--r--  src/librustc_trans/trans/expr.rs | 21
-rw-r--r--  src/librustc_trans/trans/mod.rs | 1
-rw-r--r--  src/librustc_typeck/check/method/README.md | 111
-rw-r--r--  src/librustc_typeck/check/method/doc.rs | 121
-rw-r--r--  src/librustc_typeck/check/method/mod.rs | 1
21 files changed, 2545 insertions, 2639 deletions
diff --git a/src/librustc/middle/infer/README.md b/src/librustc/middle/infer/README.md
new file mode 100644
index 00000000000..c835189820e
--- /dev/null
+++ b/src/librustc/middle/infer/README.md
@@ -0,0 +1,237 @@
+# Type inference engine
+
+This is loosely based on standard HM-type inference, but with an
+extension to try and accommodate subtyping.  There is nothing
+principled about this extension; it's sound---I hope!---but it's a
+heuristic, ultimately, and does not guarantee that it finds a valid
+typing even if one exists (in fact, there are known scenarios where it
+fails, some of which may eventually become problematic).
+
+## Key idea
+
+The main change is that each type variable T is associated with a
+lower-bound L and an upper-bound U.  L and U begin as bottom and top,
+respectively, but gradually narrow in response to new constraints
+being introduced.  When a variable is finally resolved to a concrete
+type, it can (theoretically) select any type that is a supertype of L
+and a subtype of U.
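The narrowing of L and U can be sketched concretely. This is an illustrative model, not rustc's actual code: the names (`Ty`, `TyVar`, `raise_lb`, `lower_ub`) are assumptions for the sketch, and the four-type hierarchy (Null below String and Other, Object on top) is borrowed from the aside below.

```rust
// Toy type hierarchy: Null <: String/Other <: Object.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum Ty { Null, Str, Other, Object }

// `a <: b` in the toy hierarchy.
fn sub(a: Ty, b: Ty) -> bool {
    a == b || a == Ty::Null || b == Ty::Object
}

// Least upper bound / greatest lower bound in the toy hierarchy.
fn lub(a: Ty, b: Ty) -> Ty { if sub(a, b) { b } else if sub(b, a) { a } else { Ty::Object } }
fn glb(a: Ty, b: Ty) -> Ty { if sub(a, b) { a } else if sub(b, a) { b } else { Ty::Null } }

struct TyVar { lb: Ty, ub: Ty }

impl TyVar {
    // Fresh variables start with the widest possible range: bot..top.
    fn new() -> TyVar { TyVar { lb: Ty::Null, ub: Ty::Object } }

    // Constraint `t <: var`: the lower bound can only move up.
    fn raise_lb(&mut self, t: Ty) -> Result<(), ()> {
        self.lb = lub(self.lb, t);
        if sub(self.lb, self.ub) { Ok(()) } else { Err(()) }
    }

    // Constraint `var <: t`: the upper bound can only move down.
    fn lower_ub(&mut self, t: Ty) -> Result<(), ()> {
        self.ub = glb(self.ub, t);
        if sub(self.lb, self.ub) { Ok(()) } else { Err(()) }
    }
}
```

An error surfaces exactly when the range becomes empty, i.e. when the lower bound is no longer a subtype of the upper bound.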
+
+There are several critical invariants which we maintain:
+
+- the upper-bound of a variable only becomes lower and the lower-bound
+  only becomes higher over time;
+- the lower-bound L is always a subtype of the upper bound U;
+- the lower-bound L and upper-bound U never refer to other type variables,
+  but only to types (though those types may contain type variables).
+
+> An aside: if the terms upper- and lower-bound confuse you, think of
+> "supertype" and "subtype".  The upper-bound is a "supertype"
+> (super=upper in Latin, or something like that anyway) and the lower-bound
+> is a "subtype" (sub=lower in Latin).  I find it helps to visualize
+> a simple class hierarchy, like Java minus interfaces and
+> primitive types.  The class Object is at the root (top) and other
+> types lie in between.  The bottom type is then the Null type.
+> So the tree looks like:
+>
+> ```text
+>         Object
+>         /    \
+>     String   Other
+>         \    /
+>         (null)
+> ```
+>
+> So the upper bound type is the "supertype" and the lower bound is the
+> "subtype".
+
+## Satisfying constraints
+
+At a primitive level, there is only one form of constraint that the
+inference understands: a subtype relation.  So the outside world can
+say "make type A a subtype of type B".  If there are variables
+involved, the inferencer will adjust their upper- and lower-bounds as
+needed to ensure that this relation is satisfied. (We also allow "make
+type A equal to type B", but this is translated into "A <: B" and "B
+<: A")
+
+As stated above, we always maintain the invariant that type bounds
+never refer to other variables.  This keeps the inference relatively
+simple, avoiding the scenario of having a kind of graph where we have
+to pump constraints along and reach a fixed point, but it does impose
+some heuristics in the case where the user is relating two type
+variables A <: B.
+
+Combining two variables such that variable A will forever be a subtype
+of variable B is the trickiest part of the algorithm because there is
+often no right choice---that is, the right choice will depend on
+future constraints which we do not yet know. The problem comes about
+because both A and B have bounds that can be adjusted in the future.
+Let's look at some of the cases that can come up.
+
+Imagine, to start, the best case, where both A and B have an upper and
+lower bound (that is, the bounds are neither top nor bot, respectively). In
+that case, if we're lucky, A.ub <: B.lb, and so we know that whatever
+A and B should become, they will forever have the desired subtyping
+relation.  We can just leave things as they are.
+
+### Option 1: Unify
+
+However, suppose that A.ub is *not* a subtype of B.lb.  In
+that case, we must make a decision.  One option is to unify A
+and B so that they are one variable whose bounds are:
+
+    UB = GLB(A.ub, B.ub)
+    LB = LUB(A.lb, B.lb)
+
+(Note that we will have to verify that LB <: UB; if it does not hold,
+the types do not intersect and there is an error.) In that case, A <: B
+holds trivially because A==B.  However, we have now lost some
+flexibility, because perhaps the user intended for A and B to end up
+as different types and not the same type.
+
+Pictorially, what this does is to take two distinct variables with
+(hopefully not completely) distinct type ranges and produce one with
+the intersection.
+
+```text
+                  B.ub                  B.ub
+                   /\                    /
+           A.ub   /  \           A.ub   /
+           /   \ /    \              \ /
+          /     X      \              UB
+         /     / \      \            / \
+        /     /   /      \          /   /
+        \     \  /       /          \  /
+         \      X       /             LB
+          \    / \     /             / \
+           \  /   \   /             /   \
+           A.lb    B.lb          A.lb    B.lb
+```
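The unification rule above can be written out on a toy four-type hierarchy (Null below String and Other, Object on top); all names here are illustrative, not rustc's. A variable is reduced to its `(lb, ub)` range, and unifying intersects the two ranges.

```rust
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum Ty { Null, Str, Other, Object }

fn sub(a: Ty, b: Ty) -> bool { a == b || a == Ty::Null || b == Ty::Object }
fn lub(a: Ty, b: Ty) -> Ty { if sub(a, b) { b } else if sub(b, a) { a } else { Ty::Object } }
fn glb(a: Ty, b: Ty) -> Ty { if sub(a, b) { a } else if sub(b, a) { b } else { Ty::Null } }

// UB = GLB(A.ub, B.ub), LB = LUB(A.lb, B.lb); error if LB is not <: UB,
// i.e. the two ranges do not intersect.
fn unify(a: (Ty, Ty), b: (Ty, Ty)) -> Result<(Ty, Ty), ()> {
    let lb = lub(a.0, b.0);
    let ub = glb(a.1, b.1);
    if sub(lb, ub) { Ok((lb, ub)) } else { Err(()) }
}
```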
+
+
+### Option 2: Relate UB/LB
+
+Another option is to keep A and B as distinct variables but set their
+bounds in such a way that, whatever happens, we know that A <: B will hold.
+This can be achieved by ensuring that A.ub <: B.lb.  In practice there
+are two ways to do that, depicted pictorially here:
+
+```text
+    Before                Option #1            Option #2
+
+             B.ub                B.ub                B.ub
+              /\                 /  \                /  \
+      A.ub   /  \        A.ub   /(B')\       A.ub   /(B')\
+      /   \ /    \           \ /     /           \ /     /
+     /     X      \         __UB____/             UB    /
+    /     / \      \       /  |                   |    /
+   /     /   /      \     /   |                   |   /
+   \     \  /       /    /(A')|                   |  /
+    \      X       /    /     LB            ______LB/
+     \    / \     /    /     / \           / (A')/ \
+      \  /   \   /     \    /   \          \    /   \
+      A.lb    B.lb       A.lb    B.lb        A.lb    B.lb
+```
+
+In these diagrams, UB and LB are defined as before.  As you can see,
+the new ranges `A'` and `B'` are quite different from the range that
+would be produced by unifying the variables.
+
+### What we do now
+
+Our current technique is to *try* (transactionally) to relate the
+existing bounds of A and B, if there are any (i.e., if `UB(A) != top
+&& LB(B) != bot`).  If that succeeds, we're done.  If it fails, then
+we merge A and B into the same variable.
+
+This is not clearly the correct course.  For example, if `UB(A) !=
+top` but `LB(B) == bot`, we could conceivably set `LB(B)` to `UB(A)`
+and leave the variables unmerged.  This is sometimes the better
+course; it depends on the program.
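The try-then-merge strategy can be sketched generically. In this sketch the transaction is modelled by cloning the state; `attempt` and `merge` are hypothetical stand-ins for "relate the existing bounds" and "merge A and B":

```rust
// Try `attempt` transactionally; if it fails, undo its effects and fall
// back to `merge`.
fn try_or_merge<S: Clone>(state: &mut S,
                          attempt: impl Fn(&mut S) -> bool,
                          merge: impl Fn(&mut S)) {
    let saved = state.clone();   // open the transaction
    if !attempt(state) {         // try to relate the existing bounds
        *state = saved;          // failed: roll everything back...
        merge(state);            // ...and merge into one variable instead
    }
}
```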
+
+The main case which fails today that I would like to support is:
+
+```text
+fn foo<T>(x: T, y: T) { ... }
+
+fn bar() {
+    let x: @mut int = @mut 3;
+    let y: @int = @3;
+    foo(x, y);
+}
+```
+
+In principle, the inferencer ought to find that the parameter `T` to
+`foo(x, y)` is `@const int`.  Today, however, it does not; this is
+because the type variable `T` is merged with the type variable for
+`X`, and thus inherits its UB/LB of `@mut int`.  This leaves no
+flexibility for `T` to later adjust to accommodate `@int`.
+
+### What to do when not all bounds are present
+
+In the prior discussion we assumed that A.ub was not top and B.lb was
+not bot.  Unfortunately this is rarely the case.  Often type variables
+have "lopsided" bounds.  For example, if a variable in the program has
+been initialized but has not been used, then its corresponding type
+variable will have a lower bound but no upper bound.  When that
+variable is then used, we would like to know its upper bound---but we
+don't have one!  In this case we'll do different things depending on
+how the variable is being used.
+
+## Transactional support
+
+Whenever we merge variables or adjust their bounds, we always
+keep a record of the old value.  This allows the changes to be undone.
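A minimal sketch of such an undo log, assuming bound values live in a table indexed by variable (names are illustrative, not rustc's): every write records the previous value, a snapshot is just a position in the log, and rollback replays the log backwards.

```rust
struct Table {
    values: Vec<i32>,
    undo_log: Vec<(usize, i32)>, // (index, previous value)
}

impl Table {
    fn set(&mut self, idx: usize, val: i32) {
        self.undo_log.push((idx, self.values[idx]));
        self.values[idx] = val;
    }

    // A snapshot is the current length of the undo log.
    fn snapshot(&self) -> usize { self.undo_log.len() }

    // Undo every change made since the snapshot, newest first.
    fn rollback_to(&mut self, snap: usize) {
        while self.undo_log.len() > snap {
            let (idx, old) = self.undo_log.pop().unwrap();
            self.values[idx] = old;
        }
    }
}
```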
+
+## Regions
+
+I've only talked about type variables here, but region variables
+follow the same principle.  They have upper- and lower-bounds.  A
+region A is a subregion of a region B if A being valid implies that B
+is valid.  This basically corresponds to the block nesting structure:
+the regions for outer block scopes are superregions of those for inner
+block scopes.
+
+## Integral and floating-point type variables
+
+There is a third variety of type variable that we use only for
+inferring the types of unsuffixed integer literals.  Integral type
+variables differ from general-purpose type variables in that there's
+no subtyping relationship among the various integral types, so instead
+of associating each variable with an upper and lower bound, we just
+use simple unification.  Each integer variable is associated with at
+most one integer type.  Floating point types are handled similarly to
+integral types.
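The "simple unification" for integral variables can be sketched as a small union-find where each root carries at most one resolved integer type; unification merges roots and fails only if two different concrete types meet. All names here are illustrative, not rustc's.

```rust
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum IntTy { I8, I16, I32, I64 }

struct IntVarTable {
    parent: Vec<usize>,          // union-find forest
    value: Vec<Option<IntTy>>,   // at most one concrete type per root
}

impl IntVarTable {
    fn new() -> Self { IntVarTable { parent: vec![], value: vec![] } }

    fn fresh(&mut self) -> usize {
        self.parent.push(self.parent.len());
        self.value.push(None);
        self.parent.len() - 1
    }

    fn find(&mut self, v: usize) -> usize {
        if self.parent[v] != v {
            let root = self.find(self.parent[v]);
            self.parent[v] = root; // path compression
        }
        self.parent[v]
    }

    // `var == concrete type`; fails on a conflicting earlier resolution.
    fn resolve(&mut self, v: usize, t: IntTy) -> Result<(), ()> {
        let r = self.find(v);
        match self.value[r] {
            Some(prev) if prev != t => Err(()),
            _ => { self.value[r] = Some(t); Ok(()) }
        }
    }

    // `var == var`: merge the two equivalence classes.
    fn unify(&mut self, a: usize, b: usize) -> Result<(), ()> {
        let (ra, rb) = (self.find(a), self.find(b));
        if ra == rb { return Ok(()); }
        match (self.value[ra], self.value[rb]) {
            (Some(x), Some(y)) if x != y => return Err(()),
            (Some(x), None) => self.value[rb] = Some(x),
            _ => {}
        }
        self.parent[ra] = rb;
        Ok(())
    }

    fn probe(&mut self, v: usize) -> Option<IntTy> {
        let r = self.find(v);
        self.value[r]
    }
}
```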
+
+## GLB/LUB
+
+Computing the greatest-lower-bound and least-upper-bound of two
+types/regions is generally straightforward except when type variables
+are involved. In that case, we follow a similar "try to use the bounds
+when possible but otherwise merge the variables" strategy.  In other
+words, `GLB(A, B)` where `A` and `B` are variables will often result
+in `A` and `B` being merged and the result being `A`.
+
+## Type coercion
+
+We have a notion of assignability which differs somewhat from
+subtyping; in particular it may cause region borrowing to occur.  See
+the big comment later in this file on Type Coercion for specifics.
+
+### In conclusion
+
+I showed you three ways to relate `A` and `B`.  There are also more,
+of course, though I'm not sure if there are any more sensible options.
+The main point is that there are various options, each of which
+produce a distinct range of types for `A` and `B`.  Depending on what
+the correct values for A and B are, one of these options will be the
+right choice: but of course we don't know the right values for A and B
+yet, that's what we're trying to find!  In our code, we opt to unify
+(Option #1).
+
+# Implementation details
+
+We make use of a trait-like implementation strategy to consolidate
+duplicated code between subtyping, GLB, and LUB computations.  See the
+section on "Type Combining" below for details.
diff --git a/src/librustc/middle/infer/doc.rs b/src/librustc/middle/infer/doc.rs
deleted file mode 100644
index 814eaa87347..00000000000
--- a/src/librustc/middle/infer/doc.rs
+++ /dev/null
diff --git a/src/librustc/middle/infer/higher_ranked/README.md b/src/librustc/middle/infer/higher_ranked/README.md
new file mode 100644
index 00000000000..3414c7515a3
--- /dev/null
+++ b/src/librustc/middle/infer/higher_ranked/README.md
@@ -0,0 +1,403 @@
+# Skolemization and functions
+
+One of the trickiest and most subtle aspects of regions is dealing
+with higher-ranked things which include bound region variables, such
+as function types. I strongly suggest that if you want to understand
+the situation, you read this paper (which is, admittedly, very long,
+but you don't have to read the whole thing):
+
+http://research.microsoft.com/en-us/um/people/simonpj/papers/higher-rank/
+
+Although my explanation will never compete with SPJ's (for one thing,
+his is approximately 100 pages), I will attempt to explain the basic
+problem and also how we solve it. Note that the paper only discusses
+subtyping, not the computation of LUB/GLB.
+
+The problem we are addressing is that there is a kind of subtyping
+between functions with bound region parameters. Consider, for
+example, whether the following relation holds:
+
+    for<'a> fn(&'a int) <: for<'b> fn(&'b int)? (Yes, a => b)
+
+The answer is that of course it does. These two types are basically
+the same, except that in one we used the name `a` and one we used
+the name `b`.
+
+In the examples that follow, it becomes very important to know whether
+a lifetime is bound in a function type (that is, is a lifetime
+parameter) or appears free (is defined in some outer scope).
+Therefore, from now on I will always write the bindings explicitly,
+using the Rust syntax `for<'a> fn(&'a int)` to indicate that `a` is a
+lifetime parameter.
+
+Now let's consider two more function types. Here, we assume that the
+`'b` lifetime is defined somewhere outside and hence is not a lifetime
+parameter bound by the function type (it "appears free"):
+
+    for<'a> fn(&'a int) <: fn(&'b int)? (Yes, a => b)
+
+This subtyping relation does in fact hold. To see why, you have to
+consider what subtyping means. One way to look at `T1 <: T2` is to
+say that it means that it is always ok to treat an instance of `T1` as
+if it had the type `T2`. So, with our functions, it is always ok to
+treat a function that can take pointers with any lifetime as if it
+were a function that can only take a pointer with the specific
+lifetime `'b`.  After all, `'b` is a lifetime, and the function can
+take values of any lifetime.
+
+You can also look at subtyping as the *is a* relationship. This amounts
+to the same thing: a function that accepts pointers with any lifetime
+*is a* function that accepts pointers with some specific lifetime.
+
+So, what if we reverse the order of the two function types, like this:
+
+    fn(&'b int) <: for<'a> fn(&'a int)? (No)
+
+Does the subtyping relationship still hold?  The answer of course is
+no. In this case, the function accepts *only the lifetime `'b`*,
+so it is not reasonable to treat it as if it were a function that
+accepted any lifetime.
+
+What about these two examples:
+
+    for<'a,'b> fn(&'a int, &'b int) <: for<'a>    fn(&'a int, &'a int)? (Yes)
+    for<'a>    fn(&'a int, &'a int) <: for<'a,'b> fn(&'a int, &'b int)? (No)
+
+Here, it is true that functions which take two pointers with any two
+lifetimes can be treated as if they only accepted two pointers with
+the same lifetime, but not the reverse.
+
+## The algorithm
+
+Here is the algorithm we use to perform the subtyping check:
+
+1. Replace all bound regions in the subtype with new variables
+2. Replace all bound regions in the supertype with skolemized
+   equivalents. A "skolemized" region is just a new fresh region
+   name.
+3. Check that the parameter and return types match as normal
+4. Ensure that no skolemized regions 'leak' into region variables
+   visible from "the outside"
+
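Steps 1 and 2 can be sketched as a single substitution pass. In this illustrative model (none of these names are rustc's), a region is bound (identified by a binder index), free, an inference variable, or a skolemized name, and `instantiate` replaces each distinct bound region using a caller-supplied `fresh` function that makes either variables (step 1, subtype) or skolems (step 2, supertype).

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum Region { Bound(u32), Free(u32), Var(u32), Skolem(u32) }

fn instantiate(regions: &[Region], mut fresh: impl FnMut() -> Region) -> Vec<Region> {
    let mut map: HashMap<u32, Region> = HashMap::new();
    regions.iter()
        .map(|r| match r {
            // The same bound region always maps to the same replacement.
            Region::Bound(i) => *map.entry(*i).or_insert_with(|| fresh()),
            other => *other,
        })
        .collect()
}
```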
+Let's walk through some examples and see how this algorithm plays out.
+
+#### First example
+
+We'll start with the first example, which was:
+
+    1. for<'a> fn(&'a T) <: for<'b> fn(&'b T)?        Yes: a -> b
+
+After steps 1 and 2 of the algorithm we will have replaced the types
+like so:
+
+    1. fn(&'A T) <: fn(&'x T)?
+
+Here the upper case `&A` indicates a *region variable*, that is, a
+region whose value is being inferred by the system. I also replaced
+`&b` with `&x`---I'll use letters late in the alphabet (`x`, `y`, `z`)
+to indicate skolemized region names. We can assume they don't appear
+elsewhere. Note that neither the sub- nor the supertype bind any
+region names anymore (as indicated by the absence of `<` and `>`).
+
+The next step is to check that the parameter types match. Because
+parameters are contravariant, this means that we check whether:
+
+    &'x T <: &'A T
+
+Region pointers are contravariant so this implies that
+
+    &A <= &x
+
+must hold, where `<=` is the subregion relationship. Processing
+*this* constraint simply adds a constraint into our graph that `&A <=
+&x` and is considered successful (it can, for example, be satisfied by
+choosing the value `&x` for `&A`).
+
+So far we have encountered no error, so the subtype check succeeds.
+
+#### The third example
+
+Now let's look first at the third example, which was:
+
+    3. fn(&'a T)    <: for<'b> fn(&'b T)?        No!
+
+After steps 1 and 2 of the algorithm we will have replaced the types
+like so:
+
+    3. fn(&'a T) <: fn(&'x T)?
+
+This looks pretty much the same as before, except that on the LHS
+`'a` was not bound, and hence was left as-is and not replaced with
+a variable. The next step is again to check that the parameter types
+match. This will ultimately require (as before) that `'a` <= `&x`
+must hold: but this does not hold.  `'a` and `'x` are both distinct
+free regions. So the subtype check fails.
+
+#### Checking for skolemization leaks
+
+You may be wondering about that mysterious last step in the algorithm.
+So far it has not been relevant. The purpose of that last step is to
+catch something like *this*:
+
+    for<'a> fn() -> fn(&'a T) <: fn() -> for<'b> fn(&'b T)?   No.
+
+Here the function types are the same but for where the binding occurs.
+The subtype returns a function that expects a value in precisely one
+region. The supertype returns a function that expects a value in any
+region. If we allow an instance of the subtype to be used where the
+supertype is expected, then, someone could call the fn and think that
+the return value has type `for<'b> fn(&'b T)` when it really has type
+`fn(&'a T)` (this is case #3, above). Bad.
+
+So let's step through what happens when we perform this subtype check.
+We first replace the bound regions in the subtype (the supertype has
+no bound regions). This gives us:
+
+    fn() -> fn(&'A T) <: fn() -> for<'b> fn(&'b T)?
+
+Now we compare the return types, which are covariant, and hence we have:
+
+    fn(&'A T) <: for<'b> fn(&'b T)?
+
+Here we skolemize the bound region in the supertype to yield:
+
+    fn(&'A T) <: fn(&'x T)?
+
+And then proceed to compare the argument types:
+
+    &'x T <: &'A T
+    'A <= 'x
+
+Finally, this is where it gets interesting!  This is where an error
+*should* be reported. But in fact this will not happen. The reason why
+is that `A` is a variable: we will infer that its value is the fresh
+region `x` and think that everything is happy. In fact, this behavior
+is *necessary*, it was key to the first example we walked through.
+
+The difference between this example and the first one is that the variable
+`A` already existed at the point where the skolemization occurred. In
+the first example, you had two functions:
+
+    for<'a> fn(&'a T) <: for<'b> fn(&'b T)
+
+and hence `&A` and `&x` were created "together". In general, the
+intention of the skolemized names is that they are supposed to be
+fresh names that could never be equal to anything from the outside.
+But when inference comes into play, we might not be respecting this
+rule.
+
+So the way we solve this is to add a fourth step that examines the
+constraints that refer to skolemized names. Basically, consider a
+non-directed version of the constraint graph. Let `Tainted(x)` be the
+set of all things reachable from a skolemized variable `x`.
+`Tainted(x)` should not contain any regions that existed before the
+step at which the skolemization was performed. So this case here
+would fail because `&x` was created alone, but is relatable to `&A`.
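+
+In today's Rust syntax, the rejected case can be sketched like this
+(an illustration, not taken from the compiler's test suite); the
+commented-out coercion is the one the leak check must reject:
+
+```rust
+fn inner<'a>(_: &'a u32) {}
+
+// The returned fn pointer only accepts the one lifetime 'a.
+fn outer<'a>() -> fn(&'a u32) {
+    inner
+}
+
+// This coercion would claim the returned fn accepts *any* lifetime,
+// so it must be (and is) rejected:
+// let f: fn() -> for<'b> fn(&'b u32) = outer;
+```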
+
+## Computing the LUB and GLB
+
+The paper I pointed you at is written for Haskell. It therefore does
+not consider subtyping and in particular does not consider LUB or GLB
+computation. We have to handle those ourselves. Here is the algorithm
+I implemented.
+
+First though, let's discuss what we are trying to compute in more
+detail. The LUB is basically the "common supertype" and the GLB is
+"common subtype"; one catch is that the LUB should be the
+*most-specific* common supertype and the GLB should be *most general*
+common subtype (as opposed to any common supertype or any common
+subtype).
+
+Anyway, to help clarify, here is a table containing some function
+pairs and their LUB/GLB (for conciseness, in this table, I'm just
+including the lifetimes here, not the rest of the types, and I'm
+writing `fn<>` instead of `for<> fn`):
+
+```
+Type 1                Type 2                LUB                    GLB
+fn<'a>('a)            fn('X)                fn('X)                 fn<'a>('a)
+fn('a)                fn('X)                --                     fn<'a>('a)
+fn<'a,'b>('a, 'b)     fn<'x>('x, 'x)        fn<'a>('a, 'a)         fn<'a,'b>('a, 'b)
+fn<'a,'b>('a, 'b, 'a) fn<'x,'y>('x, 'y, 'y) fn<'a>('a, 'a, 'a)     fn<'a,'b,'c>('a,'b,'c)
+```
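+
+The first row of the table can be observed directly in Rust: merging a
+fully general fn pointer with one tied to a specific lifetime yields
+the more specific type as the LUB (a sketch; `lub_demo` is a made-up
+name):
+
+```rust
+fn lub_demo<'x>(cond: bool, g: fn(&'x u32)) {
+    // `f` is fully general: for<'a> fn(&'a u32).
+    let f: fn(&u32) = |_| ();
+    // The two arms force a LUB; per the table it is fn(&'x u32).
+    let h = if cond { f } else { g };
+    h(&0);
+}
+```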
+
+### Conventions
+
+I use lower-case letters (e.g., `&a`) for bound regions and upper-case
+letters for free regions (`&A`).  Region variables are written with a
+dollar-sign (e.g., `$a`).  I will try to remember to enumerate the
+bound-regions on the fn type as well (e.g., `for<'a> fn(&a)`).
+
+### High-level summary
+
+Both the LUB and the GLB algorithms work in a similar fashion.  They
+begin by replacing all bound regions (on both sides) with fresh region
+inference variables.  Therefore, both functions are converted to types
+that contain only free regions.  We can then compute the LUB/GLB in a
+straightforward way, as described in `combine.rs`.  This results in an
+interim type T.  The algorithms then examine the regions that appear
+in T and try to, in some cases, replace them with bound regions to
+yield the final result.
+
+To decide whether to replace a region `R` that appears in `T` with
+a bound region, the algorithms make use of two bits of
+information.  First is a set `V` that contains all region
+variables created as part of the LUB/GLB computation (roughly; see
+`region_vars_confined_to_snapshot()` for full details). `V` will
+contain the region variables created to replace the bound regions
+in the input types, but it also contains 'intermediate' variables
+created to represent the LUB/GLB of individual regions.
+Basically, when asked to compute the LUB/GLB of a region variable
+with another region, the inferencer cannot oblige immediately
+since the value of that variable is not yet known.  Therefore, it
+creates a new variable that is related to the two regions.  For
+example, the LUB of two variables `$x` and `$y` is a fresh
+variable `$z` that is constrained such that `$x <= $z` and `$y <=
+$z`.  So `V` will contain these intermediate variables as well.
+
+The other important factor in deciding how to replace a region in T is
+the function `Tainted($r)` which, for a region variable, identifies
+all regions that the region variable is related to in some way
+(`Tainted()` made an appearance in the subtype computation as well).
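+
+As a rough model, `Tainted($r)` can be thought of as undirected
+reachability over the recorded constraint edges. A minimal sketch (the
+types and names here are illustrative, not the compiler's):
+
+```rust
+use std::collections::HashSet;
+
+/// All regions reachable from `r`, treating constraints as
+/// undirected edges, computed as a simple fixed point.
+fn tainted(edges: &[(u32, u32)], r: u32) -> HashSet<u32> {
+    let mut set = HashSet::new();
+    set.insert(r);
+    let mut changed = true;
+    while changed {
+        changed = false;
+        for &(a, b) in edges {
+            if set.contains(&a) && set.insert(b) { changed = true; }
+            if set.contains(&b) && set.insert(a) { changed = true; }
+        }
+    }
+    set
+}
+```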
+
+### LUB
+
+The LUB algorithm proceeds in three steps:
+
+1. Replace all bound regions (on both sides) with fresh region
+   inference variables.
+2. Compute the LUB "as normal", meaning compute the GLB of each
+   pair of argument types and the LUB of the return types and
+   so forth.  Combine those to a new function type `F`.
+3. Replace each region `R` that appears in `F` as follows:
+   - Let `V` be the set of variables created during the LUB
+     computational steps 1 and 2, as described in the previous section.
+   - If `R` is not in `V`, replace `R` with itself.
+   - If `Tainted(R)` contains a region that is not in `V`,
+     replace `R` with itself.
+   - Otherwise, select the earliest variable in `Tainted(R)` that originates
+     from the left-hand side and replace `R` with the bound region that
+     this variable was a replacement for.
+
+So, let's work through the simplest example: `fn(&A)` and `for<'a> fn(&a)`.
+In this case, `&a` will be replaced with `$a` and the interim LUB type
+`fn($b)` will be computed, where `$b=GLB(&A,$a)`.  Therefore, `V =
+{$a, $b}` and `Tainted($b) = { $b, $a, &A }`.  When we go to replace
+`$b`, we find that `Tainted($b)` contains `&A`, which is not a member
+of `V`, so we leave `$b` as is.  When region inference happens, `$b` will be
+resolved to `&A`, as we wanted.
+
+Let's look at a more complex one: `fn(&a, &b)` and `fn(&x, &x)`.  In
+this case, we'll end up with a (pre-replacement) LUB type of `fn($g,
+$h)` and a graph that looks like:
+
+```
+     $a        $b     *--$x
+       \        \    /  /
+        \        $h-*  /
+         $g-----------*
+```
+
+Here `$g` and `$h` are fresh variables that are created to represent
+the LUB/GLB of things requiring inference.  This means that `V` and
+`Tainted` will look like:
+
+```
+V = {$a, $b, $g, $h, $x}
+Tainted($g) = Tainted($h) = { $a, $b, $h, $g, $x }
+```
+
+Therefore we replace both `$g` and `$h` with `$a`, and end up
+with the type `fn(&a, &a)`.
+
+### GLB
+
+The procedure for computing the GLB is similar.  The difference lies
+in computing the replacements for the various variables. For each
+region `R` that appears in the type `F`, we again compute `Tainted(R)`
+and examine the results:
+
+1. If `R` is not in `V`, it is not replaced.
+2. Else, if `Tainted(R)` contains only variables in `V`, and it
+   contains exactly one variable from the LHS and one variable from
+   the RHS, then `R` can be mapped to the bound version of the
+   variable from the LHS.
+3. Else, if `Tainted(R)` contains no variable from the LHS and no
+   variable from the RHS, then `R` can be mapped to itself.
+4. Else, `R` is mapped to a fresh bound variable.
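+
+A sketch of those four rules as a decision function (all types and
+names here are hypothetical stand-ins for the real inference
+structures):
+
+```rust
+use std::collections::HashSet;
+
+enum Replacement {
+    Keep,                 // leave R as is
+    BoundVersionOf(u32),  // map R to the bound version of this variable
+    FreshBound,           // map R to a fresh bound variable
+}
+
+fn glb_replace(r: u32, v: &HashSet<u32>, tainted: &HashSet<u32>,
+               lhs: &HashSet<u32>, rhs: &HashSet<u32>) -> Replacement {
+    if !v.contains(&r) {
+        return Replacement::Keep;                        // rule 1
+    }
+    let from_lhs: Vec<u32> = tainted.intersection(lhs).cloned().collect();
+    let from_rhs: Vec<u32> = tainted.intersection(rhs).cloned().collect();
+    if tainted.is_subset(v) && from_lhs.len() == 1 && from_rhs.len() == 1 {
+        Replacement::BoundVersionOf(from_lhs[0])         // rule 2
+    } else if from_lhs.is_empty() && from_rhs.is_empty() {
+        Replacement::Keep                                // rule 3
+    } else {
+        Replacement::FreshBound                          // rule 4
+    }
+}
+```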
+
+These rules are pretty complex.  Let's look at some examples to see
+how they play out.
+
+Our first example was `fn(&a)` and `fn(&X)`.  In this case, `&a` will
+be replaced with `$a` and we will ultimately compute a
+(pre-replacement) GLB type of `fn($g)` where `$g=LUB($a,&X)`.
+Therefore, `V={$a,$g}` and `Tainted($g)={$g,$a,&X}`.  To find the
+replacement for `$g` we consult the rules above:
+- Rule (1) does not apply because `$g \in V`
+- Rule (2) does not apply because `&X \in Tainted($g)`
+- Rule (3) does not apply because `$a \in Tainted($g)`
+- Hence, by rule (4), we replace `$g` with a fresh bound variable `&z`.
+So our final result is `fn(&z)`, which is correct.
+
+The next example is `fn(&A)` and `fn(&Z)`. In this case, we will again
+have a (pre-replacement) GLB of `fn($g)`, where `$g = LUB(&A,&Z)`.
+Therefore, `V={$g}` and `Tainted($g) = {$g, &A, &Z}`.  In this case,
+by rule (3), `$g` is mapped to itself, and hence the result is
+`fn($g)`.  This result is correct (in this case, at least), but it is
+indicative of a case that *can* lead us into concluding that there is
+no GLB when in fact a GLB does exist.  See the section "Questionable
+Results" below for more details.
+
+The next example is `fn(&a, &b)` and `fn(&c, &c)`. In this case, as
+before, we'll end up with `F=fn($g, $h)` where `Tainted($g) =
+Tainted($h) = {$g, $h, $a, $b, $c}`.  Only rule (4) applies and hence
+we'll select fresh bound variables `y` and `z` and wind up with
+`fn(&y, &z)`.
+
+For the last example, let's consider what may seem trivial, but is
+not: `fn(&a, &a)` and `fn(&b, &b)`.  In this case, we'll get `F=fn($g,
+$h)` where `Tainted($g) = {$g, $a, $b}` and `Tainted($h) = {$h, $a,
+$b}`.  Both of these sets contain exactly one bound variable from each
+side, so we'll map them both to `&a`, resulting in `fn(&a, &a)`, which
+is the desired result.
+
+### Shortcomings and correctness
+
+You may be wondering whether these algorithms are correct.  The answer
+is "sort of".  There are definitely cases where they fail to compute a
+result even though a correct result exists.  I believe, though, that
+if they succeed, then the result is valid, and I will attempt to
+convince you.  The basic argument is that the "pre-replacement" step
+computes a set of constraints.  The replacements, then, attempt to
+satisfy those constraints, using bound identifiers where needed.
+
+For now I will briefly go over the cases for LUB/GLB and identify
+their intent:
+
+- LUB:
+  - The region variables that are substituted in place of bound regions
+    are intended to collect constraints on those bound regions.
+  - If `Tainted(R)` contains only values in `V`, then this region is
+    unconstrained and can therefore be generalized; otherwise it cannot.
+- GLB:
+  - The region variables that are substituted in place of bound regions
+    are intended to collect constraints on those bound regions.
+  - If `Tainted(R)` contains exactly one variable from each side, and
+    only variables in `V`, that indicates that those two bound regions
+    must be equated.
+  - Otherwise, if `Tainted(R)` references any variables from the left
+    or right side, then it is trying to combine a bound region with a
+    free one or multiple bound regions, so we need to select fresh
+    bound regions.
+
+Sorry this is more of a shorthand to myself.  I will try to write up something
+more convincing in the future.
+
+#### Where are the algorithms wrong?
+
+- The pre-replacement computation can fail even though using a
+  bound-region would have succeeded.
+- We will compute `GLB(fn(fn($a)), fn(fn($b)))` as `fn(fn($c))` where
+  `$c` is the GLB of `$a` and `$b`.  But if inference finds that `$a`
+  and `$b` must be mapped to regions without a GLB, then this is
+  effectively a failure to compute the GLB.  However, the result
+  `fn<$c>(fn($c))` is a valid GLB.
diff --git a/src/librustc/middle/infer/higher_ranked/doc.rs b/src/librustc/middle/infer/higher_ranked/doc.rs
deleted file mode 100644
index 6b520ab665c..00000000000
--- a/src/librustc/middle/infer/higher_ranked/doc.rs
+++ /dev/null
@@ -1,413 +0,0 @@
-// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-//! # Skolemization and functions
-//!
-//! One of the trickiest and most subtle aspects of regions is dealing
-//! with higher-ranked things which include bound region variables, such
-//! as function types. I strongly suggest that if you want to understand
-//! the situation, you read this paper (which is, admittedly, very long,
-//! but you don't have to read the whole thing):
-//!
-//! http://research.microsoft.com/en-us/um/people/simonpj/papers/higher-rank/
-//!
-//! Although my explanation will never compete with SPJ's (for one thing,
-//! his is approximately 100 pages), I will attempt to explain the basic
-//! problem and also how we solve it. Note that the paper only discusses
-//! subtyping, not the computation of LUB/GLB.
-//!
-//! The problem we are addressing is that there is a kind of subtyping
-//! between functions with bound region parameters. Consider, for
-//! example, whether the following relation holds:
-//!
-//!     for<'a> fn(&'a int) <: for<'b> fn(&'b int)? (Yes, a => b)
-//!
-//! The answer is that of course it does. These two types are basically
-//! the same, except that in one we used the name `a` and one we used
-//! the name `b`.
-//!
-//! In the examples that follow, it becomes very important to know whether
-//! a lifetime is bound in a function type (that is, is a lifetime
-//! parameter) or appears free (is defined in some outer scope).
-//! Therefore, from now on I will always write the bindings explicitly,
-//! using the Rust syntax `for<'a> fn(&'a int)` to indicate that `a` is a
-//! lifetime parameter.
-//!
-//! Now let's consider two more function types. Here, we assume that the
-//! `'b` lifetime is defined somewhere outside and hence is not a lifetime
-//! parameter bound by the function type (it "appears free"):
-//!
-//!     for<'a> fn(&'a int) <: fn(&'b int)? (Yes, a => b)
-//!
-//! This subtyping relation does in fact hold. To see why, you have to
-//! consider what subtyping means. One way to look at `T1 <: T2` is to
-//! say that it means that it is always ok to treat an instance of `T1` as
-//! if it had the type `T2`. So, with our functions, it is always ok to
-//! treat a function that can take pointers with any lifetime as if it
-//! were a function that can only take a pointer with the specific
-//! lifetime `'b`. After all, `'b` is a lifetime, after all, and
-//! the function can take values of any lifetime.
-//!
-//! You can also look at subtyping as the *is a* relationship. This amounts
-//! to the same thing: a function that accepts pointers with any lifetime
-//! *is a* function that accepts pointers with some specific lifetime.
-//!
-//! So, what if we reverse the order of the two function types, like this:
-//!
-//!     fn(&'b int) <: for<'a> fn(&'a int)? (No)
-//!
-//! Does the subtyping relationship still hold?  The answer of course is
-//! no. In this case, the function accepts *only the lifetime `'b`*,
-//! so it is not reasonable to treat it as if it were a function that
-//! accepted any lifetime.
-//!
-//! What about these two examples:
-//!
-//!     for<'a,'b> fn(&'a int, &'b int) <: for<'a>    fn(&'a int, &'a int)? (Yes)
-//!     for<'a>    fn(&'a int, &'a int) <: for<'a,'b> fn(&'a int, &'b int)? (No)
-//!
-//! Here, it is true that functions which take two pointers with any two
-//! lifetimes can be treated as if they only accepted two pointers with
-//! the same lifetime, but not the reverse.
-//!
-//! ## The algorithm
-//!
-//! Here is the algorithm we use to perform the subtyping check:
-//!
-//! 1. Replace all bound regions in the subtype with new variables
-//! 2. Replace all bound regions in the supertype with skolemized
-//!    equivalents. A "skolemized" region is just a new fresh region
-//!    name.
-//! 3. Check that the parameter and return types match as normal
-//! 4. Ensure that no skolemized regions 'leak' into region variables
-//!    visible from "the outside"
-//!
-//! Let's walk through some examples and see how this algorithm plays out.
-//!
-//! #### First example
-//!
-//! We'll start with the first example, which was:
-//!
-//!     1. for<'a> fn(&'a T) <: for<'b> fn(&'b T)?        Yes: a -> b
-//!
-//! After steps 1 and 2 of the algorithm we will have replaced the types
-//! like so:
-//!
-//!     1. fn(&'A T) <: fn(&'x T)?
-//!
-//! Here the upper case `&A` indicates a *region variable*, that is, a
-//! region whose value is being inferred by the system. I also replaced
-//! `&b` with `&x`---I'll use letters late in the alphabet (`x`, `y`, `z`)
-//! to indicate skolemized region names. We can assume they don't appear
-//! elsewhere. Note that neither the sub- nor the supertype bind any
-//! region names anymore (as indicated by the absence of `<` and `>`).
-//!
-//! The next step is to check that the parameter types match. Because
-//! parameters are contravariant, this means that we check whether:
-//!
-//!     &'x T <: &'A T
-//!
-//! Region pointers are contravariant so this implies that
-//!
-//!     &A <= &x
-//!
-//! must hold, where `<=` is the subregion relationship. Processing
-//! *this* constrain simply adds a constraint into our graph that `&A <=
-//! &x` and is considered successful (it can, for example, be satisfied by
-//! choosing the value `&x` for `&A`).
-//!
-//! So far we have encountered no error, so the subtype check succeeds.
-//!
-//! #### The third example
-//!
-//! Now let's look first at the third example, which was:
-//!
-//!     3. fn(&'a T)    <: for<'b> fn(&'b T)?        No!
-//!
-//! After steps 1 and 2 of the algorithm we will have replaced the types
-//! like so:
-//!
-//!     3. fn(&'a T) <: fn(&'x T)?
-//!
-//! This looks pretty much the same as before, except that on the LHS
-//! `'a` was not bound, and hence was left as-is and not replaced with
-//! a variable. The next step is again to check that the parameter types
-//! match. This will ultimately require (as before) that `'a` <= `&x`
-//! must hold: but this does not hold. `self` and `x` are both distinct
-//! free regions. So the subtype check fails.
-//!
-//! #### Checking for skolemization leaks
-//!
-//! You may be wondering about that mysterious last step in the algorithm.
-//! So far it has not been relevant. The purpose of that last step is to
-//! catch something like *this*:
-//!
-//!     for<'a> fn() -> fn(&'a T) <: fn() -> for<'b> fn(&'b T)?   No.
-//!
-//! Here the function types are the same but for where the binding occurs.
-//! The subtype returns a function that expects a value in precisely one
-//! region. The supertype returns a function that expects a value in any
-//! region. If we allow an instance of the subtype to be used where the
-//! supertype is expected, then, someone could call the fn and think that
-//! the return value has type `fn<b>(&'b T)` when it really has type
-//! `fn(&'a T)` (this is case #3, above). Bad.
-//!
-//! So let's step through what happens when we perform this subtype check.
-//! We first replace the bound regions in the subtype (the supertype has
-//! no bound regions). This gives us:
-//!
-//!     fn() -> fn(&'A T) <: fn() -> for<'b> fn(&'b T)?
-//!
-//! Now we compare the return types, which are covariant, and hence we have:
-//!
-//!     fn(&'A T) <: for<'b> fn(&'b T)?
-//!
-//! Here we skolemize the bound region in the supertype to yield:
-//!
-//!     fn(&'A T) <: fn(&'x T)?
-//!
-//! And then proceed to compare the argument types:
-//!
-//!     &'x T <: &'A T
-//!     'A <= 'x
-//!
-//! Finally, this is where it gets interesting!  This is where an error
-//! *should* be reported. But in fact this will not happen. The reason why
-//! is that `A` is a variable: we will infer that its value is the fresh
-//! region `x` and think that everything is happy. In fact, this behavior
-//! is *necessary*, it was key to the first example we walked through.
-//!
-//! The difference between this example and the first one is that the variable
-//! `A` already existed at the point where the skolemization occurred. In
-//! the first example, you had two functions:
-//!
-//!     for<'a> fn(&'a T) <: for<'b> fn(&'b T)
-//!
-//! and hence `&A` and `&x` were created "together". In general, the
-//! intention of the skolemized names is that they are supposed to be
-//! fresh names that could never be equal to anything from the outside.
-//! But when inference comes into play, we might not be respecting this
-//! rule.
-//!
-//! So the way we solve this is to add a fourth step that examines the
-//! constraints that refer to skolemized names. Basically, consider a
-//! non-directed version of the constraint graph. Let `Tainted(x)` be the
-//! set of all things reachable from a skolemized variable `x`.
-//! `Tainted(x)` should not contain any regions that existed before the
-//! step at which the skolemization was performed. So this case here
-//! would fail because `&x` was created alone, but is relatable to `&A`.
-//!
-//! ## Computing the LUB and GLB
-//!
-//! The paper I pointed you at is written for Haskell. It does not
-//! therefore considering subtyping and in particular does not consider
-//! LUB or GLB computation. We have to consider this. Here is the
-//! algorithm I implemented.
-//!
-//! First though, let's discuss what we are trying to compute in more
-//! detail. The LUB is basically the "common supertype" and the GLB is
-//! "common subtype"; one catch is that the LUB should be the
-//! *most-specific* common supertype and the GLB should be *most general*
-//! common subtype (as opposed to any common supertype or any common
-//! subtype).
-//!
-//! Anyway, to help clarify, here is a table containing some function
-//! pairs and their LUB/GLB (for conciseness, in this table, I'm just
-//! including the lifetimes here, not the rest of the types, and I'm
-//! writing `fn<>` instead of `for<> fn`):
-//!
-//! ```
-//! Type 1                Type 2                LUB                    GLB
-//! fn<'a>('a)            fn('X)                fn('X)                 fn<'a>('a)
-//! fn('a)                fn('X)                --                     fn<'a>('a)
-//! fn<'a,'b>('a, 'b)     fn<'x>('x, 'x)        fn<'a>('a, 'a)         fn<'a,'b>('a, 'b)
-//! fn<'a,'b>('a, 'b, 'a) fn<'x,'y>('x, 'y, 'y) fn<'a>('a, 'a, 'a)     fn<'a,'b,'c>('a,'b,'c)
-//! ```
-//!
-//! ### Conventions
-//!
-//! I use lower-case letters (e.g., `&a`) for bound regions and upper-case
-//! letters for free regions (`&A`).  Region variables written with a
-//! dollar-sign (e.g., `$a`).  I will try to remember to enumerate the
-//! bound-regions on the fn type as well (e.g., `for<'a> fn(&a)`).
-//!
-//! ### High-level summary
-//!
-//! Both the LUB and the GLB algorithms work in a similar fashion.  They
-//! begin by replacing all bound regions (on both sides) with fresh region
-//! inference variables.  Therefore, both functions are converted to types
-//! that contain only free regions.  We can then compute the LUB/GLB in a
-//! straightforward way, as described in `combine.rs`.  This results in an
-//! interim type T.  The algorithms then examine the regions that appear
-//! in T and try to, in some cases, replace them with bound regions to
-//! yield the final result.
-//!
-//! To decide whether to replace a region `R` that appears in `T` with
-//! a bound region, the algorithms make use of two bits of
-//! information.  First is a set `V` that contains all region
-//! variables created as part of the LUB/GLB computation (roughly; see
-//! `region_vars_confined_to_snapshot()` for full details). `V` will
-//! contain the region variables created to replace the bound regions
-//! in the input types, but it also contains 'intermediate' variables
-//! created to represent the LUB/GLB of individual regions.
-//! Basically, when asked to compute the LUB/GLB of a region variable
-//! with another region, the inferencer cannot oblige immediately
-//! since the values of that variables are not known.  Therefore, it
-//! creates a new variable that is related to the two regions.  For
-//! example, the LUB of two variables `$x` and `$y` is a fresh
-//! variable `$z` that is constrained such that `$x <= $z` and `$y <=
-//! $z`.  So `V` will contain these intermediate variables as well.
-//!
-//! The other important factor in deciding how to replace a region in T is
-//! the function `Tainted($r)` which, for a region variable, identifies
-//! all regions that the region variable is related to in some way
-//! (`Tainted()` made an appearance in the subtype computation as well).
-//!
-//! ### LUB
-//!
-//! The LUB algorithm proceeds in three steps:
-//!
-//! 1. Replace all bound regions (on both sides) with fresh region
-//!    inference variables.
-//! 2. Compute the LUB "as normal", meaning compute the GLB of each
-//!    pair of argument types and the LUB of the return types and
-//!    so forth.  Combine those to a new function type `F`.
-//! 3. Replace each region `R` that appears in `F` as follows:
-//!    - Let `V` be the set of variables created during the LUB
-//!      computational steps 1 and 2, as described in the previous section.
-//!    - If `R` is not in `V`, replace `R` with itself.
-//!    - If `Tainted(R)` contains a region that is not in `V`,
-//!      replace `R` with itself.
-//!    - Otherwise, select the earliest variable in `Tainted(R)` that originates
-//!      from the left-hand side and replace `R` with the bound region that
-//!      this variable was a replacement for.
-//!
-//! So, let's work through the simplest example: `fn(&A)` and `for<'a> fn(&a)`.
-//! In this case, `&a` will be replaced with `$a` and the interim LUB type
-//! `fn($b)` will be computed, where `$b=GLB(&A,$a)`.  Therefore, `V =
-//! {$a, $b}` and `Tainted($b) = { $b, $a, &A }`.  When we go to replace
-//! `$b`, we find that since `&A \in Tainted($b)` is not a member of `V`,
-//! we leave `$b` as is.  When region inference happens, `$b` will be
-//! resolved to `&A`, as we wanted.
-//!
-//! Let's look at a more complex one: `fn(&a, &b)` and `fn(&x, &x)`.  In
-//! this case, we'll end up with a (pre-replacement) LUB type of `fn(&g,
-//! &h)` and a graph that looks like:
-//!
-//! ```
-//!      $a        $b     *--$x
-//!        \        \    /  /
-//!         \        $h-*  /
-//!          $g-----------*
-//! ```
-//!
-//! Here `$g` and `$h` are fresh variables that are created to represent
-//! the LUB/GLB of things requiring inference.  This means that `V` and
-//! `Tainted` will look like:
-//!
-//! ```
-//! V = {$a, $b, $g, $h, $x}
-//! Tainted($g) = Tainted($h) = { $a, $b, $h, $g, $x }
-//! ```
-//!
-//! Therefore we replace both `$g` and `$h` with `$a`, and end up
-//! with the type `fn(&a, &a)`.
-//!
-//! ### GLB
-//!
-//! The procedure for computing the GLB is similar.  The difference lies
-//! in computing the replacements for the various variables. For each
-//! region `R` that appears in the type `F`, we again compute `Tainted(R)`
-//! and examine the results:
-//!
-//! 1. If `R` is not in `V`, it is not replaced.
-//! 2. Else, if `Tainted(R)` contains only variables in `V`, and it
-//!    contains exactly one variable from the LHS and one variable from
-//!    the RHS, then `R` can be mapped to the bound version of the
-//!    variable from the LHS.
-//! 3. Else, if `Tainted(R)` contains no variable from the LHS and no
-//!    variable from the RHS, then `R` can be mapped to itself.
-//! 4. Else, `R` is mapped to a fresh bound variable.
-//!
-//! These rules are pretty complex.  Let's look at some examples to see
-//! how they play out.
-//!
-//! Out first example was `fn(&a)` and `fn(&X)`.  In this case, `&a` will
-//! be replaced with `$a` and we will ultimately compute a
-//! (pre-replacement) GLB type of `fn($g)` where `$g=LUB($a,&X)`.
-//! Therefore, `V={$a,$g}` and `Tainted($g)={$g,$a,&X}.  To find the
-//! replacement for `$g` we consult the rules above:
-//! - Rule (1) does not apply because `$g \in V`
-//! - Rule (2) does not apply because `&X \in Tainted($g)`
-//! - Rule (3) does not apply because `$a \in Tainted($g)`
-//! - Hence, by rule (4), we replace `$g` with a fresh bound variable `&z`.
-//! So our final result is `fn(&z)`, which is correct.
-//!
-//! The next example is `fn(&A)` and `fn(&Z)`. In this case, we will again
-//! have a (pre-replacement) GLB of `fn(&g)`, where `$g = LUB(&A,&Z)`.
-//! Therefore, `V={$g}` and `Tainted($g) = {$g, &A, &Z}`.  In this case,
-//! by rule (3), `$g` is mapped to itself, and hence the result is
-//! `fn($g)`.  This result is correct (in this case, at least), but it is
-//! indicative of a case that *can* lead us into concluding that there is
-//! no GLB when in fact a GLB does exist.  See the section "Questionable
-//! Results" below for more details.
-//!
-//! The next example is `fn(&a, &b)` and `fn(&c, &c)`. In this case, as
-//! before, we'll end up with `F=fn($g, $h)` where `Tainted($g) =
-//! Tainted($h) = {$g, $h, $a, $b, $c}`.  Only rule (4) applies and hence
-//! we'll select fresh bound variables `y` and `z` and wind up with
-//! `fn(&y, &z)`.
-//!
-//! For the last example, let's consider what may seem trivial, but is
-//! not: `fn(&a, &a)` and `fn(&b, &b)`.  In this case, we'll get `F=fn($g,
-//! $h)` where `Tainted($g) = {$g, $a, $x}` and `Tainted($h) = {$h, $a,
-//! $x}`.  Both of these sets contain exactly one bound variable from each
-//! side, so we'll map them both to `&a`, resulting in `fn(&a, &a)`, which
-//! is the desired result.
-//!
-//! ### Shortcomings and correctness
-//!
-//! You may be wondering whether this algorithm is correct.  The answer is
-//! "sort of".  There are definitely cases where they fail to compute a
-//! result even though a correct result exists.  I believe, though, that
-//! if they succeed, then the result is valid, and I will attempt to
-//! convince you.  The basic argument is that the "pre-replacement" step
-//! computes a set of constraints.  The replacements, then, attempt to
-//! satisfy those constraints, using bound identifiers where needed.
-//!
-//! For now I will briefly go over the cases for LUB/GLB and identify
-//! their intent:
-//!
-//! - LUB:
-//!   - The region variables that are substituted in place of bound regions
-//!     are intended to collect constraints on those bound regions.
-//!   - If Tainted(R) contains only values in V, then this region is unconstrained
-//!     and can therefore be generalized, otherwise it cannot.
-//! - GLB:
-//!   - The region variables that are substituted in place of bound regions
-//!     are intended to collect constraints on those bound regions.
-//!   - If Tainted(R) contains exactly one variable from each side, and
-//!     only variables in V, that indicates that those two bound regions
-//!     must be equated.
-//!   - Otherwise, if Tainted(R) references any variables from left or right
-//!     side, then it is trying to combine a bound region with a free one or
-//!     multiple bound regions, so we need to select fresh bound regions.
-//!
-//! Sorry this is more of a shorthand to myself.  I will try to write up something
-//! more convincing in the future.
-//!
-//! #### Where are the algorithms wrong?
-//!
-//! - The pre-replacement computation can fail even though using a
-//!   bound-region would have succeeded.
-//! - We will compute GLB(fn(fn($a)), fn(fn($b))) as fn($c) where $c is the
-//!   GLB of $a and $b.  But if inference finds that $a and $b must be mapped
-//!   to regions without a GLB, then this is effectively a failure to compute
-//!   the GLB.  However, the result `fn<$c>(fn($c))` is a valid GLB.
diff --git a/src/librustc/middle/infer/mod.rs b/src/librustc/middle/infer/mod.rs
index 41310f05588..6c987bba389 100644
--- a/src/librustc/middle/infer/mod.rs
+++ b/src/librustc/middle/infer/mod.rs
@@ -8,7 +8,7 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
 
-//! See doc.rs for documentation
+//! See the Book for more information.
 
 #![allow(non_camel_case_types)]
 
@@ -46,7 +46,6 @@ use self::unify::{UnificationTable, InferCtxtMethodsForSimplyUnifiableTypes};
 use self::error_reporting::ErrorReporting;
 
 pub mod combine;
-pub mod doc;
 pub mod equate;
 pub mod error_reporting;
 pub mod glb;
diff --git a/src/librustc/middle/infer/region_inference/README.md b/src/librustc/middle/infer/region_inference/README.md
new file mode 100644
index 00000000000..a009e0a8234
--- /dev/null
+++ b/src/librustc/middle/infer/region_inference/README.md
@@ -0,0 +1,364 @@
+Region inference
+
+# Terminology
+
+Note that we use the terms region and lifetime interchangeably,
+though the term `lifetime` is preferred.
+
+# Introduction
+
+Region inference uses a somewhat more involved algorithm than type
+inference.  It is not the most efficient thing ever written, though it
+seems to work well enough in practice (famous last words).  The reason
+that we use a different algorithm is because, unlike with types, it is
+impractical to hand-annotate with regions (in some cases, there aren't
+even the requisite syntactic forms).  So we have to get it right, and
+it's worth spending more time on a more involved analysis.  Moreover,
+regions are a simpler case than types: they don't have aggregate
+structure, for example.
+
+Unlike normal type inference, which is similar in spirit to H-M and thus
+works progressively, the region type inference works by accumulating
+constraints over the course of a function.  Finally, at the end of
+processing a function, we process and solve the constraints all at
+once.
+
+The constraints are always of one of three possible forms:
+
+- ConstrainVarSubVar(R_i, R_j) states that region variable R_i
+  must be a subregion of R_j
+- ConstrainRegSubVar(R, R_i) states that the concrete region R
+  (which must not be a variable) must be a subregion of the variable R_i
+- ConstrainVarSubReg(R_i, R) is the inverse
+
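The three constraint forms above can be sketched as a small Rust enum. This is a hypothetical simplification for illustration only: `RegionVid` and `Region` here stand in for the compiler's real region-variable and region types.

```rust
// Hypothetical, simplified model of the three constraint forms.
// `RegionVid` and `Region` stand in for rustc's real types.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct RegionVid(u32);

#[allow(dead_code)]
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum Region {
    Scope(u32), // the region of a particular scope
    Free(u32),  // a free lifetime
}

#[allow(dead_code)]
#[derive(Clone, Copy, Debug)]
enum Constraint {
    // R_i must be a subregion of R_j (both variables)
    VarSubVar(RegionVid, RegionVid),
    // concrete region R must be a subregion of variable R_i
    RegSubVar(Region, RegionVid),
    // variable R_i must be a subregion of concrete region R
    VarSubReg(RegionVid, Region),
}

fn main() {
    let c = Constraint::VarSubVar(RegionVid(0), RegionVid(1));
    println!("{:?}", c);
}
```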
+# Building up the constraints
+
+Variables and constraints are created using the following methods:
+
+- `new_region_var()` creates a new, unconstrained region variable;
+- `make_subregion(R_i, R_j)` states that R_i is a subregion of R_j
+- `lub_regions(R_i, R_j) -> R_k` returns a region R_k which is
+  the smallest region that is greater than both R_i and R_j
+- `glb_regions(R_i, R_j) -> R_k` returns a region R_k which is
+  the greatest region that is smaller than both R_i and R_j
+
+The actual region resolution algorithm is not entirely
+obvious, though it is also not overly complex.
+
+## Snapshotting
+
+It is also permitted to try (and roll back) changes to the graph.  This
+is done by invoking `start_snapshot()`, which returns a value.  Then
+later you can call `rollback_to()`, which undoes the work.
+Alternatively, you can call `commit()` which ends all snapshots.
+Snapshots can be recursive---so you can start a snapshot when another
+is in progress, but only the root snapshot can "commit".
+
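A minimal sketch of how snapshotting over an undo log can work (hypothetical; the real implementation is more involved): a snapshot records the current length of the log, and rolling back truncates the log to that length.

```rust
// Hypothetical sketch of length-based snapshots over an undo log:
// a snapshot records the log length; rollback truncates back to it.
struct ConstraintLog {
    constraints: Vec<&'static str>, // stand-in for real constraints
}

struct Snapshot {
    len: usize,
}

impl ConstraintLog {
    fn new() -> Self {
        ConstraintLog { constraints: Vec::new() }
    }
    fn add(&mut self, c: &'static str) {
        self.constraints.push(c);
    }
    fn start_snapshot(&self) -> Snapshot {
        Snapshot { len: self.constraints.len() }
    }
    fn rollback_to(&mut self, s: Snapshot) {
        self.constraints.truncate(s.len);
    }
    fn commit(&mut self, _s: Snapshot) {
        // keep everything added since the snapshot was taken
    }
}

fn main() {
    let mut log = ConstraintLog::new();
    log.add("R0 <= R1");
    let snap = log.start_snapshot();
    log.add("R1 <= R2");
    log.rollback_to(snap); // undoes the second constraint
    assert_eq!(log.constraints, vec!["R0 <= R1"]);
    let snap2 = log.start_snapshot();
    log.add("R2 <= R3");
    log.commit(snap2); // the constraint is kept
    assert_eq!(log.constraints.len(), 2);
}
```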
+# Resolving constraints
+
+The constraint resolution algorithm is not super complex but also not
+entirely obvious.  Here I describe the problem somewhat abstractly,
+then describe how the current code works.  There may be other, smarter
+ways of doing this with which I am unfamiliar and can't be bothered to
+research at the moment. - NDM
+
+## The problem
+
+Basically our input is a directed graph where nodes can be divided
+into two categories: region variables and concrete regions.  Each edge
+`R -> S` in the graph represents a constraint that the region `R` is a
+subregion of the region `S`.
+
+Region variable nodes can have arbitrary degree.  There is one region
+variable node per region variable.
+
+Each concrete region node is associated with some, well, concrete
+region: e.g., a free lifetime, or the region for a particular scope.
+Note that there may be more than one concrete region node for a
+particular region value.  Moreover, because of how the graph is built,
+we know that all concrete region nodes have either in-degree 1 or
+out-degree 1.
+
+Before resolution begins, we build up the constraints in a hashmap
+that maps `Constraint` keys to spans.  During resolution, we construct
+the actual `Graph` structure that we describe here.
+
+## Our current algorithm
+
+We divide region variables into two groups: Expanding and Contracting.
+Expanding region variables are those that have a concrete region
+predecessor (direct or indirect).  Contracting region variables are
+all others.
+
+We first resolve the values of Expanding region variables and then
+process Contracting ones.  We currently use an iterative, fixed-point
+procedure (but read on, I believe this could be replaced with a linear
+walk).  Basically we iterate over the edges in the graph, ensuring
+that, if the source of the edge has a value, then this value is a
+subregion of the target value.  If the target does not yet have a
+value, it takes the value from the source.  If the target already had
+a value, then the resulting value is the Least Upper Bound of the old and
+new values. When we are done, each Expanding node will have the
+smallest region that it could possibly have and still satisfy the
+constraints.
+
+We next process the Contracting nodes.  Here we again iterate over the
+edges, only this time we move values from target to source (if the
+source is a Contracting node).  For each contracting node, we compute
+its value as the GLB of all its successors.  Basically contracting
+nodes ensure that there is overlap between their successors; we will
+ultimately infer the largest overlap possible.
+
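The expansion phase can be illustrated with a toy model (this is not rustc's actual representation): treat a concrete region value as a bitset of program points, so that "subregion of" is subset, LUB is union, and GLB is intersection. The iterative fixed-point walk over the edges then looks like:

```rust
// Toy model: a region value is a bitset of program points.
// LUB = union; "subregion of" = subset. This is not rustc's
// representation, just an illustration of the expansion phase.
type RegionVal = u32;

// Each edge (source, target) says: source must be a subregion of target.
// Iterate to a fixed point, pushing values forward along edges and
// taking the LUB when the target already has a value.
fn expand(edges: &[(usize, usize)], values: &mut [Option<RegionVal>]) {
    let mut changed = true;
    while changed {
        changed = false;
        for &(src, tgt) in edges {
            if let Some(s) = values[src] {
                let new = match values[tgt] {
                    None => s,        // target takes the source's value
                    Some(t) => t | s, // LUB of old and new values
                };
                if values[tgt] != Some(new) {
                    values[tgt] = Some(new);
                    changed = true;
                }
            }
        }
    }
}

fn main() {
    // Node 0 has concrete value {p0}; node 1 has {p1}; variable node 2
    // must contain both, so it ends up as their LUB: {p0, p1}.
    let edges = [(0, 2), (1, 2)];
    let mut values = [Some(0b01), Some(0b10), None];
    expand(&edges, &mut values);
    assert_eq!(values[2], Some(0b11));
}
```

The contraction phase is the mirror image: values flow from target to source, combined with GLB (intersection, `&`, in this toy model).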
+# The Region Hierarchy
+
+## Without closures
+
+Let's first consider the region hierarchy without thinking about
+closures, because they add a lot of complications. The region
+hierarchy *basically* mirrors the lexical structure of the code.
+There is a region for every piece of 'evaluation' that occurs, meaning
+every expression, block, and pattern (patterns are considered to
+"execute" by testing the value they are applied to and creating any
+relevant bindings).  So, for example:
+
+    fn foo(x: int, y: int) { // -+
+    //  +------------+       //  |
+    //  |      +-----+       //  |
+    //  |  +-+ +-+ +-+       //  |
+    //  |  | | | | | |       //  |
+    //  v  v v v v v v       //  |
+        let z = x + y;       //  |
+        ...                  //  |
+    }                        // -+
+
+    fn bar() { ... }
+
+In this example, there is a region for the fn body block as a whole,
+and then a subregion for the declaration of the local variable.
+Within that, there are sublifetimes for the assignment pattern and
+also the expression `x + y`. The expression itself has sublifetimes
+for evaluating `x` and `y`.
+
+## Function calls
+
+Function calls are a bit tricky. I will describe how we handle them
+*now* and then a bit about how we can improve them (Issue #6268).
+
+Consider a function call like `func(expr1, expr2)`, where `func`,
+`expr1`, and `expr2` are all arbitrary expressions. Currently,
+we construct a region hierarchy like:
+
+    +----------------+
+    |                |
+    +--+ +---+  +---+|
+    v  v v   v  v   vv
+    func(expr1, expr2)
+
+Here you can see that the call as a whole has a region and the
+function plus arguments are subregions of that. As a side-effect of
+this, we get a lot of spurious errors around nested calls, in
+particular when combined with `&mut` functions. For example, a call
+like this one
+
+    self.foo(self.bar())
+
+where both `foo` and `bar` are `&mut self` functions will always yield
+an error.
+
+Here is a more involved example (which is safe) so we can see what's
+going on:
+
+    struct Foo { f: uint, g: uint }
+    ...
+    fn add(p: &mut uint, v: uint) {
+        *p += v;
+    }
+    ...
+    fn inc(p: &mut uint) -> uint {
+        *p += 1; *p
+    }
+    fn weird() {
+        let mut x: Box<Foo> = box Foo { ... };
+        'a: add(&mut (*x).f,
+                'b: inc(&mut (*x).f)) // (..)
+    }
+
+The important part is the line marked `(..)` which contains a call to
+`add()`. The first argument is a mutable borrow of the field `f`.  The
+second argument also borrows the field `f`. Now, in the current borrow
+checker, the first borrow is given the lifetime of the call to
+`add()`, `'a`.  The second borrow is given the lifetime of `'b` of the
+call to `inc()`. Because `'b` is considered to be a sublifetime of
+`'a`, an error is reported since there are two co-existing mutable
+borrows of the same data.
+
+However, if we were to examine the lifetimes a bit more carefully, we
+can see that this error is unnecessary. Let's examine the lifetimes
+involved with `'a` in detail. We'll break apart all the steps involved
+in a call expression:
+
+    'a: {
+        'a_arg1: let a_temp1: ... = add;
+        'a_arg2: let a_temp2: &'a mut uint = &'a mut (*x).f;
+        'a_arg3: let a_temp3: uint = {
+            let b_temp1: ... = inc;
+            let b_temp2: &'b mut uint = &'b mut (*x).f;
+            'b_call: b_temp1(b_temp2)
+        };
+        'a_call: a_temp1(a_temp2, a_temp3) // (**)
+    }
+
+Here we see that the lifetime `'a` includes a number of substatements.
+In particular, there is this lifetime I've called `'a_call` that
+corresponds to the *actual execution of the function `add()`*, after
+all arguments have been evaluated. There is a corresponding lifetime
+`'b_call` for the execution of `inc()`. If we wanted to be precise
+about it, the lifetime of the two borrows should be `'a_call` and
+`'b_call` respectively, since the references that were created
+will not be dereferenced except during the execution itself.
+
+However, this model by itself is not sound. The reason is that
+while the two references that are created will never be used
+simultaneously, it is still true that the first reference is
+*created* before the second argument is evaluated, and so even though
+it will not be *dereferenced* during the evaluation of the second
+argument, it can still be *invalidated* by that evaluation. Consider
+this similar but unsound example:
+
+    struct Foo { f: uint, g: uint }
+    ...
+    fn add(p: &mut uint, v: uint) {
+        *p += v;
+    }
+    ...
+    fn consume(x: Box<Foo>) -> uint {
+        x.f + x.g
+    }
+    fn weird() {
+        let mut x: Box<Foo> = box Foo { ... };
+        'a: add(&mut (*x).f, consume(x)) // (..)
+    }
+
+In this case, the second argument to `add` actually consumes `x`, thus
+invalidating the first argument.
+
+So, for now, we exclude the `call` lifetimes from our model.
+Eventually I would like to include them, but we will have to make the
+borrow checker handle this situation correctly. In particular, if
+there is a reference created whose lifetime does not enclose
+the borrow expression, we must issue sufficient restrictions to ensure
+that the pointee remains valid.
+
+## Adding closures
+
+The other significant complication to the region hierarchy is
+closures. I will describe here how closures should work, though some
+of the work to implement this model is ongoing at the time of this
+writing.
+
+The bodies of closures are type-checked along with the function that
+creates them. However, unlike other expressions that appear within the
+function body, it is not entirely obvious when a closure body executes
+with respect to the other expressions. This is because the closure
+body will execute whenever the closure is called; however, we can
+never know precisely when the closure will be called, especially
+without some sort of alias analysis.
+
+However, we can place some sort of limits on when the closure
+executes.  In particular, the type of every closure `fn:'r K` includes
+a region bound `'r`. This bound indicates the maximum lifetime of that
+closure; once we exit that region, the closure cannot be called
+anymore. Therefore, we say that the lifetime of the closure body is a
+sublifetime of the closure bound, but the closure body itself is unordered
+with respect to other parts of the code.
+
+For example, consider the following fragment of code:
+
+    'a: {
+         let closure: fn:'a() = || 'b: {
+             'c: ...
+         };
+         'd: ...
+    }
+
+Here we have four lifetimes, `'a`, `'b`, `'c`, and `'d`. The closure
+`closure` is bounded by the lifetime `'a`. The lifetime `'b` is the
+lifetime of the closure body, and `'c` is some statement within the
+closure body. Finally, `'d` is a statement within the outer block that
+created the closure.
+
+We can say that the closure body `'b` is a sublifetime of `'a` due to
+the closure bound. By the usual lexical scoping conventions, the
+statement `'c` is clearly a sublifetime of `'b`, and `'d` is a
+sublifetime of `'a`. However, there is no ordering between `'c` and
+`'d` per se (this kind of ordering between statements is actually only
+an issue for dataflow; passes like the borrow checker must assume that
+closures could execute at any time from the moment they are created
+until they go out of scope).
+
+### Complications due to closure bound inference
+
+There is only one problem with the above model: in general, we do not
+actually *know* the closure bounds during region inference! In fact,
+closure bounds are almost always region variables! This is very tricky
+because the inference system implicitly assumes that we can do things
+like compute the LUB of two scoped lifetimes without needing to know
+the values of any variables.
+
+Here is an example to illustrate the problem:
+
+    fn identity<T>(x: T) -> T { x }
+
+    fn foo() { // 'foo is the function body
+      'a: {
+           let closure = identity(|| 'b: {
+               'c: ...
+           });
+           'd: closure();
+      }
+      'e: ...;
+    }
+
+In this example, the closure bound is not explicit. At compile time,
+we will create a region variable (let's call it `V0`) to represent the
+closure bound.
+
+The primary difficulty arises during the constraint propagation phase.
+Imagine there is some variable with incoming edges from `'c` and `'d`.
+This means that the value of the variable must be `LUB('c,
+'d)`. However, without knowing what the closure bound `V0` is, we
+can't compute the LUB of `'c` and `'d`! And we don't know the closure
+bound until inference is done.
+
+The solution is to rely on the fixed point nature of inference.
+Basically, when we must compute `LUB('c, 'd)`, we just use the current
+value for `V0` as the closure's bound. If `V0`'s binding should
+change, then we will do another round of inference, and the result of
+`LUB('c, 'd)` will change.
+
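In the toy bitset model from earlier (hypothetical; LUB = union), this fixed-point recomputation amounts to reading the *current* value of `V0` whenever the LUB is needed, and simply recomputing if `V0` later grows:

```rust
// Toy sketch: computing LUB('c, V0) where V0 is a closure-bound
// variable whose value is still being inferred. We read V0's
// *current* value; the outer fixed-point loop re-runs the
// computation if V0 later changes. Regions are bitsets, LUB = union.
fn lub(a: u32, b: u32) -> u32 {
    a | b
}

fn main() {
    let c = 0b001;
    let mut v0 = 0b010; // current guess for the closure bound

    let mut result = lub(c, v0); // first inference round
    assert_eq!(result, 0b011);

    v0 |= 0b100;          // V0 grows in a later inference round...
    result = lub(c, v0);  // ...so we recompute, and the LUB changes too
    assert_eq!(result, 0b111);
}
```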
+One minor implication of this is that the graph does not in fact track
+the full set of dependencies between edges. We cannot easily know
+whether the result of a LUB computation will change, since there may
+be indirect dependencies on other variables that are not reflected on
+the graph. Therefore, we must *always* iterate over all edges when
+doing the fixed point calculation, not just those adjacent to nodes
+whose values have changed.
+
+Were it not for this requirement, we could in fact avoid fixed-point
+iteration altogether. In that universe, we could instead first
+identify and remove strongly connected components (SCC) in the graph.
+Note that such components must consist solely of region variables; all
+of these variables can effectively be unified into a single variable.
+Once SCCs are removed, we are left with a DAG.  At this point, we
+could walk the DAG in topological order once to compute the expanding
+nodes, and again in reverse topological order to compute the
+contracting nodes. However, as I said, this does not work given the
+current treatment of closure bounds, but perhaps in the future we can
+address this problem somehow and make region inference somewhat more
+efficient. Note that this is solely a matter of performance, not
+expressiveness.
+
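Were that optimization implemented, collapsing each strongly connected component might use a union-find structure to merge the cycle's variables into one representative. A hypothetical sketch (not current rustc code):

```rust
// Hypothetical sketch: region variables in the same strongly
// connected component must denote the same region, so they can be
// unified. A union-find makes the merging cheap.
struct UnionFind {
    parent: Vec<usize>,
}

impl UnionFind {
    fn new(n: usize) -> Self {
        UnionFind { parent: (0..n).collect() }
    }
    // Find the representative of x's set, compressing paths as we go.
    fn find(&mut self, x: usize) -> usize {
        if self.parent[x] != x {
            let root = self.find(self.parent[x]);
            self.parent[x] = root;
        }
        self.parent[x]
    }
    fn union(&mut self, a: usize, b: usize) {
        let (ra, rb) = (self.find(a), self.find(b));
        self.parent[ra] = rb;
    }
}

fn main() {
    // Variables 0 and 1 form a cycle (0 <= 1 and 1 <= 0), so they
    // must be equal and can be unified into one node of the DAG.
    let mut uf = UnionFind::new(3);
    uf.union(0, 1);
    assert_eq!(uf.find(0), uf.find(1));
    assert_ne!(uf.find(0), uf.find(2));
}
```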
+### Skolemization
+
+For a discussion on skolemization and higher-ranked subtyping, please
+see the README in the `middle::infer::higher_ranked` module.
diff --git a/src/librustc/middle/infer/region_inference/doc.rs b/src/librustc/middle/infer/region_inference/doc.rs
deleted file mode 100644
index 2dc46af9084..00000000000
--- a/src/librustc/middle/infer/region_inference/doc.rs
+++ /dev/null
@@ -1,374 +0,0 @@
-// Copyright 2012 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-//! Region inference module.
-//!
-//! # Terminology
-//!
-//! Note that we use the terms region and lifetime interchangeably,
-//! though the term `lifetime` is preferred.
-//!
-//! # Introduction
-//!
-//! Region inference uses a somewhat more involved algorithm than type
-//! inference.  It is not the most efficient thing ever written though it
-//! seems to work well enough in practice (famous last words).  The reason
-//! that we use a different algorithm is because, unlike with types, it is
-//! impractical to hand-annotate with regions (in some cases, there aren't
-//! even the requisite syntactic forms).  So we have to get it right, and
-//! it's worth spending more time on a more involved analysis.  Moreover,
-//! regions are a simpler case than types: they don't have aggregate
-//! structure, for example.
-//!
-//! Unlike normal type inference, which is similar in spirit to H-M and thus
-//! works progressively, the region type inference works by accumulating
-//! constraints over the course of a function.  Finally, at the end of
-//! processing a function, we process and solve the constraints all at
-//! once.
-//!
-//! The constraints are always of one of three possible forms:
-//!
-//! - ConstrainVarSubVar(R_i, R_j) states that region variable R_i
-//!   must be a subregion of R_j
-//! - ConstrainRegSubVar(R, R_i) states that the concrete region R
-//!   (which must not be a variable) must be a subregion of the variable R_i
-//! - ConstrainVarSubReg(R_i, R) is the inverse
-//!
-//! # Building up the constraints
-//!
-//! Variables and constraints are created using the following methods:
-//!
-//! - `new_region_var()` creates a new, unconstrained region variable;
-//! - `make_subregion(R_i, R_j)` states that R_i is a subregion of R_j
-//! - `lub_regions(R_i, R_j) -> R_k` returns a region R_k which is
-//!   the smallest region that is greater than both R_i and R_j
-//! - `glb_regions(R_i, R_j) -> R_k` returns a region R_k which is
-//!   the greatest region that is smaller than both R_i and R_j
-//!
-//! The actual region resolution algorithm is not entirely
-//! obvious, though it is also not overly complex.
-//!
-//! ## Snapshotting
-//!
-//! It is also permitted to try (and rollback) changes to the graph.  This
-//! is done by invoking `start_snapshot()`, which returns a value.  Then
-//! later you can call `rollback_to()` which undoes the work.
-//! Alternatively, you can call `commit()` which ends all snapshots.
-//! Snapshots can be recursive---so you can start a snapshot when another
-//! is in progress, but only the root snapshot can "commit".
-//!
-//! # Resolving constraints
-//!
-//! The constraint resolution algorithm is not super complex but also not
-//! entirely obvious.  Here I describe the problem somewhat abstractly,
-//! then describe how the current code works.  There may be other, smarter
-//! ways of doing this with which I am unfamiliar and can't be bothered to
-//! research at the moment. - NDM
-//!
-//! ## The problem
-//!
-//! Basically our input is a directed graph where nodes can be divided
-//! into two categories: region variables and concrete regions.  Each edge
-//! `R -> S` in the graph represents a constraint that the region `R` is a
-//! subregion of the region `S`.
-//!
-//! Region variable nodes can have arbitrary degree.  There is one region
-//! variable node per region variable.
-//!
-//! Each concrete region node is associated with some, well, concrete
-//! region: e.g., a free lifetime, or the region for a particular scope.
-//! Note that there may be more than one concrete region node for a
-//! particular region value.  Moreover, because of how the graph is built,
-//! we know that all concrete region nodes have either in-degree 1 or
-//! out-degree 1.
-//!
-//! Before resolution begins, we build up the constraints in a hashmap
-//! that maps `Constraint` keys to spans.  During resolution, we construct
-//! the actual `Graph` structure that we describe here.
-//!
-//! ## Our current algorithm
-//!
-//! We divide region variables into two groups: Expanding and Contracting.
-//! Expanding region variables are those that have a concrete region
-//! predecessor (direct or indirect).  Contracting region variables are
-//! all others.
-//!
-//! We first resolve the values of Expanding region variables and then
-//! process Contracting ones.  We currently use an iterative, fixed-point
-//! procedure (but read on, I believe this could be replaced with a linear
-//! walk).  Basically we iterate over the edges in the graph, ensuring
-//! that, if the source of the edge has a value, then this value is a
-//! subregion of the target value.  If the target does not yet have a
-//! value, it takes the value from the source.  If the target already had
-//! a value, then the resulting value is Least Upper Bound of the old and
-//! new values. When we are done, each Expanding node will have the
-//! smallest region that it could possibly have and still satisfy the
-//! constraints.
-//!
-//! We next process the Contracting nodes.  Here we again iterate over the
-//! edges, only this time we move values from target to source (if the
-//! source is a Contracting node).  For each contracting node, we compute
-//! its value as the GLB of all its successors.  Basically contracting
-//! nodes ensure that there is overlap between their successors; we will
-//! ultimately infer the largest overlap possible.
-//!
-//! # The Region Hierarchy
-//!
-//! ## Without closures
-//!
-//! Let's first consider the region hierarchy without thinking about
-//! closures, because they add a lot of complications. The region
-//! hierarchy *basically* mirrors the lexical structure of the code.
-//! There is a region for every piece of 'evaluation' that occurs, meaning
-//! every expression, block, and pattern (patterns are considered to
-//! "execute" by testing the value they are applied to and creating any
-//! relevant bindings).  So, for example:
-//!
-//!     fn foo(x: int, y: int) { // -+
-//!     //  +------------+       //  |
-//!     //  |      +-----+       //  |
-//!     //  |  +-+ +-+ +-+       //  |
-//!     //  |  | | | | | |       //  |
-//!     //  v  v v v v v v       //  |
-//!         let z = x + y;       //  |
-//!         ...                  //  |
-//!     }                        // -+
-//!
-//!     fn bar() { ... }
-//!
-//! In this example, there is a region for the fn body block as a whole,
-//! and then a subregion for the declaration of the local variable.
-//! Within that, there are sublifetimes for the assignment pattern and
-//! also the expression `x + y`. The expression itself has sublifetimes
-//! for evaluating `x` and `y`.
-//!
-//! ## Function calls
-//!
-//! Function calls are a bit tricky. I will describe how we handle them
-//! *now* and then a bit about how we can improve them (Issue #6268).
-//!
-//! Consider a function call like `func(expr1, expr2)`, where `func`,
-//! `arg1`, and `arg2` are all arbitrary expressions. Currently,
-//! we construct a region hierarchy like:
-//!
-//!     +----------------+
-//!     |                |
-//!     +--+ +---+  +---+|
-//!     v  v v   v  v   vv
-//!     func(expr1, expr2)
-//!
-//! Here you can see that the call as a whole has a region and the
-//! function plus arguments are subregions of that. As a side-effect of
-//! this, we get a lot of spurious errors around nested calls, in
-//! particular when combined with `&mut` functions. For example, a call
-//! like this one
-//!
-//!     self.foo(self.bar())
-//!
-//! where both `foo` and `bar` are `&mut self` functions will always yield
-//! an error.
-//!
-//! Here is a more involved example (which is safe) so we can see what's
-//! going on:
-//!
-//!     struct Foo { f: uint, g: uint }
-//!     ...
-//!     fn add(p: &mut uint, v: uint) {
-//!         *p += v;
-//!     }
-//!     ...
-//!     fn inc(p: &mut uint) -> uint {
-//!         *p += 1; *p
-//!     }
-//!     fn weird() {
-//!         let mut x: Box<Foo> = box Foo { ... };
-//!         'a: add(&mut (*x).f,
-//!                 'b: inc(&mut (*x).f)) // (..)
-//!     }
-//!
-//! The important part is the line marked `(..)` which contains a call to
-//! `add()`. The first argument is a mutable borrow of the field `f`.  The
-//! second argument also borrows the field `f`. Now, in the current borrow
-//! checker, the first borrow is given the lifetime of the call to
-//! `add()`, `'a`.  The second borrow is given the lifetime of `'b` of the
-//! call to `inc()`. Because `'b` is considered to be a sublifetime of
-//! `'a`, an error is reported since there are two co-existing mutable
-//! borrows of the same data.
-//!
-//! However, if we were to examine the lifetimes a bit more carefully, we
-//! can see that this error is unnecessary. Let's examine the lifetimes
-//! involved with `'a` in detail. We'll break apart all the steps involved
-//! in a call expression:
-//!
-//!     'a: {
-//!         'a_arg1: let a_temp1: ... = add;
-//!         'a_arg2: let a_temp2: &'a mut uint = &'a mut (*x).f;
-//!         'a_arg3: let a_temp3: uint = {
-//!             let b_temp1: ... = inc;
-//!             let b_temp2: &'b = &'b mut (*x).f;
-//!             'b_call: b_temp1(b_temp2)
-//!         };
-//!         'a_call: a_temp1(a_temp2, a_temp3) // (**)
-//!     }
-//!
-//! Here we see that the lifetime `'a` includes a number of substatements.
-//! In particular, there is this lifetime I've called `'a_call` that
-//! corresponds to the *actual execution of the function `add()`*, after
-//! all arguments have been evaluated. There is a corresponding lifetime
-//! `'b_call` for the execution of `inc()`. If we wanted to be precise
-//! about it, the lifetime of the two borrows should be `'a_call` and
-//! `'b_call` respectively, since the references that were created
-//! will not be dereferenced except during the execution itself.
-//!
-//! However, this model by itself is not sound. The reason is that
-//! while the two references that are created will never be used
-//! simultaneously, it is still true that the first reference is
-//! *created* before the second argument is evaluated, and so even though
-//! it will not be *dereferenced* during the evaluation of the second
-//! argument, it can still be *invalidated* by that evaluation. Consider
-//! this similar but unsound example:
-//!
-//!     struct Foo { f: uint, g: uint }
-//!     ...
-//!     fn add(p: &mut uint, v: uint) {
-//!         *p += v;
-//!     }
-//!     ...
-//!     fn consume(x: Box<Foo>) -> uint {
-//!         x.f + x.g
-//!     }
-//!     fn weird() {
-//!         let mut x: Box<Foo> = box Foo { ... };
-//!         'a: add(&mut (*x).f, consume(x)) // (..)
-//!     }
-//!
-//! In this case, the second argument to `add` actually consumes `x`, thus
-//! invalidating the first argument.
-//!
-//! So, for now, we exclude the `call` lifetimes from our model.
-//! Eventually I would like to include them, but we will have to make the
-//! borrow checker handle this situation correctly. In particular, if
-//! there is a reference created whose lifetime does not enclose
-//! the borrow expression, we must issue sufficient restrictions to ensure
-//! that the pointee remains valid.
-//!
-//! ## Adding closures
-//!
-//! The other significant complication to the region hierarchy is
-//! closures. I will describe here how closures should work, though some
-//! of the work to implement this model is ongoing at the time of this
-//! writing.
-//!
-//! The body of closures are type-checked along with the function that
-//! creates them. However, unlike other expressions that appear within the
-//! function body, it is not entirely obvious when a closure body executes
-//! with respect to the other expressions. This is because the closure
-//! body will execute whenever the closure is called; however, we can
-//! never know precisely when the closure will be called, especially
-//! without some sort of alias analysis.
-//!
-//! However, we can place some sort of limits on when the closure
-//! executes.  In particular, the type of every closure `fn:'r K` includes
-//! a region bound `'r`. This bound indicates the maximum lifetime of that
-//! closure; once we exit that region, the closure cannot be called
-//! anymore. Therefore, we say that the lifetime of the closure body is a
-//! sublifetime of the closure bound, but the closure body itself is unordered
-//! with respect to other parts of the code.
-//!
-//! For example, consider the following fragment of code:
-//!
-//!     'a: {
-//!          let closure: fn:'a() = || 'b: {
-//!              'c: ...
-//!          };
-//!          'd: ...
-//!     }
-//!
-//! Here we have four lifetimes, `'a`, `'b`, `'c`, and `'d`. The closure
-//! `closure` is bounded by the lifetime `'a`. The lifetime `'b` is the
-//! lifetime of the closure body, and `'c` is some statement within the
-//! closure body. Finally, `'d` is a statement within the outer block that
-//! created the closure.
-//!
-//! We can say that the closure body `'b` is a sublifetime of `'a` due to
-//! the closure bound. By the usual lexical scoping conventions, the
-//! statement `'c` is clearly a sublifetime of `'b`, and `'d` is a
-//! sublifetime of `'d`. However, there is no ordering between `'c` and
-//! `'d` per se (this kind of ordering between statements is actually only
-//! an issue for dataflow; passes like the borrow checker must assume that
-//! closures could execute at any time from the moment they are created
-//! until they go out of scope).
-//!
-//! ### Complications due to closure bound inference
-//!
-//! There is only one problem with the above model: in general, we do not
-//! actually *know* the closure bounds during region inference! In fact,
-//! closure bounds are almost always region variables! This is very tricky
-//! because the inference system implicitly assumes that we can do things
-//! like compute the LUB of two scoped lifetimes without needing to know
-//! the values of any variables.
-//!
-//! Here is an example to illustrate the problem:
-//!
-//!     fn identify<T>(x: T) -> T { x }
-//!
-//!     fn foo() { // 'foo is the function body
-//!       'a: {
-//!            let closure = identity(|| 'b: {
-//!                'c: ...
-//!            });
-//!            'd: closure();
-//!       }
-//!       'e: ...;
-//!     }
-//!
-//! In this example, the closure bound is not explicit. At compile time,
-//! we will create a region variable (let's call it `V0`) to represent the
-//! closure bound.
-//!
-//! The primary difficulty arises during the constraint propagation phase.
-//! Imagine there is some variable with incoming edges from `'c` and `'d`.
-//! This means that the value of the variable must be `LUB('c,
-//! 'd)`. However, without knowing what the closure bound `V0` is, we
-//! can't compute the LUB of `'c` and `'d`! Any we don't know the closure
-//! bound until inference is done.
-//!
-//! The solution is to rely on the fixed point nature of inference.
-//! Basically, when we must compute `LUB('c, 'd)`, we just use the current
-//! value for `V0` as the closure's bound. If `V0`'s binding should
-//! change, then we will do another round of inference, and the result of
-//! `LUB('c, 'd)` will change.
-//!
-//! One minor implication of this is that the graph does not in fact track
-//! the full set of dependencies between edges. We cannot easily know
-//! whether the result of a LUB computation will change, since there may
-//! be indirect dependencies on other variables that are not reflected on
-//! the graph. Therefore, we must *always* iterate over all edges when
-//! doing the fixed point calculation, not just those adjacent to nodes
-//! whose values have changed.
-//!
-//! Were it not for this requirement, we could in fact avoid fixed-point
-//! iteration altogether. In that universe, we could instead first
-//! identify and remove strongly connected components (SCC) in the graph.
-//! Note that such components must consist solely of region variables; all
-//! of these variables can effectively be unified into a single variable.
-//! Once SCCs are removed, we are left with a DAG.  At this point, we
-//! could walk the DAG in topological order once to compute the expanding
-//! nodes, and again in reverse topological order to compute the
-//! contracting nodes. However, as I said, this does not work given the
-//! current treatment of closure bounds, but perhaps in the future we can
-//! address this problem somehow and make region inference somewhat more
-//! efficient. Note that this is solely a matter of performance, not
-//! expressiveness.
-//!
-//! ### Skolemization
-//!
-//! For a discussion on skolemization and higher-ranked subtyping, please
-//! see the module `middle::infer::higher_ranked::doc`.
diff --git a/src/librustc/middle/infer/region_inference/mod.rs b/src/librustc/middle/infer/region_inference/mod.rs
index 3dba8045d60..5cdfdcc7c9b 100644
--- a/src/librustc/middle/infer/region_inference/mod.rs
+++ b/src/librustc/middle/infer/region_inference/mod.rs
@@ -38,7 +38,6 @@ use std::iter::repeat;
 use std::u32;
 use syntax::ast;
 
-mod doc;
 mod graphviz;
 
 // A constraint that influences the inference process.
diff --git a/src/librustc/middle/traits/doc.rs b/src/librustc/middle/traits/README.md
index 7af046401ad..9c47d7f217a 100644
--- a/src/librustc/middle/traits/doc.rs
+++ b/src/librustc/middle/traits/README.md
@@ -1,15 +1,3 @@
-// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-/*!
-
 # TRAIT RESOLUTION
 
 This document describes the general process and points out some non-obvious
@@ -440,5 +428,3 @@ We used to try and draw finer-grained distinctions, but that led to a
 series of annoying and weird bugs like #22019 and #18290. This simple
 rule seems to be pretty clearly safe and also still retains a very
 high hit rate (~95% when compiling rustc).
-
-*/
diff --git a/src/librustc/middle/traits/mod.rs b/src/librustc/middle/traits/mod.rs
index 395e486059e..83090dd72aa 100644
--- a/src/librustc/middle/traits/mod.rs
+++ b/src/librustc/middle/traits/mod.rs
@@ -8,7 +8,7 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
 
-//! Trait Resolution. See doc.rs.
+//! Trait Resolution. See the Book for more.
 
 pub use self::SelectionError::*;
 pub use self::FulfillmentErrorCode::*;
diff --git a/src/librustc_borrowck/borrowck/README.md b/src/librustc_borrowck/borrowck/README.md
new file mode 100644
index 00000000000..c5a30428922
--- /dev/null
+++ b/src/librustc_borrowck/borrowck/README.md
@@ -0,0 +1,1212 @@
+% The Borrow Checker
+
+This pass has the job of enforcing memory safety. This is a subtle
+topic. These docs aim to explain both the practice and the theory
+behind the borrow checker. They start with a high-level overview of
+how it works, and then proceed to dive into the theoretical
+background. Finally, they go into detail on some of the more subtle
+aspects.
+
+# Table of contents
+
+These docs are long. Search for the section you are interested in.
+
+- Overview
+- Formal model
+- Borrowing and loans
+- Moves and initialization
+- Drop flags and structural fragments
+- Future work
+
+# Overview
+
+The borrow checker checks one function at a time. It operates in two
+passes. The first pass, called `gather_loans`, walks over the function
+and identifies all of the places where borrows (e.g., `&` expressions
+and `ref` bindings) and moves (copies or captures of a linear value)
+occur. It also tracks initialization sites. For each borrow and move,
+it checks various basic safety conditions at this time (for example,
+that the lifetime of the borrow doesn't exceed the lifetime of the
+value being borrowed, or that there is no move out of an `&T`
+referent).
+
+It then uses the dataflow module to propagate which of those borrows
+may be in scope at each point in the procedure. A loan is considered
+to come into scope at the expression that caused it and to go out of
+scope when the lifetime of the resulting reference expires.
+
+Once the in-scope loans are known for each point in the program, the
+borrow checker walks the IR again in a second pass called
+`check_loans`. This pass examines each statement and makes sure that
+it is safe with respect to the in-scope loans.
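
The dataflow step can be pictured, for a straight-line sequence of program points, as a simple gen/kill pass over a bit-vector of loans. The following sketch is only illustrative -- the real analysis in the dataflow module handles arbitrary control flow, and all names here are invented:

```rust
// Illustrative gen/kill propagation of in-scope loans over a
// straight-line sequence of program points. `gen[i]` lists the loans
// issued at point i; `kill[i]` lists the loans whose lifetime ends there.
fn in_scope_loans(gen: &[Vec<usize>], kill: &[Vec<usize>], n_loans: usize) -> Vec<Vec<bool>> {
    let mut cur = vec![false; n_loans];
    let mut out = Vec::new();
    for i in 0..gen.len() {
        for &l in &gen[i] { cur[l] = true; }   // loan comes into scope at its borrow
        out.push(cur.clone());                 // loans in scope *at* point i
        for &l in &kill[i] { cur[l] = false; } // loan expires after its lifetime
    }
    out
}

fn main() {
    // Loan 0 issued at point 0, expires after point 1; loan 1 issued at point 1.
    let gen = vec![vec![0], vec![1], vec![]];
    let kill = vec![vec![], vec![0], vec![]];
    let scopes = in_scope_loans(&gen, &kill, 2);
    assert_eq!(scopes[0], vec![true, false]);
    assert_eq!(scopes[1], vec![true, true]);
    assert_eq!(scopes[2], vec![false, true]);
}
```

A loan is "generated" at the point of its borrow expression and "killed" when the lifetime of the resulting reference ends, matching the description above.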
+
+# Formal model
+
+Throughout the docs we'll consider a simple subset of Rust in which
+you can only borrow from lvalues, defined like so:
+
+```text
+LV = x | LV.f | *LV
+```
+
+Here `x` represents some variable, `LV.f` is a field reference,
+and `*LV` is a pointer dereference. There is no auto-deref or other
+niceties. This means that if you have a type like:
+
+```text
+struct S { f: uint }
+```
+
+and a variable `a: Box<S>`, then the rust expression `a.f` would correspond
+to an `LV` of `(*a).f`.
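
In real Rust the auto-deref happens for you; `a.f` and `(*a).f` denote the same lvalue. A quick check in modern syntax (`usize` standing in for the old `uint`):

```rust
struct S { f: usize }

fn main() {
    let a: Box<S> = Box::new(S { f: 7 });
    // `a.f` is sugar for the lvalue `(*a).f`; both name the same memory.
    assert_eq!(a.f, (*a).f);
    assert_eq!((*a).f, 7);
}
```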
+
+Here is the formal grammar for the types we'll consider:
+
+```text
+TY = () | S<'LT...> | Box<TY> | & 'LT MQ TY
+MQ = mut | imm | const
+```
+
+Most of these types should be pretty self-explanatory. Here `S` is a
+struct name and we assume structs are declared like so:
+
+```text
+SD = struct S<'LT...> { (f: TY)... }
+```
+
+# Borrowing and loans
+
+## An intuitive explanation
+
+### Issuing loans
+
+Now, imagine we had a program like this:
+
+```text
+struct Foo { f: uint, g: uint }
+...
+'a: {
+  let mut x: Box<Foo> = ...;
+  let y = &mut (*x).f;
+  x = ...;
+}
+```
+
+This is of course dangerous because mutating `x` will free the old
+value and hence invalidate `y`. The borrow checker aims to prevent
+this sort of thing.
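
For contrast, the borrow checker accepts the variant in which the loan of `(*x).f` has already expired by the time `x` is reassigned. A runnable version in modern syntax (the values are made up):

```rust
struct Foo { f: usize, g: usize }

fn main() {
    let mut x: Box<Foo> = Box::new(Foo { f: 1, g: 2 });
    {
        let y = &mut (*x).f;
        *y += 1;
    } // the loan of `(*x).f` expires here
    // Assigning to `x` is legal again: no reference can observe the old value.
    x = Box::new(Foo { f: 10, g: 20 });
    assert_eq!(x.f, 10);
}
```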
+
+#### Loans and restrictions
+
+The way the borrow checker works is that it analyzes each borrow
+expression (in our simple model, that's stuff like `&LV`, though in
+real life there are a few other cases to consider). For each borrow
+expression, it computes a `Loan`, which is a data structure that
+records (1) the value being borrowed, (2) the mutability and scope of
+the borrow, and (3) a set of restrictions. In the code, `Loan` is a
+struct defined in `middle::borrowck`. Formally, we define `LOAN` as
+follows:
+
+```text
+LOAN = (LV, LT, MQ, RESTRICTION*)
+RESTRICTION = (LV, ACTION*)
+ACTION = MUTATE | CLAIM | FREEZE
+```
+
+Here the `LOAN` tuple defines the lvalue `LV` being borrowed; the
+lifetime `LT` of that borrow; the mutability `MQ` of the borrow; and a
+list of restrictions. The restrictions indicate actions which, if
+taken, could invalidate the loan and lead to type safety violations.
+
+Each `RESTRICTION` is a pair of a restrictive lvalue `LV` (which will
+either be the path that was borrowed or some prefix of the path that
+was borrowed) and a set of restricted actions.  There are three kinds
+of actions that may be restricted for the path `LV`:
+
+- `MUTATE` means that `LV` cannot be assigned to;
+- `CLAIM` means that `LV` cannot be borrowed mutably;
+- `FREEZE` means that `LV` cannot be borrowed immutably.
+
+Finally, it is never possible to move from an lvalue that appears in a
+restriction. This implies that the "empty restriction" `(LV, [])`,
+which contains an empty set of actions, still has a purpose---it
+prevents moves from `LV`. I chose not to make `MOVE` a fourth kind of
+action because that would imply that sometimes moves are permitted
+from restricted values, which is not the case.
+
+#### Example
+
+To give you a better feeling for the kind of restrictions that derive
+from a loan, let's look at the loan `L` that would be issued as a
+result of the borrow `&mut (*x).f` in the example above:
+
+```text
+L = ((*x).f, 'a, mut, RS) where
+    RS = [((*x).f, [MUTATE, CLAIM, FREEZE]),
+          (*x, [MUTATE, CLAIM, FREEZE]),
+          (x, [MUTATE, CLAIM, FREEZE])]
+```
+
+The loan states that the expression `(*x).f` has been loaned as
+mutable for the lifetime `'a`. Because the loan is mutable, that means
+that the value `(*x).f` may be mutated via the newly created reference
+(and *only* via that pointer). This is reflected in the
+restrictions `RS` that accompany the loan.
+
+The first restriction `((*x).f, [MUTATE, CLAIM, FREEZE])` states that
+the lender may not mutate, freeze, or alias `(*x).f`. Mutation is
+illegal because `(*x).f` is only supposed to be mutated via the new
+reference, not by mutating the original path `(*x).f`. Freezing is
+illegal because the path now has an `&mut` alias; so even if the
+lender were to consider `(*x).f` to be immutable, it might be mutated
+via this alias. These restrictions are enforced for the lifetime `'a`
+of the loan; after the loan expires, they no longer apply.
+
+The second restriction on `*x` is interesting because it does not
+apply to the path that was lent (`(*x).f`) but rather to a prefix of
+the borrowed path. This is due to the rules of inherited mutability:
+if the user were to assign to (or freeze) `*x`, they would indirectly
+overwrite (or freeze) `(*x).f`, and thus invalidate the reference
+that was created. In general it holds that when a path is
+lent, restrictions are issued for all the owning prefixes of that
+path. In this case, the path `*x` owns the path `(*x).f` and,
+because `x` is an owned pointer, the path `x` owns the path `*x`.
+Therefore, borrowing `(*x).f` yields restrictions on both
+`*x` and `x`.
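
As a sketch, the `LOAN` tuple and its restrictions can be transcribed into a couple of plain data types. These names are invented (the real `Loan` struct in `middle::borrowck` carries more information); the example builds the loan `L` shown above:

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum Action { Mutate, Claim, Freeze }

struct Restriction { path: &'static str, forbidden: Vec<Action> }

struct Loan {
    path: &'static str,      // LV: the lvalue being borrowed
    lifetime: &'static str,  // LT
    mutable: bool,           // MQ (ignoring const)
    restrictions: Vec<Restriction>,
}

fn main() {
    let all = vec![Action::Mutate, Action::Claim, Action::Freeze];
    let l = Loan {
        path: "(*x).f",
        lifetime: "'a",
        mutable: true,
        restrictions: vec![
            Restriction { path: "(*x).f", forbidden: all.clone() },
            Restriction { path: "*x", forbidden: all.clone() },
            Restriction { path: "x", forbidden: all.clone() },
        ],
    };
    // One restriction per owning prefix of the borrowed path.
    assert_eq!(l.restrictions.len(), 3);
    assert!(l.mutable && l.lifetime == "'a");
    assert!(l.restrictions[1].forbidden.contains(&Action::Claim));
}
```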
+
+### Checking for illegal assignments, moves, and reborrows
+
+Once we have computed the loans introduced by each borrow, the borrow
+checker uses a data flow propagation to compute the full set of loans
+in scope at each expression and then uses that set to decide whether
+that expression is legal.  Remember that the scope of a loan is defined
+by its lifetime `LT`.  We sometimes say that a loan which is in-scope at
+a particular point is an "outstanding loan", and refer to the set of
+restrictions included in those loans as the "outstanding
+restrictions".
+
+The kinds of expressions which in-scope loans can render illegal are:
+
+- *assignments* (`lv = v`): illegal if there is an in-scope restriction
+  against mutating `lv`;
+- *moves*: illegal if there is any in-scope restriction on `lv` at all;
+- *mutable borrows* (`&mut lv`): illegal if there is an in-scope restriction
+  against claiming `lv`;
+- *immutable borrows* (`&lv`): illegal if there is an in-scope restriction
+  against freezing `lv`.
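
The check itself then amounts to looking up the lvalue in the outstanding restrictions. A minimal sketch with invented names (the real logic lives in `check_loans`):

```rust
#[derive(Clone, Copy, PartialEq)]
enum Action { Mutate, Claim, Freeze }

// Outstanding restrictions at some point: (restricted path, forbidden actions).
fn may(rs: &[(&str, Vec<Action>)], lv: &str, action: Action) -> bool {
    !rs.iter().any(|(p, acts)| *p == lv && acts.contains(&action))
}

fn may_move(rs: &[(&str, Vec<Action>)], lv: &str) -> bool {
    // A move is illegal under *any* restriction on the path, even an empty one.
    !rs.iter().any(|(p, _)| *p == lv)
}

fn main() {
    let rs = vec![
        ("(*x).f", vec![Action::Mutate, Action::Claim, Action::Freeze]),
        ("x", vec![]), // empty restriction: still blocks moves from `x`
    ];
    assert!(!may(&rs, "(*x).f", Action::Mutate)); // assignment forbidden
    assert!(!may(&rs, "(*x).f", Action::Claim));  // &mut borrow forbidden
    assert!(may(&rs, "x", Action::Mutate));       // no MUTATE restriction on `x`
    assert!(!may_move(&rs, "x"));                 // but moving `x` is illegal
    assert!(may_move(&rs, "y"));
}
```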
+
+## Formal rules
+
+Now that we hopefully have some kind of intuitive feeling for how the
+borrow checker works, let's look a bit more closely now at the precise
+conditions that it uses. For simplicity I will ignore const loans.
+
+I will present the rules in a modified form of standard inference
+rules, which looks as follows:
+
+```text
+PREDICATE(X, Y, Z)                  // Rule-Name
+  Condition 1
+  Condition 2
+  Condition 3
+```
+
+The initial line states the predicate that is to be satisfied.  The
+indented lines indicate the conditions that must be met for the
+predicate to be satisfied. The right-justified comment states the name
+of this rule: there are comments in the borrowck source referencing
+these names, so that you can cross reference to find the actual code
+that corresponds to the formal rule.
+
+### Invariants
+
+I want to collect, at a high-level, the invariants the borrow checker
+maintains. I will give them names and refer to them throughout the
+text. Together these invariants are crucial for the overall soundness
+of the system.
+
+**Mutability requires uniqueness.** To mutate a path, that path must
+be the only *usable* way to reach the memory it names.
+
+**Unique mutability.** There is only one *usable* mutable path to any
+given memory at any given time. This implies that when claiming memory
+with an expression like `p = &mut x`, the compiler must guarantee that
+the borrowed value `x` can no longer be mutated so long as `p` is
+live. (This is done via restrictions, read on.)
+
+
+### The `gather_loans` pass
+
+We start with the `gather_loans` pass, which walks the AST looking for
+borrows.  For each borrow, there are three bits of information: the
+lvalue `LV` being borrowed and the mutability `MQ` and lifetime `LT`
+of the resulting pointer. Given those, `gather_loans` applies four
+validity tests:
+
+1. `MUTABILITY(LV, MQ)`: The mutability of the reference is
+compatible with the mutability of `LV` (i.e., not borrowing immutable
+data as mutable).
+
+2. `ALIASABLE(LV, MQ)`: The aliasability of the reference is
+compatible with the aliasability of `LV`. The goal is to prevent
+`&mut` borrows of aliasable data.
+
+3. `LIFETIME(LV, LT, MQ)`: The lifetime of the borrow does not exceed
+the lifetime of the value being borrowed.
+
+4. `RESTRICTIONS(LV, LT, ACTIONS) = RS`: This pass checks and computes the
+restrictions to maintain memory safety. These are the restrictions
+that will go into the final loan. We'll discuss these in more detail below.
+
+## Checking mutability
+
+Checking mutability is fairly straightforward. We just want to prevent
+immutable data from being borrowed as mutable. Note that it is ok to
+borrow mutable data as immutable, since that is simply a
+freeze. Formally we define a predicate `MUTABILITY(LV, MQ)` which, if
+defined, means that "borrowing `LV` with mutability `MQ` is ok". The
+Rust code corresponding to this predicate is the function
+`check_mutability` in `middle::borrowck::gather_loans`.
+
+### Checking mutability of variables
+
+*Code pointer:* Function `check_mutability()` in `gather_loans/mod.rs`,
+but also the code in `mem_categorization`.
+
+Let's begin with the rules for variables, which state that if a
+variable is declared as mutable, it may be borrowed any which way, but
+otherwise the variable must be borrowed as immutable or const:
+
+```text
+MUTABILITY(X, MQ)                   // M-Var-Mut
+  DECL(X) = mut
+
+MUTABILITY(X, MQ)                   // M-Var-Imm
+  DECL(X) = imm
+  MQ = imm | const
+```
+
+### Checking mutability of owned content
+
+Fields and owned pointers inherit their mutability from
+their base expressions, so both of their rules basically
+delegate the check to the base expression `LV`:
+
+```text
+MUTABILITY(LV.f, MQ)                // M-Field
+  MUTABILITY(LV, MQ)
+
+MUTABILITY(*LV, MQ)                 // M-Deref-Unique
+  TYPE(LV) = Box<Ty>
+  MUTABILITY(LV, MQ)
+```
+
+### Checking mutability of immutable pointer types
+
+Immutable pointer types like `&T` can only
+be borrowed if MQ is immutable or const:
+
+```text
+MUTABILITY(*LV, MQ)                // M-Deref-Borrowed-Imm
+  TYPE(LV) = &Ty
+  MQ == imm | const
+```
+
+### Checking mutability of mutable pointer types
+
+`&mut T` can be frozen, so it is acceptable to borrow it as either imm or mut:
+
+```text
+MUTABILITY(*LV, MQ)                 // M-Deref-Borrowed-Mut
+  TYPE(LV) = &mut Ty
+```
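
All of the `MUTABILITY` rules fit into one recursive function over our little lvalue grammar. This is only a sketch with invented names, not the real `check_mutability`:

```rust
enum Lv {
    Var { decl_mut: bool },                  // x
    Field(Box<Lv>),                          // LV.f
    DerefBox(Box<Lv>),                       // *LV where LV: Box<T>
    DerefRef { base: Box<Lv>, mutbl: bool }, // *LV where LV: &T or &mut T
}

#[derive(Clone, Copy, PartialEq)]
enum Mq { Mut, Imm, Const }

fn mutability_ok(lv: &Lv, mq: Mq) -> bool {
    match *lv {
        // M-Var-Mut / M-Var-Imm
        Lv::Var { decl_mut } => decl_mut || mq != Mq::Mut,
        // M-Field and M-Deref-Unique inherit from the base path
        Lv::Field(ref base) | Lv::DerefBox(ref base) => mutability_ok(base, mq),
        // M-Deref-Borrowed-Imm: `&T` referents only as imm/const
        Lv::DerefRef { mutbl: false, .. } => mq != Mq::Mut,
        // M-Deref-Borrowed-Mut: `&mut T` referents any which way
        Lv::DerefRef { mutbl: true, .. } => true,
    }
}

fn main() {
    let x_imm = Lv::Var { decl_mut: false };
    assert!(!mutability_ok(&x_imm, Mq::Mut));
    assert!(mutability_ok(&x_imm, Mq::Imm));

    // (*a).f where a: Box<S> and a is declared mutable
    let field = Lv::Field(Box::new(Lv::DerefBox(Box::new(Lv::Var { decl_mut: true }))));
    assert!(mutability_ok(&field, Mq::Mut));

    // *r where r: &T -- cannot be borrowed mutably
    let deref_imm = Lv::DerefRef { base: Box::new(Lv::Var { decl_mut: true }), mutbl: false };
    assert!(!mutability_ok(&deref_imm, Mq::Mut));
    assert!(mutability_ok(&deref_imm, Mq::Const));
}
```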
+
+## Checking aliasability
+
+The goal of the aliasability check is to ensure that we never permit
+`&mut` borrows of aliasable data. Formally we define a predicate
+`ALIASABLE(LV, MQ)` which if defined means that
+"borrowing `LV` with mutability `MQ` is ok". The
+Rust code corresponding to this predicate is the function
+`check_aliasability()` in `middle::borrowck::gather_loans`.
+
+### Checking aliasability of variables
+
+Local variables are never aliasable as they are accessible only within
+the stack frame.
+
+```text
+ALIASABLE(X, MQ)                   // M-Var-Mut
+```
+
+### Checking aliasability of owned content
+
+Owned content is aliasable if it is found in an aliasable location:
+
+```text
+ALIASABLE(LV.f, MQ)                // M-Field
+  ALIASABLE(LV, MQ)
+
+ALIASABLE(*LV, MQ)                 // M-Deref-Unique
+  ALIASABLE(LV, MQ)
+```
+
+### Checking aliasability of immutable pointer types
+
+Immutable pointer types like `&T` are aliasable, and hence can only be
+borrowed immutably:
+
+```text
+ALIASABLE(*LV, imm)                // M-Deref-Borrowed-Imm
+  TYPE(LV) = &Ty
+```
+
+### Checking aliasability of mutable pointer types
+
+`&mut T` pointers are unique rather than aliasable, so the referent
+can be borrowed with any mutability:
+
+```text
+ALIASABLE(*LV, MQ)                 // M-Deref-Borrowed-Mut
+  TYPE(LV) = &mut Ty
+```
+
+## Checking lifetime
+
+These rules aim to ensure that no data is borrowed for a scope that exceeds
+its lifetime. The scope of a borrow and the lifetime of the borrowed
+data wind up being intimately related.
+Formally, we define a predicate `LIFETIME(LV, LT, MQ)`, which states that
+"the lvalue `LV` can be safely borrowed for the lifetime `LT` with mutability
+`MQ`". The Rust code corresponding to this predicate is the module
+`middle::borrowck::gather_loans::lifetime`.
+
+### The Scope function
+
+Several of the rules refer to a helper function `SCOPE(LV)=LT`.
+`SCOPE(LV)` yields the lifetime `LT` for which the lvalue `LV` is
+guaranteed to exist, presuming that no mutations occur.
+
+The scope of a local variable is the block where it is declared:
+
+```text
+  SCOPE(X) = block where X is declared
+```
+
+The scope of a field is the scope of the struct:
+
+```text
+  SCOPE(LV.f) = SCOPE(LV)
+```
+
+The scope of a unique referent is the scope of the pointer, since
+(barring mutation or moves) the pointer will not be freed until
+the pointer itself `LV` goes out of scope:
+
+```text
+  SCOPE(*LV) = SCOPE(LV) if LV has type Box<T>
+```
+
+The scope of a borrowed referent is the scope associated with the
+pointer.  This is a conservative approximation, since the data that
+the pointer points at may actually live longer:
+
+```text
+  SCOPE(*LV) = LT if LV has type &'LT T or &'LT mut T
+```
+
+### Checking lifetime of variables
+
+The rule for variables states that a variable can only be borrowed
+for a lifetime `LT` that is a subregion of the variable's scope:
+
+```text
+LIFETIME(X, LT, MQ)                 // L-Local
+  LT <= SCOPE(X)
+```
+
+### Checking lifetime for owned content
+
+The lifetime of a field or owned pointer is the same as the lifetime
+of its owner:
+
+```text
+LIFETIME(LV.f, LT, MQ)              // L-Field
+  LIFETIME(LV, LT, MQ)
+
+LIFETIME(*LV, LT, MQ)               // L-Deref-Send
+  TYPE(LV) = Box<Ty>
+  LIFETIME(LV, LT, MQ)
+```
+
+### Checking lifetime for derefs of references
+
+References have a lifetime `LT'` associated with them.  The
+data they point at has been guaranteed to be valid for at least this
+lifetime. Therefore, the borrow is valid so long as the lifetime `LT`
+of the borrow is shorter than the lifetime `LT'` of the pointer
+itself:
+
+```text
+LIFETIME(*LV, LT, MQ)               // L-Deref-Borrowed
+  TYPE(LV) = &LT' Ty OR &LT' mut Ty
+  LT <= LT'
+```
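
One way to sketch `SCOPE` and `LIFETIME` is to model each lifetime as an abstract "extent" number, where a larger number means it lasts longer, so that `LT <= LT'` is an integer comparison. The names and the representation are invented:

```rust
// Model a lifetime/scope as an abstract "extent": a bigger number
// means it lasts longer.
type Extent = u32;

enum Lv {
    Var { scope: Extent }, // x: valid for its declaring block
    Field(Box<Lv>),        // LV.f
    DerefBox(Box<Lv>),     // *LV where LV: Box<T>
    DerefRef { lt: Extent }, // *LV where LV: &'LT T or &'LT mut T
}

// SCOPE(LV)
fn scope(lv: &Lv) -> Extent {
    match *lv {
        Lv::Var { scope } => scope,
        Lv::Field(ref base) | Lv::DerefBox(ref base) => scope(base),
        // conservative: just the pointer's lifetime
        Lv::DerefRef { lt } => lt,
    }
}

// LIFETIME(LV, LT): the borrow must not outlive SCOPE(LV)
fn lifetime_ok(lv: &Lv, lt: Extent) -> bool {
    lt <= scope(lv)
}

fn main() {
    // Borrowing (*x).f where x: Box<S> is declared in a block of extent 10.
    let lv = Lv::Field(Box::new(Lv::DerefBox(Box::new(Lv::Var { scope: 10 }))));
    assert!(lifetime_ok(&lv, 10));  // L-Local + L-Field + L-Deref-Send
    assert!(!lifetime_ok(&lv, 11)); // would outlive the variable

    // *r where r: &'5 T: the borrow may last at most 5.
    let deref = Lv::DerefRef { lt: 5 };
    assert!(lifetime_ok(&deref, 5));
    assert!(!lifetime_ok(&deref, 6));
}
```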
+
+## Computing the restrictions
+
+The final rules govern the computation of *restrictions*, meaning that
+we compute the set of actions that will be illegal for the life of the
+loan. The predicate is written `RESTRICTIONS(LV, LT, ACTIONS) =
+RESTRICTION*`, which can be read "in order to prevent `ACTIONS` from
+occurring on `LV`, the restrictions `RESTRICTION*` must be respected
+for the lifetime of the loan".
+
+Note that there is an initial set of restrictions: these restrictions
+are computed based on the kind of borrow:
+
+```text
+&mut LV =>   RESTRICTIONS(LV, LT, MUTATE|CLAIM|FREEZE)
+&LV =>       RESTRICTIONS(LV, LT, MUTATE|CLAIM)
+&const LV => RESTRICTIONS(LV, LT, [])
+```
+
+The reasoning here is that a mutable borrow must be the only writer,
+therefore it prevents other writes (`MUTATE`), mutable borrows
+(`CLAIM`), and immutable borrows (`FREEZE`). An immutable borrow
+permits other immutable borrows but forbids writes and mutable borrows.
+Finally, a const borrow just wants to be sure that the value is not
+moved out from under it, so no actions are forbidden.
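
The mapping from borrow kind to initial `ACTIONS` set is a three-way table. A sketch (the `BorrowKind` names are invented):

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum Action { Mutate, Claim, Freeze }

#[derive(Clone, Copy)]
enum BorrowKind { MutBorrow, ImmBorrow, ConstBorrow } // &mut LV, &LV, &const LV

fn initial_actions(kind: BorrowKind) -> Vec<Action> {
    match kind {
        // sole writer: forbid other writes, &mut borrows, and & borrows
        BorrowKind::MutBorrow => vec![Action::Mutate, Action::Claim, Action::Freeze],
        // readers may share, but no writes or &mut borrows
        BorrowKind::ImmBorrow => vec![Action::Mutate, Action::Claim],
        // only guards against moves, which every restriction forbids anyway
        BorrowKind::ConstBorrow => vec![],
    }
}

fn main() {
    assert_eq!(initial_actions(BorrowKind::MutBorrow).len(), 3);
    assert_eq!(
        initial_actions(BorrowKind::ImmBorrow),
        vec![Action::Mutate, Action::Claim]
    );
    assert!(initial_actions(BorrowKind::ConstBorrow).is_empty());
}
```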
+
+### Restrictions for loans of a local variable
+
+The simplest case is a borrow of a local variable `X`:
+
+```text
+RESTRICTIONS(X, LT, ACTIONS) = (X, ACTIONS)            // R-Variable
+```
+
+In such cases we just record the actions that are not permitted.
+
+### Restrictions for loans of fields
+
+Restricting a field is the same as restricting the owner of that
+field:
+
+```text
+RESTRICTIONS(LV.f, LT, ACTIONS) = RS, (LV.f, ACTIONS)  // R-Field
+  RESTRICTIONS(LV, LT, ACTIONS) = RS
+```
+
+The reasoning here is as follows. If the field must not be mutated,
+then you must not mutate the owner of the field either, since that
+would indirectly modify the field. Similarly, if the field cannot be
+frozen or aliased, we cannot allow the owner to be frozen or aliased,
+since doing so indirectly freezes/aliases the field. This is the
+origin of inherited mutability.
+
+### Restrictions for loans of owned referents
+
+Because the mutability of owned referents is inherited, restricting an
+owned referent is similar to restricting a field, in that it implies
+restrictions on the pointer. However, owned pointers have an important
+twist: if the owner `LV` is mutated, that causes the owned referent
+`*LV` to be freed! So whenever an owned referent `*LV` is borrowed, we
+must prevent the owned pointer `LV` from being mutated, which means
+that we always add `MUTATE` and `CLAIM` to the restriction set imposed
+on `LV`:
+
+```text
+RESTRICTIONS(*LV, LT, ACTIONS) = RS, (*LV, ACTIONS)    // R-Deref-Send-Pointer
+  TYPE(LV) = Box<Ty>
+  RESTRICTIONS(LV, LT, ACTIONS|MUTATE|CLAIM) = RS
+```
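
The rules seen so far (R-Variable, R-Field, R-Deref-Send-Pointer) make `RESTRICTIONS` a simple recursion down the owning prefixes of the borrowed path. The sketch below (invented names) reproduces the restrictions of the earlier `&mut (*x).f` example, listed from the outermost prefix inward:

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum Action { Mutate, Claim, Freeze }

enum Lv {
    Var(&'static str), // x
    Field(Box<Lv>),    // LV.f
    DerefBox(Box<Lv>), // *LV where LV: Box<T>
}

fn path(lv: &Lv) -> String {
    match *lv {
        Lv::Var(name) => name.to_string(),
        Lv::Field(ref b) => format!("({}).f", path(b)),
        Lv::DerefBox(ref b) => format!("*{}", path(b)),
    }
}

fn restrictions(lv: &Lv, actions: &[Action]) -> Vec<(String, Vec<Action>)> {
    match *lv {
        // R-Variable
        Lv::Var(_) => vec![(path(lv), actions.to_vec())],
        // R-Field: restrict the field and, recursively, its owner
        Lv::Field(ref base) => {
            let mut rs = restrictions(base, actions);
            rs.push((path(lv), actions.to_vec()));
            rs
        }
        // R-Deref-Send-Pointer: mutating or claiming the Box itself
        // would free the referent, so always forbid those on the base
        Lv::DerefBox(ref base) => {
            let mut with = actions.to_vec();
            for &a in &[Action::Mutate, Action::Claim] {
                if !with.contains(&a) { with.push(a); }
            }
            let mut rs = restrictions(base, &with);
            rs.push((path(lv), actions.to_vec()));
            rs
        }
    }
}

fn main() {
    // &mut (*x).f  =>  ACTIONS = MUTATE|CLAIM|FREEZE
    let lv = Lv::Field(Box::new(Lv::DerefBox(Box::new(Lv::Var("x")))));
    let all = [Action::Mutate, Action::Claim, Action::Freeze];
    let rs = restrictions(&lv, &all);
    let paths: Vec<&str> = rs.iter().map(|r| r.0.as_str()).collect();
    assert_eq!(paths, vec!["x", "*x", "(*x).f"]);
    assert!(rs.iter().all(|r| r.1.contains(&Action::Mutate)));
}
```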
+
+### Restrictions for loans of immutable borrowed referents
+
+Immutable borrowed referents are freely aliasable, meaning that
+the compiler does not prevent you from copying the pointer.  This
+implies that issuing restrictions is useless. We might prevent the
+user from acting on `*LV` itself, but there could be another path
+`*LV1` that refers to the exact same memory, and we would not be
+restricting that path. Therefore, the rule for `&Ty` pointers
+always returns an empty set of restrictions, and it only permits
+restricting `MUTATE` and `CLAIM` actions:
+
+```text
+RESTRICTIONS(*LV, LT, ACTIONS) = []                    // R-Deref-Imm-Borrowed
+  TYPE(LV) = &LT' Ty
+  LT <= LT'                                            // (1)
+  ACTIONS subset of [MUTATE, CLAIM]
+```
+
+The reason that we can restrict `MUTATE` and `CLAIM` actions even
+without a restrictions list is that it is never legal to mutate nor to
+borrow mutably the contents of a `&Ty` pointer. In other words,
+those restrictions are already inherent in the type.
+
+Clause (1) in the rule for `&Ty` deserves mention. Here I
+specify that the lifetime of the loan must be less than the lifetime
+of the `&Ty` pointer. In simple cases, this clause is redundant, since
+the `LIFETIME()` function will already enforce the required rule:
+
+```
+fn foo<'a>(point: &'a Point) -> &'static f32 {
+    &point.x // Error
+}
+```
+
+The above example fails to compile both because of clause (1) above
+but also by the basic `LIFETIME()` check. However, in more advanced
+examples involving multiple nested pointers, clause (1) is needed:
+
+```
+fn foo<'a, 'b>(point: &'a &'b mut Point) -> &'b f32 {
+    &point.x // Error
+}
+```
+
+The `LIFETIME` rule here would accept `'b` because, in fact, the
+*memory is* guaranteed to remain valid (i.e., not be freed) for the
+lifetime `'b`, since the `&mut` pointer is valid for `'b`. However, we
+are returning an immutable reference, so we need the memory to be both
+valid and immutable. Even though `point.x` is referenced by an `&mut`
+pointer, it can still be considered immutable so long as that `&mut`
+pointer is found in an aliased location. That means the memory is
+guaranteed to be *immutable* for the lifetime of the `&` pointer,
+which is only `'a`, not `'b`. Hence this example yields an error.
+
+As a final twist, consider the case of two nested *immutable*
+pointers, rather than a mutable pointer within an immutable one:
+
+```
+fn foo<'a, 'b>(point: &'a &'b Point) -> &'b f32 {
+    &point.x // OK
+}
+```
+
+This function is legal. The reason for this is that the inner pointer
+(`*point : &'b Point`) is enough to guarantee the memory is immutable
+and valid for the lifetime `'b`.  This is reflected in
+`RESTRICTIONS()` by the fact that we do not recurse (i.e., we impose
+no restrictions on `LV`, which in this particular case is the pointer
+`point : &'a &'b Point`).
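
This nested-immutable case is still accepted by today's compiler. A runnable version in modern syntax (the names are invented):

```rust
struct Point { x: f32, y: f32 }

// Legal: the inner `&'b Point` alone guarantees the memory is
// immutable and valid for all of 'b, so no restriction on `point`
// itself is needed.
fn get_x<'a, 'b>(point: &'a &'b Point) -> &'b f32 {
    &point.x
}

fn main() {
    let p = Point { x: 1.5, y: 2.0 };
    let inner: &Point = &p; // lives for 'b
    let x_ref: &f32 = {
        let outer: &&Point = &inner; // 'a is just this block...
        get_x(outer)                 // ...but the result lives for 'b
    };
    assert_eq!(*x_ref, 1.5);
}
```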
+
+#### Why both `LIFETIME()` and `RESTRICTIONS()`?
+
+Given the previous text, it might seem that `LIFETIME` and
+`RESTRICTIONS` should be folded together into one check, but there is
+a reason that they are separated. They answer separate concerns.
+The rules pertaining to `LIFETIME` exist to ensure that we don't
+create a borrowed pointer that outlives the memory it points at. So
+`LIFETIME` prevents a function like this:
+
+```
+fn get_1<'a>() -> &'a int {
+    let x = 1;
+    &x
+}
+```
+
+Here we would be returning a pointer into the stack. Clearly bad.
+
+However, the `RESTRICTIONS` rules are more concerned with how memory
+is used. The example above doesn't generate an error according to
+`RESTRICTIONS` because, for local variables, we don't require that the
+loan lifetime be a subset of the local variable lifetime. The idea
+here is that we *can* guarantee that `x` is not (e.g.) mutated for the
+lifetime `'a`, even though `'a` exceeds the function body and thus
+involves unknown code in the caller -- after all, `x` ceases to exist
+after we return and hence the remaining code in `'a` cannot possibly
+mutate it. This distinction is important for type checking functions
+like this one:
+
+```
+fn inc_and_get<'a>(p: &'a mut Point) -> &'a int {
+    p.x += 1;
+    &p.x
+}
+```
+
+In this case, we take in a `&mut` and return a frozen borrowed pointer
+with the same lifetime. So long as the lifetime of the returned value
+doesn't exceed the lifetime of the `&mut` we receive as input, this is
+fine, though it may seem surprising at first (it surprised me when I
+first worked it through). After all, we're guaranteeing that `*p`
+won't be mutated for the lifetime `'a`, even though we can't "see" the
+entirety of the code during that lifetime, since some of it occurs in
+our caller. But we *do* know that nobody can mutate `*p` except
+through `p`. So if we don't mutate `*p` and we don't return `p`, then
+we know that the right to mutate `*p` has been lost to our caller --
+in terms of capability, the caller passed in the ability to mutate
+`*p`, and we never gave it back. (Note that we can't return `p` while
+`*p` is borrowed since that would be a move of `p`, as `&mut` pointers
+are affine.)
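
A runnable version of `inc_and_get` in modern syntax (`i32` standing in for the old `int`):

```rust
struct Point { x: i32, y: i32 }

fn inc_and_get<'a>(p: &'a mut Point) -> &'a i32 {
    p.x += 1;
    &p.x
}

fn main() {
    let mut pt = Point { x: 1, y: 0 };
    let x_ref = inc_and_get(&mut pt);
    // While `x_ref` is live, `pt` stays borrowed: `*pt` cannot be
    // mutated except through the chain we were given.
    assert_eq!(*x_ref, 2);
    // Once the borrow ends, `pt` is usable again.
    pt.x += 10;
    assert_eq!(pt.x, 12);
}
```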
+
+### Restrictions for loans of const aliasable referents
+
+`&const` pointers are read-only. There may be `&mut` or `&` aliases, and
+we can not prevent *anything* but moves in that case. So the
+`RESTRICTIONS` function is only defined if `ACTIONS` is the empty set.
+Because moves from a `&const` lvalue are never legal, it is not
+necessary to add any restrictions at all to the final result.
+
+```text
+RESTRICTIONS(*LV, LT, []) = []                    // R-Deref-Freeze-Borrowed
+  TYPE(LV) = &const Ty
+```
+
+### Restrictions for loans of mutable borrowed referents
+
+Mutable borrowed pointers are guaranteed to be the only way to mutate
+their referent. This permits us to take greater license with them; for
+example, the referent can be frozen simply by ensuring that we do not
+use the original pointer to mutate it. Similarly, we can allow
+the referent to be claimed, so long as the original pointer is unused
+while the new claimant is live.
+
+The rule for mutable borrowed pointers is as follows:
+
+```text
+RESTRICTIONS(*LV, LT, ACTIONS) = RS, (*LV, ACTIONS)    // R-Deref-Mut-Borrowed
+  TYPE(LV) = &LT' mut Ty
+  LT <= LT'                                            // (1)
+  RESTRICTIONS(LV, LT, ACTIONS) = RS                   // (2)
+```
+
+Let's examine the two numbered clauses:
+
+Clause (1) specifies that the lifetime of the loan (`LT`) cannot
+exceed the lifetime of the `&mut` pointer (`LT'`). The reason for this
+is that the `&mut` pointer is guaranteed to be the only legal way to
+mutate its referent -- but only for the lifetime `LT'`.  After that
+lifetime, the loan on the referent expires and hence the data may be
+modified by its owner again. This implies that we are only able to
+guarantee that the referent will not be modified or aliased for a
+maximum of `LT'`.
+
+Here is a concrete example of a bug this rule prevents:
+
+```
+// Test region-reborrow-from-shorter-mut-ref.rs:
+fn copy_borrowed_ptr<'a,'b,T>(p: &'a mut &'b mut T) -> &'b mut T {
+    &mut **p // ERROR due to clause (1)
+}
+fn main() {
+    let mut x = 1;
+    let mut y = &mut x; // <-'b-----------------------------+
+    //      +-'a--------------------+                       |
+    //      v                       v                       |
+    let z = copy_borrowed_ptr(&mut y); // y is lent         |
+    *y += 1; // Here y==z, so both should not be usable...  |
+    *z += 1; // ...and yet they would be, but for clause 1. |
+} // <------------------------------------------------------+
+```
+
+Clause (2) propagates the restrictions on the referent to the pointer
+itself. This is the same as with an owned pointer, though the
+reasoning is mildly different. The basic goal in all cases is to
+prevent the user from establishing another route to the same data. To
+see what I mean, let's examine various cases of what can go wrong and
+show how it is prevented.
+
+**Example danger 1: Moving the base pointer.** One of the simplest
+ways to violate the rules is to move the base pointer to a new name
+and access it via that new name, thus bypassing the restrictions on
+the old name. Here is an example:
+
+```
+// src/test/compile-fail/borrowck-move-mut-base-ptr.rs
+fn foo(t0: &mut int) {
+    let p: &int = &*t0; // Freezes `*t0`
+    let t1 = t0;        //~ ERROR cannot move out of `t0`
+    *t1 = 22;           // OK, not a write through `*t0`
+}
+```
+
+Remember that `&mut` pointers are linear, and hence `let t1 = t0` is a
+move of `t0` -- or would be, if it were legal. Instead, we get an
+error, because clause (2) imposes restrictions on `LV` (`t0`, here),
+and any restrictions on a path make it impossible to move from that
+path.
+
+**Example danger 2: Claiming the base pointer.** Another possible
+danger is to mutably borrow the base path. This can lead to two bad
+scenarios. The most obvious is that the mutable borrow itself becomes
+another path to access the same data, as shown here:
+
+```
+// src/test/compile-fail/borrowck-mut-borrow-of-mut-base-ptr.rs
+fn foo<'a>(mut t0: &'a mut int,
+           mut t1: &'a mut int) {
+    let p: &int = &*t0;     // Freezes `*t0`
+    let mut t2 = &mut t0;   //~ ERROR cannot borrow `t0`
+    **t2 += 1;              // Mutates `*t0`
+}
+```
+
+In this example, `**t2` is the same memory as `*t0`. Because `t2` is
+an `&mut` pointer, `**t2` is a unique path and hence it would be
+possible to mutate `**t2` even though that memory was supposed to be
+frozen by the creation of `p`. However, an error is reported -- the
+reason is that the freeze `&*t0` will restrict claims and mutation
+against `*t0` which, by clause 2, in turn prevents claims and mutation
+of `t0`. Hence the claim `&mut t0` is illegal.
+
+Another danger with an `&mut` pointer is that we could swap the `t0`
+value away to create a new path:
+
+```
+// src/test/compile-fail/borrowck-swap-mut-base-ptr.rs
+fn foo<'a>(mut t0: &'a mut int,
+           mut t1: &'a mut int) {
+    let p: &int = &*t0;     // Freezes `*t0`
+    swap(&mut t0, &mut t1); //~ ERROR cannot borrow `t0`
+    *t1 = 22;
+}
+```
+
+This is illegal for the same reason as above. Note that if we added
+back a swap operator -- as we used to have -- we would want to be very
+careful to ensure this example is still illegal.
+
+**Example danger 3: Freeze the base pointer.** In the case where the
+referent is claimed, even freezing the base pointer can be dangerous,
+as shown in the following example:
+
+```
+// src/test/compile-fail/borrowck-borrow-of-mut-base-ptr.rs
+fn foo<'a>(mut t0: &'a mut int,
+           mut t1: &'a mut int) {
+    let p: &mut int = &mut *t0; // Claims `*t0`
+    let mut t2 = &t0;           //~ ERROR cannot borrow `t0`
+    let q: &int = &*t2;         // Freezes `*t0` but not through `*p`
+    *p += 1;                    // violates type of `*q`
+}
+```
+
+Here the problem is that `*t0` is claimed by `p`, and hence `p` wants
+to be the controlling pointer through which mutation or freezes occur.
+But `t2` would -- if it were legal -- have the type `& &mut int`, and
+hence would be a mutable pointer in an aliasable location, which is
+considered frozen (since no one can write to `**t2` as it is not a
+unique path). Therefore, we could reasonably create a frozen `&int`
+pointer pointing at `*t0` that coexists with the mutable pointer `p`,
+which is clearly unsound.
+
+However, it is not always unsafe to freeze the base pointer. In
+particular, if the referent is frozen, there is no harm in it:
+
+```
+// src/test/run-pass/borrowck-borrow-of-mut-base-ptr-safe.rs
+fn foo<'a>(mut t0: &'a mut int,
+           mut t1: &'a mut int) {
+    let p: &int = &*t0; // Freezes `*t0`
+    let mut t2 = &t0;
+    let q: &int = &*t2; // Freezes `*t0`, but that's ok...
+    let r: &int = &*t0; // ...after all, could do same thing directly.
+}
+```
+
+In this case, creating the alias `t2` of `t0` is safe because the only
+thing `t2` can be used for is to further freeze `*t0`, which is
+already frozen. In particular, we cannot assign to `*t0` through the
+new alias `t2`, as demonstrated in this test case:
+
+```
+// src/test/compile-fail/borrowck-borrow-mut-base-ptr-in-aliasable-loc.rs
+fn foo(t0: & &mut int) {
+    let t1 = t0;
+    let p: &int = &**t0;
+    **t1 = 22; //~ ERROR cannot assign
+}
+```
+
+This distinction is reflected in the rules. When doing an `&mut`
+borrow -- as in the first example -- the set `ACTIONS` will be
+`CLAIM|MUTATE|FREEZE`, because claiming the referent implies that it
+cannot be claimed, mutated, or frozen by anyone else. These
+restrictions are propagated back to the base path and hence the base
+path is considered unfreezable.
+
+In contrast, when the referent is merely frozen -- as in the second
+example -- the set `ACTIONS` will be `CLAIM|MUTATE`, because freezing
+the referent implies that it cannot be claimed or mutated but permits
+others to freeze. Hence when these restrictions are propagated back to
+the base path, it will still be considered freezable.
+
+**FIXME #10520: Restrictions against mutating the base pointer.** When
+an `&mut` pointer is frozen or claimed, we currently pass along the
+restriction against `MUTATE` to the base pointer. I do not believe this
+restriction is needed. It dates from the days when we had a way to
+mutate that preserved the value being mutated (i.e., swap). Nowadays
+the only form of mutation is assignment, which destroys the pointer
+being mutated -- therefore, a mutation cannot create a new path to the
+same data. Rather, it removes an existing path. This implies that not
+only can we permit mutation, we can have mutation kill restrictions in
+the dataflow sense.
+
+**WARNING:** We do not currently have `const` borrows in the
+language. If they are added back in, we must ensure that they are
+consistent with all of these examples. The crucial question will be
+what sorts of actions are permitted with a `&const &mut` pointer. I
+would suggest that an `&mut` referent found in an `&const` location be
+prohibited from both freezes and claims. This would avoid the need to
+prevent `const` borrows of the base pointer when the referent is
+borrowed.
+
+# Moves and initialization
+
+The borrow checker is also in charge of ensuring that:
+
+- all memory which is accessed is initialized
+- immutable local variables are assigned at most once.
+
+These are two separate dataflow analyses built on the same
+framework. Let's look at checking that memory is initialized first;
+the checking of immutable local variable assignments works in a very
+similar way.
+
+To track the initialization of memory, we actually track all the
+points in the program that *create uninitialized memory*, meaning
+moves and the declaration of uninitialized variables. For each of
+these points, we create a bit in the dataflow set. Assignments to a
+variable `x` or path `a.b.c` kill the move/uninitialization bits for
+those paths and any subpaths (e.g., `x`, `x.y`, `a.b.c`, `*a.b.c`).
+Bits are unioned when two control-flow paths join. Thus, the
+presence of a bit indicates that the move may have occurred without an
+intervening assignment to the same memory. At each use of a variable,
+we examine the bits in scope, and check that none of them are
+moves/uninitializations of the variable that is being used.
+
+Let's look at a simple example:
+
+```
+fn foo(a: Box<int>) {
+    let b: Box<int>;   // Gen bit 0.
+
+    if cond {          // Bits: 0
+        use(&*a);
+        b = a;         // Gen bit 1, kill bit 0.
+        use(&*b);
+    } else {
+                       // Bits: 0
+    }
+                       // Bits: 0,1
+    use(&*a);          // Error.
+    use(&*b);          // Error.
+}
+
+fn use(a: &int) { }
+```
+
+In this example, the variable `b` is created uninitialized. In one
+branch of an `if`, we then move the variable `a` into `b`. Once we
+exit the `if`, therefore, it is an error to use `a` or `b` since both
+are only conditionally initialized. I have annotated the dataflow
+state using comments. There are two dataflow bits, with bit 0
+corresponding to the creation of `b` without an initializer, and bit 1
+corresponding to the move of `a`. The assignment `b = a` both
+generates bit 1, because it is a move of `a`, and kills bit 0, because
+`b` is now initialized. On the else branch, though, `b` is never
+initialized, and so bit 0 remains untouched. When the two flows of
+control join, we union the bits from both sides, resulting in both
+bits 0 and 1 being set. Thus any attempt to use `a` uncovers the bit 1
+from the "then" branch, showing that `a` may be moved, and any attempt
+to use `b` uncovers bit 0, from the "else" branch, showing that `b`
+may not be initialized.
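
The gen/kill/union behaviour can be made concrete with a small sketch
(a toy model in modern Rust, not the compiler's actual dataflow
framework; the constant and function names here are invented purely
for illustration):

```rust
// Toy model of the move/uninitialization dataflow for the example above.
// Each bit marks a program point that creates uninitialized memory;
// assignments kill bits, and control-flow joins union them.
const UNINIT_B: u32 = 1 << 0; // bit 0: `let b;` with no initializer
const MOVE_A: u32 = 1 << 1;   // bit 1: the move `b = a`

fn dataflow_after_if() -> u32 {
    let entry = UNINIT_B;                         // after `let b;`: {0}
    let then_bits = (entry & !UNINIT_B) | MOVE_A; // `b = a`: gen 1, kill 0
    let else_bits = entry;                        // else branch: unchanged
    then_bits | else_bits                         // join = union: {0, 1}
}

fn main() {
    let bits = dataflow_after_if();
    // A use of `a` sees bit 1 (may be moved); a use of `b` sees bit 0
    // (may be uninitialized), so both uses are reported as errors.
    assert_eq!(bits, UNINIT_B | MOVE_A);
    println!("{:02b}", bits); // 11
}
```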
+
+## Initialization of immutable variables
+
+Initialization of immutable variables works in a very similar way,
+except that:
+
+1. we generate bits for each assignment to a variable;
+2. the bits are never killed except when the variable goes out of scope.
+
+Thus the presence of an assignment bit indicates that the assignment
+may have occurred. Note that assignments are only killed when the
+variable goes out of scope, as it is not relevant whether or not there
+has been a move in the meantime. Using these bits, we can declare that
+an assignment to an immutable variable is legal iff there is no other
+assignment bit to that same variable in scope.
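
In modern Rust, this same rule is what permits deferred initialization
of an immutable binding; a small illustrative sketch (the function and
its values are invented for this example):

```rust
// Deferred initialization: `x` is immutable yet assigned on each branch.
// The checker accepts this because no use of `x` can ever see two
// assignment bits for it in scope at once.
fn choose(cond: bool) -> i32 {
    let x; // no initializer: generates the "uninitialized" bit
    if cond {
        x = 1; // assignment bit from the then-branch
    } else {
        x = 2; // the two assignment bits are never in scope together
    }
    // A second `x = 3;` here would be rejected: an assignment bit for
    // `x` is already in scope on every incoming path.
    x
}

fn main() {
    println!("{}", choose(true) + choose(false)); // 3
}
```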
+
+## Why is the design made this way?
+
+It may seem surprising that we assign dataflow bits to *each move*
+rather than *each path being moved*. This is somewhat less efficient,
+since on each use, we must iterate through all moves and check whether
+any of them correspond to the path in question. Similar concerns apply
+to the analysis for double assignments to immutable variables. The
+main reason to do it this way is that it allows us to print better
+error messages, because when a use occurs, we can print out the
+precise move that may be in scope, rather than simply having to say
+"the variable may not be initialized".
+
+## Data structures used in the move analysis
+
+The move analysis maintains several data structures that enable it to
+cross-reference moves and assignments to determine when they may be
+moving/assigning the same memory. These are all collected into the
+`MoveData` and `FlowedMoveData` structs. The former represents the set
+of move paths, moves, and assignments, and the latter adds in the
+results of a dataflow computation.
+
+### Move paths
+
+The `MovePath` tree tracks every path that is moved or assigned to.
+These paths have the same form as the `LoanPath` data structure, which
+in turn is the "real world" version of the lvalues `LV` that we
+introduced earlier. The difference between a `MovePath` and a `LoanPath`
+is that move paths are:
+
+1. Canonicalized, so that we have exactly one copy of each, and
+   we can refer to move paths by index;
+2. Cross-referenced with other paths into a tree, so that given a move
+   path we can efficiently find all parent move paths and all
+   extensions (e.g., given the `a.b` move path, we can easily find the
+   move path `a` and also the move paths `a.b.c`);
+3. Cross-referenced with moves and assignments, so that we can
+   easily find all moves and assignments to a given path.
+
+The mechanism that we use is to create a `MovePath` record for each
+move path. These are arranged in an array and are referenced using
+`MovePathIndex` values, which are newtype'd indices. The `MovePath`
+structs are arranged into a tree, represented using the standard
+Knuth representation, where each node has a "child" pointer and a
+"next sibling" pointer. In addition, each `MovePath` has a "parent"
+pointer. In this case, the "pointers" are just `MovePathIndex`
+values.
+
+In this way, if we want to find all base paths of a given move path,
+we can just iterate up the parent pointers (see `each_base_path()` in
+the `move_data` module). If we want to find all extensions, we can
+iterate through the subtree (see `each_extending_path()`).
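
This index-based tree can be sketched as follows (a deliberately
simplified model in modern Rust; the real `move_data` structs carry
more fields, and the names here are abbreviated):

```rust
// Simplified move-path tree: nodes live in a Vec and refer to each
// other by index, mirroring the parent / child / next-sibling
// ("Knuth") representation described above.
#[derive(Clone, Copy, PartialEq)]
struct MovePathIndex(usize); // newtype'd index into the `paths` array

struct MovePath {
    loan_path: &'static str,       // stand-in for the real `LoanPath`
    parent: Option<MovePathIndex>, // base path, if any
    first_child: Option<MovePathIndex>,
    next_sibling: Option<MovePathIndex>,
}

// Walk up the parent pointers, visiting each base path of `index`
// (analogous to `each_base_path()` in the `move_data` module).
fn each_base_path(paths: &[MovePath], mut index: MovePathIndex, f: &mut dyn FnMut(&str)) {
    loop {
        f(paths[index.0].loan_path);
        match paths[index.0].parent {
            Some(parent) => index = parent,
            None => return,
        }
    }
}

fn main() {
    // Tree for the move paths `a`, `a.b`, and `a.b.c`.
    let paths = vec![
        MovePath { loan_path: "a", parent: None, first_child: Some(MovePathIndex(1)), next_sibling: None },
        MovePath { loan_path: "a.b", parent: Some(MovePathIndex(0)), first_child: Some(MovePathIndex(2)), next_sibling: None },
        MovePath { loan_path: "a.b.c", parent: Some(MovePathIndex(1)), first_child: None, next_sibling: None },
    ];
    let mut seen = Vec::new();
    each_base_path(&paths, MovePathIndex(2), &mut |p| seen.push(p.to_string()));
    assert_eq!(seen, ["a.b.c", "a.b", "a"]);
    println!("{:?}", seen);
}
```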
+
+### Moves and assignments
+
+There are structs to represent moves (`Move`) and assignments
+(`Assignment`), and these are also placed into arrays and referenced
+by index. All moves of a particular path are arranged into a linked
+list, beginning with `MovePath.first_move` and continuing through
+`Move.next_move`.
+
+We distinguish between "var" assignments, which are assignments to a
+variable like `x = foo`, and "path" assignments (`x.f = foo`).  This
+is because we need to assign dataflows to the former, but not the
+latter, so as to check for double initialization of immutable
+variables.
+
+### Gathering and checking moves
+
+Like loans, we distinguish two phases. The first, gathering, is where
+we uncover all the moves and assignments. As with loans, we do some
+basic sanity checking in this phase, so we'll report errors if you
+attempt to move out of a borrowed pointer etc. Then we do the dataflow
+(see `FlowedMoveData::new`). Finally, in the `check_loans.rs` code, we
+walk back over, identify all uses, assignments, and captures, and
+check that they are legal given the set of dataflow bits we have
+computed for that program point.
+
+# Drop flags and structural fragments
+
+In addition to the job of enforcing memory safety, the borrow checker
+code is also responsible for identifying the *structural fragments* of
+data in the function, to support out-of-band dynamic drop flags
+allocated on the stack. (For background, see [RFC PR #320].)
+
+[RFC PR #320]: https://github.com/rust-lang/rfcs/pull/320
+
+Semantically, each piece of data that has a destructor may need a
+boolean flag to indicate whether or not its destructor has been run
+yet. However, in many cases there is no need to actually maintain such
+a flag: It can be apparent from the code itself that a given path is
+always initialized (or always deinitialized) when control reaches the
+end of its owner's scope, and thus we can unconditionally emit (or
+not) the destructor invocation for that path.
+
+A simple example of this is the following:
+
+```rust
+struct D { p: int }
+impl D { fn new(x: int) -> D { ... } }
+impl Drop for D { ... }
+
+fn foo(a: D, b: D, t: || -> bool) {
+    let c: D;
+    let d: D;
+    if t() { c = b; }
+}
+```
+
+At the end of the body of `foo`, the compiler knows that `a` is
+initialized, introducing a drop obligation (running `D`'s destructor)
+for the end of `a`'s scope that is run unconditionally.
+Likewise the compiler knows that `d` is not initialized, and thus it
+leaves out the drop code for `d`.
+
+The compiler cannot statically know the drop-state of `b` or `c` at
+the end of their scope, since that depends on the value of
+`t`. Therefore, we need to insert boolean flags to track whether we
+need to drop `b` and `c`.
+
+However, the matter is not as simple as just mapping local variables
+to their corresponding drop flags when necessary. In particular, in
+addition to being able to move data out of local variables, Rust
+allows one to move values in and out of structured data.
+
+Consider the following:
+
+```rust
+struct S { x: D, y: D, z: D }
+
+fn foo(a: S, mut b: S, t: || -> bool) {
+    let mut c: S;
+    let d: S;
+    let e: S = a.clone();
+    if t() {
+        c = b;
+        b.x = e.y;
+    }
+    if t() { c.y = D::new(4); }
+}
+```
+
+As before, the drop obligations of `a` and `d` can be statically
+determined, and again the drop-states of `b` and `c` depend on dynamic
+state. But additionally, the dynamic drop obligations introduced by
+`b` and `c` are not just per-local boolean flags. For example, if the
+first call to `t` returns `false` and the second call `true`, then at
+the end of their scope, `b` will be completely initialized, but only
+`c.y` in `c` will be initialized.  If both calls to `t` return `true`,
+then at the end of their scope, `c` will be completely initialized,
+but only `b.x` will be initialized in `b`, and only `e.x` and `e.z`
+will be initialized in `e`.
+
+Note that we need to cover the `z` field in each case in some way,
+since it may (or may not) need to be dropped, even though `z` is never
+directly mentioned in the body of the `foo` function. We call a path
+like `b.z` a *fragment sibling* of `b.x`, since the field `z` comes
+from the same structure `S` that declared the field `x` in `b.x`.
+
+In general we need to maintain boolean flags that match the
+`S`-structure of both `b` and `c`.  In addition, we need to consult
+such a flag when doing an assignment (such as `c.y = D::new(4);`
+above), in order to know whether or not there is a previous value that
+needs to be dropped before we do the assignment.
+
+So for any given function, we need to determine what flags are needed
+to track its drop obligations. Our strategy for determining the set of
+flags is to represent the fragmentation of the structure explicitly:
+by starting initially from the paths that are explicitly mentioned in
+moves and assignments (such as `b.x` and `c.y` above), and then
+traversing the structure of the path's type to identify leftover
+*unmoved fragments*: assigning into `c.y` means that `c.x` and `c.z`
+are leftover unmoved fragments. Each fragment represents a drop
+obligation that may need to be tracked. Paths that are only moved or
+assigned in their entirety (like `a` and `d`) are treated as a single
+drop obligation.
+
+The fragment construction process works by piggy-backing on the
+existing `move_data` module. We already have callbacks that visit each
+direct move and assignment; these form the basis for the sets of
+`moved_leaf_paths` and `assigned_leaf_paths`. From these leaves, we can
+walk up their parent chain to identify all of their parent paths.
+We need to identify the parents because of cases like the following:
+
+```rust
+struct Pair<X,Y>{ x: X, y: Y }
+fn foo(dd_d_d: Pair<Pair<Pair<D, D>, D>, D>) {
+    other_function(dd_d_d.x.y);
+}
+```
+
+In this code, the move of the path `dd_d_d.x.y` leaves behind not only
+the fragment drop-obligation `dd_d_d.x.x` but also `dd_d_d.y`.
+
+Once we have identified the directly-referenced leaves and their
+parents, we compute the left-over fragments, in the function
+`fragments::add_fragment_siblings`. As of this writing this works by
+looking at each directly-moved or assigned path P, and blindly
+gathering all sibling fields of P (as well as siblings for the parents
+of P, etc). After accumulating all such siblings, we filter out the
+entries added as siblings of P that turned out to be
+directly-referenced paths (or parents of directly referenced paths)
+themselves, thus leaving the never-referenced "left-overs" as the only
+thing left from the gathering step.
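
The gather-then-filter step can be sketched like so (a toy model that
assumes paths are plain strings and a single struct's field list; the
real `add_fragment_siblings` works over move-path indices and types,
and the names below are invented):

```rust
use std::collections::BTreeSet;

// Toy fragment-sibling computation: for each directly-referenced path
// like "c.y", blindly gather its sibling fields ("c.x", "c.z"), then
// filter out anything that was itself directly referenced.
fn fragment_siblings(direct: &[&str], fields: &[&str]) -> BTreeSet<String> {
    let mut gathered = BTreeSet::new();
    for path in direct {
        if let Some((base, field)) = path.rsplit_once('.') {
            for f in fields {
                if f != &field {
                    gathered.insert(format!("{}.{}", base, f)); // add siblings
                }
            }
        }
    }
    // Filter step: drop entries that are directly-referenced themselves,
    // leaving only the never-referenced "left-overs".
    for path in direct {
        gathered.remove(*path);
    }
    gathered
}

fn main() {
    // struct S { x, y, z }: assigning into `c.y` leaves `c.x` and `c.z`
    // as unmoved fragments.
    let left = fragment_siblings(&["c.y"], &["x", "y", "z"]);
    assert_eq!(left.into_iter().collect::<Vec<_>>(), ["c.x", "c.z"]);
}
```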
+
+## Array structural fragments
+
+A special case of the structural fragments discussed above are
+the elements of an array that has been passed by value, such as
+the following:
+
+```rust
+fn foo(a: [D; 10], i: uint) -> D {
+    a[i]
+}
+```
+
+The above code moves a single element out of the input array `a`.
+The remainder of the array still needs to be dropped; i.e., it
+is a structural fragment. Note that after performing such a move,
+it is not legal to read from the array `a`. There are a number of
+ways to deal with this, but the important thing to note is that
+the semantics needs to distinguish in some manner between a
+fragment that is the *entire* array versus a fragment that represents
+all-but-one element of the array.  A place where that distinction
+would arise is the following:
+
+```rust
+fn foo(a: [D; 10], b: [D; 10], i: uint, t: bool) -> D {
+    if t {
+        a[i]
+    } else {
+        b[i]
+    }
+
+    // When control exits, we will need either to drop all of `a`
+    // and all-but-one of `b`, or to drop all of `b` and all-but-one
+    // of `a`.
+}
+```
+
+There are a number of ways that the trans backend could choose to
+compile this (e.g. a `[bool; 10]` array for each such moved array;
+or an `Option<uint>` for each moved array).  From the viewpoint of the
+borrow-checker, the important thing is to record what kind of fragment
+is implied by the relevant moves.
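
The first of those representations, a boolean flag per array element,
can be sketched as follows (purely illustrative modern Rust; trans
does not literally emit this code, and the helper name is invented):

```rust
// Toy model of per-element drop flags for a by-value array: moving
// element `i` out clears its flag, and cleanup then drops only the
// still-flagged elements (the "all-but-one" fragment of the array).
fn cleanup_count(flags: &[bool]) -> usize {
    flags.iter().filter(|&&alive| alive).count() // elements still to drop
}

fn main() {
    let mut flags = [true; 10]; // fragment: the *entire* array `a`
    let i = 3;
    flags[i] = false; // `a[i]` was moved out; fragment is all-but-one
    assert_eq!(cleanup_count(&flags), 9);
    println!("{}", cleanup_count(&flags)); // 9
}
```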
+
+# Future work
+
+While writing up these docs, I encountered some rules I believe to be
+stricter than necessary:
+
+- I think restricting the `&mut` LV against moves and `ALIAS` is sufficient;
+  `MUTATE` and `CLAIM` are overkill. `MUTATE` was necessary when swap was
+  a built-in operator, but as it is not, it is implied by `CLAIM`,
+  and `CLAIM` is implied by `ALIAS`. The only net effect of this is an
+  extra error message in some cases, though.
+- I have not described how closures interact. Current code is unsound.
+  I am working on describing and implementing the fix.
+- If we wish, we can easily extend the move checking to allow finer-grained
+  tracking of what is initialized and what is not, enabling code like
+  this:
+
+      a = x.f.g; // x.f.g is now uninitialized
+      // here, x and x.f are not usable, but x.f.h *is*
+      x.f.g = b; // x.f.g is now initialized
+      // now x, x.f, x.f.g, x.f.h are all usable
+
+  What needs to change here, most likely, is that the `moves` module
+  should record not only what paths are moved, but what expressions
+  are actual *uses*. For example, the reference to `x` in `x.f.g = b`
+  is not a true *use* in the sense that it requires `x` to be fully
+  initialized. This is in fact why the above code produces an error
+  today: the reference to `x` in `x.f.g = b` is considered illegal
+  because `x` is not fully initialized.
+
+There are also some possible refactorings:
+
+- It might be nice to replace all loan paths with the `MovePath` mechanism,
+  since move paths allow lightweight comparison using an integer index.
diff --git a/src/librustc_borrowck/borrowck/doc.rs b/src/librustc_borrowck/borrowck/doc.rs
deleted file mode 100644
index 682a5f2f5ac..00000000000
--- a/src/librustc_borrowck/borrowck/doc.rs
+++ /dev/null
@@ -1,1222 +0,0 @@
-// Copyright 2012 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-//! # The Borrow Checker
-//!
-//! This pass has the job of enforcing memory safety. This is a subtle
-//! topic. This docs aim to explain both the practice and the theory
-//! behind the borrow checker. They start with a high-level overview of
-//! how it works, and then proceed to dive into the theoretical
-//! background. Finally, they go into detail on some of the more subtle
-//! aspects.
-//!
-//! # Table of contents
-//!
-//! These docs are long. Search for the section you are interested in.
-//!
-//! - Overview
-//! - Formal model
-//! - Borrowing and loans
-//! - Moves and initialization
-//! - Drop flags and structural fragments
-//! - Future work
-//!
-//! # Overview
-//!
-//! The borrow checker checks one function at a time. It operates in two
-//! passes. The first pass, called `gather_loans`, walks over the function
-//! and identifies all of the places where borrows (e.g., `&` expressions
-//! and `ref` bindings) and moves (copies or captures of a linear value)
-//! occur. It also tracks initialization sites. For each borrow and move,
-//! it checks various basic safety conditions at this time (for example,
-//! that the lifetime of the borrow doesn't exceed the lifetime of the
-//! value being borrowed, or that there is no move out of an `&T`
-//! referent).
-//!
-//! It then uses the dataflow module to propagate which of those borrows
-//! may be in scope at each point in the procedure. A loan is considered
-//! to come into scope at the expression that caused it and to go out of
-//! scope when the lifetime of the resulting reference expires.
-//!
-//! Once the in-scope loans are known for each point in the program, the
-//! borrow checker walks the IR again in a second pass called
-//! `check_loans`. This pass examines each statement and makes sure that
-//! it is safe with respect to the in-scope loans.
-//!
-//! # Formal model
-//!
-//! Throughout the docs we'll consider a simple subset of Rust in which
-//! you can only borrow from lvalues, defined like so:
-//!
-//! ```text
-//! LV = x | LV.f | *LV
-//! ```
-//!
-//! Here `x` represents some variable, `LV.f` is a field reference,
-//! and `*LV` is a pointer dereference. There is no auto-deref or other
-//! niceties. This means that if you have a type like:
-//!
-//! ```text
-//! struct S { f: uint }
-//! ```
-//!
-//! and a variable `a: Box<S>`, then the rust expression `a.f` would correspond
-//! to an `LV` of `(*a).f`.
-//!
-//! Here is the formal grammar for the types we'll consider:
-//!
-//! ```text
-//! TY = () | S<'LT...> | Box<TY> | & 'LT MQ TY
-//! MQ = mut | imm | const
-//! ```
-//!
-//! Most of these types should be pretty self explanatory. Here `S` is a
-//! struct name and we assume structs are declared like so:
-//!
-//! ```text
-//! SD = struct S<'LT...> { (f: TY)... }
-//! ```
-//!
-//! # Borrowing and loans
-//!
-//! ## An intuitive explanation
-//!
-//! ### Issuing loans
-//!
-//! Now, imagine we had a program like this:
-//!
-//! ```text
-//! struct Foo { f: uint, g: uint }
-//! ...
-//! 'a: {
-//!   let mut x: Box<Foo> = ...;
-//!   let y = &mut (*x).f;
-//!   x = ...;
-//! }
-//! ```
-//!
-//! This is of course dangerous because mutating `x` will free the old
-//! value and hence invalidate `y`. The borrow checker aims to prevent
-//! this sort of thing.
-//!
-//! #### Loans and restrictions
-//!
-//! The way the borrow checker works is that it analyzes each borrow
-//! expression (in our simple model, that's stuff like `&LV`, though in
-//! real life there are a few other cases to consider). For each borrow
-//! expression, it computes a `Loan`, which is a data structure that
-//! records (1) the value being borrowed, (2) the mutability and scope of
-//! the borrow, and (3) a set of restrictions. In the code, `Loan` is a
-//! struct defined in `middle::borrowck`. Formally, we define `LOAN` as
-//! follows:
-//!
-//! ```text
-//! LOAN = (LV, LT, MQ, RESTRICTION*)
-//! RESTRICTION = (LV, ACTION*)
-//! ACTION = MUTATE | CLAIM | FREEZE
-//! ```
-//!
-//! Here the `LOAN` tuple defines the lvalue `LV` being borrowed; the
-//! lifetime `LT` of that borrow; the mutability `MQ` of the borrow; and a
-//! list of restrictions. The restrictions indicate actions which, if
-//! taken, could invalidate the loan and lead to type safety violations.
-//!
-//! Each `RESTRICTION` is a pair of a restrictive lvalue `LV` (which will
-//! either be the path that was borrowed or some prefix of the path that
-//! was borrowed) and a set of restricted actions.  There are three kinds
-//! of actions that may be restricted for the path `LV`:
-//!
-//! - `MUTATE` means that `LV` cannot be assigned to;
-//! - `CLAIM` means that the `LV` cannot be borrowed mutably;
-//! - `FREEZE` means that the `LV` cannot be borrowed immutably;
-//!
-//! Finally, it is never possible to move from an lvalue that appears in a
-//! restriction. This implies that the "empty restriction" `(LV, [])`,
-//! which contains an empty set of actions, still has a purpose---it
-//! prevents moves from `LV`. I chose not to make `MOVE` a fourth kind of
-//! action because that would imply that sometimes moves are permitted
-//! from restricted values, which is not the case.
-//!
-//! #### Example
-//!
-//! To give you a better feeling for what kind of restrictions derived
-//! from a loan, let's look at the loan `L` that would be issued as a
-//! result of the borrow `&mut (*x).f` in the example above:
-//!
-//! ```text
-//! L = ((*x).f, 'a, mut, RS) where
-//!     RS = [((*x).f, [MUTATE, CLAIM, FREEZE]),
-//!           (*x, [MUTATE, CLAIM, FREEZE]),
-//!           (x, [MUTATE, CLAIM, FREEZE])]
-//! ```
-//!
-//! The loan states that the expression `(*x).f` has been loaned as
-//! mutable for the lifetime `'a`. Because the loan is mutable, that means
-//! that the value `(*x).f` may be mutated via the newly created reference
-//! (and *only* via that pointer). This is reflected in the
-//! restrictions `RS` that accompany the loan.
-//!
-//! The first restriction `((*x).f, [MUTATE, CLAIM, FREEZE])` states that
-//! the lender may not mutate, freeze, nor alias `(*x).f`. Mutation is
-//! illegal because `(*x).f` is only supposed to be mutated via the new
-//! reference, not by mutating the original path `(*x).f`. Freezing is
-//! illegal because the path now has an `&mut` alias; so even if we the
-//! lender were to consider `(*x).f` to be immutable, it might be mutated
-//! via this alias. They will be enforced for the lifetime `'a` of the
-//! loan. After the loan expires, the restrictions no longer apply.
-//!
-//! The second restriction on `*x` is interesting because it does not
-//! apply to the path that was lent (`(*x).f`) but rather to a prefix of
-//! the borrowed path. This is due to the rules of inherited mutability:
-//! if the user were to assign to (or freeze) `*x`, they would indirectly
-//! overwrite (or freeze) `(*x).f`, and thus invalidate the reference
-//! that was created. In general it holds that when a path is
-//! lent, restrictions are issued for all the owning prefixes of that
-//! path. In this case, the path `*x` owns the path `(*x).f` and,
-//! because `x` is an owned pointer, the path `x` owns the path `*x`.
-//! Therefore, borrowing `(*x).f` yields restrictions on both
-//! `*x` and `x`.
-//!
-//! ### Checking for illegal assignments, moves, and reborrows
-//!
-//! Once we have computed the loans introduced by each borrow, the borrow
-//! checker uses a data flow propagation to compute the full set of loans
-//! in scope at each expression and then uses that set to decide whether
-//! that expression is legal.  Remember that the scope of loan is defined
-//! by its lifetime LT.  We sometimes say that a loan which is in-scope at
-//! a particular point is an "outstanding loan", and the set of
-//! restrictions included in those loans as the "outstanding
-//! restrictions".
-//!
-//! The kinds of expressions which in-scope loans can render illegal are:
-//! - *assignments* (`lv = v`): illegal if there is an in-scope restriction
-//!   against mutating `lv`;
-//! - *moves*: illegal if there is any in-scope restriction on `lv` at all;
-//! - *mutable borrows* (`&mut lv`): illegal there is an in-scope restriction
-//!   against claiming `lv`;
-//! - *immutable borrows* (`&lv`): illegal there is an in-scope restriction
-//!   against freezing `lv`.
-//!
-//! ## Formal rules
-//!
-//! Now that we hopefully have some kind of intuitive feeling for how the
-//! borrow checker works, let's look a bit more closely now at the precise
-//! conditions that it uses. For simplicity I will ignore const loans.
-//!
-//! I will present the rules in a modified form of standard inference
-//! rules, which looks as follows:
-//!
-//! ```text
-//! PREDICATE(X, Y, Z)                  // Rule-Name
-//!   Condition 1
-//!   Condition 2
-//!   Condition 3
-//! ```
-//!
-//! The initial line states the predicate that is to be satisfied.  The
-//! indented lines indicate the conditions that must be met for the
-//! predicate to be satisfied. The right-justified comment states the name
-//! of this rule: there are comments in the borrowck source referencing
-//! these names, so that you can cross reference to find the actual code
-//! that corresponds to the formal rule.
-//!
-//! ### Invariants
-//!
-//! I want to collect, at a high-level, the invariants the borrow checker
-//! maintains. I will give them names and refer to them throughout the
-//! text. Together these invariants are crucial for the overall soundness
-//! of the system.
-//!
-//! **Mutability requires uniqueness.** To mutate a path
-//!
-//! **Unique mutability.** There is only one *usable* mutable path to any
-//! given memory at any given time. This implies that when claiming memory
-//! with an expression like `p = &mut x`, the compiler must guarantee that
-//! the borrowed value `x` can no longer be mutated so long as `p` is
-//! live. (This is done via restrictions, read on.)
-//!
-//! **.**
-//!
-//!
-//! ### The `gather_loans` pass
-//!
-//! We start with the `gather_loans` pass, which walks the AST looking for
-//! borrows.  For each borrow, there are three bits of information: the
-//! lvalue `LV` being borrowed and the mutability `MQ` and lifetime `LT`
-//! of the resulting pointer. Given those, `gather_loans` applies four
-//! validity tests:
-//!
-//! 1. `MUTABILITY(LV, MQ)`: The mutability of the reference is
-//! compatible with the mutability of `LV` (i.e., not borrowing immutable
-//! data as mutable).
-//!
-//! 2. `ALIASABLE(LV, MQ)`: The aliasability of the reference is
-//! compatible with the aliasability of `LV`. The goal is to prevent
-//! `&mut` borrows of aliasability data.
-//!
-//! 3. `LIFETIME(LV, LT, MQ)`: The lifetime of the borrow does not exceed
-//! the lifetime of the value being borrowed.
-//!
-//! 4. `RESTRICTIONS(LV, LT, ACTIONS) = RS`: This pass checks and computes the
-//! restrictions to maintain memory safety. These are the restrictions
-//! that will go into the final loan. We'll discuss in more detail below.
-//!
-//! ## Checking mutability
-//!
-//! Checking mutability is fairly straightforward. We just want to prevent
-//! immutable data from being borrowed as mutable. Note that it is ok to
-//! borrow mutable data as immutable, since that is simply a
-//! freeze. Formally we define a predicate `MUTABLE(LV, MQ)` which, if
-//! defined, means that "borrowing `LV` with mutability `MQ` is ok. The
-//! Rust code corresponding to this predicate is the function
-//! `check_mutability` in `middle::borrowck::gather_loans`.
-//!
-//! ### Checking mutability of variables
-//!
-//! *Code pointer:* Function `check_mutability()` in `gather_loans/mod.rs`,
-//! but also the code in `mem_categorization`.
-//!
-//! Let's begin with the rules for variables, which state that if a
-//! variable is declared as mutable, it may be borrowed any which way, but
-//! otherwise the variable must be borrowed as immutable or const:
-//!
-//! ```text
-//! MUTABILITY(X, MQ)                   // M-Var-Mut
-//!   DECL(X) = mut
-//!
-//! MUTABILITY(X, MQ)                   // M-Var-Imm
-//!   DECL(X) = imm
-//!   MQ = imm | const
-//! ```
-//!
-//! ### Checking mutability of owned content
-//!
-//! Fields and owned pointers inherit their mutability from
-//! their base expressions, so both of their rules basically
-//! delegate the check to the base expression `LV`:
-//!
-//! ```text
-//! MUTABILITY(LV.f, MQ)                // M-Field
-//!   MUTABILITY(LV, MQ)
-//!
-//! MUTABILITY(*LV, MQ)                 // M-Deref-Unique
-//!   TYPE(LV) = Box<Ty>
-//!   MUTABILITY(LV, MQ)
-//! ```
-//!
-//! ### Checking mutability of immutable pointer types
-//!
-//! Immutable pointer types like `&T` can only
-//! be borrowed if MQ is immutable or const:
-//!
-//! ```text
-//! MUTABILITY(*LV, MQ)                // M-Deref-Borrowed-Imm
-//!   TYPE(LV) = &Ty
-//!   MQ == imm | const
-//! ```
-//!
-//! ### Checking mutability of mutable pointer types
-//!
-//! `&mut T` can be frozen, so it is acceptable to borrow it as either imm or mut:
-//!
-//! ```text
-//! MUTABILITY(*LV, MQ)                 // M-Deref-Borrowed-Mut
-//!   TYPE(LV) = &mut Ty
-//! ```
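The four mutability rules above can be read as one recursive predicate. Here is a minimal sketch in modern Rust, using an invented `Lv` type to stand in for lvalues; the names and encoding are illustrative, not the compiler's actual data structures:

```rust
// Toy model of MUTABILITY(LV, MQ); `Lv` and `Mq` are illustrative inventions.
#[derive(Clone, Copy, PartialEq)]
enum Mq { Imm, Mut, Const }

enum Lv {
    Var { decl_mut: bool },   // X, declared `mut` or not
    Field(Box<Lv>),           // LV.f
    DerefBox(Box<Lv>),        // *LV where TYPE(LV) = Box<Ty>
    DerefRef { mutbl: bool }, // *LV where TYPE(LV) = &Ty or &mut Ty
}

fn mutability_ok(lv: &Lv, mq: Mq) -> bool {
    match lv {
        Lv::Var { decl_mut: true } => true,             // M-Var-Mut
        Lv::Var { decl_mut: false } => mq != Mq::Mut,   // M-Var-Imm
        Lv::Field(base) | Lv::DerefBox(base) =>         // M-Field,
            mutability_ok(base, mq),                    // M-Deref-Unique
        Lv::DerefRef { mutbl: false } => mq != Mq::Mut, // M-Deref-Borrowed-Imm
        Lv::DerefRef { mutbl: true } => true,           // M-Deref-Borrowed-Mut
    }
}

fn main() {
    // An immutable local cannot be borrowed mutably...
    assert!(!mutability_ok(&Lv::Var { decl_mut: false }, Mq::Mut));
    // ...but a field of a mutable local inherits its mutability.
    assert!(mutability_ok(&Lv::Field(Box::new(Lv::Var { decl_mut: true })), Mq::Mut));
    // `*LV` with LV of type &Ty only admits imm/const borrows.
    assert!(mutability_ok(&Lv::DerefRef { mutbl: false }, Mq::Const));
    assert!(!mutability_ok(&Lv::DerefRef { mutbl: false }, Mq::Mut));
}
```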
-//!
-//! ## Checking aliasability
-//!
-//! The goal of the aliasability check is to ensure that we never permit
-//! `&mut` borrows of aliasable data. Formally we define a predicate
-//! `ALIASABLE(LV, MQ)` which if defined means that
-//! "borrowing `LV` with mutability `MQ` is ok". The
-//! Rust code corresponding to this predicate is the function
-//! `check_aliasability()` in `middle::borrowck::gather_loans`.
-//!
-//! ### Checking aliasability of variables
-//!
-//! Local variables are never aliasable as they are accessible only within
-//! the stack frame.
-//!
-//! ```text
-//!     ALIASABLE(X, MQ)                   // M-Var
-//! ```
-//!
-//! ### Checking aliasability of owned content
-//!
-//! Owned content is aliasable if it is found in an aliasable location:
-//!
-//! ```text
-//! ALIASABLE(LV.f, MQ)                // M-Field
-//!   ALIASABLE(LV, MQ)
-//!
-//! ALIASABLE(*LV, MQ)                 // M-Deref-Unique
-//!   ALIASABLE(LV, MQ)
-//! ```
-//!
-//! ### Checking aliasability of immutable pointer types
-//!
-//! Immutable pointer types like `&T` are aliasable, and hence can only be
-//! borrowed immutably:
-//!
-//! ```text
-//! ALIASABLE(*LV, imm)                // M-Deref-Borrowed-Imm
-//!   TYPE(LV) = &Ty
-//! ```
-//!
-//! ### Checking aliasability of mutable pointer types
-//!
-//! `&mut T` is not aliasable, since it is a unique path, so it is
-//! acceptable to borrow it with any mutability:
-//!
-//! ```text
-//! ALIASABLE(*LV, MQ)                 // M-Deref-Borrowed-Mut
-//!   TYPE(LV) = &mut Ty
-//! ```
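As with mutability, these rules form a small recursive predicate. A sketch, again over an invented lvalue type rather than the compiler's real one:

```rust
// Toy model of ALIASABLE(LV, MQ); names are illustrative only.
#[derive(Clone, Copy, PartialEq)]
enum Mq { Imm, Mut, Const }

enum Lv {
    Var,                      // locals live on the stack: never aliasable
    Field(Box<Lv>),           // aliasable iff its owner is (M-Field)
    DerefBox(Box<Lv>),        // likewise for owned content (M-Deref-Unique)
    DerefRef { mutbl: bool }, // *LV where TYPE(LV) = &Ty or &mut Ty
}

fn aliasable_ok(lv: &Lv, mq: Mq) -> bool {
    match lv {
        Lv::Var => true,
        Lv::Field(base) | Lv::DerefBox(base) => aliasable_ok(base, mq),
        // &Ty is aliasable, so only immutable borrows are allowed
        Lv::DerefRef { mutbl: false } => mq == Mq::Imm,
        // &mut Ty is a unique path, so any mutability is fine
        Lv::DerefRef { mutbl: true } => true,
    }
}

fn main() {
    // An `&mut` borrow through a `&Ty` pointer is rejected...
    assert!(!aliasable_ok(&Lv::DerefRef { mutbl: false }, Mq::Mut));
    // ...while a local variable can be borrowed any way at all.
    assert!(aliasable_ok(&Lv::Var, Mq::Mut));
}
```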
-//!
-//! ## Checking lifetime
-//!
-//! These rules aim to ensure that no data is borrowed for a scope that exceeds
-//! its lifetime. These two computations wind up being intimately related.
-//! Formally, we define a predicate `LIFETIME(LV, LT, MQ)`, which states that
-//! "the lvalue `LV` can be safely borrowed for the lifetime `LT` with mutability
-//! `MQ`". The Rust code corresponding to this predicate is the module
-//! `middle::borrowck::gather_loans::lifetime`.
-//!
-//! ### The Scope function
-//!
-//! Several of the rules refer to a helper function `SCOPE(LV)=LT`.
-//! `SCOPE(LV)` yields the lifetime `LT` for which the lvalue `LV` is
-//! guaranteed to exist, presuming that no mutations occur.
-//!
-//! The scope of a local variable is the block where it is declared:
-//!
-//! ```text
-//!   SCOPE(X) = block where X is declared
-//! ```
-//!
-//! The scope of a field is the scope of the struct:
-//!
-//! ```text
-//!   SCOPE(LV.f) = SCOPE(LV)
-//! ```
-//!
-//! The scope of a unique referent is the scope of the pointer, since
-//! (barring mutation or moves) the referent will not be freed until
-//! the pointer itself `LV` goes out of scope:
-//!
-//! ```text
-//!   SCOPE(*LV) = SCOPE(LV) if LV has type Box<T>
-//! ```
-//!
-//! The scope of a borrowed referent is the scope associated with the
-//! pointer.  This is a conservative approximation, since the data that
-//! the pointer points at may actually live longer:
-//!
-//! ```text
-//!   SCOPE(*LV) = LT if LV has type &'LT T or &'LT mut T
-//! ```
-//!
-//! ### Checking lifetime of variables
-//!
-//! The rule for variables states that a variable can only be borrowed
-//! for a lifetime `LT` that is a subregion of the variable's scope:
-//!
-//! ```text
-//! LIFETIME(X, LT, MQ)                 // L-Local
-//!   LT <= SCOPE(X)
-//! ```
-//!
-//! ### Checking lifetime for owned content
-//!
-//! The lifetime of a field or owned pointer is the same as the lifetime
-//! of its owner:
-//!
-//! ```text
-//! LIFETIME(LV.f, LT, MQ)              // L-Field
-//!   LIFETIME(LV, LT, MQ)
-//!
-//! LIFETIME(*LV, LT, MQ)               // L-Deref-Send
-//!   TYPE(LV) = Box<Ty>
-//!   LIFETIME(LV, LT, MQ)
-//! ```
-//!
-//! ### Checking lifetime for derefs of references
-//!
-//! References have a lifetime `LT'` associated with them.  The
-//! data they point at has been guaranteed to be valid for at least this
-//! lifetime. Therefore, the borrow is valid so long as the lifetime `LT`
-//! of the borrow is shorter than the lifetime `LT'` of the pointer
-//! itself:
-//!
-//! ```text
-//! LIFETIME(*LV, LT, MQ)               // L-Deref-Borrowed
-//!   TYPE(LV) = &LT' Ty OR &LT' mut Ty
-//!   LT <= LT'
-//! ```
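The `SCOPE` and `LIFETIME` rules can be sketched the same way. Below, lifetimes are modeled as plain integers where a larger value means "lives at least as long"; that total order is an assumption of the sketch, standing in for the compiler's region containment relation:

```rust
// Toy model of SCOPE(LV) and LIFETIME(LV, LT, MQ); illustrative only.
type Lt = u32; // larger value = lives longer

enum Lv {
    Var { scope: Lt },   // SCOPE(X) = block where X is declared
    Field(Box<Lv>),      // SCOPE(LV.f) = SCOPE(LV)
    DerefBox(Box<Lv>),   // SCOPE(*LV) = SCOPE(LV) for Box<Ty>
    DerefRef { lt: Lt }, // SCOPE(*LV) = LT for &'LT Ty / &'LT mut Ty
}

fn scope(lv: &Lv) -> Lt {
    match lv {
        Lv::Var { scope } => *scope,
        Lv::Field(base) | Lv::DerefBox(base) => scope(base),
        Lv::DerefRef { lt } => *lt,
    }
}

// LIFETIME recurses through owned content (L-Field, L-Deref-Send) and
// bottoms out at the L-Local and L-Deref-Borrowed comparisons.
fn lifetime_ok(lv: &Lv, borrow_lt: Lt) -> bool {
    match lv {
        Lv::Var { scope } => borrow_lt <= *scope, // L-Local
        Lv::Field(base) | Lv::DerefBox(base) => lifetime_ok(base, borrow_lt),
        Lv::DerefRef { lt } => borrow_lt <= *lt,  // L-Deref-Borrowed
    }
}

fn main() {
    // Borrowing a field of a local for longer than the local's scope fails.
    let field = Lv::Field(Box::new(Lv::Var { scope: 3 }));
    assert_eq!(scope(&field), 3);
    assert!(lifetime_ok(&field, 3));
    assert!(!lifetime_ok(&field, 4));
    // A reborrow through an &'5 Ty pointer is fine for any LT <= 5.
    assert!(lifetime_ok(&Lv::DerefRef { lt: 5 }, 4));
}
```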
-//!
-//! ## Computing the restrictions
-//!
-//! The final rules govern the computation of *restrictions*, meaning that
-//! we compute the set of actions that will be illegal for the life of the
-//! loan. The predicate is written `RESTRICTIONS(LV, LT, ACTIONS) =
-//! RESTRICTION*`, which can be read "in order to prevent `ACTIONS` from
-//! occurring on `LV`, the restrictions `RESTRICTION*` must be respected
-//! for the lifetime of the loan".
-//!
-//! Note that there is an initial set of restrictions: these restrictions
-//! are computed based on the kind of borrow:
-//!
-//! ```text
-//! &mut LV =>   RESTRICTIONS(LV, LT, MUTATE|CLAIM|FREEZE)
-//! &LV =>       RESTRICTIONS(LV, LT, MUTATE|CLAIM)
-//! &const LV => RESTRICTIONS(LV, LT, [])
-//! ```
-//!
-//! The reasoning here is that a mutable borrow must be the only writer,
-//! therefore it prevents other writes (`MUTATE`), mutable borrows
-//! (`CLAIM`), and immutable borrows (`FREEZE`). An immutable borrow
-//! permits other immutable borrows but forbids writes and mutable borrows.
-//! Finally, a const borrow just wants to be sure that the value is not
-//! moved out from under it, so no actions are forbidden.
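Encoded as a bitset, the initial action sets look like this (the constants and the `BorrowKind` enum are inventions of this sketch, not the compiler's definitions):

```rust
// Restricted-action bits for each kind of borrow; encoding is illustrative.
const MUTATE: u8 = 1 << 0;
const CLAIM: u8 = 1 << 1;
const FREEZE: u8 = 1 << 2;

enum BorrowKind { Mut, Imm, Const }

fn initial_actions(kind: BorrowKind) -> u8 {
    match kind {
        BorrowKind::Mut => MUTATE | CLAIM | FREEZE, // &mut LV: sole writer
        BorrowKind::Imm => MUTATE | CLAIM,          // &LV: other freezes ok
        BorrowKind::Const => 0,                     // &const LV: nothing forbidden
    }
}

fn main() {
    assert_eq!(initial_actions(BorrowKind::Mut), 0b111);
    assert_eq!(initial_actions(BorrowKind::Imm), 0b011);
    assert_eq!(initial_actions(BorrowKind::Const), 0);
}
```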
-//!
-//! ### Restrictions for loans of a local variable
-//!
-//! The simplest case is a borrow of a local variable `X`:
-//!
-//! ```text
-//! RESTRICTIONS(X, LT, ACTIONS) = (X, ACTIONS)            // R-Variable
-//! ```
-//!
-//! In such cases we just record the actions that are not permitted.
-//!
-//! ### Restrictions for loans of fields
-//!
-//! Restricting a field is the same as restricting the owner of that
-//! field:
-//!
-//! ```text
-//! RESTRICTIONS(LV.f, LT, ACTIONS) = RS, (LV.f, ACTIONS)  // R-Field
-//!   RESTRICTIONS(LV, LT, ACTIONS) = RS
-//! ```
-//!
-//! The reasoning here is as follows. If the field must not be mutated,
-//! then you must not mutate the owner of the field either, since that
-//! would indirectly modify the field. Similarly, if the field cannot be
-//! frozen or aliased, we cannot allow the owner to be frozen or aliased,
-//! since doing so indirectly freezes/aliases the field. This is the
-//! origin of inherited mutability.
-//!
-//! ### Restrictions for loans of owned referents
-//!
-//! Because the mutability of owned referents is inherited, restricting an
-//! owned referent is similar to restricting a field, in that it implies
-//! restrictions on the pointer. However, owned pointers have an important
-//! twist: if the owner `LV` is mutated, that causes the owned referent
-//! `*LV` to be freed! So whenever an owned referent `*LV` is borrowed, we
-//! must prevent the owned pointer `LV` from being mutated, which means
-//! that we always add `MUTATE` and `CLAIM` to the restriction set imposed
-//! on `LV`:
-//!
-//! ```text
-//! RESTRICTIONS(*LV, LT, ACTIONS) = RS, (*LV, ACTIONS)    // R-Deref-Send-Pointer
-//!   TYPE(LV) = Box<Ty>
-//!   RESTRICTIONS(LV, LT, ACTIONS|MUTATE|CLAIM) = RS
-//! ```
-//!
-//! ### Restrictions for loans of immutable borrowed referents
-//!
-//! Immutable borrowed referents are freely aliasable, meaning that
-//! the compiler does not prevent you from copying the pointer.  This
-//! implies that issuing restrictions is useless. We might prevent the
-//! user from acting on `*LV` itself, but there could be another path
-//! `*LV1` that refers to the exact same memory, and we would not be
-//! restricting that path. Therefore, the rule for `&Ty` pointers
-//! always returns an empty set of restrictions, and it only permits
-//! restricting `MUTATE` and `CLAIM` actions:
-//!
-//! ```text
-//! RESTRICTIONS(*LV, LT, ACTIONS) = []                    // R-Deref-Imm-Borrowed
-//!   TYPE(LV) = &LT' Ty
-//!   LT <= LT'                                            // (1)
-//!   ACTIONS subset of [MUTATE, CLAIM]
-//! ```
-//!
-//! The reason that we can restrict `MUTATE` and `CLAIM` actions even
-//! without a restrictions list is that it is never legal to mutate nor to
-//! borrow mutably the contents of a `&Ty` pointer. In other words,
-//! those restrictions are already inherent in the type.
-//!
-//! Clause (1) in the rule for `&Ty` deserves mention. Here I
-//! specify that the lifetime of the loan must not exceed the lifetime
-//! of the `&Ty` pointer. In simple cases, this clause is redundant, since
-//! the `LIFETIME()` function will already enforce the required rule:
-//!
-//! ```
-//! fn foo<'a>(point: &'a Point) -> &'static f32 {
-//!     &point.x // Error
-//! }
-//! ```
-//!
-//! The above example fails to compile both because of clause (1) above
-//! but also by the basic `LIFETIME()` check. However, in more advanced
-//! examples involving multiple nested pointers, clause (1) is needed:
-//!
-//! ```
-//! fn foo<'a, 'b>(point: &'a &'b mut Point) -> &'b f32 {
-//!     &point.x // Error
-//! }
-//! ```
-//!
-//! The `LIFETIME` rule here would accept `'b` because, in fact, the
-//! *memory is* guaranteed to remain valid (i.e., not be freed) for the
-//! lifetime `'b`, since the `&mut` pointer is valid for `'b`. However, we
-//! are returning an immutable reference, so we need the memory to be both
-//! valid and immutable. Even though `point.x` is referenced by an `&mut`
-//! pointer, it can still be considered immutable so long as that `&mut`
-//! pointer is found in an aliased location. That means the memory is
-//! guaranteed to be *immutable* for the lifetime of the `&` pointer,
-//! which is only `'a`, not `'b`. Hence this example yields an error.
-//!
-//! As a final twist, consider the case of two nested *immutable*
-//! pointers, rather than a mutable pointer within an immutable one:
-//!
-//! ```
-//! fn foo<'a, 'b>(point: &'a &'b Point) -> &'b f32 {
-//!     &point.x // OK
-//! }
-//! ```
-//!
-//! This function is legal. The reason for this is that the inner pointer
-//! (`*point : &'b Point`) is enough to guarantee the memory is immutable
-//! and valid for the lifetime `'b`.  This is reflected in
-//! `RESTRICTIONS()` by the fact that we do not recurse (i.e., we impose
-//! no restrictions on `LV`, which in this particular case is the pointer
-//! `point : &'a &'b Point`).
-//!
-//! #### Why both `LIFETIME()` and `RESTRICTIONS()`?
-//!
-//! Given the previous text, it might seem that `LIFETIME` and
-//! `RESTRICTIONS` should be folded together into one check, but there is
-//! a reason that they are separated. They answer separate concerns.
-//! The rules pertaining to `LIFETIME` exist to ensure that we don't
-//! create a borrowed pointer that outlives the memory it points at. So
-//! `LIFETIME` prevents a function like this:
-//!
-//! ```
-//! fn get_1<'a>() -> &'a int {
-//!     let x = 1;
-//!     &x
-//! }
-//! ```
-//!
-//! Here we would be returning a pointer into the stack. Clearly bad.
-//!
-//! However, the `RESTRICTIONS` rules are more concerned with how memory
-//! is used. The example above doesn't generate an error according to
-//! `RESTRICTIONS` because, for local variables, we don't require that the
-//! loan lifetime be a subset of the local variable lifetime. The idea
-//! here is that we *can* guarantee that `x` is not (e.g.) mutated for the
-//! lifetime `'a`, even though `'a` exceeds the function body and thus
-//! involves unknown code in the caller -- after all, `x` ceases to exist
-//! after we return and hence the remaining code in `'a` cannot possibly
-//! mutate it. This distinction is important for type checking functions
-//! like this one:
-//!
-//! ```
-//! fn inc_and_get<'a>(p: &'a mut Point) -> &'a int {
-//!     p.x += 1;
-//!     &p.x
-//! }
-//! ```
-//!
-//! In this case, we take in a `&mut` and return a frozen borrowed pointer
-//! with the same lifetime. So long as the lifetime of the returned value
-//! doesn't exceed the lifetime of the `&mut` we receive as input, this is
-//! fine, though it may seem surprising at first (it surprised me when I
-//! first worked it through). After all, we're guaranteeing that `*p`
-//! won't be mutated for the lifetime `'a`, even though we can't "see" the
-//! entirety of the code during that lifetime, since some of it occurs in
-//! our caller. But we *do* know that nobody can mutate `*p` except
-//! through `p`. So if we don't mutate `*p` and we don't return `p`, then
-//! we know that the right to mutate `*p` has been lost to our caller --
-//! in terms of capability, the caller passed in the ability to mutate
-//! `*p`, and we never gave it back. (Note that we can't return `p` while
-//! `*p` is borrowed since that would be a move of `p`, as `&mut` pointers
-//! are affine.)
-//!
-//! ### Restrictions for loans of const aliasable referents
-//!
-//! Freeze pointers are read-only. There may be `&mut` or `&` aliases, and
-//! we cannot prevent *anything* but moves in that case. So the
-//! `RESTRICTIONS` function is only defined if `ACTIONS` is the empty set.
-//! Because moves from a `&const` lvalue are never legal, it is not
-//! necessary to add any restrictions at all to the final result.
-//!
-//! ```text
-//!     RESTRICTIONS(*LV, LT, []) = []                // R-Deref-Freeze-Borrowed
-//!       TYPE(LV) = &const Ty
-//! ```
-//!
-//! ### Restrictions for loans of mutable borrowed referents
-//!
-//! Mutable borrowed pointers are guaranteed to be the only way to mutate
-//! their referent. This permits us to take greater license with them; for
-//! example, the referent can be frozen simply by ensuring that we do not
-//! use the original pointer to perform mutation. Similarly, we can allow
-//! the referent to be claimed, so long as the original pointer is unused
-//! while the new claimant is live.
-//!
-//! The rule for mutable borrowed pointers is as follows:
-//!
-//! ```text
-//! RESTRICTIONS(*LV, LT, ACTIONS) = RS, (*LV, ACTIONS)    // R-Deref-Mut-Borrowed
-//!   TYPE(LV) = &LT' mut Ty
-//!   LT <= LT'                                            // (1)
-//!   RESTRICTIONS(LV, LT, ACTIONS) = RS                   // (2)
-//! ```
-//!
-//! Let's examine the two numbered clauses:
-//!
-//! Clause (1) specifies that the lifetime of the loan (`LT`) cannot
-//! exceed the lifetime of the `&mut` pointer (`LT'`). The reason for this
-//! is that the `&mut` pointer is guaranteed to be the only legal way to
-//! mutate its referent -- but only for the lifetime `LT'`.  After that
-//! lifetime, the loan on the referent expires and hence the data may be
-//! modified by its owner again. This implies that we are only able to
-//! guarantee that the referent will not be modified or aliased for a
-//! maximum of `LT'`.
-//!
-//! Here is a concrete example of a bug this rule prevents:
-//!
-//! ```
-//! // Test region-reborrow-from-shorter-mut-ref.rs:
-//! fn copy_borrowed_ptr<'a,'b,T>(p: &'a mut &'b mut T) -> &'b mut T {
-//!     &mut **p // ERROR due to clause (1)
-//! }
-//! fn main() {
-//!     let mut x = 1;
-//!     let mut y = &mut x; // <-'b-----------------------------+
-//!     //      +-'a--------------------+                       |
-//!     //      v                       v                       |
-//!     let z = copy_borrowed_ptr(&mut y); // y is lent         |
-//!     *y += 1; // Here y==z, so both should not be usable...  |
-//!     *z += 1; // ...and yet they would be, but for clause 1. |
-//! } // <------------------------------------------------------+
-//! ```
-//!
-//! Clause (2) propagates the restrictions on the referent to the pointer
-//! itself. This is the same as with an owned pointer, though the
-//! reasoning is mildly different. The basic goal in all cases is to
-//! prevent the user from establishing another route to the same data. To
-//! see what I mean, let's examine various cases of what can go wrong and
-//! show how it is prevented.
-//!
-//! **Example danger 1: Moving the base pointer.** One of the simplest
-//! ways to violate the rules is to move the base pointer to a new name
-//! and access it via that new name, thus bypassing the restrictions on
-//! the old name. Here is an example:
-//!
-//! ```
-//! // src/test/compile-fail/borrowck-move-mut-base-ptr.rs
-//! fn foo(t0: &mut int) {
-//!     let p: &int = &*t0; // Freezes `*t0`
-//!     let t1 = t0;        //~ ERROR cannot move out of `t0`
-//!     *t1 = 22;           // OK, not a write through `*t0`
-//! }
-//! ```
-//!
-//! Remember that `&mut` pointers are linear, and hence `let t1 = t0` is a
-//! move of `t0` -- or would be, if it were legal. Instead, we get an
-//! error, because clause (2) imposes restrictions on `LV` (`t0`, here),
-//! and any restrictions on a path make it impossible to move from that
-//! path.
-//!
-//! **Example danger 2: Claiming the base pointer.** Another possible
-//! danger is to mutably borrow the base path. This can lead to two bad
-//! scenarios. The most obvious is that the mutable borrow itself becomes
-//! another path to access the same data, as shown here:
-//!
-//! ```
-//! // src/test/compile-fail/borrowck-mut-borrow-of-mut-base-ptr.rs
-//! fn foo<'a>(mut t0: &'a mut int,
-//!            mut t1: &'a mut int) {
-//!     let p: &int = &*t0;     // Freezes `*t0`
-//!     let mut t2 = &mut t0;   //~ ERROR cannot borrow `t0`
-//!     **t2 += 1;              // Mutates `*t0`
-//! }
-//! ```
-//!
-//! In this example, `**t2` is the same memory as `*t0`. Because `t2` is
-//! an `&mut` pointer, `**t2` is a unique path and hence it would be
-//! possible to mutate `**t2` even though that memory was supposed to be
-//! frozen by the creation of `p`. However, an error is reported -- the
-//! reason is that the freeze `&*t0` will restrict claims and mutation
-//! against `*t0` which, by clause 2, in turn prevents claims and mutation
-//! of `t0`. Hence the claim `&mut t0` is illegal.
-//!
-//! Another danger with an `&mut` pointer is that we could swap the `t0`
-//! value away to create a new path:
-//!
-//! ```
-//! // src/test/compile-fail/borrowck-swap-mut-base-ptr.rs
-//! fn foo<'a>(mut t0: &'a mut int,
-//!            mut t1: &'a mut int) {
-//!     let p: &int = &*t0;     // Freezes `*t0`
-//!     swap(&mut t0, &mut t1); //~ ERROR cannot borrow `t0`
-//!     *t1 = 22;
-//! }
-//! ```
-//!
-//! This is illegal for the same reason as above. Note that if we added
-//! back a swap operator -- as we used to have -- we would want to be very
-//! careful to ensure this example is still illegal.
-//!
-//! **Example danger 3: Freeze the base pointer.** In the case where the
-//! referent is claimed, even freezing the base pointer can be dangerous,
-//! as shown in the following example:
-//!
-//! ```
-//! // src/test/compile-fail/borrowck-borrow-of-mut-base-ptr.rs
-//! fn foo<'a>(mut t0: &'a mut int,
-//!            mut t1: &'a mut int) {
-//!     let p: &mut int = &mut *t0; // Claims `*t0`
-//!     let mut t2 = &t0;           //~ ERROR cannot borrow `t0`
-//!     let q: &int = &*t2;         // Freezes `*t0` but not through `*p`
-//!     *p += 1;                    // violates type of `*q`
-//! }
-//! ```
-//!
-//! Here the problem is that `*t0` is claimed by `p`, and hence `p` wants
-//! to be the controlling pointer through which mutation or freezes occur.
-//! But `t2` would -- if it were legal -- have the type `& &mut int`, and
-//! hence would be a mutable pointer in an aliasable location, which is
-//! considered frozen (since no one can write to `**t2` as it is not a
-//! unique path). Therefore, we could reasonably create a frozen `&int`
-//! pointer pointing at `*t0` that coexists with the mutable pointer `p`,
-//! which is clearly unsound.
-//!
-//! However, it is not always unsafe to freeze the base pointer. In
-//! particular, if the referent is frozen, there is no harm in it:
-//!
-//! ```
-//! // src/test/run-pass/borrowck-borrow-of-mut-base-ptr-safe.rs
-//! fn foo<'a>(mut t0: &'a mut int,
-//!            mut t1: &'a mut int) {
-//!     let p: &int = &*t0; // Freezes `*t0`
-//!     let mut t2 = &t0;
-//!     let q: &int = &*t2; // Freezes `*t0`, but that's ok...
-//!     let r: &int = &*t0; // ...after all, could do same thing directly.
-//! }
-//! ```
-//!
-//! In this case, creating the alias `t2` of `t0` is safe because the only
-//! thing `t2` can be used for is to further freeze `*t0`, which is
-//! already frozen. In particular, we cannot assign to `*t0` through the
-//! new alias `t2`, as demonstrated in this test case:
-//!
-//! ```
-//! // src/test/run-pass/borrowck-borrow-mut-base-ptr-in-aliasable-loc.rs
-//! fn foo(t0: & &mut int) {
-//!     let t1 = t0;
-//!     let p: &int = &**t0;
-//!     **t1 = 22; //~ ERROR cannot assign
-//! }
-//! ```
-//!
-//! This distinction is reflected in the rules. When doing an `&mut`
-//! borrow -- as in the first example -- the set `ACTIONS` will be
-//! `CLAIM|MUTATE|FREEZE`, because claiming the referent implies that it
-//! cannot be claimed, mutated, or frozen by anyone else. These
-//! restrictions are propagated back to the base path and hence the base
-//! path is considered unfreezable.
-//!
-//! In contrast, when the referent is merely frozen -- as in the second
-//! example -- the set `ACTIONS` will be `CLAIM|MUTATE`, because freezing
-//! the referent implies that it cannot be claimed or mutated but permits
-//! others to freeze. Hence when these restrictions are propagated back to
-//! the base path, it will still be considered freezable.
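Putting the rules R-Variable through R-Deref-Mut-Borrowed together, the whole `RESTRICTIONS` computation can be sketched as a single recursive function. This is an illustrative model, not compiler code: the lifetime side conditions (`LT <= LT'`) are assumed to have been checked separately, and `None` stands for "the predicate is undefined", i.e. the borrow is rejected:

```rust
// Action bits and a toy lvalue language; all names are illustrative.
const MUTATE: u8 = 1 << 0;
const CLAIM: u8 = 1 << 1;

#[derive(Clone, PartialEq, Debug)]
enum Lv {
    Var(&'static str),
    Field(Box<Lv>, &'static str), // LV.f
    DerefBox(Box<Lv>),            // *LV, TYPE(LV) = Box<Ty>
    DerefImm(Box<Lv>),            // *LV, TYPE(LV) = &Ty
    DerefMut(Box<Lv>),            // *LV, TYPE(LV) = &mut Ty
}

fn restrictions(lv: &Lv, actions: u8) -> Option<Vec<(Lv, u8)>> {
    match lv {
        // R-Variable: just record the forbidden actions on X
        Lv::Var(_) => Some(vec![(lv.clone(), actions)]),
        // R-Field: also restrict the owner of the field
        Lv::Field(base, _) => {
            let mut rs = restrictions(base, actions)?;
            rs.push((lv.clone(), actions));
            Some(rs)
        }
        // R-Deref-Send-Pointer: mutating the Box would free the referent,
        // so MUTATE|CLAIM are always added to the pointer's restrictions
        Lv::DerefBox(base) => {
            let mut rs = restrictions(base, actions | MUTATE | CLAIM)?;
            rs.push((lv.clone(), actions));
            Some(rs)
        }
        // R-Deref-Imm-Borrowed: no restrictions recorded; only MUTATE and
        // CLAIM may be requested (the type already guarantees them)
        Lv::DerefImm(_) => {
            if actions & !(MUTATE | CLAIM) == 0 { Some(vec![]) } else { None }
        }
        // R-Deref-Mut-Borrowed: propagate the restrictions to the pointer
        Lv::DerefMut(base) => {
            let mut rs = restrictions(base, actions)?;
            rs.push((lv.clone(), actions));
            Some(rs)
        }
    }
}

fn main() {
    const FREEZE: u8 = 1 << 2;
    // `&mut **t0` with t0: &mut &mut T restricts **t0, *t0, and t0 itself.
    let lv = Lv::DerefMut(Box::new(Lv::DerefMut(Box::new(Lv::Var("t0")))));
    let rs = restrictions(&lv, MUTATE | CLAIM | FREEZE).unwrap();
    assert_eq!(rs.len(), 3);
    // An `&mut` borrow through a shared reference is undefined (rejected).
    let shared = Lv::DerefImm(Box::new(Lv::Var("p")));
    assert!(restrictions(&shared, MUTATE | CLAIM | FREEZE).is_none());
}
```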
-//!
-//! **FIXME #10520: Restrictions against mutating the base pointer.** When
-//! an `&mut` pointer is frozen or claimed, we currently pass along the
-//! restriction against MUTATE to the base pointer. I do not believe this
-//! restriction is needed. It dates from the days when we had a way to
-//! mutate that preserved the value being mutated (i.e., swap). Nowadays
-//! the only form of mutation is assignment, which destroys the pointer
-//! being mutated -- therefore, a mutation cannot create a new path to the
-//! same data. Rather, it removes an existing path. This implies that not
-//! only can we permit mutation, we can have mutation kill restrictions in
-//! the dataflow sense.
-//!
-//! **WARNING:** We do not currently have `const` borrows in the
-//! language. If they are added back in, we must ensure that they are
-//! consistent with all of these examples. The crucial question will be
-//! what sorts of actions are permitted with a `&const &mut` pointer. I
-//! would suggest that an `&mut` referent found in an `&const` location be
-//! prohibited from both freezes and claims. This would avoid the need to
-//! prevent `const` borrows of the base pointer when the referent is
-//! borrowed.
-//!
-//! # Moves and initialization
-//!
-//! The borrow checker is also in charge of ensuring that:
-//!
-//! - all memory which is accessed is initialized
-//! - immutable local variables are assigned at most once.
-//!
-//! These are two separate dataflow analyses built on the same
-//! framework. Let's look at checking that memory is initialized first;
-//! the checking of immutable local variable assignments works in a very
-//! similar way.
-//!
-//! To track the initialization of memory, we actually track all the
-//! points in the program that *create uninitialized memory*, meaning
-//! moves and the declaration of uninitialized variables. For each of
-//! these points, we create a bit in the dataflow set. Assignments to a
-//! variable `x` or path `a.b.c` kill the move/uninitialization bits for
-//! those paths and any subpaths (e.g., `x`, `x.y`, `a.b.c`, `*a.b.c`).
-//! Bits are unioned when two control-flow paths join. Thus, the
-//! presence of a bit indicates that the move may have occurred without an
-//! intervening assignment to the same memory. At each use of a variable,
-//! we examine the bits in scope, and check that none of them are
-//! moves/uninitializations of the variable that is being used.
-//!
-//! Let's look at a simple example:
-//!
-//! ```
-//! fn foo(a: Box<int>) {
-//!     let b: Box<int>;   // Gen bit 0.
-//!
-//!     if cond {          // Bits: 0
-//!         use(&*a);
-//!         b = a;         // Gen bit 1, kill bit 0.
-//!         use(&*b);
-//!     } else {
-//!                        // Bits: 0
-//!     }
-//!                        // Bits: 0,1
-//!     use(&*a);          // Error.
-//!     use(&*b);          // Error.
-//! }
-//!
-//! fn use(a: &int) { }
-//! ```
-//!
-//! In this example, the variable `b` is created uninitialized. In one
-//! branch of an `if`, we then move the variable `a` into `b`. Once we
-//! exit the `if`, therefore, it is an error to use `a` or `b` since both
-//! are only conditionally initialized. I have annotated the dataflow
-//! state using comments. There are two dataflow bits, with bit 0
-//! corresponding to the creation of `b` without an initializer, and bit 1
-//! corresponding to the move of `a`. The assignment `b = a` both
-//! generates bit 1, because it is a move of `a`, and kills bit 0, because
-//! `b` is now initialized. On the else branch, though, `b` is never
-//! initialized, and so bit 0 remains untouched. When the two flows of
-//! control join, we union the bits from both sides, resulting in both
-//! bits 0 and 1 being set. Thus any attempt to use `a` uncovers the bit 1
-//! from the "then" branch, showing that `a` may be moved, and any attempt
-//! to use `b` uncovers bit 0, from the "else" branch, showing that `b`
-//! may not be initialized.
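The gen/kill/union arithmetic for this example can be replayed mechanically (bit numbering as in the comments above; this is just an illustration of the dataflow, not compiler code):

```rust
// Replay of the example's dataflow: bit 0 = `b` declared uninitialized,
// bit 1 = move of `a`. Incoming bits are OR'd at the join point.
fn main() {
    let entry: u32 = 0b01; // `let b;` gens bit 0

    // then-branch: `b = a` gens bit 1 (move of `a`) and kills bit 0
    let then_exit = (entry | 0b10) & !0b01;
    // else-branch: nothing happens
    let else_exit = entry;

    // join: union of the bits from both sides
    let join = then_exit | else_exit;
    assert_eq!(join, 0b11); // both bits set after the `if`

    // a use checks for in-scope bits touching the used variable
    assert!(join & 0b10 != 0); // use of `a` is an error (may be moved)
    assert!(join & 0b01 != 0); // use of `b` is an error (may be uninit)
}
```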
-//!
-//! ## Initialization of immutable variables
-//!
-//! Initialization of immutable variables works in a very similar way,
-//! except that:
-//!
-//! 1. we generate bits for each assignment to a variable;
-//! 2. the bits are never killed except when the variable goes out of scope.
-//!
-//! Thus the presence of an assignment bit indicates that the assignment
-//! may have occurred. Note that assignments are only killed when the
-//! variable goes out of scope, as it is not relevant whether or not there
-//! has been a move in the meantime. Using these bits, we can declare that
-//! an assignment to an immutable variable is legal iff there is no other
-//! assignment bit to that same variable in scope.
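A minimal sketch of that check, with one invented bit per assignment:

```rust
// Double-initialization check for an immutable `let x`: each assignment
// gens its own bit, and bits are never killed within the variable's scope.
fn main() {
    let mut in_scope: u32 = 0;
    const ASSIGN_X_1: u32 = 0b01; // first `x = ...`
    const ASSIGN_X_2: u32 = 0b10; // second `x = ...`

    // First assignment: no other assignment bit for `x` is in scope.
    assert_eq!(in_scope & (ASSIGN_X_1 | ASSIGN_X_2), 0); // legal
    in_scope |= ASSIGN_X_1;

    // Second assignment: the first assignment's bit is still in scope.
    assert_ne!(in_scope & ASSIGN_X_1, 0); // error: `x` assigned twice
    in_scope |= ASSIGN_X_2;
    assert_eq!(in_scope, 0b11);
}
```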
-//!
-//! ## Why is the design made this way?
-//!
-//! It may seem surprising that we assign dataflow bits to *each move*
-//! rather than *each path being moved*. This is somewhat less efficient,
-//! since on each use, we must iterate through all moves and check whether
-//! any of them correspond to the path in question. Similar concerns apply
-//! to the analysis for double assignments to immutable variables. The
-//! main reason to do it this way is that it allows us to print better
-//! error messages, because when a use occurs, we can print out the
-//! precise move that may be in scope, rather than simply having to say
-//! "the variable may not be initialized".
-//!
-//! ## Data structures used in the move analysis
-//!
-//! The move analysis maintains several data structures that enable it to
-//! cross-reference moves and assignments to determine when they may be
-//! moving/assigning the same memory. These are all collected into the
-//! `MoveData` and `FlowedMoveData` structs. The former represents the set
-//! of move paths, moves, and assignments, and the latter adds in the
-//! results of a dataflow computation.
-//!
-//! ### Move paths
-//!
-//! The `MovePath` tree tracks every path that is moved or assigned to.
-//! These paths have the same form as the `LoanPath` data structure, which
-//! in turn is the "real world" version of the lvalues `LV` that we
-//! introduced earlier. The difference between a `MovePath` and a `LoanPath`
-//! is that move paths are:
-//!
-//! 1. Canonicalized, so that we have exactly one copy of each, and
-//!    we can refer to move paths by index;
-//! 2. Cross-referenced with other paths into a tree, so that given a move
-//!    path we can efficiently find all parent move paths and all
-//!    extensions (e.g., given the `a.b` move path, we can easily find the
-//!    move path `a` and also the move paths `a.b.c`)
-//! 3. Cross-referenced with moves and assignments, so that we can
-//!    easily find all moves and assignments to a given path.
-//!
-//! The mechanism that we use is to create a `MovePath` record for each
-//! move path. These are arranged in an array and are referenced using
-//! `MovePathIndex` values, which are newtype'd indices. The `MovePath`
-//! structs are arranged into a tree, represented using the standard
-//! Knuth representation, where each node has a "first child" pointer and
-//! a "next sibling" pointer. In addition, each `MovePath` has a
-//! "parent" pointer. In this case, the "pointers" are just
-//! `MovePathIndex` values.
-//!
-//! In this way, if we want to find all base paths of a given move path,
-//! we can just iterate up the parent pointers (see `each_base_path()` in
-//! the `move_data` module). If we want to find all extensions, we can
-//! iterate through the subtree (see `each_extending_path()`).
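A sketch of that arena layout, with newtype'd indices in place of pointers (field and function names here are illustrative, not the real `move_data` definitions):

```rust
// Toy MovePath arena: parent/first-child/next-sibling links by index.
#[derive(Clone, Copy, PartialEq, Debug)]
struct MovePathIndex(usize);

struct MovePath {
    parent: Option<MovePathIndex>,
    first_child: Option<MovePathIndex>,
    next_sibling: Option<MovePathIndex>,
    name: &'static str,
}

// Walk the parent links, as in `each_base_path()`.
fn base_paths(arena: &[MovePath], mut i: MovePathIndex) -> Vec<&'static str> {
    let mut out = vec![arena[i.0].name];
    while let Some(p) = arena[i.0].parent {
        out.push(arena[p.0].name);
        i = p;
    }
    out
}

// Walk the child/sibling links of a subtree, as in `each_extending_path()`.
fn extending_paths(arena: &[MovePath], i: MovePathIndex, out: &mut Vec<&'static str>) {
    out.push(arena[i.0].name);
    let mut child = arena[i.0].first_child;
    while let Some(c) = child {
        extending_paths(arena, c, out);
        child = arena[c.0].next_sibling;
    }
}

fn main() {
    // Three move paths: a (index 0), a.b (1), a.b.c (2).
    let arena = [
        MovePath { parent: None, first_child: Some(MovePathIndex(1)),
                   next_sibling: None, name: "a" },
        MovePath { parent: Some(MovePathIndex(0)), first_child: Some(MovePathIndex(2)),
                   next_sibling: None, name: "a.b" },
        MovePath { parent: Some(MovePathIndex(1)), first_child: None,
                   next_sibling: None, name: "a.b.c" },
    ];
    assert_eq!(base_paths(&arena, MovePathIndex(2)), ["a.b.c", "a.b", "a"]);
    let mut ext = Vec::new();
    extending_paths(&arena, MovePathIndex(0), &mut ext);
    assert_eq!(ext, ["a", "a.b", "a.b.c"]);
}
```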
-//!
-//! ### Moves and assignments
-//!
-//! There are structs to represent moves (`Move`) and assignments
-//! (`Assignment`), and these are also placed into arrays and referenced
-//! by index. All moves of a particular path are arranged into a linked
-//! list, beginning with `MovePath.first_move` and continuing through
-//! `Move.next_move`.
-//!
-//! We distinguish between "var" assignments, which are assignments to a
-//! variable like `x = foo`, and "path" assignments (`x.f = foo`).  This
-//! is because we need to assign dataflow bits to the former, but not the
-//! latter, so as to check for double initialization of immutable
-//! variables.
-//!
-//! ### Gathering and checking moves
-//!
-//! Like loans, we distinguish two phases. The first, gathering, is where
-//! we uncover all the moves and assignments. As with loans, we do some
-//! basic sanity checking in this phase, so we'll report errors if you
-//! attempt to move out of a borrowed pointer etc. Then we do the dataflow
-//! (see `FlowedMoveData::new`). Finally, in the `check_loans.rs` code, we
-//! walk back over, identify all uses, assignments, and captures, and
-//! check that they are legal given the set of dataflow bits we have
-//! computed for that program point.
-//!
-//! # Drop flags and structural fragments
-//!
-//! In addition to the job of enforcing memory safety, the borrow checker
-//! code is also responsible for identifying the *structural fragments* of
-//! data in the function, to support out-of-band dynamic drop flags
-//! allocated on the stack. (For background, see [RFC PR #320].)
-//!
-//! [RFC PR #320]: https://github.com/rust-lang/rfcs/pull/320
-//!
-//! Semantically, each piece of data that has a destructor may need a
-//! boolean flag to indicate whether or not its destructor has been run
-//! yet. However, in many cases there is no need to actually maintain such
-//! a flag: It can be apparent from the code itself that a given path is
-//! always initialized (or always deinitialized) when control reaches the
-//! end of its owner's scope, and thus we can unconditionally emit (or
-//! not) the destructor invocation for that path.
-//!
-//! A simple example of this is the following:
-//!
-//! ```rust
-//! struct D { p: int }
-//! impl D { fn new(x: int) -> D { ... } }
-//! impl Drop for D { ... }
-//!
-//! fn foo(a: D, b: D, t: || -> bool) {
-//!     let c: D;
-//!     let d: D;
-//!     if t() { c = b; }
-//! }
-//! ```
-//!
-//! At the end of the body of `foo`, the compiler knows that `a` is
-//! initialized, introducing a drop obligation (running `D`'s
-//! destructor) for the end of `a`'s scope that is run unconditionally.
-//! Likewise the compiler knows that `d` is not initialized, and thus it
-//! leaves out the drop code for `d`.
-//!
-//! The compiler cannot statically know the drop-state of `b` nor `c` at
-//! the end of their scope, since that depends on the value of
-//! `t`. Therefore, we need to insert boolean flags to track whether we
-//! need to drop `b` and `c`.
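The flags needed for `b` and `c` can be modeled by hand, using `Option<T>` as an explicit "initialized?" flag (a minimal sketch with invented names, not the compiler's actual representation: `Some` means the destructor is still pending, `None` means moved out or never initialized):

```rust
// Hand-written model of the dynamic drop flags for `b` and `c`.
struct D(i32);

/// Returns (b_needs_drop, c_needs_drop) as observed at scope exit.
fn foo(b: D, t: bool) -> (bool, bool) {
    let mut b = Some(b);          // flag for `b`: starts initialized
    let mut c: Option<D> = None;  // flag for `c`: starts uninitialized
    if t {
        c = b.take();             // move b -> c, flipping both flags
    }
    // At the end of the scope, the drop glue consults the flags:
    // exactly one of `b` / `c` still needs its destructor run.
    (b.is_some(), c.is_some())
}
```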
-//!
-//! However, the matter is not as simple as just mapping local variables
-//! to their corresponding drop flags when necessary. In particular, in
-//! addition to being able to move data out of local variables, Rust
-//! allows one to move values in and out of structured data.
-//!
-//! Consider the following:
-//!
-//! ```rust
-//! struct S { x: D, y: D, z: D }
-//!
-//! fn foo(a: S, mut b: S, t: || -> bool) {
-//!     let mut c: S;
-//!     let d: S;
-//!     let e: S = a.clone();
-//!     if t() {
-//!         c = b;
-//!         b.x = e.y;
-//!     }
-//!     if t() { c.y = D::new(4); }
-//! }
-//! ```
-//!
-//! As before, the drop obligations of `a` and `d` can be statically
-//! determined, and again the state of `b` and `c` depend on dynamic
-//! state. But additionally, the dynamic drop obligations introduced by
-//! `b` and `c` are not just per-local boolean flags. For example, if the
-//! first call to `t` returns `false` and the second call `true`, then at
-//! the end of their scope, `b` will be completely initialized, but only
-//! `c.y` in `c` will be initialized.  If both calls to `t` return `true`,
-//! then at the end of their scope, `c` will be completely initialized,
-//! but only `b.x` will be initialized in `b`, and only `e.x` and `e.z`
-//! will be initialized in `e`.
-//!
-//! Note that we need to cover the `z` field in each case in some way,
-//! since it may (or may not) need to be dropped, even though `z` is never
-//! directly mentioned in the body of the `foo` function. We call a path
-//! like `b.z` a *fragment sibling* of `b.x`, since the field `z` comes
-//! from the same structure `S` that declared the field `x` in `b.x`.
-//!
-//! In general we need to maintain boolean flags that match the
-//! `S`-structure of both `b` and `c`.  In addition, we need to consult
-//! such a flag when doing an assignment (such as `c.y = D::new(4);`
-//! above), in order to know whether or not there is a previous value that
-//! needs to be dropped before we do the assignment.
-//!
-//! So for any given function, we need to determine what flags are needed
-//! to track its drop obligations. Our strategy for determining the set of
-//! flags is to represent the fragmentation of the structure explicitly:
-//! by starting initially from the paths that are explicitly mentioned in
-//! moves and assignments (such as `b.x` and `c.y` above), and then
-//! traversing the structure of the path's type to identify leftover
-//! *unmoved fragments*: assigning into `c.y` means that `c.x` and `c.z`
-//! are leftover unmoved fragments. Each fragment represents a drop
-//! obligation that may need to be tracked. Paths that are only moved or
-//! assigned in their entirety (like `a` and `d`) are treated as a single
-//! drop obligation.
-//!
-//! The fragment construction process works by piggy-backing on the
-//! existing `move_data` module. We already have callbacks that visit each
-//! direct move and assignment; these form the basis for the sets of
-//! `moved_leaf_paths` and `assigned_leaf_paths`. From these leaves, we can
-//! walk up their parent chain to identify all of their parent paths.
-//! We need to identify the parents because of cases like the following:
-//!
-//! ```rust
-//! struct Pair<X,Y>{ x: X, y: Y }
-//! fn foo(dd_d_d: Pair<Pair<Pair<D, D>, D>, D>) {
-//!     other_function(dd_d_d.x.y);
-//! }
-//! ```
-//!
-//! In this code, the move of the path `dd_d_d.x.y` leaves behind not only
-//! the fragment drop-obligation `dd_d_d.x.x` but also `dd_d_d.y` as well.
-//!
-//! Once we have identified the directly-referenced leaves and their
-//! parents, we compute the left-over fragments, in the function
-//! `fragments::add_fragment_siblings`. As of this writing this works by
-//! looking at each directly-moved or assigned path P, and blindly
-//! gathering all sibling fields of P (as well as siblings for the parents
-//! of P, etc). After accumulating all such siblings, we filter out the
-//! entries added as siblings of P that turned out to be
-//! directly-referenced paths (or parents of directly referenced paths)
-//! themselves, thus leaving the never-referenced "left-overs" as the only
-//! thing left from the gathering step.
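The gather-then-filter step can be sketched in miniature; this toy function handles only one level of sibling fields and uses invented names (the real logic lives in `fragments::add_fragment_siblings`):

```rust
use std::collections::BTreeSet;

/// Given the fields of a struct and the set of directly
/// moved/assigned paths rooted at `base`, return the leftover
/// unmoved fragments: the never-referenced siblings.
fn fragment_siblings(
    base: &str,
    fields: &[&str],
    referenced: &BTreeSet<String>,
) -> BTreeSet<String> {
    let mut out = BTreeSet::new();
    for f in fields {
        let path = format!("{}.{}", base, f);
        // Blindly gather every sibling, then filter out the
        // directly-referenced paths themselves.
        if !referenced.contains(&path) {
            out.insert(path);
        }
    }
    out
}
```

For the example above, the referenced path `c.y` in struct `S { x, y, z }` leaves the fragments `c.x` and `c.z`.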
-//!
-//! ## Array structural fragments
-//!
-//! A special case of the structural fragments discussed above are
-//! the elements of an array that has been passed by value, such as
-//! the following:
-//!
-//! ```rust
-//! fn foo(a: [D; 10], i: uint) -> D {
-//!     a[i]
-//! }
-//! ```
-//!
-//! The above code moves a single element out of the input array `a`.
-//! The remainder of the array still needs to be dropped; i.e., it
-//! is a structural fragment. Note that after performing such a move,
-//! it is not legal to read from the array `a`. There are a number of
-//! ways to deal with this, but the important thing to note is that
-//! the semantics needs to distinguish in some manner between a
-//! fragment that is the *entire* array versus a fragment that represents
-//! all-but-one element of the array.  A place where that distinction
-//! would arise is the following:
-//!
-//! ```rust
-//! fn foo(a: [D; 10], b: [D; 10], i: uint, t: bool) -> D {
-//!     if t {
-//!         a[i]
-//!     } else {
-//!         b[i]
-//!     }
-//!
-//!     // When control exits, we will need either to drop all of `a`
-//!     // and all-but-one of `b`, or to drop all of `b` and all-but-one
-//!     // of `a`.
-//! }
-//! ```
-//!
-//! There are a number of ways that the trans backend could choose to
-//! compile this (e.g. a `[bool; 10]` array for each such moved array;
-//! or an `Option<uint>` for each moved array).  From the viewpoint of the
-//! borrow-checker, the important thing is to record what kind of fragment
-//! is implied by the relevant moves.
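The `Option<uint>` representation mentioned above can be pictured as follows (a sketch with invented names; the actual backend representation is up to trans):

```rust
// One possible runtime record for a moved-from array:
// which element, if any, has already been moved out.
struct ArrayDropState {
    hole: Option<usize>, // Some(i) => element i was moved out
}

/// How many elements the drop glue must still drop at scope exit:
/// either the *entire* array, or all-but-one of its elements.
fn elements_to_drop(len: usize, state: &ArrayDropState) -> usize {
    match state.hole {
        Some(_) => len - 1, // all-but-one fragment
        None => len,        // whole-array fragment
    }
}
```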
-//!
-//! # Future work
-//!
-//! While writing up these docs, I encountered some rules I believe to be
-//! stricter than necessary:
-//!
-//! - I think restricting the `&mut` LV against moves and `ALIAS` is sufficient;
-//!   `MUTATE` and `CLAIM` are overkill. `MUTATE` was necessary when swap was
-//!   a built-in operator, but as it is not, it is implied by `CLAIM`,
-//!   and `CLAIM` is implied by `ALIAS`. The only net effect of this is an
-//!   extra error message in some cases, though.
-//! - I have not described how closures interact. Current code is unsound.
-//!   I am working on describing and implementing the fix.
-//! - If we wish, we can easily extend the move checking to allow finer-grained
-//!   tracking of what is initialized and what is not, enabling code like
-//!   this:
-//!
-//!       a = x.f.g; // x.f.g is now uninitialized
-//!       // here, x and x.f are not usable, but x.f.h *is*
-//!       x.f.g = b; // x.f.g is now initialized
-//!       // now x, x.f, x.f.g, x.f.h are all usable
-//!
-//!   What needs to change here, most likely, is that the `moves` module
-//!   should record not only what paths are moved, but what expressions
-//!   are actual *uses*. For example, the reference to `x` in `x.f.g = b`
-//!   is not a true *use* in the sense that it requires `x` to be fully
-//!   initialized. This is in fact why the above code produces an error
-//!   today: the reference to `x` in `x.f.g = b` is considered illegal
-//!   because `x` is not fully initialized.
-//!
-//! There are also some possible refactorings:
-//!
-//! - It might be nice to replace all loan paths with the MovePath mechanism,
-//!   since they allow lightweight comparison using an integer.
diff --git a/src/librustc_borrowck/borrowck/mod.rs b/src/librustc_borrowck/borrowck/mod.rs
index 7c055bc3118..93d97a054a4 100644
--- a/src/librustc_borrowck/borrowck/mod.rs
+++ b/src/librustc_borrowck/borrowck/mod.rs
@@ -8,7 +8,7 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
 
-//! See doc.rs for a thorough explanation of the borrow checker
+//! See The Book chapter on the borrow checker for more details.
 
 #![allow(non_camel_case_types)]
 
@@ -41,8 +41,6 @@ use syntax::visit;
 use syntax::visit::{Visitor, FnKind};
 use syntax::ast::{FnDecl, Block, NodeId};
 
-pub mod doc;
-
 pub mod check_loans;
 
 pub mod gather_loans;
diff --git a/src/librustc_trans/trans/cleanup.rs b/src/librustc_trans/trans/cleanup.rs
index f7e37b6f633..1c831090e3e 100644
--- a/src/librustc_trans/trans/cleanup.rs
+++ b/src/librustc_trans/trans/cleanup.rs
@@ -8,8 +8,111 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
 
-//! Code pertaining to cleanup of temporaries as well as execution of
-//! drop glue. See discussion in `doc.rs` for a high-level summary.
+//! ## The Cleanup module
+//!
+//! The cleanup module tracks what values need to be cleaned up as scopes
+//! are exited, either via panic or just normal control flow. The basic
+//! idea is that the function context maintains a stack of cleanup scopes
+//! that are pushed/popped as we traverse the AST tree. There is typically
+//! at least one cleanup scope per AST node; some AST nodes may introduce
+//! additional temporary scopes.
+//!
+//! Cleanup items can be scheduled into any of the scopes on the stack.
+//! Typically, when a scope is popped, we will also generate the code for
+//! each of its cleanups at that time. This corresponds to a normal exit
+//! from a block (for example, an expression completing evaluation
+//! successfully without panic). However, it is also possible to pop a
+//! block *without* executing its cleanups; this is typically used to
+//! guard intermediate values that must be cleaned up on panic, but not
+//! if everything goes right. See the section on custom scopes below for
+//! more details.
+//!
+//! Cleanup scopes come in three kinds:
+//!
+//! - **AST scopes:** each AST node in a function body has a corresponding
+//!   AST scope. We push the AST scope when we start generating code for an AST
+//!   node and pop it once the AST node has been fully generated.
+//! - **Loop scopes:** loops have an additional cleanup scope. Cleanups are
+//!   never scheduled into loop scopes; instead, they are used to record the
+//!   basic blocks that we should branch to when a `continue` or `break` statement
+//!   is encountered.
+//! - **Custom scopes:** custom scopes are typically used to ensure cleanup
+//!   of intermediate values.
+//!
+//! ### When to schedule cleanup
+//!
+//! Although the cleanup system is intended to *feel* fairly declarative,
+//! it's still important to time calls to `schedule_clean()` correctly.
+//! Basically, you should not schedule cleanup for memory until it has
+//! been initialized, because if an unwind should occur before the memory
+//! is fully initialized, then the cleanup will run and try to free or
+//! drop uninitialized memory. If the initialization itself produces
+//! byproducts that need to be freed, then you should use temporary custom
+//! scopes to ensure that those byproducts will get freed on unwind.  For
+//! example, an expression like `box foo()` will first allocate a box in the
+//! heap and then call `foo()` -- if `foo()` should panic, this box needs
+//! to be *shallowly* freed.
+//!
+//! ### Long-distance jumps
+//!
+//! In addition to popping a scope, which corresponds to normal control
+//! flow exiting the scope, we may also *jump out* of a scope into some
+//! earlier scope on the stack. This can occur in response to a `return`,
+//! `break`, or `continue` statement, but also in response to panic. In
+//! any of these cases, we will generate a series of cleanup blocks for
+//! each of the scopes that is exited. So, if the stack contains scopes A
+//! ... Z, and we break out of a loop whose corresponding cleanup scope is
+//! X, we would generate cleanup blocks for the cleanups in X, Y, and Z.
+//! After cleanup is done we would branch to the exit point for scope X.
+//! But if panic should occur, we would generate cleanups for all the
+//! scopes from A to Z and then resume the unwind process afterwards.
+//!
+//! To avoid generating tons of code, we cache the cleanup blocks that we
+//! create for breaks, returns, unwinds, and other jumps. Whenever a new
+//! cleanup is scheduled, though, we must clear these cached blocks. A
+//! possible improvement would be to keep the cached blocks but simply
+//! generate a new block which performs the additional cleanup and then
+//! branches to the existing cached blocks.
+//!
+//! ### AST and loop cleanup scopes
+//!
+//! AST cleanup scopes are pushed when we begin and end processing an AST
+//! node. They are used to house cleanups related to rvalue temporaries that
+//! get referenced (e.g., due to an expression like `&Foo()`). Whenever an
+//! AST scope is popped, we always trans all the cleanups, adding the cleanup
+//! code after the postdominator of the AST node.
+//!
+//! AST nodes that represent breakable loops also push a loop scope; the
+//! loop scope never has any actual cleanups, it's just used to point to
+//! the basic blocks where control should flow after a "continue" or
+//! "break" statement. Popping a loop scope never generates code.
+//!
+//! ### Custom cleanup scopes
+//!
+//! Custom cleanup scopes are used for a variety of purposes. The most
+//! common though is to handle temporary byproducts, where cleanup only
+//! needs to occur on panic. The general strategy is to push a custom
+//! cleanup scope, schedule *shallow* cleanups into the custom scope, and
+//! then pop the custom scope (without transing the cleanups) when
+//! execution succeeds normally. This way the cleanups are only trans'd on
+//! unwind, and only up until the point where execution succeeded, at
+//! which time the complete value should be stored in an lvalue or some
+//! other place where normal cleanup applies.
+//!
+//! To spell it out, here is an example. Imagine an expression `box expr`.
+//! We would basically:
+//!
+//! 1. Push a custom cleanup scope C.
+//! 2. Allocate the box.
+//! 3. Schedule a shallow free in the scope C.
+//! 4. Trans `expr` into the box.
+//! 5. Pop the scope C.
+//! 6. Return the box as an rvalue.
+//!
+//! This way, if a panic occurs while transing `expr`, the custom cleanup
+//! scope C is still on the stack and hence the box will be freed. The trans
+//! code for `expr` itself is responsible for freeing any other byproducts
+//! that may be in play.
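The push/schedule/pop discipline described in this section can be modeled with a tiny scope stack (all names invented; this is a sketch of the mechanism, not the cleanup.rs API):

```rust
// A minimal model of the cleanup-scope stack: scopes are pushed
// and popped, cleanups are scheduled into the innermost scope,
// and popping either yields the cleanups to run (normal exit) or
// discards them (successful custom-scope pop).
struct CleanupStack {
    scopes: Vec<Vec<String>>, // each scope holds cleanup labels
}

impl CleanupStack {
    fn new() -> Self { CleanupStack { scopes: Vec::new() } }
    fn push_scope(&mut self) { self.scopes.push(Vec::new()); }
    fn schedule(&mut self, c: &str) {
        self.scopes.last_mut().expect("no scope").push(c.to_string());
    }
    /// Normal exit: return the scope's cleanups in reverse
    /// scheduling order, as they would be run.
    fn pop_and_run(&mut self) -> Vec<String> {
        let mut v = self.scopes.pop().expect("no scope");
        v.reverse();
        v
    }
    /// Custom-scope success path: pop without running cleanups.
    fn pop_and_discard(&mut self) {
        self.scopes.pop();
    }
}
```

For the `box expr` example: push a custom scope, schedule a shallow free into it, and on success pop-and-discard so the free never runs; the box's real cleanup is then the enclosing scope's responsibility.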
 
 pub use self::ScopeId::*;
 pub use self::CleanupScopeKind::*;
diff --git a/src/librustc_trans/trans/datum.rs b/src/librustc_trans/trans/datum.rs
index 39d17f45ffa..f4a5ba98b67 100644
--- a/src/librustc_trans/trans/datum.rs
+++ b/src/librustc_trans/trans/datum.rs
@@ -8,8 +8,97 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
 
-//! See the section on datums in `doc.rs` for an overview of what Datums are and how they are
-//! intended to be used.
+//! ## The Datum module
+//!
+//! A `Datum` encapsulates the result of evaluating a Rust expression.  It
+//! contains a `ValueRef` indicating the result, a `Ty` describing
+//! the Rust type, but also a *kind*. The kind indicates whether the datum
+//! has cleanup scheduled (lvalue) or not (rvalue) and -- in the case of
+//! rvalues -- whether or not the value is "by ref" or "by value".
+//!
+//! The datum API is designed to try and help you avoid memory errors like
+//! forgetting to arrange cleanup or duplicating a value. The type of the
+//! datum incorporates the kind, and thus reflects whether it has cleanup
+//! scheduled:
+//!
+//! - `Datum<Lvalue>` -- by ref, cleanup scheduled
+//! - `Datum<Rvalue>` -- by value or by ref, no cleanup scheduled
+//! - `Datum<Expr>` -- either `Datum<Lvalue>` or `Datum<Rvalue>`
+//!
+//! Rvalue and expr datums are noncopyable, and most of the methods on
+//! datums consume the datum itself (with some notable exceptions). This
+//! reflects the fact that datums may represent affine values which ought
+//! to be consumed exactly once, and if you were to try to (for example)
+//! store an affine value multiple times, you would be duplicating it,
+//! which would certainly be a bug.
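The way the kind is baked into the datum's type can be sketched with phantom type parameters (simplified: no `ValueRef`/`Ty`, and the marker types and methods here are invented for illustration):

```rust
use std::marker::PhantomData;

struct Lvalue; // cleanup scheduled
struct Rvalue; // no cleanup scheduled

struct Datum<K> {
    val: i32, // stands in for the ValueRef
    _kind: PhantomData<K>,
}

impl<K> Datum<K> {
    fn new(val: i32) -> Self { Datum { val, _kind: PhantomData } }
}

impl Datum<Rvalue> {
    // Consuming method: an rvalue datum is used exactly once,
    // so `self` is taken by value.
    fn store_to(self, dest: &mut i32) { *dest = self.val; }
}

impl Datum<Lvalue> {
    // A method for copyable values may borrow rather than
    // consume, duplicating the value into a fresh rvalue datum.
    fn shallow_copy(&self) -> Datum<Rvalue> { Datum::new(self.val) }
}
```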
+//!
+//! Some of the datum methods, however, are designed to work only on
+//! copyable values such as ints or pointers. Those methods may borrow the
+//! datum (`&self`) rather than consume it, but they always include
+//! assertions on the type of the value represented to check that this
+//! makes sense. An example is `shallow_copy()`, which duplicates
+//! a datum value.
+//!
+//! Translating an expression always yields a `Datum<Expr>` result, but
+//! the methods `to_[lr]value_datum()` can be used to coerce a
+//! `Datum<Expr>` into a `Datum<Lvalue>` or `Datum<Rvalue>` as
+//! needed. Coercing to an lvalue is fairly common, and generally occurs
+//! whenever it is necessary to inspect a value and pull out its
+//! subcomponents (for example, a match, or indexing expression). Coercing
+//! to an rvalue is more unusual; it occurs when moving values from place
+//! to place, such as in an assignment expression or parameter passing.
+//!
+//! ### Lvalues in detail
+//!
+//! An lvalue datum is one for which cleanup has been scheduled. Lvalue
+//! datums are always located in memory, and thus the `ValueRef` for an
+//! LLVM value is always a pointer to the actual Rust value. This means
+//! that if the Datum has a Rust type of `int`, then the LLVM type of the
+//! `ValueRef` will be `int*` (pointer to int).
+//!
+//! Because lvalues already have cleanups scheduled, the memory must be
+//! zeroed to prevent the cleanup from taking place (presuming that the
+//! Rust type needs drop in the first place, otherwise it doesn't
+//! matter). The Datum code automatically performs this zeroing when the
+//! value is stored to a new location, for example.
+//!
+//! Lvalues usually result from evaluating lvalue expressions. For
+//! example, evaluating a local variable `x` yields an lvalue, as does a
+//! reference to a field like `x.f` or an index `x[i]`.
+//!
+//! Lvalue datums can also arise by *converting* an rvalue into an lvalue.
+//! This is done with the `to_lvalue_datum` method defined on
+//! `Datum<Expr>`. Basically this method just schedules cleanup if the
+//! datum is an rvalue, possibly storing the value into a stack slot first
+//! if needed. Converting rvalues into lvalues occurs in constructs like
+//! `&foo()` or `match foo() { ref x => ... }`, where the user is
+//! implicitly requesting a temporary.
+//!
+//! Somewhat surprisingly, not all lvalue expressions yield lvalue datums
+//! when trans'd. Ultimately the reason for this is to micro-optimize
+//! the resulting LLVM. For example, consider the following code:
+//!
+//!     fn foo() -> Box<int> { ... }
+//!     let x = *foo();
+//!
+//! The expression `*foo()` is an lvalue, but if you invoke `expr::trans`,
+//! it will return an rvalue datum. See `deref_once` in expr.rs for
+//! more details.
+//!
+//! ### Rvalues in detail
+//!
+//! Rvalue datums are values with no cleanup scheduled. One must be
+//! careful with rvalue datums to ensure that cleanup is properly
+//! arranged, usually by converting to an lvalue datum or by invoking the
+//! `add_clean` method.
+//!
+//! ### Scratch datums
+//!
+//! Sometimes you need some temporary scratch space.  The functions
+//! `[lr]value_scratch_datum()` can be used to get temporary stack
+//! space. As their name suggests, they yield lvalues and rvalues
+//! respectively. That is, the slot from `lvalue_scratch_datum` will have
+//! cleanup arranged, and the slot from `rvalue_scratch_datum` does not.
 
 pub use self::Expr::*;
 pub use self::RvalueMode::*;
diff --git a/src/librustc_trans/trans/doc.rs b/src/librustc_trans/trans/doc.rs
deleted file mode 100644
index c3ab8986372..00000000000
--- a/src/librustc_trans/trans/doc.rs
+++ /dev/null
@@ -1,233 +0,0 @@
-// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-//! # Documentation for the trans module
-//!
-//! This module contains high-level summaries of how the various modules
-//! in trans work. It is a work in progress. For detailed comments,
-//! naturally, you can refer to the individual modules themselves.
-//!
-//! ## The Expr module
-//!
-//! The expr module handles translation of expressions. The most general
-//! translation routine is `trans()`, which will translate an expression
-//! into a datum. `trans_into()` is also available, which will translate
-//! an expression and write the result directly into memory, sometimes
-//! avoiding the need for a temporary stack slot. Finally,
-//! `trans_to_lvalue()` is available if you'd like to ensure that the
-//! result has cleanup scheduled.
-//!
-//! Internally, each of these functions dispatches to various other
-//! expression functions depending on the kind of expression. We divide
-//! up expressions into:
-//!
-//! - **Datum expressions:** Those that most naturally yield values.
-//!   Examples would be `22`, `box x`, or `a + b` (when not overloaded).
-//! - **DPS expressions:** Those that most naturally write into a location
-//!   in memory. Examples would be `foo()` or `Point { x: 3, y: 4 }`.
-//! - **Statement expressions:** Those that do not generate a meaningful
-//!   result. Examples would be `while { ... }` or `return 44`.
-//!
-//! ## The Datum module
-//!
-//! A `Datum` encapsulates the result of evaluating a Rust expression.  It
-//! contains a `ValueRef` indicating the result, a `Ty` describing
-//! the Rust type, but also a *kind*. The kind indicates whether the datum
-//! has cleanup scheduled (lvalue) or not (rvalue) and -- in the case of
-//! rvalues -- whether or not the value is "by ref" or "by value".
-//!
-//! The datum API is designed to try and help you avoid memory errors like
-//! forgetting to arrange cleanup or duplicating a value. The type of the
-//! datum incorporates the kind, and thus reflects whether it has cleanup
-//! scheduled:
-//!
-//! - `Datum<Lvalue>` -- by ref, cleanup scheduled
-//! - `Datum<Rvalue>` -- by value or by ref, no cleanup scheduled
-//! - `Datum<Expr>` -- either `Datum<Lvalue>` or `Datum<Rvalue>`
-//!
-//! Rvalue and expr datums are noncopyable, and most of the methods on
-//! datums consume the datum itself (with some notable exceptions). This
-//! reflects the fact that datums may represent affine values which ought
-//! to be consumed exactly once, and if you were to try to (for example)
-//! store an affine value multiple times, you would be duplicating it,
-//! which would certainly be a bug.
-//!
-//! Some of the datum methods, however, are designed to work only on
-//! copyable values such as ints or pointers. Those methods may borrow the
-//! datum (`&self`) rather than consume it, but they always include
-//! assertions on the type of the value represented to check that this
-//! makes sense. An example is `shallow_copy()`, which duplicates
-//! a datum value.
-//!
-//! Translating an expression always yields a `Datum<Expr>` result, but
-//! the methods `to_[lr]value_datum()` can be used to coerce a
-//! `Datum<Expr>` into a `Datum<Lvalue>` or `Datum<Rvalue>` as
-//! needed. Coercing to an lvalue is fairly common, and generally occurs
-//! whenever it is necessary to inspect a value and pull out its
-//! subcomponents (for example, a match, or indexing expression). Coercing
-//! to an rvalue is more unusual; it occurs when moving values from place
-//! to place, such as in an assignment expression or parameter passing.
-//!
-//! ### Lvalues in detail
-//!
-//! An lvalue datum is one for which cleanup has been scheduled. Lvalue
-//! datums are always located in memory, and thus the `ValueRef` for an
-//! LLVM value is always a pointer to the actual Rust value. This means
-//! that if the Datum has a Rust type of `int`, then the LLVM type of the
-//! `ValueRef` will be `int*` (pointer to int).
-//!
-//! Because lvalues already have cleanups scheduled, the memory must be
-//! zeroed to prevent the cleanup from taking place (presuming that the
-//! Rust type needs drop in the first place, otherwise it doesn't
-//! matter). The Datum code automatically performs this zeroing when the
-//! value is stored to a new location, for example.
-//!
-//! Lvalues usually result from evaluating lvalue expressions. For
-//! example, evaluating a local variable `x` yields an lvalue, as does a
-//! reference to a field like `x.f` or an index `x[i]`.
-//!
-//! Lvalue datums can also arise by *converting* an rvalue into an lvalue.
-//! This is done with the `to_lvalue_datum` method defined on
-//! `Datum<Expr>`. Basically this method just schedules cleanup if the
-//! datum is an rvalue, possibly storing the value into a stack slot first
-//! if needed. Converting rvalues into lvalues occurs in constructs like
-//! `&foo()` or `match foo() { ref x => ... }`, where the user is
-//! implicitly requesting a temporary.
-//!
-//! Somewhat surprisingly, not all lvalue expressions yield lvalue datums
-//! when trans'd. Ultimately the reason for this is to micro-optimize
-//! the resulting LLVM. For example, consider the following code:
-//!
-//!     fn foo() -> Box<int> { ... }
-//!     let x = *foo();
-//!
-//! The expression `*foo()` is an lvalue, but if you invoke `expr::trans`,
-//! it will return an rvalue datum. See `deref_once` in expr.rs for
-//! more details.
-//!
-//! ### Rvalues in detail
-//!
-//! Rvalue datums are values with no cleanup scheduled. One must be
-//! careful with rvalue datums to ensure that cleanup is properly
-//! arranged, usually by converting to an lvalue datum or by invoking the
-//! `add_clean` method.
-//!
-//! ### Scratch datums
-//!
-//! Sometimes you need some temporary scratch space.  The functions
-//! `[lr]value_scratch_datum()` can be used to get temporary stack
-//! space. As their name suggests, they yield lvalues and rvalues
-//! respectively. That is, the slot from `lvalue_scratch_datum` will have
-//! cleanup arranged, and the slot from `rvalue_scratch_datum` does not.
-//!
-//! ## The Cleanup module
-//!
-//! The cleanup module tracks what values need to be cleaned up as scopes
-//! are exited, either via panic or just normal control flow. The basic
-//! idea is that the function context maintains a stack of cleanup scopes
-//! that are pushed/popped as we traverse the AST tree. There is typically
-//! at least one cleanup scope per AST node; some AST nodes may introduce
-//! additional temporary scopes.
-//!
-//! Cleanup items can be scheduled into any of the scopes on the stack.
-//! Typically, when a scope is popped, we will also generate the code for
-//! each of its cleanups at that time. This corresponds to a normal exit
-//! from a block (for example, an expression completing evaluation
-//! successfully without panic). However, it is also possible to pop a
-//! block *without* executing its cleanups; this is typically used to
-//! guard intermediate values that must be cleaned up on panic, but not
-//! if everything goes right. See the section on custom scopes below for
-//! more details.
-//!
-//! Cleanup scopes come in three kinds:
-//!
-//! - **AST scopes:** each AST node in a function body has a corresponding
-//!   AST scope. We push the AST scope when we start generating code for an AST
-//!   node and pop it once the AST node has been fully generated.
-//! - **Loop scopes:** loops have an additional cleanup scope. Cleanups are
-//!   never scheduled into loop scopes; instead, they are used to record the
-//!   basic blocks that we should branch to when a `continue` or `break` statement
-//!   is encountered.
-//! - **Custom scopes:** custom scopes are typically used to ensure cleanup
-//!   of intermediate values.
-//!
-//! ### When to schedule cleanup
-//!
-//! Although the cleanup system is intended to *feel* fairly declarative,
-//! it's still important to time calls to `schedule_clean()` correctly.
-//! Basically, you should not schedule cleanup for memory until it has
-//! been initialized, because if an unwind should occur before the memory
-//! is fully initialized, then the cleanup will run and try to free or
-//! drop uninitialized memory. If the initialization itself produces
-//! byproducts that need to be freed, then you should use temporary custom
-//! scopes to ensure that those byproducts will get freed on unwind.  For
-//! example, an expression like `box foo()` will first allocate a box in the
-//! heap and then call `foo()` -- if `foo()` should panic, this box needs
-//! to be *shallowly* freed.
-//!
-//! ### Long-distance jumps
-//!
-//! In addition to popping a scope, which corresponds to normal control
-//! flow exiting the scope, we may also *jump out* of a scope into some
-//! earlier scope on the stack. This can occur in response to a `return`,
-//! `break`, or `continue` statement, but also in response to panic. In
-//! any of these cases, we will generate a series of cleanup blocks for
-//! each of the scopes that is exited. So, if the stack contains scopes A
-//! ... Z, and we break out of a loop whose corresponding cleanup scope is
-//! X, we would generate cleanup blocks for the cleanups in X, Y, and Z.
-//! After cleanup is done we would branch to the exit point for scope X.
-//! But if panic should occur, we would generate cleanups for all the
-//! scopes from A to Z and then resume the unwind process afterwards.
-//!
-//! To avoid generating tons of code, we cache the cleanup blocks that we
-//! create for breaks, returns, unwinds, and other jumps. Whenever a new
-//! cleanup is scheduled, though, we must clear these cached blocks. A
-//! possible improvement would be to keep the cached blocks but simply
-//! generate a new block which performs the additional cleanup and then
-//! branches to the existing cached blocks.
-//!
-//! ### AST and loop cleanup scopes
-//!
-//! AST cleanup scopes are pushed when we begin and end processing an AST
-//! node. They are used to house cleanups related to rvalue temporaries that
-//! get referenced (e.g., due to an expression like `&Foo()`). Whenever an
-//! AST scope is popped, we always trans all the cleanups, adding the cleanup
-//! code after the postdominator of the AST node.
-//!
-//! AST nodes that represent breakable loops also push a loop scope; the
-//! loop scope never has any actual cleanups, it's just used to point to
-//! the basic blocks where control should flow after a "continue" or
-//! "break" statement. Popping a loop scope never generates code.
-//!
-//! ### Custom cleanup scopes
-//!
-//! Custom cleanup scopes are used for a variety of purposes. The most
-//! common though is to handle temporary byproducts, where cleanup only
-//! needs to occur on panic. The general strategy is to push a custom
-//! cleanup scope, schedule *shallow* cleanups into the custom scope, and
-//! then pop the custom scope (without transing the cleanups) when
-//! execution succeeds normally. This way the cleanups are only trans'd on
-//! unwind, and only up until the point where execution succeeded, at
-//! which time the complete value should be stored in an lvalue or some
-//! other place where normal cleanup applies.
-//!
-//! To spell it out, here is an example. Imagine an expression `box expr`.
-//! We would basically:
-//!
-//! 1. Push a custom cleanup scope C.
-//! 2. Allocate the box.
-//! 3. Schedule a shallow free in the scope C.
-//! 4. Trans `expr` into the box.
-//! 5. Pop the scope C.
-//! 6. Return the box as an rvalue.
-//!
-//! This way, if a panic occurs while transing `expr`, the custom
-//! cleanup scope C is pushed and hence the box will be freed. The trans
-//! code for `expr` itself is responsible for freeing any other byproducts
-//! that may be in play.
diff --git a/src/librustc_trans/trans/expr.rs b/src/librustc_trans/trans/expr.rs
index 9ea7a276d97..44d8457c316 100644
--- a/src/librustc_trans/trans/expr.rs
+++ b/src/librustc_trans/trans/expr.rs
@@ -10,6 +10,25 @@
 
 //! # Translation of Expressions
 //!
+//! The expr module handles translation of expressions. The most general
+//! translation routine is `trans()`, which will translate an expression
+//! into a datum. `trans_into()` is also available, which will translate
+//! an expression and write the result directly into memory, sometimes
+//! avoiding the need for a temporary stack slot. Finally,
+//! `trans_to_lvalue()` is available if you'd like to ensure that the
+//! result has cleanup scheduled.
+//!
+//! Internally, each of these functions dispatches to various other
+//! expression functions depending on the kind of expression. We divide
+//! up expressions into:
+//!
+//! - **Datum expressions:** Those that most naturally yield values.
+//!   Examples would be `22`, `box x`, or `a + b` (when not overloaded).
+//! - **DPS expressions:** Those that most naturally write into a location
+//!   in memory. Examples would be `foo()` or `Point { x: 3, y: 4 }`.
+//! - **Statement expressions:** Those that do not generate a meaningful
+//!   result. Examples would be `while cond { ... }` or `return 44`.
+//!
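To make the taxonomy concrete, a sketch in surface syntax (the `Point` struct and bindings are invented for illustration):

```rust
fn main() {
    // Datum expression: most naturally yields a value.
    let a = 22 + 20;

    // DPS expression: most naturally written directly into a
    // destination in memory, here the stack slot for `p`.
    struct Point { x: i32, y: i32 }
    let p = Point { x: 3, y: 4 };

    // Statement expression: does not generate a meaningful result;
    // its type is `()`.
    while a < p.x { unreachable!() }

    assert_eq!(a, 42);
}
```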
 //! Public entry points:
 //!
 //! - `trans_into(bcx, expr, dest) -> bcx`: evaluates an expression,
@@ -26,8 +45,6 @@
 //!   creating a temporary stack slot if necessary.
 //!
 //! - `trans_local_var -> Datum`: looks up a local variable or upvar.
-//!
-//! See doc.rs for more comments.
 
 #![allow(non_camel_case_types)]
 
diff --git a/src/librustc_trans/trans/mod.rs b/src/librustc_trans/trans/mod.rs
index 91c6c9a13a3..f7433e6a774 100644
--- a/src/librustc_trans/trans/mod.rs
+++ b/src/librustc_trans/trans/mod.rs
@@ -19,7 +19,6 @@ pub use self::common::gensym_name;
 #[macro_use]
 mod macros;
 
-mod doc;
 mod inline;
 mod monomorphize;
 mod controlflow;
diff --git a/src/librustc_typeck/check/method/README.md b/src/librustc_typeck/check/method/README.md
new file mode 100644
index 00000000000..367273dc635
--- /dev/null
+++ b/src/librustc_typeck/check/method/README.md
@@ -0,0 +1,111 @@
+# Method lookup
+
+Method lookup can be rather complex due to the interaction of a number
+of factors, such as self types, autoderef, trait lookup, etc. This
+file provides an overview of the process. More detailed notes are in
+the code itself, naturally.
+
+One way to think of method lookup is that we convert an expression of
+the form:
+
+    receiver.method(...)
+
+into a more explicit UFCS form:
+
+    Trait::method(ADJ(receiver), ...) // for a trait call
+    ReceiverType::method(ADJ(receiver), ...) // for an inherent method call
+
+Here `ADJ` is some kind of adjustment, which is typically a series of
+autoderefs and then possibly an autoref (e.g., `&**receiver`). However,
+we sometimes do other adjustments and coercions along the way, in
+particular unsizing (e.g., converting from `[T; n]` to `[T]`).
+
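In surface syntax the sugared and UFCS forms are interchangeable, which makes the adjustment easy to see. A minimal sketch (the `Greet` trait and `Person` type are invented for illustration):

```rust
trait Greet {
    fn greet(&self) -> String;
}

struct Person;

impl Greet for Person {
    fn greet(&self) -> String {
        "hello".to_string()
    }
}

fn main() {
    let receiver = Box::new(Box::new(Person));

    // Sugared form: the compiler inserts the adjustment automatically.
    let a = receiver.greet();

    // Explicit UFCS form: `ADJ(receiver)` spelled out as `&**receiver`
    // (two autoderefs through the boxes, then an autoref).
    let b = Greet::greet(&**receiver);

    assert_eq!(a, b);
}
```

Writing the adjusted receiver by hand produces exactly the call that method lookup arranges for the sugared form.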
+## The Two Phases
+
+Method lookup is divided into two major phases: probing (`probe.rs`)
+and confirmation (`confirm.rs`). The probe phase is when we decide
+what method to call and how to adjust the receiver. The confirmation
+phase "applies" this selection, updating the side-tables, unifying
+type variables, and otherwise doing side-effectful things.
+
+One reason for this division is to be more amenable to caching.  The
+probe phase produces a "pick" (`probe::Pick`), which is designed to be
+cacheable across method-call sites. Therefore, it does not include
+inference variables or other information.
+
+## Probe phase
+
+The probe phase (`probe.rs`) decides what method is being called and
+how to adjust the receiver.
+
+### Steps
+
+The first thing that the probe phase does is to create a series of
+*steps*. This is done by progressively dereferencing the receiver type
+until it cannot be deref'd anymore, as well as applying an optional
+"unsize" step. So if the receiver has type `Rc<Box<[T; 3]>>`, this
+might yield:
+
+    Rc<Box<[T; 3]>>
+    Box<[T; 3]>
+    [T; 3]
+    [T]
+
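The effect of this chain is observable from ordinary code: a method defined on the final unsized step resolves through all the intermediate derefs. A small sketch:

```rust
use std::rc::Rc;

fn main() {
    let receiver: Rc<Box<[i32; 3]>> = Rc::new(Box::new([1, 2, 3]));

    // Method lookup walks the steps Rc<Box<[i32; 3]>> -> Box<[i32; 3]>
    // -> [i32; 3] -> [i32] (the final unsize step) and finds `len`
    // on the slice type `[i32]`.
    assert_eq!(receiver.len(), 3);
}
```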
+### Candidate assembly
+
+We then search along those steps to create a list of *candidates*. A
+`Candidate` is a method item that might plausibly be the method being
+invoked. For each candidate, we'll derive a "transformed self type"
+that takes into account explicit self.
+
+Candidates are grouped into two kinds, inherent and extension.
+
+**Inherent candidates** are those that are derived from the
+type of the receiver itself.  So, if you have a receiver of some
+nominal type `Foo` (e.g., a struct), any methods defined within an
+impl like `impl Foo` are inherent methods.  Nothing needs to be
+imported to use an inherent method; they are associated with the type
+itself (note that inherent impls can only be defined in the same
+module as the type itself).
+
+FIXME: Inherent candidates are not always derived from impls.  If you
+have a trait object, such as a value of type `Box<ToString>`, then the
+trait methods (`to_string()`, in this case) are inherently associated
+with it. Another case is type parameters, in which case the methods of
+their bounds are inherent. However, this part of the rules is subject
+to change: when DST's "impl Trait for Trait" is complete, trait object
+dispatch could be subsumed into trait matching, and the type parameter
+behavior should be reconsidered in light of where clauses.
+
+**Extension candidates** are derived from imported traits.  If I have
+the trait `ToString` imported, and I call `to_string()` on a value of
+type `T`, then we will go off to find out whether there is an impl of
+`ToString` for `T`.  These kinds of method calls are called "extension
+methods".  They can be defined in any module, not only the one that
+defined `T`.  Furthermore, you must import the trait to call such a
+method.
+
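A minimal sketch of an extension method (the module and trait names are invented for illustration):

```rust
mod widgets {
    pub struct Widget;
}

mod ext {
    // An extension trait, defined in a different module from `Widget`.
    pub trait Describe {
        fn describe(&self) -> &'static str;
    }

    impl Describe for super::widgets::Widget {
        fn describe(&self) -> &'static str {
            "a widget"
        }
    }
}

fn main() {
    // Without this import the call below fails to resolve, even though
    // the impl exists: extension methods require the trait in scope.
    use crate::ext::Describe;

    let w = crate::widgets::Widget;
    assert_eq!(w.describe(), "a widget");
}
```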
+So, let's continue our example. Imagine that we were calling a method
+`foo` with the receiver `Rc<Box<[T; 3]>>`. Suppose there is a trait
+`Foo` that defines `foo` with `&self` for the type `Rc<U>`, and that
+`Box<U>` also has an inherent method `foo` taking `&mut self`. Then we
+might have two candidates:
+
+    &Rc<Box<[T; 3]>> from the impl of `Foo` for `Rc<U>` where `U=Box<[T; 3]>`
+    &mut Box<[T; 3]> from the inherent impl on `Box<U>` where `U=[T; 3]`
+
+### Candidate search
+
+Finally, to actually pick the method, we will search down the steps,
+trying to match the receiver type against the candidate types. At
+each step, we also consider an auto-ref and auto-mut-ref to see whether
+that makes any of the candidates match. We pick the first step where
+we find a match.
+
+In the case of our example, the first step is `Rc<Box<[T; 3]>>`,
+which does not itself match any candidate. But when we autoref it, we
+get the type `&Rc<Box<[T; 3]>>` which does match. We would then
+recursively consider all where-clauses that appear on the impl: if
+those match (or we cannot rule out that they do), then this is the
+method we would pick. Otherwise, we would continue down the series of
+steps.
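The auto-mut-ref case is visible in everyday code; `Vec::push` is a convenient illustration:

```rust
fn main() {
    let mut v = vec![1, 2];

    // `Vec::push` takes `&mut self`. The receiver `v` has type
    // `Vec<i32>`, which matches no candidate directly; at the first
    // step the probe tries an auto-mut-ref, `&mut v`, which matches,
    // so the call is adjusted to `Vec::push(&mut v, 3)`.
    v.push(3);
    assert_eq!(v, [1, 2, 3]);

    // Equivalent explicit form after the adjustment is applied:
    Vec::push(&mut v, 4);
    assert_eq!(v, [1, 2, 3, 4]);
}
```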
diff --git a/src/librustc_typeck/check/method/doc.rs b/src/librustc_typeck/check/method/doc.rs
deleted file mode 100644
index d748266ed2e..00000000000
--- a/src/librustc_typeck/check/method/doc.rs
+++ /dev/null
@@ -1,121 +0,0 @@
-// Copyright 2014 The Rust Project Developers. See the COPYRIGHT
-// file at the top-level directory of this distribution and at
-// http://rust-lang.org/COPYRIGHT.
-//
-// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
-// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
-// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
-// option. This file may not be copied, modified, or distributed
-// except according to those terms.
-
-//! # Method lookup
-//!
-//! Method lookup can be rather complex due to the interaction of a number
-//! of factors, such as self types, autoderef, trait lookup, etc. This
-//! file provides an overview of the process. More detailed notes are in
-//! the code itself, naturally.
-//!
-//! One way to think of method lookup is that we convert an expression of
-//! the form:
-//!
-//!     receiver.method(...)
-//!
-//! into a more explicit UFCS form:
-//!
-//!     Trait::method(ADJ(receiver), ...) // for a trait call
-//!     ReceiverType::method(ADJ(receiver), ...) // for an inherent method call
-//!
-//! Here `ADJ` is some kind of adjustment, which is typically a series of
-//! autoderefs and then possibly an autoref (e.g., `&**receiver`). However
-//! we sometimes do other adjustments and coercions along the way, in
-//! particular unsizing (e.g., converting from `[T, ..n]` to `[T]`).
-//!
-//! ## The Two Phases
-//!
-//! Method lookup is divided into two major phases: probing (`probe.rs`)
-//! and confirmation (`confirm.rs`). The probe phase is when we decide
-//! what method to call and how to adjust the receiver. The confirmation
-//! phase "applies" this selection, updating the side-tables, unifying
-//! type variables, and otherwise doing side-effectful things.
-//!
-//! One reason for this division is to be more amenable to caching.  The
-//! probe phase produces a "pick" (`probe::Pick`), which is designed to be
-//! cacheable across method-call sites. Therefore, it does not include
-//! inference variables or other information.
-//!
-//! ## Probe phase
-//!
-//! The probe phase (`probe.rs`) decides what method is being called and
-//! how to adjust the receiver.
-//!
-//! ### Steps
-//!
-//! The first thing that the probe phase does is to create a series of
-//! *steps*. This is done by progressively dereferencing the receiver type
-//! until it cannot be deref'd anymore, as well as applying an optional
-//! "unsize" step. So if the receiver has type `Rc<Box<[T; 3]>>`, this
-//! might yield:
-//!
-//!     Rc<Box<[T; 3]>>
-//!     Box<[T; 3]>
-//!     [T; 3]
-//!     [T]
-//!
-//! ### Candidate assembly
-//!
-//! We then search along those steps to create a list of *candidates*. A
-//! `Candidate` is a method item that might plausibly be the method being
-//! invoked. For each candidate, we'll derive a "transformed self type"
-//! that takes into account explicit self.
-//!
-//! Candidates are grouped into two kinds, inherent and extension.
-//!
-//! **Inherent candidates** are those that are derived from the
-//! type of the receiver itself.  So, if you have a receiver of some
-//! nominal type `Foo` (e.g., a struct), any methods defined within an
-//! impl like `impl Foo` are inherent methods.  Nothing needs to be
-//! imported to use an inherent method, they are associated with the type
-//! itself (note that inherent impls can only be defined in the same
-//! module as the type itself).
-//!
-//! FIXME: Inherent candidates are not always derived from impls.  If you
-//! have a trait object, such as a value of type `Box<ToString>`, then the
-//! trait methods (`to_string()`, in this case) are inherently associated
-//! with it. Another case is type parameters, in which case the methods of
-//! their bounds are inherent. However, this part of the rules is subject
-//! to change: when DST's "impl Trait for Trait" is complete, trait object
-//! dispatch could be subsumed into trait matching, and the type parameter
-//! behavior should be reconsidered in light of where clauses.
-//!
-//! **Extension candidates** are derived from imported traits.  If I have
-//! the trait `ToString` imported, and I call `to_string()` on a value of
-//! type `T`, then we will go off to find out whether there is an impl of
-//! `ToString` for `T`.  These kinds of method calls are called "extension
-//! methods".  They can be defined in any module, not only the one that
-//! defined `T`.  Furthermore, you must import the trait to call such a
-//! method.
-//!
-//! So, let's continue our example. Imagine that we were calling a method
-//! `foo` with the receiver `Rc<Box<[T; 3]>>` and there is a trait `Foo`
-//! that defines it with `&self` for the type `Rc<U>` as well as a method
-//! on the type `Box` that defines `Foo` but with `&mut self`. Then we
-//! might have two candidates:
-//!
-//!     &Rc<Box<[T; 3]>> from the impl of `Foo` for `Rc<U>` where `U=Box<T; 3]>
-//!     &mut Box<[T; 3]>> from the inherent impl on `Box<U>` where `U=[T; 3]`
-//!
-//! ### Candidate search
-//!
-//! Finally, to actually pick the method, we will search down the steps,
-//! trying to match the receiver type against the candidate types. At
-//! each step, we also consider an auto-ref and auto-mut-ref to see whether
-//! that makes any of the candidates match. We pick the first step where
-//! we find a match.
-//!
-//! In the case of our example, the first step is `Rc<Box<[T; 3]>>`,
-//! which does not itself match any candidate. But when we autoref it, we
-//! get the type `&Rc<Box<[T; 3]>>` which does match. We would then
-//! recursively consider all where-clauses that appear on the impl: if
-//! those match (or we cannot rule out that they do), then this is the
-//! method we would pick. Otherwise, we would continue down the series of
-//! steps.
diff --git a/src/librustc_typeck/check/method/mod.rs b/src/librustc_typeck/check/method/mod.rs
index 55b4dae5b9e..ffbc8ad020a 100644
--- a/src/librustc_typeck/check/method/mod.rs
+++ b/src/librustc_typeck/check/method/mod.rs
@@ -32,7 +32,6 @@ pub use self::CandidateSource::*;
 pub use self::suggest::{report_error, AllTraitsVec};
 
 mod confirm;
-mod doc;
 mod probe;
 mod suggest;