Unfortunately the `parking_lot` crate enables the `synchapi` feature of the
`winapi` crate, which activates a dependency on `libsynchronization.a`. The
MinGW version of `libsynchronization.a` pulls in a dependency on
`api-ms-core-synch-l1-2-0.dll`, which prevents rustc from working on Windows 7
(tracked in #49438).
The `parking_lot` crate is not currently used in the compiler unless parallel
queries are enabled. That feature is not enabled by default and not used at all
in the beta/stable compilers. As a result, this commit removes the dependency
and disables the CI build which checks parallel queries.
This isn't a great long-term solution but should hopefully be enough of a patch
for beta to buy us some more time to figure this out.
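As an illustration of the general pattern (the feature name and version below
are hypothetical, not rustc's actual setup), a dependency like this can be
confined behind an off-by-default Cargo feature so that default builds never
activate it:

    // Cargo.toml (illustrative):
    //
    //     [features]
    //     parallel = ["parking_lot"]
    //
    //     [dependencies]
    //     parking_lot = { version = "0.5", optional = true }

    // With the feature off, only std's Mutex is compiled in, so winapi's
    // `synchapi` feature (and libsynchronization.a) is never activated.
    #[cfg(feature = "parallel")]
    use parking_lot::Mutex;
    #[cfg(not(feature = "parallel"))]
    use std::sync::Mutex;

    fn main() {
        let counter = Mutex::new(0u32);
        // std's lock() returns a Result while parking_lot's does not, so
        // real code would paper over the difference with a shim type.
        #[cfg(not(feature = "parallel"))]
        let mut guard = counter.lock().unwrap();
        #[cfg(feature = "parallel")]
        let mut guard = counter.lock();
        *guard += 1;
    }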
|
|
This builder has started to time out frequently, causing PRs to bounce, and
otherwise doesn't seem to be catching many bugs, so this commit removes it
entirely. We've had a number of timeouts in the last few weeks related to this
builder:
* https://travis-ci.org/rust-lang/rust/jobs/360947582
* https://travis-ci.org/rust-lang/rust/jobs/360464190
* https://travis-ci.org/rust-lang/rust/jobs/359946975
* https://travis-ci.org/rust-lang/rust/jobs/361213241
* https://travis-ci.org/rust-lang/rust/jobs/362346279
* https://travis-ci.org/rust-lang/rust/jobs/362072331
On a good run this builder takes about 2h15m, which is already too long for
Travis, and the variable build times occasionally push it beyond the 3h limit.
The timeouts here are somewhat expected: an incrementally compiled rustc isn't
optimized like a normal rustc, as incremental compilation disallows inlining
between codegen units and loses lots of optimization opportunities.
|
|
|
|
* Bump the bootstrap compiler to the stable release
* Update the channel to beta
|
|
This commit disables building documentation on cross-compiled compilers, for
example ARM/MIPS/PowerPC/etc. Currently I believe we're not getting much use
out of these documentation artifacts, and they often take 10-15 minutes total
to build, as doing so requires building rustdoc/rustbook and then generating
all the documentation, especially for the reference and the book itself.
In an effort to cut down on the amount of work we're doing on dist CI builders
in light of recent timeouts, this was some relatively low-hanging fruit to cut
which in theory won't have much impact on the ecosystem, in the hope that this
documentation isn't used too heavily anyway.
While initial analysis in #48827 showed this shaving only 5 minutes off local
builds, the same 5-minute conclusion was drawn for #48826, which ended up
having nearly a half-hour impact on the bots. In that sense I'm hoping that we
can land this and test out what happens on CI to see how it affects timing.
Note that all tier 1 platforms (Windows, Mac, and Linux) will continue to
generate documentation.
|
|
ci: Don't use Travis caches for docker images
This commit moves away from caching on Travis to our own caching on S3 for
caching docker layers between builds. Unfortunately the Travis caches have over
time had a few critical pain points:
* Caches are only updated for successful builds, meaning that if a build times
out or fails in a different location the successfully-created docker images
aren't always cached. While this makes sense as a general rule for caches, it
hurts our use cases.
* Caches are per-branch and per-builder, which means that we don't have a
separate cache on each release channel. All our merges go through the `auto`
branch, which means that they're all sharing the same cache, even those for
merging to master/beta. This means that PRs which switch between master/beta
will keep rebuilding and having cache misses.
* Caches have historically been invalidated somewhat more aggressively than
we'd want (I think).
* We don't always need to update the contents of the cache if the Docker image
didn't change at all, and saving off the docker layers can sometimes be quite
expensive.
For all these reasons this commit drops the usage of Travis's built-in caching
support. Instead we do our own caching by storing blobs to S3. Normally this
would be a very risky endeavour, but we're basically priming a cache for a
cache (docker), so if we get this wrong the failure mode is longer builds, not
stale caches. We'll notice that pretty quickly and hopefully fix it!
The logic here is inserted directly into the `src/ci/docker/run.sh` script to
download an image based on a shasum of the `Dockerfile` and other assorted files.
This blob, if found, is loaded into docker and we record what layers were
inserted. After docker finishes the build (hopefully quickly with lots of cache
hits) we then see the sha of the final image. If it's one of the layers we
loaded then there's no need to update the cache. Otherwise we upload our layers
to the global cache, possibly overwriting what we previously just downloaded.
This is hopefully a step towards mitigating #49278 although it doesn't
completely fix it as it means we'll still probably have to retry builds that
bust the cache.
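For illustration only, here's a rough Rust sketch of that decision flow (the
real logic is shell in `src/ci/docker/run.sh`, and the bucket URL below is
made up):

    use std::collections::hash_map::DefaultHasher;
    use std::fs;
    use std::hash::{Hash, Hasher};

    // Derive a cache key from the Dockerfile's contents (the real script
    // also hashes the other files the image depends on).
    fn cache_key(dockerfile: &str) -> u64 {
        let mut hasher = DefaultHasher::new();
        let bytes = fs::read(dockerfile).expect("failed to read Dockerfile");
        bytes.hash(&mut hasher);
        hasher.finish()
    }

    fn main() {
        let key = cache_key("src/ci/docker/x86_64-gnu/Dockerfile");
        // Hypothetical bucket name for illustration.
        let url = format!(
            "https://rust-lang-ci.example.s3.amazonaws.com/docker/{:016x}.tar",
            key,
        );
        // 1. Try to download `url` and `docker load` it, recording the sha
        //    of every layer that was loaded.
        // 2. Run `docker build`; if the final image's sha is one of the
        //    loaded layers, the cache is already current -- skip the upload.
        // 3. Otherwise `docker save` the image and upload it to `url`,
        //    possibly overwriting what was just downloaded.
        println!("cache blob for this image: {}", url);
    }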
|
|
Host compiler documentation
Fixes #29893. Rust Central Station PR: rust-lang/rust-central-station#40
r? @alexcrichton
|
|
Add a CI job for parallel rustc using x.py check
r? @alexcrichton
|
|
This commit adds a new attribute to the Rust compiler specific to the wasm
target (and no other targets). The `#[wasm_import_module]` attribute is used to
specify the module that a name is imported from, and is used like so:
    #[wasm_import_module = "./foo.js"]
    extern {
        fn some_js_function();
    }
Here the import of the symbol `some_js_function` is tagged with the `./foo.js`
module in the wasm output file. Wasm-the-format includes two fields on all
imports, a module and a field. The field is the symbol name (`some_js_function`
above) and the module has historically unconditionally been `"env"`. I'm not
sure if this `"env"` convention has asm.js or LLVM roots, but regardless we'd
like the ability to configure it!
The proposed ES module integration with wasm (aka a wasm module is "just another
ES module") requires that the import module of wasm imports is interpreted as an
ES module import, meaning that you'll need to encode paths, NPM packages, etc.
As a result, we'll need this to be something other than `"env"`!
Unfortunately neither our version of LLVM nor LLD supports custom import modules
(aka anything not `"env"`). My hope is that by the time LLVM 7 is released both
will have support, but in the meantime this commit adds some primitive
encoding/decoding of wasm files to the compiler. This way rustc postprocesses
the wasm module that LLVM emits to ensure it's got all the imports we'd like to
have in it.
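As a rough sketch of what that postprocessing has to deal with (illustrative
code, not rustc's actual implementation): a wasm file is a magic number and
version followed by sections, each an id byte plus a LEB128-encoded size, and
imports live in the section with id 2:

    // Decode an unsigned LEB128 integer, the variable-length encoding wasm
    // uses for sizes and counts.
    fn uleb128(bytes: &[u8], pos: &mut usize) -> u32 {
        let (mut value, mut shift) = (0u32, 0);
        loop {
            let byte = bytes[*pos];
            *pos += 1;
            value |= u32::from(byte & 0x7f) << shift;
            if byte & 0x80 == 0 {
                return value;
            }
            shift += 7;
        }
    }

    // Walk the sections of a wasm module and return the offset and length
    // of the import section (id 2), whose module strings ("env" by default)
    // are what need rewriting.
    fn find_import_section(wasm: &[u8]) -> Option<(usize, usize)> {
        assert_eq!(&wasm[..8], b"\0asm\x01\x00\x00\x00");
        let mut pos = 8;
        while pos < wasm.len() {
            let id = wasm[pos];
            pos += 1;
            let len = uleb128(wasm, &mut pos) as usize;
            if id == 2 {
                return Some((pos, len));
            }
            pos += len;
        }
        None
    }

    fn main() {
        // An empty module: just the magic and version, no sections.
        let module = b"\0asm\x01\x00\x00\x00";
        assert_eq!(find_import_section(module), None);
    }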
Eventually I'd ideally like to unconditionally require this attribute to be
placed on all `extern { ... }` blocks. For now, though, it seemed prudent to
add it as an unstable attribute, so it's not yet required (as that'd force
usage of a feature gate). Hopefully it doesn't take too long to "stabilize"
this!
cc rust-lang-nursery/rust-wasm#29
|
|
Rollup of 23 pull requests
- Successful merges: #48374, #48596, #48759, #48939, #49029, #49069, #49093, #49109, #49117, #49140, #49158, #49188, #49189, #49209, #49211, #49216, #49225, #49231, #49234, #49242, #49244, #49105, #49038
- Failed merges:
|
|
This reverts commit 9f792e199bc53a75afdad72547a151a0bc86ec5d.
|
|
Update submodules in parallel
|
|
ci: Print out how long each step takes on CI
This commit updates CI configuration to inform rustbuild that it should print
out how long each step takes on CI. This'll hopefully allow us to track the
duration of steps over time and follow regressions a bit more closely (as well
as have closer analysis of differences between two builds).
cc #48829
|
|
Unfortunately we don't have sufficient time to rebuild the cache *and*
distribute everything in `dist-x86_64-linux alt`; the debug assertions are
really slow.
We will re-enable them after the PR has been merged, thus successfully updating
the cache (freeing up 40 minutes) and giving us enough time to build these
tools.
|
|
|
|
The former seems much more stable, in case the cache becomes invalidated.
|
|
|
|
|
|
|
|
|
|
Many test suites run on the asmjs builder, like compile-fail, ui, and
parse-fail, aren't actually specific to asm.js. Instead of running redundant
test suites, this commit changes things up to only run tests that actually
emit JS that we then pass to node.
|
|
Since all tests on Emscripten are effectively compiled with LTO, this commit
disables optimizations to hopefully squeeze some more time out of the CI
builders.
Closes #48826
|
|
rustbuild: Remove ThinLTO-related configuration
This commit removes some ThinLTO/codegen unit cruft that was primarily needed
only during the initial phase of adding ThinLTO support to rustc itself. The
current bootstrap compiler knows about ThinLTO and has it enabled by default
for multi-CGU builds, which are also enabled by default. Single-CGU builds
(aka disabling ThinLTO) can be achieved by configuring the number of codegen
units to 1 for a particular build.
This also changes the defaults for our dist builders to go back to multiple
CGUs. Unfortunately we're seriously hurting for cycle time on the bots right
now, so we need to recover any time we can.
|
|
|
|
|
|
Take two more slow builders and split them in two to get them under 2 hrs
|
|
rustbuild: Pass `ccache` to build scripts
This is a re-attempt at #48192 hopefully this time with 100% less randomly
[blocking builds for 20 minutes][block]. To work around #48192 the sccache
server is started in the `run.sh` script very early on in the compilation
process.
[block]: https://github.com/rust-lang/rust/issues/48192
|
|
This commit imports the LLD project from LLVM to serve as the default linker
for the `wasm32-unknown-unknown` target. The `binaryen` submodule is
consequently removed, along with "binaryen linker" support in rustc.
Moving to LLD brings with it a number of benefits for wasm code:
* LLD is itself an actual linker, so there's no need to compile all wasm code
with LTO any more. As a result builds should be *much* speedier as LTO is no
longer forcibly enabled for all builds of the wasm target.
* LLD is quickly becoming an "official solution" for linking wasm code together.
This, I believe at least, is intended to be the main supported linker for
native code and wasm moving forward. Picking up support early on should help
ensure that we can help LLD identify bugs and otherwise prove that it works
great for all our use cases!
* Improvements to the wasm toolchain are currently primarily focused around LLVM
and LLD (from what I can tell at least), so it's in general much better to be
on this bandwagon for bugfixes and new features.
* Historical "hacks" like `wasm-gc` will soon no longer be necessary, LLD
will [natively implement][gc] `--gc-sections` (better than `wasm-gc`!) which
means a postprocessor is no longer needed to show off Rust's "small wasm
binary size".
LLD is added in a pretty standard way to rustc right now. A new rustbuild target
was defined for building LLD, and this is executed when a compiler's sysroot is
being assembled. LLD is compiled against the LLVM that we've got in tree, which
means we're currently on the `release_60` branch, but this may get upgraded in
the near future!
LLD is placed into rustc's sysroot in a `bin` directory. This is similar to
where `gcc.exe` can be found on Windows. This directory is automatically added
to `PATH` whenever rustc executes the linker, allowing us to define a `WasmLd`
linker which implements the interface that `wasm-ld`, LLD's frontend, expects.
Like Emscripten, LLD is currently only enabled for Tier 1 platforms,
notably OSX/Windows/Linux, and will need to be installed manually for compiling
to wasm on other platforms. LLD is by default turned off in rustbuild, and
requires a `config.toml` option to be enabled to turn it on.
Finally the unstable `#![wasm_import_memory]` attribute was also removed as LLD
has a native option for controlling this.
[gc]: https://reviews.llvm.org/D42511
|
|
Remove --host and --target arguments to configure in Dockerfiles
These arguments are passed to the relevant x.py invocation in all cases
anyway. As such, there is no need to separately configure them. x.py
will ignore the configuration when they are passed on the command line
anyway.
r? @alexcrichton
|
|
|
|
|
|
|
|
|
|
fix typos in src/{bootstrap,ci,etc,lib{backtrace,core,fmt_macros}}
via codespell
|
|
|
|
ci: Actually bootstrap on i686 dist
Right now the `--build` option was accidentally omitted, so we're bootstrapping
from `x86_64` to `i686`. In addition to being slower (more compiles), that's
not actually bootstrapping!
|
|
The following submodules have been updated for a new version of LLVM:
- `src/llvm`
- `src/libcompiler_builtins` - transitively contains compiler-rt
- `src/dlmalloc`
This also updates the docker container for dist-i686-freebsd as the old 16.04
container is no longer capable of building LLVM. The
compiler-rt/compiler-builtins and dlmalloc updates are pretty routine, without
much of interest happening, but the LLVM update here is of particular note.
Unlike previous updates I haven't cherry-picked all existing patches we had on
top of our LLVM branch as we have a [huge amount][patches4] and have at this
point forgotten what most of them are for. Instead I started from the current
`release_60` branch in LLVM and only applied patches that were necessary to get
our tests working and building.
The current set of custom rustc-specific patches included in this LLVM update are:
* rust-lang/llvm@1187443 - this is how we actually implement
`cfg(target_feature)` for now and continues to not be upstreamed. While a
hazard for SIMD stabilization this commit is otherwise keeping the status
quo of a small rustc-specific feature.
* rust-lang/llvm@013f2ec - this is a rustc-specific optimization that we haven't
upstreamed, notably teaching LLVM about our allocation-related routines (which
aren't malloc/free). Once we stabilize the global allocator routines we will
likely want to upstream this patch, but for now it seems reasonable to keep it
on our fork.
* rust-lang/llvm@a65bbfd - I found this necessary to fix compilation of LLVM in
our 32-bit linux container. I'm not really sure why it's necessary but my
guess is that it's because of the absolutely ancient glibc that we're using.
In any case it's only updating pieces we're not actually using in LLVM so I'm
hoping it'll turn out alright. This doesn't seem like something we'll want to
upstream.
* rust-lang/llvm@77ab1f0 - this is what's actually enabling LLVM to build in
our i686-freebsd container. I'm not really sure what's going on, but we almost
certainly don't want to upstream this, and otherwise it seems not too bad for
now at least.
* rust-lang/llvm@9eb9267 - we currently suffer on MSVC from an [upstream bug]
which, although diagnosed to a particular revision, isn't currently fixed
upstream (and the bug itself doesn't seem too active). This commit is a
partial revert of the suspected cause of this regression (found via a
bisection). I'm sort of hoping that this eventually gets fixed upstream with a
similar fix (which we can replace in our branch), but for now I'm also hoping
it's a relatively harmless change to have.
After applying these patches (plus one [backport] which should be [backported
upstream][llvm-back]) I believe we should have all tests working on all
platforms in our current test suite. I'm like 99% sure that we'll need some more
backports as issues are reported for LLVM 6 when this propagates through
nightlies, but that's sort of just par for the course nowadays!
In any case though some extra scrutiny of the patches here would definitely be
welcome, along with scrutiny of the "missing patches" like a [change to pass
manager order](rust-lang/llvm@27174447533), [another change to pass manager
order](rust-lang/llvm@c782febb7b9), some [compile fixes for
sparc](rust-lang/llvm@1a83de63c42), and some [fixes for
solaris](rust-lang/llvm@c2bfe0abb).
[patches4]: https://github.com/rust-lang/llvm/compare/5401fdf23...rust-llvm-release-4-0-1
[backport]: https://github.com/rust-lang/llvm/commit/5c54c252db
[llvm-back]: https://bugs.llvm.org/show_bug.cgi?id=36114
[upstream bug]: https://bugs.llvm.org/show_bug.cgi?id=36096
---
The update to LLVM 6 is desirable for a number of reasons, notably:
* This'll allow us to keep up with the upstream wasm backend, picking up new
features as they start landing.
* Upstream LLVM has fixed a number of SIMD-related compilation errors,
especially around AVX-512 and such.
* There are a few assorted known bugs which are fixed in LLVM 5 but aren't
fixed in the LLVM 4 branch we're using.
* Overall it's not a great idea to stagnate with our codegen backend!
This update is mostly powered by #47730 which is allowing us to update LLVM
*independent* of the version of LLVM that Emscripten is locked to. This means
that when compiling code for Emscripten we'll still be using the old LLVM 4
backend, but when compiling code for any other target we'll be using the new
LLVM 6 target. Once Emscripten updates we may no longer need this distinction,
but we're not sure when that will happen!
Closes #43370
Closes #43418
Closes #47015
Closes #47683
Closes rust-lang-nursery/stdsimd#157
Closes rust-lang-nursery/rust-wasm#3
|
|
|
|
|
|
Dist builds should always be as fast as we can make them, and since those
builds run on CI we don't care quite as much if the build itself is somewhat
slower. As such, we don't automatically enable ThinLTO for the dist builders.
|
|
This commit introduces a separately compiled backend for Emscripten, avoiding
compiling the `JSBackend` target in the main LLVM codegen backend. This builds
on the foundation provided by #47671 to create a new codegen backend dedicated
solely to Emscripten, removing the `JSBackend` of the main codegen backend in
the process.
A new field was added to each target in this commit which specifies the
backend to use for translation; the default is `llvm`, the main backend that
we use. The Emscripten targets specify an `emscripten` backend instead of the
main `llvm` one.
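A tiny sketch of the idea (the function name here is made up for illustration,
not rustc's actual internals):

    // Hypothetical mirror of the new per-target field: "llvm" for almost
    // everything, "emscripten" for the Emscripten targets.
    fn codegen_backend(target: &str) -> &'static str {
        match target {
            "asmjs-unknown-emscripten" | "wasm32-unknown-emscripten" => "emscripten",
            _ => "llvm",
        }
    }

    fn main() {
        assert_eq!(codegen_backend("x86_64-unknown-linux-gnu"), "llvm");
        assert_eq!(codegen_backend("wasm32-unknown-emscripten"), "emscripten");
    }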
There's a whole bunch of consequences of this change, but I'll try to enumerate
them here:
* A *second* LLVM submodule was added in this commit. The main LLVM submodule
will soon start to drift from the Emscripten submodule, but currently they're
both at the same revision.
* Logic was added to rustbuild to *not* build the Emscripten backend by default.
This is gated behind a `--enable-emscripten` flag to the configure script. By
default users should neither check out the emscripten submodule nor compile
it.
* The `init_repo.sh` script was updated to fetch the Emscripten submodule from
GitHub the same way we do the main LLVM submodule (a tarball fetch).
* The Emscripten backend, turned off by default, is still turned on for a number
of targets on CI. We'll only be shipping an Emscripten backend with Tier 1
platforms, though. All cross-compiled platforms will not be receiving an
Emscripten backend yet.
This commit means that when you download the `rustc` package in Rustup for Tier
1 platforms you'll be receiving two trans backends, one for Emscripten and one
that's the general LLVM backend. If you never compile for Emscripten you'll
never use the Emscripten backend, so we may update this one day to only download
the Emscripten backend when you add the Emscripten target. For now though it's
just an extra 10MB gzip'd.
Closes #46819
|
|
Do not assume dynamic linking for musl/mips[el] targets
All musl targets except mips[el] assume static linking by default. This can be [confusing](https://users.rust-lang.org/t/static-cross-compiled-binaries-arent-really-static/6084).
When the musl/mips[el] targets were [added](https://github.com/rust-lang/rust/pull/31298), dynamic linking was chosen because of binary size concerns, and probably also because libunwind [didn't](https://users.rust-lang.org/t/static-cross-compiled-binaries-arent-really-static/6084/8) support mips.
Now that we have the `crt-static` target feature (so the user can choose dynamic linking for musl targets), and libunwind [6.0](https://github.com/llvm-mirror/libunwind/commits/release_60) adds support for mips, we no longer need to assume dynamic linking.
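For example (a minimal sketch; the `crt-static` cfg check and `-C
target-feature` flags are the standard mechanisms, while the exact commands
are illustrative):

    // Build dynamically linked (the old mips[el]-musl default):
    //     RUSTFLAGS="-C target-feature=-crt-static" \
    //         cargo build --target mips-unknown-linux-musl
    // Build statically linked (the default this change moves to):
    //     RUSTFLAGS="-C target-feature=+crt-static" \
    //         cargo build --target mips-unknown-linux-musl
    fn main() {
        if cfg!(target_feature = "crt-static") {
            println!("the C runtime is linked statically");
        } else {
            println!("the C runtime is linked dynamically");
        }
    }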
|
|
The i686 problem was fixed upstream:
https://github.com/llvm-mirror/libunwind/commit/aa805e415f19f50ebc6f5e1e1e4bf9bb7f61816b
|
|
Looks like LLVM 4.0.0 isn't ready for that either.
|
|
First round of LLVM 6.0.0 compatibility
This includes a number of commits for the first round of upgrading to LLVM 6. There are still [lingering bugs](https://github.com/rust-lang/rust/issues/47683) but I believe all of this will nonetheless be necessary!
|
|
Looks like the clang shipped with 16.04 fails to compile LLVM 6, but the clang
in 18.04 can indeed compile it.
|
|
Now that the Rust codebase depends on cc 1.0.4, there is no longer any
need to specify a compiler for CloudABI manually. Cargo will
automatically call into the right compiler executable.
|
|
|