For enum variants, the default alignment for a specific variant might be
lower than the alignment of the enum type itself. In such cases we could, for
example, end up generating memcpy calls with an alignment higher than the
alignment of the constant we copy from.
To avoid that, we need to explicitly set the required alignment on
constants.
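As a hedged illustration (this is not the test case from the issue), consider an enum where one variant would only need 1-byte alignment while the enum type itself is 8-byte aligned:
```rust
// Hypothetical example: `Small` only holds a u8, but the enum as a whole is
// 8-byte aligned because of the u64 in `Big`. A constant of the `Small`
// variant therefore has to be emitted with the enum's alignment, so that a
// memcpy using that alignment reads from suitably aligned memory.
enum E {
    Small(u8),
    Big(u64),
}

static SMALL: E = E::Small(1);
```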
Fixes #28912.
|
|
A DST value and a fat pointer to it have the same representation; all we
have to do is adjust the type of the datum holding the pointer.
|
|
By putting an "unreachable" instruction into the default arm of a switch
instruction we can let LLVM know that the match is exhaustive, allowing
for better optimizations.
For example, this match:
```rust
pub enum Enum {
    One,
    Two,
    Three,
}

impl Enum {
    pub fn get_disc(self) -> u8 {
        match self {
            Enum::One => 0,
            Enum::Two => 1,
            Enum::Three => 2,
        }
    }
}
```
Currently compiles to this on x86_64:
```asm
    .cfi_startproc
    movzbl  %dil, %ecx
    cmpl    $1, %ecx
    setne   %al
    testb   %cl, %cl
    je      .LBB0_2
    incb    %al
    movb    %al, %dil
.LBB0_2:
    movb    %dil, %al
    retq
.Lfunc_end0:
```
But with this change we get:
```asm
    .cfi_startproc
    movb    %dil, %al
    retq
.Lfunc_end0:
```
|
|
This is so LLVM isn't forced to load every byte of it. Also sets the alignment of
the load. Adds a test for the debug script section.
|
|
That allows us to keep using trans_into() in the case of adjustments that
may actually be ignored in trans because they are a plain deref/ref pair
with no overloaded deref or unsizing.
Unoptimized (!) benchmarks from servo/servo#7638:
Before
```
test goser::bench_clone ... bench: 17,701 ns/iter (+/- 58) = 30 MB/s
test goser::bincode::bench_decoder ... bench: 33,715 ns/iter (+/- 300) = 11 MB/s
test goser::bincode::bench_deserialize ... bench: 36,804 ns/iter (+/- 329) = 9 MB/s
test goser::bincode::bench_encoder ... bench: 34,695 ns/iter (+/- 149) = 11 MB/s
test goser::bincode::bench_populate ... bench: 18,879 ns/iter (+/- 88)
test goser::bincode::bench_serialize ... bench: 31,668 ns/iter (+/- 156) = 11 MB/s
test goser::capnp::bench_deserialize ... bench: 2,049 ns/iter (+/- 87) = 218 MB/s
test goser::capnp::bench_deserialize_packed ... bench: 10,707 ns/iter (+/- 258) = 31 MB/s
test goser::capnp::bench_populate ... bench: 635 ns/iter (+/- 5)
test goser::capnp::bench_serialize ... bench: 35,657 ns/iter (+/- 155) = 12 MB/s
test goser::capnp::bench_serialize_packed ... bench: 37,881 ns/iter (+/- 146) = 8 MB/s
test goser::msgpack::bench_decoder ... bench: 50,634 ns/iter (+/- 307) = 5 MB/s
test goser::msgpack::bench_encoder ... bench: 25,738 ns/iter (+/- 90) = 11 MB/s
test goser::msgpack::bench_populate ... bench: 18,900 ns/iter (+/- 138)
test goser::protobuf::bench_decoder ... bench: 2,791 ns/iter (+/- 29) = 102 MB/s
test goser::protobuf::bench_encoder ... bench: 75,414 ns/iter (+/- 358) = 3 MB/s
test goser::protobuf::bench_populate ... bench: 19,248 ns/iter (+/- 92)
test goser::rustc_serialize_json::bench_decoder ... bench: 109,999 ns/iter (+/- 797) = 5 MB/s
test goser::rustc_serialize_json::bench_encoder ... bench: 58,777 ns/iter (+/- 418) = 10 MB/s
test goser::rustc_serialize_json::bench_populate ... bench: 18,887 ns/iter (+/- 76)
test goser::serde_json::bench_deserializer ... bench: 104,803 ns/iter (+/- 770) = 5 MB/s
test goser::serde_json::bench_populate ... bench: 18,890 ns/iter (+/- 69)
test goser::serde_json::bench_serializer ... bench: 75,046 ns/iter (+/- 435) = 8 MB/s
```
After
```
test goser::bench_clone ... bench: 16,052 ns/iter (+/- 188) = 34 MB/s
test goser::bincode::bench_decoder ... bench: 31,194 ns/iter (+/- 941) = 12 MB/s
test goser::bincode::bench_deserialize ... bench: 33,934 ns/iter (+/- 352) = 10 MB/s
test goser::bincode::bench_encoder ... bench: 30,737 ns/iter (+/- 1,969) = 13 MB/s
test goser::bincode::bench_populate ... bench: 17,234 ns/iter (+/- 176)
test goser::bincode::bench_serialize ... bench: 28,269 ns/iter (+/- 452) = 12 MB/s
test goser::capnp::bench_deserialize ... bench: 2,019 ns/iter (+/- 85) = 221 MB/s
test goser::capnp::bench_deserialize_packed ... bench: 10,662 ns/iter (+/- 527) = 31 MB/s
test goser::capnp::bench_populate ... bench: 607 ns/iter (+/- 2)
test goser::capnp::bench_serialize ... bench: 30,488 ns/iter (+/- 219) = 14 MB/s
test goser::capnp::bench_serialize_packed ... bench: 33,731 ns/iter (+/- 201) = 9 MB/s
test goser::msgpack::bench_decoder ... bench: 46,921 ns/iter (+/- 461) = 6 MB/s
test goser::msgpack::bench_encoder ... bench: 22,315 ns/iter (+/- 96) = 12 MB/s
test goser::msgpack::bench_populate ... bench: 17,268 ns/iter (+/- 73)
test goser::protobuf::bench_decoder ... bench: 2,658 ns/iter (+/- 44) = 107 MB/s
test goser::protobuf::bench_encoder ... bench: 71,024 ns/iter (+/- 359) = 4 MB/s
test goser::protobuf::bench_populate ... bench: 17,704 ns/iter (+/- 104)
test goser::rustc_serialize_json::bench_decoder ... bench: 107,867 ns/iter (+/- 759) = 5 MB/s
test goser::rustc_serialize_json::bench_encoder ... bench: 52,327 ns/iter (+/- 479) = 11 MB/s
test goser::rustc_serialize_json::bench_populate ... bench: 17,262 ns/iter (+/- 68)
test goser::serde_json::bench_deserializer ... bench: 99,156 ns/iter (+/- 657) = 6 MB/s
test goser::serde_json::bench_populate ... bench: 17,264 ns/iter (+/- 77)
test goser::serde_json::bench_serializer ... bench: 66,135 ns/iter (+/- 392) = 9 MB/s
```
|
|
Currently, we're generating adjustments, for example, to get from &[u8]
to &[u8], which are unneeded and kick us out of trans_into() into
trans(), which means an additional stack slot and copy in the unoptimized
code.
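A hedged sketch of the kind of code where such a no-op &[u8] -> &[u8] adjustment could show up (names are illustrative):
```rust
// The argument already has type &[u8], so the &[u8] -> &[u8] adjustment is a
// no-op; previously it still pushed the value out of trans_into() and into
// trans(), costing an extra stack slot and copy in unoptimized builds.
fn takes_slice(_s: &[u8]) {}

fn caller(s: &[u8]) {
    takes_slice(s);
}
```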
|
|
Unwinding across an FFI boundary is undefined behaviour, so we can mark
all external functions as nounwind. The obvious exceptions are those
functions that actually perform the unwinding.
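For example (the external function below is hypothetical), a declaration like this can carry the nounwind attribute in the generated IR, since unwinding out of it would already be undefined behaviour:
```rust
extern "C" {
    // Hypothetical C function: unwinding across this boundary is UB, so the
    // generated declaration can be marked nounwind.
    fn process_byte(b: u8) -> i32;
}

fn call_ffi(b: u8) -> i32 {
    unsafe { process_byte(b) }
}
```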
|
|
Intrinsics never have an address, so it doesn't make sense to say that their
address is unnamed.
|
|
|
|
Fixes #27467.
|
|
This has a number of advantages compared to creating a copy in memory
and passing a pointer. The obvious one is that we don't have to put the
data into memory but can keep it in registers. Since we're currently
passing a pointer anyway (instead of using e.g. a known offset on the
stack, which is what the `byval` attribute would achieve), we only use a
single additional register for each fat pointer, but save at least two
pointers worth of stack in exchange (sometimes more because more than
one copy gets eliminated). On archs that pass arguments on the stack, we
save a pointer worth of stack even without considering the omitted
copies.
Additionally, LLVM can optimize the code a lot better, to a large degree
due to the fact that lots of copies are gone or can be optimized away.
Furthermore, we can now emit attributes like nonnull on the data and/or
vtable pointers contained in the fat pointer, potentially allowing for
even more optimizations.
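As a hedged illustration (not an example from the commit itself): a slice argument is a fat pointer made up of a data pointer and a length, and with this change the two components are passed as separate scalar arguments rather than through a pointer to a (data, length) pair on the stack:
```rust
// `bytes` is a fat pointer (data pointer + length). Passing the two
// components directly keeps them in registers on most targets and allows
// attributes such as nonnull to be attached to the data pointer.
pub fn sum(bytes: &[u8]) -> u64 {
    bytes.iter().map(|&b| b as u64).sum()
}
```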
This results in LLVM passes being about 3-7% faster (depending on the
crate), and the resulting code is also a few percent smaller, for
example:
   text    data filename
5671479 3941461 before/librustc-d8ace771.so
5447663 3905745 after/librustc-d8ace771.so
1944425 2394024 before/libstd-d8ace771.so
1896769 2387610 after/libstd-d8ace771.so
I had to remove a call in the backtrace-debuginfo test, because LLVM can
now merge the tails of some blocks when optimizations are turned on, and
that merging cannot correctly preserve line info.
Fixes #22924
Cc #22891 (at least for fat pointers the code is good now)
|
|
Unlike coercing from reference to unsafe pointer, coercing between two
unsafe pointers doesn't need an AutoDerefRef, because there is no region
that regionck would need to know about.
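A minimal illustration of the two cases (the function names are made up):
```rust
// Coercing a reference to a raw pointer involves a region (the reference's
// lifetime), so it goes through an AutoDerefRef adjustment.
fn ref_to_raw(r: &u8) -> *const u8 {
    r
}

// Coercing between two raw pointers involves no region, so no AutoDerefRef
// (and hence no "auto_deref" temporary) is needed.
fn mut_to_const(p: *mut u8) -> *const u8 {
    p
}
```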
In unoptimized libcore, this reduces the number of "auto_deref" allocas
from 174 to 4.
|
|
This commit updates the LLVM submodule in use to the current HEAD of the LLVM
repository. This is primarily being done to start picking up unwinding support
for MSVC, which is currently unimplemented in the revision of LLVM we are using.
Along the way a few changes had to be made:
* As usual, lots of C++ debuginfo bindings in LLVM changed, so there were some
significant changes to our RustWrapper.cpp
* As usual, some pass management changed in LLVM, so clang was re-scrutinized to
ensure that we're doing the same thing as clang.
* Some optimization options are now passed directly into the
`PassManagerBuilder` instead of through CLI switches to LLVM.
* The `NoFramePointerElim` option was removed from LLVM, favoring the
`no-frame-pointer-elim` function attribute instead.
Additionally, LLVM has picked up some new optimizations which required fixing an
existing soundness hole in the IR we generate. It appears that the current LLVM
we use does not expose this hole. When an enum is moved, the previous slot in
memory is overwritten with a bit pattern corresponding to "dropped". When the
drop glue for this slot is run, however, the switch on the discriminant can
often start executing the `unreachable` block of the switch due to the
discriminant now being outside the normal range. This was patched over locally
for now by changing the `unreachable` block to a `ret void`.
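A hedged Rust-level sketch of the scenario (the details of the emitted IR are simplified away here):
```rust
// After the conditional move, the old slot of `v` is overwritten with the
// "dropped" bit pattern. The drop glue that runs at the end of the scope
// still switches on the discriminant stored in that slot, so a value outside
// the normal range now reaches the switch's default arm.
enum Value {
    Num(i32),
    Text(String),
}

fn consume(_v: Value) {}

fn example(cond: bool) {
    let v = Value::Text(String::from("hello"));
    if cond {
        consume(v); // move: the slot is refilled with the "dropped" pattern
    }
    // end of scope: drop glue inspects v's slot whether or not it was moved
}
```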
|
|
Unlike coercing from reference to unsafe pointer, coercing between two
unsafe pointers doesn't need an AutoDerefRef, because there is no region
that regionck would need to know about.
In unoptimized libcore, this reduces the number of "auto_deref" allocas
from 174 to 4.
|
|
The current codegen tests only compare IR line counts between similar
rust and C programs, the latter getting compiled with clang. That looked
like a good idea back then, but actually things like lifetime intrinsics
mean that less IR isn't always better, so the metric isn't really
helpful.
Instead, we can start doing tests that check specific aspects of the
generated IR, like attributes or metadata. To do that, we can use LLVM's
FileCheck tool which has a number of useful features for such tests.
To start off, I created some tests for a few things that were recently
added and/or broken.
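A hedged sketch of what such a test can look like (the function and the CHECK lines are illustrative, not copied from the test suite); FileCheck matches the CHECK comments against the IR generated for the file:
```rust
// CHECK-LABEL: @slice_arg_is_nonnull
// CHECK-SAME: nonnull
#[no_mangle]
pub fn slice_arg_is_nonnull(x: &[u8]) -> usize {
    x.len()
}
```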
|
|
Now that support has been removed, all lingering use cases are renamed.
|
|
This breaks code like:

    struct Foo {
        ...
    }

    pub fn make_foo() -> Foo {
        ...
    }

Change this code to:

    pub struct Foo { // note `pub`
        ...
    }

    pub fn make_foo() -> Foo {
        ...
    }
The `visible_private_types` lint has been removed, since it is now an
error to attempt to expose a private type in a public API. In its place
a `#[feature(visible_private_types)]` gate has been added.
Closes #16463.
RFC #48.
[breaking-change]
|
|
Otherwise the test function is internalized and LLVM will most likely optimize
it out.
|
|
|
|
When there is only a single store to the ret slot that dominates the
load that gets the value for the "ret" instruction, we can elide the
ret slot and directly return the operand of the dominating store
instruction. This is the same thing that clang does, except for a
special case that doesn't seem to affect us.
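As a hedged illustration (not a test from the commit): for a trivial function like the one below, the generated IR previously allocated a ret slot, stored the sum into it, and reloaded it for the `ret`; with a single dominating store, the stored value can be returned directly.
```rust
// The single store of `a + b` to the return value dominates the final load,
// so the ret slot can be elided and the sum returned directly.
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}
```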
Fixes #8238
|
|