|
Spell out the units used for the `offset` argument, so that callers do not
try to scale to byte units themselves.
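A minimal sketch of the intent, using my best recollection of the era's
`std::ptr::offset` and `std::vec::raw::to_ptr` free functions (treat the exact
names and signatures as assumptions, not part of this change):
~~~rust
use std::ptr;
use std::vec;

// `offset` counts in units of T, not bytes: stepping over two u32
// elements takes a count of 2, not 2 * 4.
fn third(xs: &[u32]) -> u32 {
    unsafe {
        let base = vec::raw::to_ptr(xs); // raw pointer to the first element
        *ptr::offset(base, 2)            // element index, NOT a byte offset
    }
}
~~~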
|
|
|
|
mut_offset)
|
|
Also move Void to std::any, move drop to std::mem and reexport in
prelude.
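As a hedged illustration of where the free function ends up (era syntax;
assuming `drop` is the by-value function that simply consumes its argument):
~~~rust
use std::mem;

fn main() {
    let a = ~5;    // owned box, era syntax
    mem::drop(a);  // via its new home in std::mem
    let b = ~6;
    drop(b);       // or via the prelude re-export
}
~~~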
|
|
It makes unsafe assumptions that any impl of RawPtr is for actual pointers
and that they can be copied by memcpy. Removing it is easy, so I don't
think it's solving a real problem.
|
|
|
|
|
|
|
|
There's no need for the restrictions of a closure with the above methods.
|
|
Callers should not try to scale to units of bytes themselves.
|
|
|
|
compile-fail tests, run-fail tests, and run-pass tests.
|
|
Closes #10579
|
|
|
|
|
|
This moves the per-architecture difference into the compiler.
|
|
|
|
Who doesn't like a massive renaming?
|
|
|
|
|
|
This is mostly for consistency, as you can now compare raw pointers in
constant expressions or without the standard library.
It also reduces the number of `ptrtoint` instructions in the IR, making it
easier to track down the culprits behind what is usually an anti-pattern.
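A small hedged sketch of the kind of code this affects (era `*T` raw pointers;
the note about the old lowering is the general idea, not a guarantee):
~~~rust
// With comparison defined directly on raw pointers, an equality test no
// longer needs to round-trip through an integer cast in the generated IR.
fn same_slot<T>(a: *T, b: *T) -> bool {
    a == b
}
~~~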
|
|
The trait will keep the `Iterator` naming, but a more concise module
name makes using the free functions less verbose. The module will define
iterables in addition to iterators, as it deals with iteration in
general.
|
|
|
|
|
|
They are still present as part of the borrow check.
|
|
This resolves issue #908.
Notable changes:
- On Windows, the LLVM integrated assembler emits bad stack unwind tables when segmented stacks are enabled. However, the unwind info directives in the assembly output are correct, so we generate assembly first and then run it through an external assembler, just as is already done for Android builds.
- The linker is invoked via the "g++" command instead of "gcc": g++ passes the appropriate magic parameters to the linker, which ensure correct registration of stack unwind tables in dynamic libraries.
|
|
|
|
cc #3678
|
|
(This doesn't add/remove `u`s or change `ize` to `ise`, or anything like that.)
|
|
|
|
Implement interior null checking in `.to_c_str()`, among other changes.
|
|
|
|
|
|
.with_c_str() is a replacement for the old .as_c_str(), to avoid
unnecessary boilerplate.
Replace all usages of .to_c_str().with_ref() with .with_c_str().
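A hedged before/after sketch of the migration (2013-era Rust, assuming the
`std::c_str` API of the time, with `libc::puts` standing in for an arbitrary
C call):
~~~rust
use std::libc;

// Old pattern: allocate a CString, then borrow its raw pointer.
fn old_way(s: &str) {
    s.to_c_str().with_ref(|p| unsafe { libc::puts(p); });
}

// New pattern: one call that hands the closure a NUL-terminated pointer.
fn new_way(s: &str) {
    s.with_c_str(|p| unsafe { libc::puts(p); });
}
~~~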
|
|
Closes #5495
|
|
|
|
remove-str-trailing-nulls
|
|
This allows LLVM to optimize vector iterators to a `getelementptr` and
`icmp` pair, instead of a `getelementptr` and *two* comparisons.
Code snippet:
~~~
fn foo(xs: &mut [f64]) {
    for x in xs.mut_iter() {
        *x += 10.0;
    }
}
~~~
LLVM IR at stage0:
~~~
; Function Attrs: noinline uwtable
define void @"_ZN3foo17_68e1b25bca131dba7_0$x2e0E"({ i64, %tydesc*, i8*, i8*, i8 }* nocapture, { double*, i64 }* nocapture) #1 {
"function top level":
%2 = getelementptr inbounds { double*, i64 }* %1, i64 0, i32 0
%3 = load double** %2, align 8
%4 = getelementptr inbounds { double*, i64 }* %1, i64 0, i32 1
%5 = load i64* %4, align 8
%6 = ptrtoint double* %3 to i64
%7 = and i64 %5, -8
%8 = add i64 %7, %6
%9 = inttoptr i64 %8 to double*
%10 = icmp eq double* %3, %9
%11 = icmp eq double* %3, null
%or.cond6 = or i1 %10, %11
br i1 %or.cond6, label %match_case, label %match_else
match_else: ; preds = %"function top level", %match_else
%12 = phi double* [ %13, %match_else ], [ %3, %"function top level" ]
%13 = getelementptr double* %12, i64 1
%14 = load double* %12, align 8
%15 = fadd double %14, 1.000000e+01
store double %15, double* %12, align 8
%16 = icmp eq double* %13, %9
%17 = icmp eq double* %13, null
%or.cond = or i1 %16, %17
br i1 %or.cond, label %match_case, label %match_else
match_case: ; preds = %match_else, %"function top level"
ret void
}
~~~
Optimized LLVM IR at stage1/stage2:
~~~
; Function Attrs: noinline uwtable
define void @"_ZN3foo17_68e1b25bca131dba7_0$x2e0E"({ i64, %tydesc*, i8*, i8*, i8 }* nocapture, { double*, i64 }* nocapture) #1 {
"function top level":
%2 = getelementptr inbounds { double*, i64 }* %1, i64 0, i32 0
%3 = load double** %2, align 8
%4 = getelementptr inbounds { double*, i64 }* %1, i64 0, i32 1
%5 = load i64* %4, align 8
%6 = lshr i64 %5, 3
%7 = getelementptr inbounds double* %3, i64 %6
%8 = icmp eq i64 %6, 0
%9 = icmp eq double* %3, null
%or.cond6 = or i1 %8, %9
br i1 %or.cond6, label %match_case, label %match_else
match_else: ; preds = %"function top level", %match_else
%.sroa.0.0.in7 = phi double* [ %10, %match_else ], [ %3, %"function top level" ]
%10 = getelementptr inbounds double* %.sroa.0.0.in7, i64 1
%11 = load double* %.sroa.0.0.in7, align 8
%12 = fadd double %11, 1.000000e+01
store double %12, double* %.sroa.0.0.in7, align 8
%13 = icmp eq double* %10, %7
br i1 %13, label %match_case, label %match_else
match_case: ; preds = %match_else, %"function top level"
ret void
}
~~~
|
|
|
|
remove-str-trailing-nulls
|
|
|
|
|
|
When strings lose their trailing null, this pattern will become dangerous:
~~~
let foo = "bar";
let foo_ptr: *u8 = &foo[0];
~~~
Instead we should use c_strs to handle this correctly.
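A hedged sketch of the replacement pattern (2013-era Rust, assuming the
`std::c_str` API of the time): `.with_c_str()` hands the closure a
NUL-terminated `*libc::c_char` that is only valid for the duration of the
call, so the Rust string itself no longer needs a trailing null.
~~~rust
use std::libc;

fn print_it() {
    let foo = "bar";
    // Rather than taking `&foo[0]` as a raw pointer, get a pointer that is
    // guaranteed to be NUL-terminated and scoped to the closure.
    foo.with_c_str(|p| unsafe { libc::puts(p); });
}
~~~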
|
|
|
|
|
|
Closes #8118, #7136
~~~rust
extern mod extra;

use std::vec;
use std::ptr;

fn bench_from_elem(b: &mut extra::test::BenchHarness) {
    do b.iter {
        let v: ~[u8] = vec::from_elem(1024, 0u8);
    }
}

fn bench_set_memory(b: &mut extra::test::BenchHarness) {
    do b.iter {
        let mut v: ~[u8] = vec::with_capacity(1024);
        unsafe {
            let vp = vec::raw::to_mut_ptr(v);
            ptr::set_memory(vp, 0, 1024);
            vec::raw::set_len(&mut v, 1024);
        }
    }
}

fn bench_vec_repeat(b: &mut extra::test::BenchHarness) {
    do b.iter {
        let v: ~[u8] = ~[0u8, ..1024];
    }
}
~~~
Before:
test bench_from_elem ... bench: 415 ns/iter (+/- 17)
test bench_set_memory ... bench: 85 ns/iter (+/- 4)
test bench_vec_repeat ... bench: 83 ns/iter (+/- 3)
After:
test bench_from_elem ... bench: 84 ns/iter (+/- 2)
test bench_set_memory ... bench: 84 ns/iter (+/- 5)
test bench_vec_repeat ... bench: 84 ns/iter (+/- 3)
|
|
|
|
Fix warnings that only show up when compiling the tests for libstd and libextra, plus one in librusti. Only trivial changes.
|
|
|
|
|