path: root/library/std/src/sys
Age  Commit message  Author  Lines
2020-10-31  fix aliasing issue in unix sleep function  (Ralf Jung, -1/+2)
2020-10-26  Rollup merge of #74477 - chansuke:sys-wasm-unsafe-op-in-unsafe-fn, r=Mark-Simulacrum  (Dylan DPC, -15/+38)

`#[deny(unsafe_op_in_unsafe_fn)]` in sys/wasm

This is part of #73904. It encloses unsafe operations in unsafe fns in `libstd/sys/wasm`.

@rustbot modify labels: F-unsafe-block-in-unsafe-fn
2020-10-24  Rollup merge of #77610 - hermitcore:dtors, r=m-ou-se  (Jonas Schievink, -163/+182)

revise Hermit's mutex interface to support the behaviour of StaticMutex

rust-lang/rust#77147 simplifies things by splitting this Mutex type into two types matching the two use cases: StaticMutex and MovableMutex. To support the new behaviour of StaticMutex, we move part of the mutex implementation into libstd. The interface to the OS changed; consequently, I removed a few functions that are no longer needed.
2020-10-24  Rollup merge of #75115 - chansuke:sys-cloudabi-unsafe, r=KodrAus  (Jonas Schievink, -64/+75)

`#[deny(unsafe_op_in_unsafe_fn)]` in sys/cloudabi

Partial fix of #73904. This encloses unsafe operations in unsafe fns in sys/cloudabi.
2020-10-24  Remove unnecessary unsafe block from condvar_atomics & mutex_atomics  (chansuke, -3/+3)
2020-10-24  Fix unsafe operation of wasm32::memory_atomic_notify  (chansuke, -1/+2)
2020-10-24  Add documentation for DLMALLOC  (chansuke, -4/+8)
2020-10-24  Add some description for (malloc/calloc/free/realloc)  (chansuke, -0/+4)
2020-10-24  `#[deny(unsafe_op_in_unsafe_fn)]` in sys/wasm  (chansuke, -18/+32)
2020-10-20  Check that pthread mutex initialization succeeded  (Tomasz Miąsko, -22/+27)
If pthread mutex initialization fails, the failure goes unnoticed unless debug assertions are enabled. Any subsequent use of the mutex will also fail silently, since the return values of the lock & unlock operations are likewise checked only through debug assertions. In some implementations mutex initialization requires a memory allocation, so it does fail in practice. Check that initialization succeeds to ensure that the mutex guarantees mutual exclusion.
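The fix described above amounts to promoting a debug-only check to an unconditional one. A minimal Rust sketch of the pattern, with a hypothetical `fallible_init` standing in for `pthread_mutex_init` (the names and the error value are illustrative, not the actual libstd code):

```rust
// Hypothetical stand-in for pthread_mutex_init: returns 0 on success,
// an errno value (e.g. 12 for ENOMEM) on failure.
fn fallible_init(fail: bool) -> i32 {
    if fail { 12 } else { 0 }
}

fn init_checked(fail: bool) -> Result<(), i32> {
    let r = fallible_init(fail);
    // Check unconditionally, not with debug_assert!: otherwise a failed
    // initialization goes unnoticed in release builds and the mutex
    // silently stops guaranteeing mutual exclusion.
    if r == 0 { Ok(()) } else { Err(r) }
}
```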
2020-10-18  Remove redundant 'static from library crates  (est31, -8/+8)
2020-10-18  `#[deny(unsafe_op_in_unsafe_fn)]` in sys/cloudabi  (chansuke, -64/+75)
2020-10-17  Auto merge of #77455 - asm89:faster-spawn, r=kennytm  (bors, -1/+7)

Use posix_spawn() on unix if program is a path

Previously `Command::spawn` would fall back to the non-posix_spawn based implementation if the `PATH` environment variable was possibly changed. On systems with a modern (g)libc, `posix_spawn()` can be significantly faster. If the program is a path itself, the `PATH` environment variable is not used for the lookup, and it should be safe to use the `posix_spawnp()` method. [1]

We found this because we have a CLI application that effectively runs a lot of subprocesses. It would sometimes noticeably hang while printing output. Profiling showed that the process was spending the majority of its time in the kernel's `copy_page_range` function while spawning subprocesses. During this time the process is completely blocked from running, explaining why users were reporting the CLI app hanging.

Through this we discovered that `std::process::Command` has a fast and a slow path for process execution. The fast path is backed by `posix_spawnp()` and the slow path by fork/exec syscalls called explicitly. Using fork for process creation is supposed to be fast, but it slows down as your process uses more memory. It's not that the kernel copies the actual memory from the parent, but it does need to copy the references to it (see `copy_page_range` above!). We ended up on the slow path because the command spawn implementation falls back to it if it suspects the PATH environment variable was changed.

Here is a smallish program demonstrating the slowdown before this code change:

```rust
use std::process::Command;
use std::time::Instant;

fn main() {
    let mut args = std::env::args().skip(1);
    if let Some(size) = args.next() {
        // Allocate some memory
        let _xs: Vec<_> = std::iter::repeat(0)
            .take(size.parse().expect("valid number"))
            .collect();

        let mut command = Command::new("/bin/sh");
        command.arg("-c").arg("echo hello");

        if args.next().is_some() {
            println!("Overriding PATH");
            command.env("PATH", std::env::var("PATH").expect("PATH env var"));
        }

        let now = Instant::now();
        let child = command.spawn().expect("failed to execute process");
        println!("Spawn took: {:?}", now.elapsed());

        let output = child.wait_with_output().expect("failed to wait on process");
        println!("Output: {:?}", output);
    } else {
        eprintln!("Usage: prog [size]");
        std::process::exit(1);
    }
}
```

Running it with different allocation sizes shows that the time taken by `spawn()` differs significantly. In the latter case the `posix_spawnp()` implementation is roughly 30x faster:

```
$ cargo run --release 10000000
...
Spawn took: 324.275µs
hello
$ cargo run --release 10000000 changepath
...
Overriding PATH
Spawn took: 2.346809ms
hello
$ cargo run --release 100000000
...
Spawn took: 387.842µs
hello
$ cargo run --release 100000000 changepath
...
Overriding PATH
Spawn took: 13.434677ms
hello
```

[1]: https://github.com/bminor/glibc/blob/5f72f9800b250410cad3abfeeb09469ef12b2438/posix/execvpe.c#L81
2020-10-17  Rollup merge of #77900 - Thomasdezeeuw:fdatasync, r=dtolnay  (Yuki Okushi, -2/+16)

Use fdatasync for File::sync_data on more OSes

Add support for the following OSes:
* Android
* FreeBSD: https://www.freebsd.org/cgi/man.cgi?query=fdatasync&sektion=2
* OpenBSD: https://man.openbsd.org/OpenBSD-5.8/fsync.2
* NetBSD: https://man.netbsd.org/fdatasync.2
* illumos: https://illumos.org/man/3c/fdatasync
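For callers the change is invisible: `File::sync_data` simply maps to `fdatasync` on these OSes (and to a full `fsync` where `fdatasync` is unavailable). A small usage sketch of the API being wired up here:

```rust
use std::fs::File;
use std::io::Write;

fn write_durably(path: &std::path::Path, data: &[u8]) -> std::io::Result<()> {
    let mut f = File::create(path)?;
    f.write_all(data)?;
    // Flushes the file *data* to disk. Unlike sync_all (fsync), sync_data
    // may skip metadata (e.g. timestamps) not needed to read the data back.
    f.sync_data()?;
    Ok(())
}
```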
2020-10-16  Take some of sys/vxworks/process/* from sys/unix instead.  (Mara Bos, -407/+77)
2020-10-16  Take sys/vxworks/{os,path,pipe} from sys/unix instead.  (Mara Bos, -446/+33)
2020-10-16  Take sys/vxworks/{fd,fs,io} from sys/unix instead.  (Mara Bos, -909/+63)
2020-10-16  Take sys/vxworks/cmath from sys/unix instead.  (Mara Bos, -32/+1)
2020-10-16  Take sys/vxworks/args from sys/unix instead.  (Mara Bos, -96/+3)
2020-10-16  Take sys/vxworks/memchr from sys/unix instead.  (Mara Bos, -21/+1)
2020-10-16  Take sys/vxworks/net from sys/unix instead.  (Mara Bos, -360/+9)
2020-10-16  Take sys/vxworks/ext/* from sys/unix instead.  (Mara Bos, -1321/+1)
2020-10-16  Add weak macro to vxworks.  (Mara Bos, -0/+4)
2020-10-16  Take sys/vxworks/alloc from sys/unix instead.  (Mara Bos, -49/+1)
2020-10-16  Take sys/vxworks/thread_local_key from sys/unix instead.  (Mara Bos, -34/+1)
2020-10-16  Take sys/vxworks/stdio from sys/unix instead.  (Mara Bos, -69/+1)
2020-10-16  Take sys/vxworks/thread from sys/unix instead.  (Mara Bos, -158/+7)
2020-10-16  Take sys/vxworks/stack_overflow from sys/unix instead.  (Mara Bos, -39/+2)
2020-10-16  Take sys/vxworks/time from sys/unix instead.  (Mara Bos, -197/+1)
2020-10-16  Take sys/vxworks/rwlock from sys/unix instead.  (Mara Bos, -114/+1)
2020-10-16  Take sys/vxworks/condvar from sys/unix instead.  (Mara Bos, -91/+1)
2020-10-16  Take sys/vxworks/mutex from sys/unix instead.  (Mara Bos, -133/+1)
2020-10-16  Rollup merge of #77657 - fusion-engineering-forks:cleanup-cloudabi-sync, r=dtolnay  (Dylan DPC, -77/+65)

Cleanup cloudabi mutexes and condvars

This gets rid of lots of unnecessary unsafety. All the AtomicU32s were wrapped in UnsafeCell or UnsafeCell<MaybeUninit>, and raw pointers were used to get to the AtomicU32 inside. This change cleans that up by using AtomicU32 directly. Also replaces an UnsafeCell<u32> with a safer Cell<u32>.

@rustbot modify labels: +C-cleanup
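The cleanup boils down to two observations: an atomic is already safely shareable, so hiding it behind `UnsafeCell` and raw pointers adds nothing, and for non-shared interior mutability `Cell` needs no `unsafe` at all. A before/after shape sketch (illustrative, not the actual cloudabi code):

```rust
use std::cell::{Cell, UnsafeCell};
use std::sync::atomic::{AtomicU32, Ordering};

// Before (shape only): the atomic is hidden behind an UnsafeCell and
// reached through raw pointers, forcing unsafe blocks everywhere.
#[allow(dead_code)]
struct Before {
    lock: UnsafeCell<AtomicU32>,
}

// After: the atomic is stored directly, and a plain counter that is
// never shared across threads becomes a safe Cell.
struct After {
    lock: AtomicU32,
    recursion: Cell<u32>,
}

impl After {
    fn try_acquire(&self) -> bool {
        // No unsafe needed: &AtomicU32 supports all operations directly.
        self.lock
            .compare_exchange(0, 1, Ordering::Acquire, Ordering::Relaxed)
            .is_ok()
    }
}
```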
2020-10-16  Rollup merge of #77648 - fusion-engineering-forks:static-mutex, r=dtolnay  (Dylan DPC, -2/+2)

Static mutex is static

StaticMutex is only ever used as a static (as the name already suggests), so it doesn't have to be generic over a lifetime but can simply assume 'static. This 'static lifetime guarantees the object is never moved, so this is no longer a manually checked requirement for unsafe calls to lock().

@rustbot modify labels: +T-libs +A-concurrency +C-cleanup
2020-10-16  Rollup merge of #77619 - fusion-engineering-forks:wasm-parker, r=dtolnay  (Dylan DPC, -0/+19)

Use futex-based thread-parker for Wasm32.

This uses the existing `sys_common/thread_parker/futex.rs` futex-based thread parker (already used on Linux) for wasm32 as well, if the wasm32 atomics target feature is enabled (which is not the case by default). Wasm32 provides the basic futex operations as instructions: https://webassembly.github.io/threads/syntax/instructions.html These are now exposed from `sys::futex::{futex_wait, futex_wake}`, just like on Linux, so `thread_parker/futex.rs` stays completely unmodified.
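The futex parker is essentially a small state machine over one atomic word: `futex_wait` blocks while the word holds a given value, and `futex_wake` wakes a waiter. A single-threaded sketch of the state transitions (the real parker has a third PARKED state and blocks in `futex_wait` instead of returning; this is an illustration, not the libstd code):

```rust
use std::sync::atomic::{AtomicU32, Ordering};

const EMPTY: u32 = 0;
const NOTIFIED: u32 = 1;

struct Parker {
    state: AtomicU32,
}

impl Parker {
    const fn new() -> Self {
        Parker { state: AtomicU32::new(EMPTY) }
    }

    // Consume a pending notification if there is one. A real parker
    // would instead futex_wait on `state` while it is still EMPTY.
    fn try_park(&self) -> bool {
        self.state.swap(EMPTY, Ordering::Acquire) == NOTIFIED
    }

    // Post a notification. A real parker would futex_wake one waiter
    // if the previous state showed a parked thread.
    fn unpark(&self) {
        self.state.swap(NOTIFIED, Ordering::Release);
    }
}
```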
2020-10-14  Remove lifetime from StaticMutex and assume 'static.  (Mara Bos, -2/+2)

StaticMutex is only ever used as a static (as the name already suggests), so it doesn't have to be generic over a lifetime but can simply assume 'static. This 'static lifetime guarantees the object is never moved, so this is no longer a manually checked requirement for unsafe calls to lock().
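The `&'static self` receiver is what encodes the guarantee: a `static` is never moved or dropped, so the lock's address stays valid for the whole program and the compiler enforces what was previously a manually checked requirement. A minimal sketch of the pattern (hypothetical type, not the actual libstd StaticMutex):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

struct StaticLock {
    locked: AtomicBool,
}

impl StaticLock {
    const fn new() -> Self {
        StaticLock { locked: AtomicBool::new(false) }
    }

    // `&'static self`: callers can only reach this through a static, so
    // "the object is never moved" is checked by the compiler rather than
    // by hand at each unsafe call site.
    fn try_lock(&'static self) -> bool {
        !self.locked.swap(true, Ordering::Acquire)
    }

    fn unlock(&'static self) {
        self.locked.store(false, Ordering::Release);
    }
}

static LOCK: StaticLock = StaticLock::new();
```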
2020-10-13  box mutex to get a movable mutex  (Stefan Lankes, -1/+1)

This commit avoids an alignment issue in the Mutex implementation.
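Boxing is the standard way to give a value a stable heap address even when its owner moves, which is exactly what an OS-level mutex needs once its address has been handed to the OS. A sketch with a hypothetical stand-in for Hermit's OS mutex type:

```rust
// Hypothetical stand-in for the OS-level mutex type.
struct OsMutexStub {
    _locked: bool,
}

// Boxing pins the OsMutexStub on the heap: moving the MovableMutex
// wrapper moves only the pointer, never the mutex itself, so the
// address registered with the OS stays valid.
struct MovableMutex(Box<OsMutexStub>);

impl MovableMutex {
    fn new() -> Self {
        MovableMutex(Box::new(OsMutexStub { _locked: false }))
    }

    fn addr(&self) -> *const OsMutexStub {
        &*self.0
    }
}
```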
2020-10-14  Rollup merge of #77722 - fusion-engineering-forks:safe-unsupported-locks, r=Mark-Simulacrum  (Yuki Okushi, -37/+34)

Remove unsafety from sys/unsupported and add deny(unsafe_op_in_unsafe_fn).

Replacing `UnsafeCell`s with `Cell`s simplifies things and makes the mutex and rwlock implementations safe. Other than that, only strlen() contained unsafe code.

@rustbot modify labels: +F-unsafe-block-in-unsafe-fn +C-cleanup
2020-10-14  Rollup merge of #77719 - fusion-engineering-forks:const-new-mutex-attr-cleanup, r=Mark-Simulacrum  (Yuki Okushi, -1/+0)

Remove unnecessary rustc_const_stable attributes.

These attributes were added in https://github.com/rust-lang/rust/pull/74033#discussion_r450593156 because of [std::io::lazy::Lazy::new](https://github.com/rust-lang/rust/blob/0c03aee8b81185d65b5821518661c30ecdb42de5/src/libstd/io/lazy.rs#L21-L23). But [std::io::lazy::Lazy is gone now](https://github.com/rust-lang/rust/pull/77154), so this can be cleaned up.

@rustbot modify labels: +T-libs +C-cleanup
2020-10-13  Deny unsafe_op_in_unsafe_fn for unsupported/common.rs through sys/wasm too.  (Mara Bos, -0/+2)
2020-10-13  Use fdatasync for File::sync_data on more OSes  (Thomas de Zeeuw, -2/+16)

Add support for the following OSes:
* Android
* FreeBSD: https://www.freebsd.org/cgi/man.cgi?query=fdatasync&sektion=2
* OpenBSD: https://man.openbsd.org/OpenBSD-5.8/fsync.2
* NetBSD: https://man.netbsd.org/fdatasync.2
* illumos: https://illumos.org/man/3c/fdatasync
2020-10-13  Add note about using cells in the locks on the 'unsupported' platform.  (Mara Bos, -0/+2)
2020-10-13  Rollup merge of #77724 - sunfishcode:stdinlock-asrawfd, r=alexcrichton  (Yuki Okushi, -0/+18)

Implement `AsRawFd` for `StdinLock` etc. on WASI.

WASI implements `AsRawFd` for `Stdin`, `Stdout`, and `Stderr`, so implement it for `StdinLock`, `StdoutLock`, and `StderrLock` as well.

r? @alexcrichton
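The same pattern already exists on Unix, where the lock types simply expose the descriptor of the handle they guard. A quick check of that behaviour (Unix-only; WASI mirrors it with its own `AsRawFd` impls):

```rust
use std::io;
use std::os::unix::io::AsRawFd;

// Returns the raw fd as seen through the lock and through the handle;
// the lock forwards to the same underlying descriptor.
fn stdin_fds() -> (i32, i32) {
    let stdin = io::stdin();
    let lock = stdin.lock();
    (lock.as_raw_fd(), stdin.as_raw_fd())
}
```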
2020-10-12  define required type 'MovableMutex'  (Stefan Lankes, -0/+2)
2020-10-12  reuse implementation of the system provider "unsupported"  (Stefan Lankes, -0/+1)
2020-10-12  remove obsolete function diverge  (Stefan Lankes, -153/+0)
2020-10-11  Auto merge of #77727 - thomcc:mach-info-order, r=Amanieu  (bors, -42/+50)

Avoid SeqCst or static mut in mach_timebase_info and QueryPerformanceFrequency caches

This patch went through a couple of iterations, but the end result is replacing a pattern where an `AtomicUsize` (updated with many SeqCst ops) guards a `static mut` with a single `AtomicU64` that uses 0 as the value indicating it is not initialized. The code in both places exists to cache values used in the conversion of Instants to Durations on macOS, iOS, and Windows.

I have no numbers to prove that this improves performance (it seems a little futile to benchmark something like this), but it's much simpler, safer, and in practice we'd expect it to be faster everywhere Relaxed operations on AtomicU64 are cheaper than SeqCst operations on AtomicUsize, which is a lot of places. Anyway, it also removes a bunch of unsafe code and greatly simplifies the logic, so IMO that alone would be worth it unless it were a regression.

If you want to take a look at the assembly output, see https://godbolt.org/z/rbr6vn for x86_64 and https://godbolt.org/z/cqcbqv for aarch64. (Note that this is just the output of the mac side, but I'd expect the Windows part to be the same and don't feel like doing another godbolt for it.) There are several versions of this function in the godbolt:

- `info_new`: version in the current patch
- `info_less_new`: version in the initial PR
- `info_original`: version currently in the tree
- `info_orig_but_better_orderings`: a version that just tries to change the original code's orderings from SeqCst to the (probably) minimal orderings required for soundness/correctness

The biggest concern I have here is whether we can use AtomicU64, or if there are targets this code supports that don't have it. AFAICT: no. (If that changes in the future, it's easy enough to do something different for them.)

r? `@Amanieu` because he caught a couple of issues last time I tried to do a patch reducing orderings 😅

---

<details>
<summary>I rewrote this whole message so the original is inside here</summary>

I happened to notice that the code we use for caching the result of mach_timebase_info uses SeqCst exclusively. However, thinking a little more, it's actually pretty easy to avoid the static mut by packing the timebase info into an AtomicU64. This entirely avoids needing to do the compare_exchange. The AtomicU64 can be read/written using Relaxed ops, which on current macos/ios platforms (x86_64/aarch64) have no overhead compared to direct loads/stores. This simplifies the code and makes it a lot safer too.

I have no numbers to prove that this improves performance (it seems a little futile to benchmark something like this), although it should do that on both targets it applies to. That said, it also removes a bunch of unsafe code and simplifies the logic (arguably at least; there are only two states now, initialized or not), so I think it's a net win even without concrete numbers.

If you want to take a look at the assembly output, see below. It has the new version, the original, and a version of the original with lower orderings (which is still worse than the version in this PR):

- godbolt.org/z/obfqf9 x86_64-apple-darwin
- godbolt.org/z/Wz5cWc aarch64-unknown-linux-gnu (godbolt can't do aarch64-apple-ios, but that doesn't matter here)

A different (and more efficient) option would be to just use the AtomicU64 and use the knowledge that after initialization the denominator should be nonzero... That felt like it was relying on too many things I'm not confident in, so I didn't want to do that.

</details>
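The core trick described above is packing the two 32-bit timebase fields into one `AtomicU64`, with 0 reserved to mean "not yet initialized", so a single Relaxed load replaces the SeqCst flag plus `static mut`. A sketch of the pattern, with a hypothetical `query_timebase` standing in for the mach_timebase_info syscall (the numer/denom values are illustrative):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

static TIMEBASE: AtomicU64 = AtomicU64::new(0); // 0 == uninitialized

// Hypothetical stand-in for the mach_timebase_info syscall.
fn query_timebase() -> (u32, u32) {
    (125, 3) // illustrative numer/denom values
}

fn timebase() -> (u32, u32) {
    let cached = TIMEBASE.load(Ordering::Relaxed);
    if cached != 0 {
        // Fast path: one Relaxed load, no SeqCst, no static mut.
        return ((cached >> 32) as u32, cached as u32);
    }
    let (numer, denom) = query_timebase();
    // Racing initializers all store the same value, so a plain Relaxed
    // store is fine; a nonzero denom keeps the packed word nonzero,
    // preserving the "0 means uninitialized" invariant.
    TIMEBASE.store(((numer as u64) << 32) | denom as u64, Ordering::Relaxed);
    (numer, denom)
}
```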
2020-10-11  add hermit to the list of omitted OSes  (Stefan Lankes, -0/+1)
2020-10-11  revise code to pass the format check  (Stefan Lankes, -3/+3)
2020-10-11  fix typos in new method  (Stefan Lankes, -1/+5)