This commit removes usage of `Once` from the internal implementation of
time utilities on OSX and Windows. It turns out that we accidentally hit
a deadlock today (#59020) via events that look like:
* A thread invokes `park_timeout`
* Internally, only on OSX, `park_timeout` calls `Instant::elapsed`
* Inside of `Instant::elapsed` on OSX we enter a `Once` to initialize
global timer data
* Inside of `Once`, it attempts to `park`
This means on the same stack frame, when there's contention, we're
calling `park` from inside `park_timeout`, causing a deadlock!
The solution implemented in this commit was to remove usage of `Once`
and instead just do a small dance with atomics. There's no real need to
guarantee that the global information is only *learned* once, only that
it's only *stored* once. This implementation may have multiple threads
invoke `mach_timebase_info`, but only one will store the global
information, amortizing the cost for all other threads.
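A minimal sketch of that atomic dance (the packing scheme, names, and values here are illustrative, not the actual std internals):
```
use std::sync::atomic::{AtomicU64, Ordering};

// Hypothetical stand-in for the real mach_timebase_info call on OSX.
fn query_timebase() -> (u32, u32) {
    (125, 3) // illustrative numer/denom values only
}

// The numer/denom pair packed into one u64; 0 means "not stored yet".
static TIMEBASE: AtomicU64 = AtomicU64::new(0);

fn timebase() -> (u32, u32) {
    let cached = TIMEBASE.load(Ordering::Relaxed);
    if cached != 0 {
        return ((cached >> 32) as u32, cached as u32);
    }
    // Several threads may race here and all query the OS; that's fine,
    // since the call is idempotent and every thread stores the same
    // value. No parking is involved, so no deadlock.
    let (numer, denom) = query_timebase();
    TIMEBASE.store(((numer as u64) << 32) | denom as u64, Ordering::Relaxed);
    (numer, denom)
}
```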
A similar fix has been applied to Windows to keep our implementations
uniform, though looking at the code on Windows no deadlock was actually
possible. For Windows this is purely a consistency update and, in
theory, a slightly leaner implementation.
Closes #59020
|
Per review comments, this commit switches out the backing
type for Instant on Windows to a Duration. Tests all pass,
and the code's a lot simpler (plus it should be portable now,
with the exception of the QueryPerformanceWhatever functions).
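Roughly, the new shape is something like the following sketch (the type and method names here are illustrative):
```
use std::time::Duration;

// Sketch of the design change: the platform representation of Instant
// becomes a Duration measured from the performance counter's epoch, so
// all add/sub logic can delegate to Duration's existing arithmetic.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct PerfCounterInstant {
    t: Duration,
}

impl PerfCounterInstant {
    fn checked_sub_instant(&self, other: &Self) -> Option<Duration> {
        self.t.checked_sub(other.t)
    }
}
```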
|
Right now we do unit conversions between PerfCounter measurements
and nanoseconds for every add/sub we do between Durations and Instants
on Windows machines. This leads to goofy behavior, like this snippet
failing:
```
let now = Instant::now();
let offset = Duration::from_millis(5);
assert_eq!((now + offset) - now, (now - now) + offset);
```
with precision problems like this:
```
thread 'main' panicked at 'assertion failed: `(left == right)`
left: `4.999914ms`,
right: `5ms`', src\main.rs:6:5
```
To fix it, this changeset does the unit conversion once, when we
measure the clock, and does all subsequent math in u64 nanoseconds.
It also adds an exact associativity test to the `sys/time.rs`
test suite to make sure we don't regress on this in the future.
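A minimal sketch of the convert-once idea, assuming a QPC-style tick count and frequency (not the exact std code): split the ticks into whole seconds and a remainder so the conversion is exact and doesn't overflow for realistic values.
```
const NANOS_PER_SEC: u64 = 1_000_000_000;

// Convert performance-counter ticks to nanoseconds once, at measurement
// time; all later Instant/Duration math then happens in u64 nanoseconds.
fn ticks_to_nanos(ticks: u64, frequency: u64) -> u64 {
    let q = ticks / frequency; // whole seconds
    let r = ticks % frequency; // leftover ticks, always < frequency
    q * NANOS_PER_SEC + r * NANOS_PER_SEC / frequency
}
```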
|
This commit is an attempt to force `Instant::now` to be monotonic
through any means possible. We tried relying on OS/hardware/clock
implementations, but those seem buggy enough that we can't rely on them
in practice. This commit implements the same hammer Firefox recently
implemented (noted in #56612), which is to just keep whatever the latest
`Instant::now()` return value was in memory, returning that instead if
the OS looks like it's moving backwards.
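A minimal sketch of that hammer, simplified to u64 nanoseconds (std's real representation and memory orderings differ):
```
use std::sync::atomic::{AtomicU64, Ordering};

// The largest timestamp handed out so far.
static LAST_NOW: AtomicU64 = AtomicU64::new(0);

fn monotonize(os_now_nanos: u64) -> u64 {
    // fetch_max returns the previous maximum; the value we hand out is
    // the larger of that and the fresh OS reading, so what callers see
    // can never go backwards even if the OS clock does.
    LAST_NOW.fetch_max(os_now_nanos, Ordering::Relaxed).max(os_now_nanos)
}
```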
Closes #48514
Closes #49281
cc #51648
cc #56560
Closes #56612
Closes #56940
|
Implement checked_add_duration for SystemTime
[Original discussion on the rust user forum](https://users.rust-lang.org/t/std-systemtime-misses-a-checked-add-function/21785)
Since `SystemTime` is opaque there is no way to check if the result of an addition will be in bounds. That makes the `Add<Duration>` trait completely unusable with untrusted data. This is a big problem because adding a `Duration` to `UNIX_EPOCH` is the standard way of constructing a `SystemTime` from a unix timestamp.
This PR implements `checked_add_duration(&self, &Duration) -> Option<SystemTime>` for `std::time::SystemTime` and as a prerequisite also for all platform specific time structs. This also led to the refactoring of many `add_duration(&self, &Duration) -> SystemTime` functions to avoid redundancy (they now unwrap the result of `checked_add_duration`).
Some basic unit tests for the newly introduced function were added too.
I wasn't sure which stabilization attribute to add to the newly introduced function, so I just chose `#[stable(feature = "time_checked_add", since = "1.32.0")]` for now to make it compile. Please let me know how I should change it or if I violated any other conventions.
P.S.: I could only test on Linux so far, so I don't necessarily expect it to compile for all platforms.
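For illustration, this is the untrusted-timestamp case the checked API enables (shown with the method as it later stabilized, `SystemTime::checked_add`):
```
use std::time::{Duration, SystemTime, UNIX_EPOCH};

// Turn an untrusted Unix timestamp into a SystemTime without risking a
// panic: out-of-range values yield None instead.
fn from_unix_secs(secs: u64) -> Option<SystemTime> {
    UNIX_EPOCH.checked_add(Duration::from_secs(secs))
}
```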
|
Since SystemTime is opaque there is no way to check if the result
of an addition will be in bounds. That makes the Add<Duration>
trait completely unusable with untrusted data. This is a big problem
because adding a Duration to UNIX_EPOCH is the standard way of
constructing a SystemTime from a unix timestamp.
This commit implements checked_add_duration(&self, &Duration) -> Option<SystemTime>
for std::time::SystemTime and as a prerequisite also for all platform
specific time structs. This also led to the refactoring of many
add_duration(&self, &Duration) -> SystemTime functions to avoid
redundancy (they now unwrap the result of checked_add_duration).
Some basic unit tests for the newly introduced function were added
too.
|
Closes #46670.
|
Avoid unchecked cast from `u64` to `i64`. Use `try_into()` for checked
cast. (On Unix, cast to `time_t` instead of `i64`.)
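For example, the checked form looks like this sketch:
```
use std::convert::TryInto;

// A checked narrowing: returns None instead of silently wrapping when
// the u64 value does not fit in an i64 (or, on Unix, in time_t).
fn to_i64_checked(secs: u64) -> Option<i64> {
    secs.try_into().ok()
}
```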
|
These accessors are used to get at the last modification, last access, and
creation time of the underlying file. Not all platforms provide the
creation time, so that accessor currently returns an `Option`.
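For illustration, here is how the accessors read in today's std, where they ultimately stabilized as `io::Result`-returning methods rather than `Option`:
```
use std::fs;
use std::io;

fn show_times(path: &str) -> io::Result<()> {
    let meta = fs::metadata(path)?;
    println!("modified: {:?}", meta.modified()?);
    println!("accessed: {:?}", meta.accessed()?);
    // Creation time isn't recorded everywhere; handle the unsupported case.
    match meta.created() {
        Ok(t) => println!("created: {:?}", t),
        Err(err) => println!("created: unavailable ({})", err),
    }
    Ok(())
}
```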
|
This commit is an implementation of [RFC 1288][rfc] which adds two new unstable
types to the `std::time` module. The `Instant` type is used to represent
measurements of a monotonically increasing clock suitable for measuring time
within a process for operations such as benchmarks or just the elapsed time to
do something. An `Instant` favors panicking when bugs are found, as such bugs
are programmer errors rather than typical errors that can be encountered.
[rfc]: https://github.com/rust-lang/rfcs/pull/1288
The `SystemTime` type is used to represent a system timestamp and is not
monotonic. Very few guarantees are provided about this measurement of the system
clock, but a fixed point in time (`UNIX_EPOCH`) is provided to learn about the
relative distance from this point for any particular time stamp.
This PR takes the same implementation strategy as the `time` crate on crates.io,
namely:
| Platform | Instant | SystemTime |
|------------|--------------------------|--------------------------|
| Windows | QueryPerformanceCounter | GetSystemTimeAsFileTime |
| OSX | mach_absolute_time | gettimeofday |
| Unix | CLOCK_MONOTONIC | CLOCK_REALTIME |
These implementations can perhaps be refined over time, but they currently
satisfy the requirements of the `Instant` and `SystemTime` types while also
being portable across implementations and revisions of each platform.
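For illustration, a short example of both types using today's stable surface of this API:
```
use std::thread;
use std::time::{Duration, Instant, SystemTime, UNIX_EPOCH};

fn main() {
    // Instant: monotonic, for elapsed-time measurement within a process.
    let start = Instant::now();
    thread::sleep(Duration::from_millis(10));
    println!("elapsed: {:?}", start.elapsed());

    // SystemTime: wall clock; only the distance from UNIX_EPOCH is
    // meaningful, and it can fail if the clock sits before the epoch.
    let since_epoch = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock is set before the Unix epoch");
    println!("unix time: {}s", since_epoch.as_secs());
}
```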
|
* Delete `sys::unix::{c, sync}` as these are now all folded into libc itself
* Update all references to use `libc` as a result
* Update all references to the new flat namespace
* Move all Windows bindings into `sys::c`
|
This commit is an implementation of [RFC 1040][rfc] which is a redesign of the
currently-unstable `Duration` type. The API of the type has been scaled back to
be more conservative and it also no longer supports negative durations.
[rfc]: https://github.com/rust-lang/rfcs/blob/master/text/1040-duration-reform.md
The inner `duration` module of the `time` module has now been hidden (as
`Duration` is reexported) and the feature name for this type has changed from
`std_misc` to `duration`. All APIs accepting durations have also been audited to
take a more flavorful feature name instead of `std_misc`.
Closes #24874
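For illustration, the no-negative-durations rule as it reads with today's API (`checked_sub` postdates this commit):
```
use std::time::Duration;

fn main() {
    // Durations are unsigned after the reform; subtraction that would
    // go negative must be expressed through a checked operation.
    let a = Duration::from_secs(2);
    let b = Duration::from_millis(2500);
    assert_eq!(b.checked_sub(a), Some(Duration::from_millis(500)));
    assert_eq!(a.checked_sub(b), None);
}
```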
|
which starts happening after ~2 hours of machine uptime.
|
**The implementation is a direct adaptation of libcxx's
condition_variable implementation.**
pthread_cond_timedwait uses the non-monotonic system clock. It's
possible to switch the condvar to a monotonic clock via a condition
attribute (`pthread_condattr_setclock`), but this is incompatible with
static initialization. To deal with this, we calculate the timeout using
the system clock, and maintain a separate record of the start and end
times with a monotonic clock to be used for calculation of the return
value.
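A sketch of that scheme in std-level terms (the real code drives the pthread APIs directly; `os_wait_until` is a hypothetical stand-in for the condvar wait):
```
use std::time::{Duration, Instant, SystemTime};

// The deadline handed to pthread_cond_timedwait is computed from the
// system clock, but whether we actually timed out is decided by a
// separate monotonic measurement.
fn timed_wait_sketch(dur: Duration, os_wait_until: impl FnOnce(SystemTime)) -> bool {
    let start = Instant::now();              // monotonic start time
    let deadline = SystemTime::now() + dur;  // realtime deadline for the OS
    os_wait_until(deadline);                 // pthread_cond_timedwait stand-in
    start.elapsed() >= dur                   // timeout judged monotonically
}
```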
|