| author | bors <bors@rust-lang.org> | 2013-06-15 04:07:03 -0700 |
|---|---|---|
| committer | bors <bors@rust-lang.org> | 2013-06-15 04:07:03 -0700 |
| commit | 6df66c194d01f34ae319a6b79e45125d4d18be01 | |
| tree | 11f7772a9f2841b6e9bcd770a6ade1c287ce1625 | /src/libstd |
| parent | da42e6b7a0fcadcca819d221738894dcb6c4b76d | |
| parent | 2ef8774ac5b56ae264224d46ffa0078f5d39ce6c | |
auto merge of #7109 : bblum/rust/rwlocks, r=brson
r? @brson
Links to issues: #7065, the race that's fixed, and #7066, the perf improvement I added. There are also some minor cleanup commits here.
To measure the performance improvement from replacing the exclusive with an atomic uint, I edited the `msgsend-ring-rw-arcs` bench test to do a `write_downgrade` instead of just a `write`, so that it stressed the code paths that accessed `read_count`. (At first I was still using `write` and saw no performance difference whatsoever, whoooops.)

The bench test measures how long it takes to send 1,000,000 messages by using rwarcs to emulate pipes. I also measured the performance difference imposed by the fix to the `access_lock` race (which involves taking an extra semaphore in the `cond.wait()` path). The net result is that fixing the race imposes a 4% to 5% slowdown, but doing the atomic uint optimization gives a 6% to 8% speedup.
Note that this speedup will be most visible in read- or downgrade-heavy workloads. If an RWARC's only users are writers, the optimization doesn't matter. All the same, I think this more than justifies the extra complexity I mentioned in #7066.
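The exclusive-to-atomic change described above can be sketched in modern Rust. (This PR predates Rust 1.0, so `AtomicUint` and the old `sync` internals are gone; the hypothetical `ReadCount` below only illustrates the `fetch_add`/`fetch_sub` bookkeeping on the reader count, not the full rwlock machinery.)

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Hypothetical reader-count tracking for a write-downgrade path:
/// instead of guarding `read_count` with its own exclusive lock,
/// readers bump and drop it with atomic read-modify-write ops.
struct ReadCount {
    count: AtomicUsize,
}

impl ReadCount {
    fn new() -> ReadCount {
        ReadCount { count: AtomicUsize::new(0) }
    }

    /// A reader enters; returns the previous count.
    fn enter(&self) -> usize {
        self.count.fetch_add(1, Ordering::SeqCst)
    }

    /// A reader leaves; returns true if it was the last reader
    /// (the one responsible for waking a waiting writer).
    fn leave(&self) -> bool {
        // fetch_sub returns the *old* value, so 1 means we were last.
        self.count.fetch_sub(1, Ordering::SeqCst) == 1
    }
}

fn main() {
    let rc = ReadCount::new();
    assert_eq!(rc.enter(), 0); // first reader saw a count of 0
    assert_eq!(rc.enter(), 1);
    assert!(!rc.leave());      // one reader still inside
    assert!(rc.leave());       // last reader out
    println!("ok");
}
```

The point of the optimization is exactly this shape: the common read paths touch a single atomic word rather than acquiring a lock.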
The raw numbers are:
```
with xadd read count:
    before write_cond fix:  4.18 to 4.26 us/message
    with write_cond fix:    4.35 to 4.39 us/message
with exclusive read count:
    before write_cond fix:  4.41 to 4.47 us/message
    with write_cond fix:    4.65 to 4.76 us/message
```
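As a quick sanity check of the quoted percentages, the fixed-race xadd numbers against the fixed-race exclusive numbers do land in the claimed 6% to 8% range (the figures below are computed from the table above, nothing else):

```rust
fn main() {
    // Percent change going from `before` to `after`, in us/message.
    let pct = |before: f64, after: f64| (after / before - 1.0) * 100.0;

    // Gain from the xadd read count over the exclusive read count,
    // both measured with the write_cond fix applied:
    let speedup_lo = pct(4.35, 4.65); // exclusive is ~6.9% slower
    let speedup_hi = pct(4.39, 4.76); // exclusive is ~8.4% slower
    assert!(speedup_lo > 6.0 && speedup_hi < 9.0);

    // Cost of the write_cond fix itself, measured with the xadd count:
    let slowdown = pct(4.18, 4.35); // ~4.1%
    assert!(slowdown > 3.0 && slowdown < 5.0);

    println!("speedup: {:.1}% to {:.1}%", speedup_lo, speedup_hi);
}
```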
Diffstat (limited to 'src/libstd')
| -rw-r--r-- | src/libstd/unstable/atomics.rs | 6 |
|---|---|---|
| -rw-r--r-- | src/libstd/util.rs | 6 |
2 files changed, 8 insertions, 4 deletions
```diff
diff --git a/src/libstd/unstable/atomics.rs b/src/libstd/unstable/atomics.rs
index 856f4e6e3c1..aa70897ad48 100644
--- a/src/libstd/unstable/atomics.rs
+++ b/src/libstd/unstable/atomics.rs
@@ -155,11 +155,13 @@ impl AtomicInt {
         unsafe { atomic_compare_and_swap(&mut self.v, old, new, order) }
     }
 
+    /// Returns the old value (like __sync_fetch_and_add).
     #[inline(always)]
     pub fn fetch_add(&mut self, val: int, order: Ordering) -> int {
         unsafe { atomic_add(&mut self.v, val, order) }
     }
 
+    /// Returns the old value (like __sync_fetch_and_sub).
     #[inline(always)]
     pub fn fetch_sub(&mut self, val: int, order: Ordering) -> int {
         unsafe { atomic_sub(&mut self.v, val, order) }
@@ -191,11 +193,13 @@ impl AtomicUint {
         unsafe { atomic_compare_and_swap(&mut self.v, old, new, order) }
     }
 
+    /// Returns the old value (like __sync_fetch_and_add).
     #[inline(always)]
     pub fn fetch_add(&mut self, val: uint, order: Ordering) -> uint {
         unsafe { atomic_add(&mut self.v, val, order) }
     }
 
+    /// Returns the old value (like __sync_fetch_and_sub).
     #[inline(always)]
     pub fn fetch_sub(&mut self, val: uint, order: Ordering) -> uint {
         unsafe { atomic_sub(&mut self.v, val, order) }
@@ -315,6 +319,7 @@ pub unsafe fn atomic_swap<T>(dst: &mut T, val: T, order: Ordering) -> T {
     })
 }
 
+/// Returns the old value (like __sync_fetch_and_add).
 #[inline(always)]
 pub unsafe fn atomic_add<T>(dst: &mut T, val: T, order: Ordering) -> T {
     let dst = cast::transmute(dst);
@@ -327,6 +332,7 @@ pub unsafe fn atomic_add<T>(dst: &mut T, val: T, order: Ordering) -> T {
     })
 }
 
+/// Returns the old value (like __sync_fetch_and_sub).
 #[inline(always)]
 pub unsafe fn atomic_sub<T>(dst: &mut T, val: T, order: Ordering) -> T {
     let dst = cast::transmute(dst);
diff --git a/src/libstd/util.rs b/src/libstd/util.rs
index 376ead608bc..61c284f580c 100644
--- a/src/libstd/util.rs
+++ b/src/libstd/util.rs
@@ -75,13 +75,11 @@ pub fn replace<T>(dest: &mut T, mut src: T) -> T {
 }
 
 /// A non-copyable dummy type.
-pub struct NonCopyable {
-    priv i: (),
-}
+pub struct NonCopyable;
 
 impl NonCopyable {
     /// Creates a dummy non-copyable structure and returns it for use.
-    pub fn new() -> NonCopyable { NonCopyable { i: () } }
+    pub fn new() -> NonCopyable { NonCopyable }
 }
 
 impl Drop for NonCopyable {
```
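The doc comments the diff adds state a contract that still holds in today's `std::sync::atomic`: `fetch_add` and `fetch_sub` return the value the atomic held *before* the operation, like GCC's `__sync_fetch_and_add`/`__sync_fetch_and_sub` builtins. A minimal check of that contract, using modern Rust types since `AtomicInt`/`AtomicUint` are long gone:

```rust
use std::sync::atomic::{AtomicIsize, Ordering};

fn main() {
    let v = AtomicIsize::new(10);

    // fetch_add returns the pre-addition value (fetch, *then* add).
    let old = v.fetch_add(5, Ordering::SeqCst);
    assert_eq!(old, 10);
    assert_eq!(v.load(Ordering::SeqCst), 15);

    // fetch_sub likewise returns the pre-subtraction value.
    let old = v.fetch_sub(3, Ordering::SeqCst);
    assert_eq!(old, 15);
    assert_eq!(v.load(Ordering::SeqCst), 12);

    println!("ok");
}
```

That return value is what makes the rwlock optimization work: the last reader out learns it was last from the old count alone, with no separate locked read.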
