path: root/compiler/rustc_codegen_llvm/src
author    Ali Saidi <alisaidi@amazon.com>    2021-09-04 15:11:26 -0500
committer Ali Saidi <alisaidi@amazon.com>    2021-09-04 15:11:26 -0500
commit    ce450f893d551e25123e0bdb27acc9a85d15cb7f (patch)
tree      11d0266bdadd0c4e536cbd7666f2634141ae20de /compiler/rustc_codegen_llvm/src
parent    72a51c39c69256c8a8256e775f2764a1983048d4 (diff)
Use the 64b inner::monotonize() implementation, not the 128b one, for aarch64
aarch64 prior to v8.4 (FEAT_LSE2) has no instruction that guarantees an
untorn 128b read other than successfully completing a 128b load/store
exclusive pair (ldxp/stxp) or compare-and-swap (casp). The requirement to
complete a full 128b read+write atomic is more expensive and less fair than
the previous implementation of monotonize(), which used a Mutex on aarch64,
especially at large core counts. For aarch64, switch to the 64b atomic
implementation, which is about 13x faster on a benchmark that makes many
calls to Instant::now().
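
The 64b path only needs an ordinary 64-bit compare-and-swap, which every
aarch64 core provides without the ldxp/stxp or casp sequences described above.
Below is a minimal sketch of that technique, not the actual standard-library
code: LAST, monotonize and the raw nanosecond parameter are illustrative
names, and the real implementation additionally has to represent the
timestamp compactly relative to a reference point.

// Sketch: keep a monotonic high-water mark in a single AtomicU64 and raise it
// with a CAS loop, so only 64-bit atomics are required.
use std::sync::atomic::{AtomicU64, Ordering};

// Illustrative global high-water mark, in nanoseconds since some
// process-local base.
static LAST: AtomicU64 = AtomicU64::new(0);

// Return a value >= every value previously returned, given a possibly
// non-monotonic raw reading `now_ns`.
fn monotonize(now_ns: u64) -> u64 {
    let mut observed = LAST.load(Ordering::Relaxed);
    loop {
        // If the raw clock went backwards, hand out the previously observed
        // value instead.
        let candidate = now_ns.max(observed);
        match LAST.compare_exchange_weak(
            observed,
            candidate,
            Ordering::Relaxed,
            Ordering::Relaxed,
        ) {
            Ok(_) => return candidate,
            // Another thread raced ahead of us; retry against its value.
            Err(actual) => observed = actual,
        }
    }
}

fn main() {
    // Out-of-order raw readings still yield a non-decreasing sequence.
    for raw in [5u64, 3, 7, 6, 10] {
        println!("raw = {:2}, monotonized = {}", raw, monotonize(raw));
    }
}

Because only a single 64-bit word is contended, a reader that loses the race
simply retries against the winner's value rather than serializing behind a
mutex or a 128b exclusive pair, which is where the fairness and throughput
gains at large core counts come from.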
Diffstat (limited to 'compiler/rustc_codegen_llvm/src')
0 files changed, 0 insertions, 0 deletions