path: root/compiler/rustc_codegen_llvm/src
author    Manish Goregaokar <manishsmail@gmail.com>  2021-10-04 23:56:17 -0700
committer GitHub <noreply@github.com>  2021-10-04 23:56:17 -0700
commit    dd223d5c6da0cfa822151dd706bb14dc1476e4dd (patch)
tree      dded31dd3b09e1ff1fd88e6c9f15244056ff7129 /compiler/rustc_codegen_llvm/src
parent    7a09755148c4b1543f25862e785e04834feb8b61 (diff)
parent    ce450f893d551e25123e0bdb27acc9a85d15cb7f (diff)
Rollup merge of #88651 - AGSaidi:monotonize-inner-64b-aarch64, r=dtolnay
Use the 64b inner::monotonize() implementation, not the 128b one, for aarch64

aarch64 prior to v8.4 (FEAT_LSE2) has no instruction that guarantees an
untorn 128b read; the only options are successfully completing a 128b
load/store-exclusive pair (ldxp/stxp) or a 128b compare-and-swap (casp).
Requiring a full 128b read+write atomic on every read is more expensive,
and less fair under contention, than the previous monotonize()
implementation, which used a Mutex on aarch64, especially at large core
counts. Switch aarch64 to the 64b atomic implementation, which is about
13x faster on a benchmark that makes many calls to Instant::now().
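
For illustration, here is a minimal sketch of the 64b-atomic idea: enforce
monotonicity with a single AtomicU64 and fetch_max, a one-word
read-modify-write that aarch64 can do with ordinary 64b exclusives or LSE
atomics, with no 128b untorn access and no Mutex. This is not the actual
std implementation (which packs seconds and nanoseconds into its atomic);
the names LAST and monotonic_nanos are illustrative only.

    use std::sync::atomic::{AtomicU64, Ordering};
    use std::time::Instant;

    // Illustrative global holding the last observed reading, in
    // nanoseconds since `base`. (Hypothetical; std's real inner64
    // implementation packs secs/nanos into the atomic differently.)
    static LAST: AtomicU64 = AtomicU64::new(0);

    // Return a reading that never goes backwards using one 64b atomic.
    fn monotonic_nanos(base: Instant) -> u64 {
        let raw = base.elapsed().as_nanos() as u64;
        // fetch_max returns the previous maximum; take the larger of
        // the two, so time may briefly stall but never runs backwards.
        let prev = LAST.fetch_max(raw, Ordering::Relaxed);
        raw.max(prev)
    }

    fn main() {
        let base = Instant::now();
        let a = monotonic_nanos(base);
        let b = monotonic_nanos(base);
        assert!(b >= a); // monotonic even with threads racing on LAST
        println!("{a} <= {b}");
    }

Using fetch_max instead of a compare-and-swap loop keeps the hot path to a
single atomic operation per call, which is where the fairness advantage
over a 128b read+write atomic comes from at high core counts.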
Diffstat (limited to 'compiler/rustc_codegen_llvm/src')
0 files changed, 0 insertions, 0 deletions