author     bors <bors@rust-lang.org>  2023-04-18 03:11:18 +0000
committer  bors <bors@rust-lang.org>  2023-04-18 03:11:18 +0000
commit     386025117a6b7cd9e7f7c96946793db2ec8aa24c (patch)
tree       5dcd9f994eae1d84603e81d686c6db9db389d5d1
parent     e279f902f31af1e111f2a951781c9eed82f8c360 (diff)
parent     ad8d304163a8c0e8a20d4a1d9783734586273f4a (diff)
Auto merge of #110410 - saethlin:hash-u128-as-u64s, r=oli-obk
Implement StableHasher::write_u128 via write_u64

In https://github.com/rust-lang/rust/pull/110367#issuecomment-1510114777 the cachegrind diffs indicate that nearly all the regression is from this:
```
22,892,558  ???:<rustc_data_structures::sip128::SipHasher128>::slice_write_process_buffer
-9,502,262  ???:<rustc_data_structures::sip128::SipHasher128>::short_write_process_buffer::<8>
```
This happens because the diff for that perf run swaps a `Hash::hash` of a `u64` for a `u128`. But `slice_write_process_buffer` is a `#[cold]` function, intended for hashing arbitrary-length byte arrays.

Using the much more optimizer-friendly `u64` path twice to hash a `u128` provides a nice perf boost in some benchmarks.
-rw-r--r--  compiler/rustc_data_structures/src/stable_hasher.rs  3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/compiler/rustc_data_structures/src/stable_hasher.rs b/compiler/rustc_data_structures/src/stable_hasher.rs
index 3ed1de1bc3c..1608009809f 100644
--- a/compiler/rustc_data_structures/src/stable_hasher.rs
+++ b/compiler/rustc_data_structures/src/stable_hasher.rs
@@ -107,7 +107,8 @@ impl Hasher for StableHasher {
 
     #[inline]
     fn write_u128(&mut self, i: u128) {
-        self.state.write(&i.to_le_bytes());
+        self.write_u64(i as u64);
+        self.write_u64((i >> 64) as u64);
     }
 
     #[inline]