| field | value | date |
|---|---|---|
| author | Ulrik Sverdrup <bluss@users.noreply.github.com> | 2015-07-25 11:55:26 +0200 |
| committer | Ulrik Sverdrup <bluss@users.noreply.github.com> | 2015-07-25 12:26:18 +0200 |
| commit | f910d27f87419e17cc59034265f6795db5247dfa (patch) | |
| tree | 3788c2facf00f062af3de5164250d5a401d6f2b1 /src/libstd/sys/unix/stack_overflow.rs | |
| parent | 381d2ed70d3f3c2913e19a950dee0da0149dae1d (diff) | |
| download | rust-f910d27f87419e17cc59034265f6795db5247dfa.tar.gz rust-f910d27f87419e17cc59034265f6795db5247dfa.zip | |
siphash: Use ptr::copy_nonoverlapping for efficient data loading
Use `ptr::copy_nonoverlapping` (aka memcpy) to load a u64 from the byte stream. This is correct for any alignment, and the compiler will emit the appropriate instruction to load the data.

Also use unchecked indexing. Together these changes yield a large improvement in throughput (hashed bytes / second) for long data. The maximum improvement benchmarks at a 70% increase in throughput for large inputs (> 256 bytes), but inputs of 16 bytes or more already improve. Unchecked indexing is introduced to reach the best possible throughput; using `ptr::copy_nonoverlapping` without unchecked indexing would land the improvement some 20-30 percentage points lower. A debug assertion guards the unchecked indexing so that the test suite still checks its bounds.
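A minimal sketch of the loading technique described above, not the actual libstd code: `load_u64_le` is a hypothetical helper showing how `ptr::copy_nonoverlapping` can read a u64 from any byte offset regardless of alignment, with a debug assertion standing in for the bounds check that unchecked indexing skips.

```rust
use std::ptr;

/// Load a u64 in little-endian byte order from `buf` starting at `start`.
/// The byte-wise copy (memcpy) is valid for any alignment; the compiler
/// typically lowers it to a single unaligned load on common targets.
#[inline]
unsafe fn load_u64_le(buf: &[u8], start: usize) -> u64 {
    // Bounds are only verified in debug/test builds, mirroring the
    // debug-assertion strategy mentioned in the commit message.
    debug_assert!(start + 8 <= buf.len());
    let mut out: u64 = 0;
    ptr::copy_nonoverlapping(
        buf.as_ptr().add(start),
        &mut out as *mut u64 as *mut u8,
        8,
    );
    // The copied bytes are in little-endian order; convert to native.
    u64::from_le(out)
}

fn main() {
    let data = [1u8, 0, 0, 0, 0, 0, 0, 0, 42];
    // Safe to call here because the bounds hold for this slice.
    let x = unsafe { load_u64_le(&data, 0) };
    assert_eq!(x, 1);
}
```

A plain `data[start..start + 8]` copy would be equally correct, but it keeps the bounds check on every load; hoisting that check out (and asserting it only in debug builds) is what recovers the last 20-30 percentage points of throughput claimed above.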
Diffstat (limited to 'src/libstd/sys/unix/stack_overflow.rs')
0 files changed, 0 insertions, 0 deletions
