author      Nicholas Nethercote <nnethercote@mozilla.com>    2020-02-10 18:40:02 +1100
committer   Nicholas Nethercote <nnethercote@mozilla.com>    2020-02-11 19:07:43 +1100
commit      ad7802f9d45b884dad58931c7a8bec91d196ad0e (patch)
tree        a75abe2791323a45fd0b243d4d7fdf0913eea2aa /src/libstd/sys/unix/stack_overflow.rs
parent      a19edd6b161521a4f66716b3b45b8cf4d3f03f3a (diff)
download    rust-ad7802f9d45b884dad58931c7a8bec91d196ad0e.tar.gz
            rust-ad7802f9d45b884dad58931c7a8bec91d196ad0e.zip

Micro-optimize the heck out of LEB128 reading and writing.
This commit makes the following writing improvements (sketched below):
- Removes the unnecessary `write_to_vec` function.
- Reduces the number of conditions per loop from 2 to 1.
- Avoids a mask and a shift on the final byte.
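
A minimal, hedged sketch of a writer in this style (hypothetical name and
signature; the actual rustc code differs and handles every integer width):

    // Hypothetical sketch, not the actual rustc implementation.
    fn write_u64_leb128(out: &mut Vec<u8>, mut value: u64) {
        loop {
            if value < 0x80 {
                // Final byte: the value already fits in 7 bits, so no mask or
                // shift is needed and the continuation bit stays clear.
                out.push(value as u8);
                break;
            } else {
                // Emit the low 7 bits with the continuation bit set.
                out.push(((value & 0x7f) | 0x80) as u8);
                value >>= 7;
            }
        }
    }

For example, `write_u64_leb128(&mut v, 300)` pushes `[0xAC, 0x02]`, and the
loop body tests only the single `value < 0x80` condition per iteration.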

And the following reading improvements (also sketched below):
- Removes an unnecessary type annotation.
- Fixes a dangerous unchecked slice access. Given a slice such as `[0x80]`,
  the current code reads some number of bytes past the end of the slice.
  The bounds check at the end then triggers, unless something bad (like a
  crash) happens first. The cost of doing the bounds check in the loop body
  is negligible.
- Avoids a mask on the final byte.
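
A matching reader sketch (again with hypothetical names), bounds-checking the
slice on every byte so that a truncated input such as `[0x80]` returns an
error instead of reading past the end:

    // Hypothetical sketch, not the actual rustc implementation.
    fn read_u64_leb128(data: &[u8], start: usize) -> Result<(u64, usize), ()> {
        let mut result = 0u64;
        let mut shift = 0;
        let mut position = start;
        loop {
            // Checked access: a malformed or truncated slice yields Err
            // rather than running off the end.
            let byte = *data.get(position).ok_or(())?;
            position += 1;
            if byte < 0x80 {
                // Final byte: the continuation bit is clear, so the byte is
                // already the raw 7-bit payload and needs no `& 0x7f` mask.
                result |= (byte as u64) << shift;
                return Ok((result, position - start));
            }
            result |= ((byte & 0x7f) as u64) << shift;
            shift += 7;
        }
    }

Reading `[0xAC, 0x02]` this way yields `Ok((300, 2))`, while `[0x80]` alone
yields `Err(())`.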

And the following improvements for both reading and writing:
- Changes the `for` loops to `loop`, avoiding an unnecessary condition on
  each iteration. This also removes the need for `leb128_size` (see the
  contrast sketched below).
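
For contrast, a hypothetical bounded-`for` writer of the shape this change
moves away from; the byte-count helper exists only to bound the loop, and
both it and the extra per-iteration bound check disappear with `loop`:

    // Hypothetical sketch of the old shape, not the exact removed code.
    fn leb128_size(mut value: u64) -> usize {
        // Count how many output bytes `value` needs, solely to bound the
        // `for` loop below.
        let mut bytes = 1;
        while value >= 0x80 {
            value >>= 7;
            bytes += 1;
        }
        bytes
    }

    fn write_u64_leb128_bounded(out: &mut Vec<u8>, mut value: u64) {
        // Every iteration re-tests the range bound in addition to the
        // continuation test inside the body.
        for _ in 0..leb128_size(value) {
            if value < 0x80 {
                out.push(value as u8);
            } else {
                out.push(((value & 0x7f) | 0x80) as u8);
                value >>= 7;
            }
        }
    }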

All of these changes give significant perf wins, up to 5%.
Diffstat (limited to 'src/libstd/sys/unix/stack_overflow.rs')
0 files changed, 0 insertions, 0 deletions