| field | value | date |
|---|---|---|
| author | Dylan DPC <dylan.dpc@gmail.com> | 2020-02-13 02:52:54 +0100 |
| committer | GitHub <noreply@github.com> | 2020-02-13 02:52:54 +0100 |
| commit | a50a8967552d886a91b20fcd33310ad0f28857fd (patch) | |
| tree | 757999dfe462003e4675a39d3344c6fdbff32793 /src/libstd/sys/unix/stack_overflow.rs | |
| parent | 53a790c58a9fb055b58ae471a6e57ece81c55b50 (diff) | |
| parent | ad7802f9d45b884dad58931c7a8bec91d196ad0e (diff) | |
| download | rust-a50a8967552d886a91b20fcd33310ad0f28857fd.tar.gz, rust-a50a8967552d886a91b20fcd33310ad0f28857fd.zip | |
Rollup merge of #69050 - nnethercote:micro-optimize-leb128, r=michaelwoerister
Micro-optimize the heck out of LEB128 reading and writing.

This commit makes the following writing improvements:
- Removes the unnecessary `write_to_vec` function.
- Reduces the number of conditions per loop from 2 to 1.
- Avoids a mask and a shift on the final byte.

And the following reading improvements:
- Removes an unnecessary type annotation.
- Fixes a dangerous unchecked slice access. Imagine a slice `[0x80]` -- the current code will read some number of bytes past the end of the slice. The bounds check at the end will subsequently trigger, unless something bad (like a crash) happens first. The cost of doing bounds checks in the loop body is negligible.
- Avoids a mask on the final byte.

And the following improvements for both reading and writing:
- Changes `for` to `loop` for the loops, avoiding an unnecessary condition on each iteration. This also removes the need for `leb128_size`.

All of these changes give significant perf wins, up to 5%.

r? @michaelwoerister
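To make the shape of these changes concrete, here is a minimal sketch of unsigned LEB128 encode/decode in the spirit described above -- a single `loop` with one branch per iteration, a final byte written and read without an extra mask, and a bounds-checked read so a truncated input like `[0x80]` is rejected rather than read past. The function names and `Option`-based error handling are illustrative assumptions, not the actual rustc `serialize` API:

```rust
/// Encode `value` as unsigned LEB128, appending to `out`.
/// One condition per loop iteration; the final byte (value < 0x80)
/// is pushed directly, with no mask or extra shift.
fn write_u64_leb128(out: &mut Vec<u8>, mut value: u64) {
    loop {
        if value < 0x80 {
            // Final byte: continuation bit clear, written as-is.
            out.push(value as u8);
            return;
        }
        out.push((value as u8) | 0x80);
        value >>= 7;
    }
}

/// Decode an unsigned LEB128 value from `data` at `*position`,
/// bounds-checking every byte access inside the loop.
fn read_u64_leb128(data: &[u8], position: &mut usize) -> Option<u64> {
    let mut result = 0u64;
    let mut shift = 0;
    loop {
        // Checked access: a truncated slice like `[0x80]` yields None
        // instead of reading out of bounds.
        let byte = *data.get(*position)?;
        *position += 1;
        if byte < 0x80 {
            // Final byte has no continuation bit set, so no mask is needed.
            return Some(result | ((byte as u64) << shift));
        }
        result |= ((byte & 0x7f) as u64) << shift;
        shift += 7;
    }
}

fn main() {
    let mut buf = Vec::new();
    write_u64_leb128(&mut buf, 624_485);
    assert_eq!(buf, [0xe5, 0x8e, 0x26]); // classic LEB128 example value

    let mut pos = 0;
    assert_eq!(read_u64_leb128(&buf, &mut pos), Some(624_485));
    assert_eq!(pos, 3);

    // Truncated input is rejected rather than read past the end.
    assert_eq!(read_u64_leb128(&[0x80], &mut 0), None);
}
```

A production decoder would additionally reject over-long encodings (shift growing past 63 bits); the sketch omits that to keep the loop-structure point visible.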
Diffstat (limited to 'src/libstd/sys/unix/stack_overflow.rs')
0 files changed, 0 insertions, 0 deletions
