path: root/src/libstd/path
author    bors <bors@rust-lang.org>  2014-01-17 00:01:56 -0800
committer bors <bors@rust-lang.org>  2014-01-17 00:01:56 -0800
commit    93fb12e3d0644f6a8ddfa2ac1d6b0a1d8341e287 (patch)
tree      172385992ea44c1eeb621c3905f51a7838401f2a /src/libstd/path
parent    5fdc81262a5d44f10e335384b5d69b938d6d729c (diff)
parent    f4c9ed42aa1f5b83aa2f0ee3fbb5a89919d208d4 (diff)
download  rust-93fb12e3d0644f6a8ddfa2ac1d6b0a1d8341e287.tar.gz
          rust-93fb12e3d0644f6a8ddfa2ac1d6b0a1d8341e287.zip
auto merge of #11498 : c-a/rust/optimize_vuint_at, r=alexcrichton
Use a lookup table, SHIFT_MASK_TABLE, that for every possible four-bit
prefix holds how far the value should be right-shifted and which mask the
shifted value should be ANDed with. This eliminates the branches, which in
my testing gives approximately a 2x speedup.
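
A minimal sketch of the branch-free decode, reconstructed in current Rust
from the description above rather than taken verbatim from the patch (the
table layout and the helper's signature are assumptions based on the EBML
vuint encoding, where the leading bits of the first byte give the length:
1xxxxxxx = 1 byte, 01xxxxxx = 2 bytes, 001xxxxx = 3 bytes, 0001xxxx = 4 bytes):

    // For each possible four-bit prefix of the big-endian u32 read at
    // `start`, the table holds (shift, mask): how far to right-shift the
    // word and which bits of the shifted value belong to the payload.
    const SHIFT_MASK_TABLE: [(u32, u32); 16] = [
        (0, 0x0),                                               // 0000: invalid
        (0, 0x0fff_ffff),                                       // 0001: 4 bytes
        (8, 0x001f_ffff), (8, 0x001f_ffff),                     // 001x: 3 bytes
        (16, 0x3fff), (16, 0x3fff), (16, 0x3fff), (16, 0x3fff), // 01xx: 2 bytes
        (24, 0x7f), (24, 0x7f), (24, 0x7f), (24, 0x7f),         // 1xxx: 1 byte
        (24, 0x7f), (24, 0x7f), (24, 0x7f), (24, 0x7f),
    ];

    // Decodes the vuint at data[start..], returning (value, next offset).
    // Assumes at least four readable bytes at `start`; a real decoder
    // would need a slow path near the end of the buffer.
    fn vuint_at(data: &[u8], start: usize) -> (u32, usize) {
        let word = u32::from_be_bytes(data[start..start + 4].try_into().unwrap());
        let (shift, mask) = SHIFT_MASK_TABLE[(word >> 28) as usize];
        // (32 - shift) / 8 is how many bytes the vuint occupied.
        ((word >> shift) & mask, start + ((32 - shift) / 8) as usize)
    }

    fn main() {
        // 0x8a = 0b1000_1010: a one-byte vuint encoding 10.
        assert_eq!(vuint_at(&[0x8a, 0, 0, 0], 0), (10, 1));
        // 0x41 0x23: a two-byte vuint encoding 0x123.
        assert_eq!(vuint_at(&[0x41, 0x23, 0, 0], 0), (0x123, 2));
    }

Indexing the table on the word's top four bits turns the per-length branch
chain into straight-line loads, shifts, and masks, which is where the
speedup comes from.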

Timings on Intel(R) Core(TM) i5-3570K CPU @ 3.40GHz

-- Before --
running 5 tests
test ebml::tests::test_vuint_at ... ok
test ebml::bench::vuint_at_A_aligned          ... bench:       494 ns/iter (+/- 3)
test ebml::bench::vuint_at_A_unaligned        ... bench:       494 ns/iter (+/- 4)
test ebml::bench::vuint_at_D_aligned          ... bench:       467 ns/iter (+/- 5)
test ebml::bench::vuint_at_D_unaligned        ... bench:       467 ns/iter (+/- 5)

-- After --
running 5 tests
test ebml::tests::test_vuint_at ... ok
test ebml::bench::vuint_at_A_aligned          ... bench:       181 ns/iter (+/- 2)
test ebml::bench::vuint_at_A_unaligned        ... bench:       192 ns/iter (+/- 1)
test ebml::bench::vuint_at_D_aligned          ... bench:       181 ns/iter (+/- 3)
test ebml::bench::vuint_at_D_unaligned        ... bench:       197 ns/iter (+/- 6)

Diffstat (limited to 'src/libstd/path')
0 files changed, 0 insertions, 0 deletions