path: root/compiler/rustc_llvm/llvm-wrapper/CoverageMappingWrapper.cpp
author: Mara Bos <m-ou.se@m-ou.se> 2021-01-17 12:24:42 +0000
committer: GitHub <noreply@github.com> 2021-01-17 12:24:42 +0000
commit: 152f425dcb118f5e92a060d259d3b5236aa59333 (patch)
tree: f36f806e8684d82fcb8622573c0c202b437e0452 /compiler/rustc_llvm/llvm-wrapper/CoverageMappingWrapper.cpp
parent: 3d5e7e0f477515b2ce680e712a15a61d11da69a7 (diff)
parent: 4e27ed3af19e604d7b65e130145fcecdc69fba7a (diff)
Rollup merge of #80201 - saethlin:bufreader-read-exact, r=KodrAus
Add benchmark and fast path for BufReader::read_exact

At work, we have a wrapper type that implements this optimization. It would be nice if the standard library's `BufReader` were this fast out of the box.

Before:
```
test io::buffered::tests::bench_buffered_reader_small_reads       ... bench:       7,670 ns/iter (+/- 45)
```
After:
```
test io::buffered::tests::bench_buffered_reader_small_reads       ... bench:       4,457 ns/iter (+/- 41)
```
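The benchmark drops from 7,670 to 4,457 ns/iter, roughly a 1.7x speedup. The idea behind the fast path can be sketched as follows; this is a minimal illustration, not the actual standard-library implementation, and `read_exact_fast_path` is a hypothetical free function standing in for the specialized `read_exact` on `BufReader`. When the internal buffer already holds enough bytes, a single copy-and-consume replaces the generic loop of small `read` calls:

```rust
use std::io::{self, BufRead, Read};

// Sketch of the fast-path idea (assumed shape, not std's code): if the
// internal buffer already contains `buf.len()` bytes, serve the request
// with one copy instead of repeated small reads.
fn read_exact_fast_path<R: Read>(
    reader: &mut io::BufReader<R>,
    buf: &mut [u8],
) -> io::Result<()> {
    let internal = reader.fill_buf()?;
    if internal.len() >= buf.len() {
        // Fast path: one memcpy, then mark the bytes as consumed.
        buf.copy_from_slice(&internal[..buf.len()]);
        let n = buf.len();
        reader.consume(n);
        Ok(())
    } else {
        // Slow path: fall back to the generic `read_exact` loop, which
        // drains the buffer and reads from the underlying source as needed.
        reader.read_exact(buf)
    }
}

fn main() -> io::Result<()> {
    let mut r = io::BufReader::new(io::Cursor::new(b"hello world".to_vec()));
    let mut out = [0u8; 5];
    read_exact_fast_path(&mut r, &mut out)?;
    println!("{}", String::from_utf8_lossy(&out));
    Ok(())
}
```

The win comes from requests smaller than the buffered data skipping the per-call bookkeeping of the generic loop entirely, which is exactly the case the `small_reads` benchmark exercises.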
Diffstat (limited to 'compiler/rustc_llvm/llvm-wrapper/CoverageMappingWrapper.cpp')
0 files changed, 0 insertions, 0 deletions