path: root/src/rustllvm/RustWrapper.cpp
author    bors <bors@rust-lang.org>  2014-08-03 03:06:12 +0000
committer bors <bors@rust-lang.org>  2014-08-03 03:06:12 +0000
commit    46d1ee72ac0292adb3db66de23216b57a40a52b2 (patch)
tree      6af501d5c1c76395556c269f3a2ffffe195b4f04 /src/rustllvm/RustWrapper.cpp
parent    12306da80ca4b881d784d6590f7d3fee36cf97ac (diff)
parent    fd69365ead71e6ee0ca2990a926a163df5076f2d (diff)
auto merge of #16191 : DaGenix/rust/fix-aligned-access, r=alexcrichton
When I originally wrote the read_u32v_be() and write_u32_be() functions, I didn't consider the memory alignment requirements of various architectures. Unfortunately, the current implementations may perform unaligned reads and writes. This doesn't affect x86 / x86_64, but it can cause a compiler crash on ARM. This pull request rewrites those functions to make sure that all memory access is always correctly aligned.

This fix is a little bit academic: due to the way that LLVM aligns the structures passed as arguments to these functions, I believe that in practice all memory access happens to be aligned anyway. However, nothing in the code actually enforces that, at least not explicitly. The new implementations are definitely slower than the existing ones, but I don't believe these functions are significant to the overall performance of the compiler. Getting rid of some unsafe code and removing a potential portability landmine justifies a very slight decrease in raw performance.
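The alignment-safe approach described above can be sketched in modern Rust. The function names match those in the commit message, but the bodies below are illustrative, not the actual patched code from 2014: instead of casting a `&[u8]` pointer to `*const u32` (which may be unaligned), each word is assembled and emitted byte by byte, so every memory access is a single-byte load or store and alignment never matters.

```rust
/// Read big-endian u32 values from `src` into `dst`, one byte at a time.
/// `src.len()` must be exactly `4 * dst.len()`.
fn read_u32v_be(dst: &mut [u32], src: &[u8]) {
    assert_eq!(src.len(), dst.len() * 4);
    for (out, chunk) in dst.iter_mut().zip(src.chunks_exact(4)) {
        // Shift-and-or assembly: only byte loads, no pointer casts.
        *out = ((chunk[0] as u32) << 24)
             | ((chunk[1] as u32) << 16)
             | ((chunk[2] as u32) << 8)
             |  (chunk[3] as u32);
    }
}

/// Write `v` into `dst` as a big-endian u32, one byte at a time.
fn write_u32_be(dst: &mut [u8], v: u32) {
    assert_eq!(dst.len(), 4);
    dst[0] = (v >> 24) as u8;
    dst[1] = (v >> 16) as u8;
    dst[2] = (v >> 8) as u8;
    dst[3] = v as u8;
}

fn main() {
    let src = [0xDE, 0xAD, 0xBE, 0xEF, 0x00, 0x00, 0x00, 0x2A];
    let mut dst = [0u32; 2];
    read_u32v_be(&mut dst, &src);
    assert_eq!(dst, [0xDEAD_BEEF, 42]);

    let mut buf = [0u8; 4];
    write_u32_be(&mut buf, 0x0102_0304);
    assert_eq!(buf, [1, 2, 3, 4]);
}
```

Today the standard library provides `u32::from_be_bytes` and `u32::to_be_bytes` for exactly this, but at the time of this commit the byte-by-byte form was the straightforward way to stay safe and portable.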
Diffstat (limited to 'src/rustllvm/RustWrapper.cpp')
0 files changed, 0 insertions, 0 deletions