path: root/src/librustc_codegen_llvm/common.rs
Age         Commit message  (Author, -deleted/+added lines)
2020-08-30  mv compiler to compiler/  (mark, -341/+0)
2020-08-17  rust_ast::ast => rustc_ast  (Ujjwal Sharma, -1/+1)
2020-08-06  Incorporate tracing crate  (bishtpawan, -1/+1)
2020-07-22  polymorphize GlobalAlloc::Function  (Bastian Kauschke, -1/+1)
2020-07-22  [AVR] Correctly set the pointer address space when constructing pointers to functions  (Dylan McKay, -6/+9)

This patch extends the existing `type_i8p` method so that it requires an explicit address space to be specified. Before this patch, the `type_i8p` method implicitly assumed the default address space, which is not a safe transformation on all targets, namely AVR. The Rust compiler already has support for tracking the "instruction address space" on a per-target basis. This patch extends the code generation routines so that an address space must always be specified.

In my estimation, around 15% of the callers of `type_i8p` produced invalid code on AVR due to the loss of address space prior to LLVM final code generation. This would lead to unavoidable assertion errors relating to invalid bitcasts. With this patch, the address space is always either 1) explicitly set to the instruction address space, because the logic is dealing with functions which must be placed there, or 2) explicitly set to the default address space 0, because the logic can only operate on data-space pointers and thus keeps the existing semantics of assuming the default "data" address space.
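
A minimal, self-contained sketch of the change described above. The names here (`AddressSpace`, `CodegenCx`, `type_i8p_ext`) are illustrative stand-ins, not the exact rustc internals; the point is that pointer-type constructors take an explicit address space instead of assuming the default:

```rust
// Illustrative sketch, not the actual rustc codegen API.
#[derive(Clone, Copy)]
struct AddressSpace(u32);

impl AddressSpace {
    // The default "data" address space (address space 0).
    const DATA: AddressSpace = AddressSpace(0);
}

struct CodegenCx {
    // The per-target "instruction address space" the compiler already
    // tracks; on AVR this differs from the data address space.
    instruction_address_space: AddressSpace,
}

impl CodegenCx {
    // Before the patch, `type_i8p()` implicitly used the default address
    // space. After it, callers must say where the pointer lives.
    fn type_i8p_ext(&self, address_space: AddressSpace) -> String {
        format!("i8 addrspace({})*", address_space.0)
    }

    // Pointers to functions go in the instruction address space...
    fn fn_ptr_type(&self) -> String {
        self.type_i8p_ext(self.instruction_address_space)
    }

    // ...while data pointers keep the existing default semantics.
    fn data_ptr_type(&self) -> String {
        self.type_i8p_ext(AddressSpace::DATA)
    }
}

fn main() {
    // AVR-like target: instructions live in address space 1.
    let cx = CodegenCx { instruction_address_space: AddressSpace(1) };
    assert_eq!(cx.fn_ptr_type(), "i8 addrspace(1)*");
    assert_eq!(cx.data_ptr_type(), "i8 addrspace(0)*");
}
```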
2020-06-19  Rollup merge of #72689 - lcnr:common_str, r=estebank  (Manish Goregaokar, -1/+1)

add str to common types

I already expected this to be the case, and it may slightly improve perf. As far as I can tell, if we ever want to change `str` into a lang item this would have to be reverted. Since that would be fairly simple, I don't believe this will cause any problems in the future.
2020-05-30  Make TLS accesses explicit in MIR  (Oliver Scherer, -0/+1)
2020-05-28  add str to common types  (Bastian Kauschke, -1/+1)
2020-05-08  Create a convenience wrapper for `get_global_alloc(id).unwrap()`  (Oliver Scherer, -5/+4)
2020-05-08  Simplify the `tcx.alloc_map` API  (Oliver Scherer, -2/+1)
2020-04-02  nix rustc_target::abi::* reexport in ty::layout  (Mazdak Farrokhzad, -19/+12)
2020-03-30  rustc -> rustc_middle part 2  (Mazdak Farrokhzad, -3/+3)
2020-03-27  Rename TyLayout to TyAndLayout.  (Ana-Maria Mihalache, -2/+2)
2020-03-20  remove redundant returns (clippy::needless_return)  (Matthias Krüger, -7/+3)
2020-03-13  Auto merge of #69986 - JohnTitor:rollup-h0809mf, r=JohnTitor  (bors, -2/+2)
Rollup of 12 pull requests

Successful merges:
 - #69403 (Implement `Copy` for `IoSlice`)
 - #69460 (Move some `build-pass` tests to `check-pass`)
 - #69723 (Added doc on keyword Pub.)
 - #69802 (fix more clippy findings)
 - #69809 (remove lifetimes that can be elided (clippy::needless_lifetimes))
 - #69947 (Clean up E0423 explanation)
 - #69949 (triagebot.toml: add ping aliases)
 - #69954 (rename panic_if_ intrinsics to assert_)
 - #69960 (miri engine: fix treatment of abort intrinsic)
 - #69966 (Add more regression tests)
 - #69973 (Update stable-since version for const_int_conversion)
 - #69974 (Clean up E0434 explanation)

Failed merges:

r? @ghost
2020-03-13  Auto merge of #69155 - chrissimpkins:llvm-globals, r=eddyb  (bors, -4/+7)

Add support for LLVM globals corresponding to miri allocations to be named alloc123

Adds support for this request from @eddyb in #69134:

> That is, if -Zfewer-names is false (usually only because of --emit=llvm-ir), we should use the same name for LLVM globals we generate out of miri allocs as #67133 does in MIR output (allocN).
>
> This way, we can easily see the mapping between MIR and LLVM IR (and it shouldn't be any costlier for regular compilation, which would continue to use unnamed globals).

r? @eddyb cc @oli-obk
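
A minimal sketch of the naming scheme described above (illustrative code, not the actual rustc internals): when `-Zfewer-names` is off, a global generated for a miri allocation gets the same `allocN` name that appears in MIR output; otherwise it stays unnamed, as before.

```rust
// Illustrative sketch of the naming decision, not rustc's real types.
struct AllocId(u64);

fn llvm_global_name(fewer_names: bool, id: AllocId) -> Option<String> {
    if fewer_names {
        None // regular compilation: keep the global unnamed
    } else {
        Some(format!("alloc{}", id.0)) // matches the MIR-side `allocN` name
    }
}

fn main() {
    assert_eq!(
        llvm_global_name(false, AllocId(123)).as_deref(),
        Some("alloc123")
    );
    assert_eq!(llvm_global_name(true, AllocId(123)), None);
}
```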
2020-03-12  remove lifetimes that can be elided (clippy::needless_lifetimes)  (Matthias Krüger, -2/+2)
2020-03-12  support LLVM globals corresponding to miri allocations  (Chris Simpkins, -4/+7)
2020-02-29  Rename `syntax` to `rustc_ast` in source code  (Vadim Petrochenkov, -1/+1)
2020-02-03  rustc_codegen_ssa: split declare_local into create_dbg_var and dbg_var_addr.  (Eduard-Mihai Burtescu, -0/+1)
2020-01-02  Normalize `syntax::symbol` imports.  (Mazdak Farrokhzad, -1/+1)
2019-12-22  Format the world  (Mark Rousskov, -77/+50)
2019-12-20  1. ast::Mutability::{Mutable -> Mut, Immutable -> Not}.  (Mazdak Farrokhzad, -1/+1)
2. mir::Mutability -> ast::Mutability.
2019-12-11  rustc: Link LLVM directly into rustc again  (Alex Crichton, -0/+2)
This commit builds on #65501 and continues to simplify the build system and compiler now that we no longer have multiple LLVM backends to ship by default. It switches the compiler back to what it once was long ago: linking LLVM directly into the compiler rather than dynamically loading it at runtime. The `codegen-backends` directory of the sysroot no longer exists, and all relevant support in the build system is removed. Note that `rustc` still supports a dynamically loaded codegen backend as it did previously; it just no longer supports dynamically loaded codegen backends in its own sysroot.

Additionally, as part of this, the `librustc_codegen_llvm` crate now once again explicitly depends on all of its crates instead of implicitly loading them through the sysroot. This involved filling out its `Cargo.toml` and deleting all the now-unnecessary `extern crate` annotations in the header of the crate. (This in turn required adding a number of imports for the names of macros too.)

The end results of this change are:

 * Rustbuild's build process for the compiler is simpler, as all the "oh don't forget the codegen backend" checks can be removed.
 * Building `rustc_codegen_llvm` is much simpler since it's simply another compiler crate.
 * Managing the dependencies of `rustc_codegen_llvm` is much simpler since it's "just another `Cargo.toml` to edit".
 * The build process should be a smidge faster because there's more parallelism in the main rustc build step rather than splitting `librustc_codegen_llvm` out to its own step.
 * The compiler is expected to be slightly faster by default because the codegen backend does not need to be dynamically loaded.
 * Disabling LLVM as part of rustbuild, supporting multiple codegen backends, and dynamically loading a codegen backend are all still supported.
2019-12-06  Rename to `then_some` and `then`  (varkor, -1/+1)
2019-12-06  Use `to_option` in various places  (varkor, -5/+1)
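
For context, these two entries track the bool-to-Option conversion that codegen adopted. `to_option` was the working name; the methods later stabilized on `bool` as `then_some` (eager value) and `then` (lazy closure). A small runnable example of the pattern:

```rust
fn main() {
    let x = 5;

    // Eagerly evaluated value:
    let a = (x > 0).then_some(x);

    // Lazily evaluated, for values that are expensive to compute:
    let b = (x > 0).then(|| x * 100);

    // A false condition yields None:
    let c = (x < 0).then_some(x);

    assert_eq!(a, Some(5));
    assert_eq!(b, Some(500));
    assert_eq!(c, None);
}
```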
2019-10-27  Panicking infra uses &core::panic::Location.  (Adam Perry, -18/+0)

This allows us to remove `static_panic_msg` from the SSA<->LLVM boundary, along with its fat pointer representation for &str. Also changes the signature of `PanicInfo::internal_constructor` to avoid copying. Closes #65856.
2019-10-27  Implement core::intrinsics::caller_location.  (Adam Perry, -0/+7)

Returns a `&core::panic::Location` corresponding to where it was called, also making `Location` a lang item.
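
On stable Rust this intrinsic ultimately surfaces through `#[track_caller]` and `std::panic::Location::caller()`, which lowers to `caller_location` and reports the caller's source position rather than the callee's. A small runnable illustration:

```rust
use std::panic::Location;

#[track_caller]
fn log_call_site(what: &str) {
    // Reports the *caller's* file/line/column, not this function's.
    let loc: &'static Location<'static> = Location::caller();
    println!("{} called at {}:{}:{}", what, loc.file(), loc.line(), loc.column());
}

fn main() {
    log_call_site("example"); // prints the file/line/column of this call
}
```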
2019-10-13  Improve type safety  (bjorn3, -21/+13)
2019-10-13  Remove MiscMethods::instances  (bjorn3, -1/+1)
2019-10-13  s/FuncId/Function  (bjorn3, -1/+1)
2019-10-13  Remove is_const_integral method from ConstMethods  (bjorn3, -10/+14)
2019-10-13  Introduce FuncId backend type  (bjorn3, -0/+2)
2019-09-05  Rollup merge of #64003 - Dante-Broggi:place-align-in-layout, r=matthewjasper  (Mazdak Farrokhzad, -1/+1)
place: Passing `align` = `layout.align.abi`, when also passing `layout`

Of the calls changed, 7/12 use `align` = `layout.align.abi`. `from_const_alloc` uses `alloc.align`, but that is `assert_eq!` to `layout.align.abi`. Only 4/11 use something interesting for `align`.
2019-09-04  Remove `LocalInternedString` uses from `librustc_codegen_llvm`.  (Nicholas Nethercote, -6/+7)
2019-08-29  `new_sized` is mostly used without align  (Dante-Broggi, -1/+1)
so rename it `new_sized_aligned`. 6/11 use `align` = `layout.align.abi`. `from_const_alloc` uses `alloc.align`, but that is `assert_eq!` to `layout.align.abi`. Only 4/11 use something interesting for `align`.
2019-08-17  Cast only where necessary  (Oliver Scherer, -4/+5)
2019-08-16  Do not generate allocations for zero sized allocations  (Oliver Scherer, -8/+13)
2019-08-02  assert consistency  (Ralf Jung, -3/+3)
2019-08-02  CTFE: simplify Value type by not checking for alignment  (Ralf Jung, -2/+2)
2019-07-20  Remove vector fadd/fmul reduction workarounds  (Nikita Popov, -19/+0)
The bugs that this was working around have been fixed in LLVM 9.
2019-07-09  Fix float add/mul reduction codegen  (Nikita Popov, -0/+4)
The accumulator is now respected for unordered reductions.
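
A scalar model of the semantics being fixed (illustrative, not the actual vector codegen): LLVM's fadd reduction takes a start/accumulator operand alongside the vector, and that operand must be folded into the result even for unordered (fast-math) reductions.

```rust
fn fadd_reduce(acc: f32, v: &[f32]) -> f32 {
    // Equivalent to reducing the vector and adding the accumulator;
    // the bug was that `acc` could be dropped for unordered reductions.
    v.iter().fold(acc, |a, &x| a + x)
}

fn main() {
    let v = [1.0f32, 2.0, 3.0, 4.0];
    assert_eq!(fadd_reduce(0.0, &v), 10.0);
    assert_eq!(fadd_reduce(5.0, &v), 15.0); // accumulator respected
}
```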
2019-07-06  Remove use of mem::uninitialized in code_gen crate  (Lzu Tao, -2/+1)
2019-07-04  Permit use of mem::uninitialized via allow(deprecated)  (Mark Rousskov, -0/+1)
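
For context, `mem::uninitialized` was deprecated because it conjures an invalid value for most types. The standard replacement is `MaybeUninit`, which tracks the uninitialized state in the type; the usual migration pattern looks like this:

```rust
use std::mem::MaybeUninit;

fn main() {
    // Old, deprecated (and unsound for most types):
    //     let x: u32 = unsafe { std::mem::uninitialized() };

    // New: the slot stays wrapped until it is actually written.
    let mut slot = MaybeUninit::<u32>::uninit();
    slot.write(42);

    // Only safe to unwrap after initialization.
    let x = unsafe { slot.assume_init() };
    assert_eq!(x, 42);
}
```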
2019-06-19  Weave the alignment through `ByRef`  (Oliver Scherer, -3/+4)
2019-05-30  light refactoring of global AllocMap  (Ralf Jung, -4/+4)
 * rename AllocKind -> GlobalAlloc. This stores the allocation itself, not just its kind.
 * rename the methods that allocate stuff to have consistent names.
2019-05-26  rename Scalar::Bits to Scalar::Raw and bits field to data  (Ralf Jung, -3/+3)
2019-04-21  Change return type of `TyCtxt::is_static` to bool  (Vadim Petrochenkov, -1/+1)
Add `TyCtxt::is_mutable_static`
2019-03-29  Remove const_{cstr,str_slice,get_elt,get_real} and is_const_real methods from cg_ssa  (bjorn3, -66/+66)

This introduces the static_panic_msg trait method to StaticBuilderMethods.
2019-03-29  Remove const_{fat_ptr,array,vector,bytes} from cg_ssa  (bjorn3, -26/+28)