path: root/compiler/rustc_codegen_gcc
Age | Commit message | Author | Lines
2025-06-03 | Remove type_test from IntrinsicCallBuilderMethods | bjorn3 | -5/+0
It is only used within cg_llvm.
2025-06-03 | Remove get_dbg_loc from DebugInfoBuilderMethods | bjorn3 | -4/+0
It is only used within cg_llvm.
2025-05-30 | Directly use from_immediate for handling bool | bjorn3 | -6/+3
2025-05-30 | Avoid computing function type for intrinsic instances | bjorn3 | -8/+3
2025-05-30 | Use layout field of OperandRef in generic_simd_intrinsic | bjorn3 | -47/+41
2025-05-30 | Use layout field of OperandRef and PlaceRef in codegen_intrinsic_call | bjorn3 | -10/+11
This avoids having to get the function signature.
2025-05-30 | Rollup merge of #141507 - RalfJung:atomic-intrinsics, r=bjorn3 | Matthias Krüger | -5/+5

atomic_load intrinsic: use const generic parameter for ordering

We have a gazillion intrinsics for the atomics because we encode the ordering into the intrinsic name rather than making it a parameter. This is particularly bad for those operations that take two orderings. Let's fix that!

This PR only converts `load`, to see if there's any feedback that would fundamentally change the strategy we pursue for the const generic intrinsics. The first two commits are preparation and could be a separate PR if you prefer.

`@BoxyUwU` -- I hope this is a use of const generics that is unlikely to explode? All we need is a const generic of enum type. We could funnel it through an integer if we had to but an enum is obviously nicer...

`@bjorn3` it seems like the cranelift backend entirely ignores the ordering?
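For orientation, here is a minimal, hypothetical sketch of the shape such an intrinsic could take; the names, bounds, and the ordering enum are assumptions, not the actual `core::intrinsics` declaration, and it needs nightly's `adt_const_params` because the const generic has enum type:

```rust
#![feature(adt_const_params)]
use std::marker::ConstParamTy;

// Hypothetical stand-in for the ordering enum used as a const generic argument.
#[allow(dead_code)]
#[derive(Clone, Copy, PartialEq, Eq, ConstParamTy)]
pub enum AtomicOrdering {
    Relaxed,
    Acquire,
    SeqCst,
}

// One generic function replaces the per-ordering intrinsic names
// (atomic_load_relaxed, atomic_load_acquire, ...). In rustc the body would be
// lowered by the codegen backend according to `ORD`; a plain read stands in here.
pub unsafe fn atomic_load<T: Copy, const ORD: AtomicOrdering>(src: *const T) -> T {
    unsafe { src.read() }
}

fn main() {
    let x = 42u32;
    // The memory ordering is now an ordinary const generic argument.
    let v = unsafe { atomic_load::<u32, { AtomicOrdering::Acquire }>(&x) };
    assert_eq!(v, 42);
}
```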
2025-05-28 | get rid of rustc_codegen_ssa::common::AtomicOrdering | Ralf Jung | -5/+5
2025-05-28 | Remove unused arg_memory_ty method | bjorn3 | -15/+3
2025-05-28 | Mark all optimize methods and the codegen method as safe | bjorn3 | -3/+3
There is no safety contract, and I don't think any of them can actually cause UB in more ways than passing malicious source code to rustc can.

While LtoModuleCodegen::optimize says that the returned ModuleCodegen points into the LTO module, the LTO module has already been dropped by the time this function returns. If the returned ModuleCodegen indeed pointed into the LTO module, we would have seen crashes on every LTO compilation, which we don't. As such, the comment is outdated.
2025-05-28 | Remove methods from StaticCodegenMethods that are not called in cg_ssa itself | bjorn3 | -8/+3
2025-05-28 | Make predefine methods take &mut self | bjorn3 | -3/+3
2025-05-28 | Remove a couple of uses of interior mutability around statics | bjorn3 | -3/+3
2025-05-28 | Remove codegen_unit from MiscCodegenMethods | bjorn3 | -7/+8
2025-05-26 | Remove usage of FnAbi in codegen_intrinsic_call | bjorn3 | -21/+11
2025-05-26 | Pass PlaceRef rather than Bx::Value to codegen_intrinsic_call | bjorn3 | -12/+9
2025-05-24 | Cleanup CodegenFnAttrFlags | Noratrieb | -1/+1
- Rename `USED` to `USED_COMPILER` to better reflect its behavior.
- Reorder some items to group the used and allocator flags together.
- Renumber them without gaps.
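As a rough, hypothetical sketch of what a cleaned-up flag set of this kind looks like (the flag names follow the commit message, but the exact values, doc text, and the full list are assumptions, not rustc's real definition; requires the `bitflags` crate):

```rust
// Hypothetical excerpt: related flags grouped together and renumbered
// without gaps, with `USED` renamed to `USED_COMPILER`.
bitflags::bitflags! {
    #[derive(Clone, Copy)]
    pub struct CodegenFnAttrFlags: u32 {
        /// `#[used(compiler)]`: keep the symbol alive in the compiler's output
        /// (formerly just `USED`).
        const USED_COMPILER = 1 << 0;
        /// `#[used(linker)]`: keep the symbol alive through linking as well.
        const USED_LINKER = 1 << 1;
        /// The function belongs to the registered `#[global_allocator]`.
        const ALLOCATOR = 1 << 2;
    }
}
```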
2025-05-19 | Rollup merge of #140874 - mejrs:rads, r=WaffleLapkin | Stuart Cook | -4/+4

make `rustc_attr_parsing` less dominant in the rustc crate graph

It has/had a glob re-export of `rustc_attr_data_structures`, which is a crate much lower in the graph, and a lot of crates were using it *just* (or *mostly*) for that re-export, while they can rely on `rustc_attr_data_structures` directly.

Previous graph:

![graph_1](https://github.com/user-attachments/assets/f4a5f13c-4222-4903-b56d-28c83511fcbd)

Graph with this PR:

![graph_2](https://github.com/user-attachments/assets/1e053d9c-75cc-402b-84df-86229c98277a)

The first commit keeps the re-export, and just changes the dependency if possible. The second commit is the "breaking change" which removes the re-export, and "explicitly" adds the `rustc_attr_data_structures` dependency where needed. It also switches over some src/tools/*.

The second commit is actually a lot more involved than I expected. Please let me know if it's a better idea to back it out and just keep the first commit.
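The shape of the change, illustrated with a self-contained toy (the module and type names here are stand-ins; in the real PR the crates are `rustc_attr_parsing` and `rustc_attr_data_structures`):

```rust
#![allow(dead_code)]

// Stand-in for rustc_attr_data_structures: a crate low in the graph that
// just defines data types.
mod attr_data_structures {
    pub struct InlineAttr;
}

// Stand-in for rustc_attr_parsing: a heavier crate higher in the graph that
// glob re-exports the data-structure crate.
mod attr_parsing {
    pub use super::attr_data_structures::*;
}

// Before: reaching the type through the re-export, which makes this consumer
// depend on the whole parsing crate just to name a data type.
use attr_parsing::InlineAttr as ViaReExport;

// After: depending on the lower crate directly; the parsing crate drops out
// of this consumer's dependency edges.
use attr_data_structures::InlineAttr as Direct;

fn main() {
    let (_a, _b): (ViaReExport, Direct) = (ViaReExport, Direct);
}
```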
2025-05-18 | Remove rustc_attr_data_structures re-export from rustc_attr_parsing | mejrs | -4/+4
2025-05-14 | Update gcc version used in rustc_codegen_version | Guillaume Gomez | -1/+1
2025-05-14 | Merge commit '6ba33f5e1189a5ae58fb96ce3546e76b13d090f5' into subtree-update_cg_gcc_2025-05-14 | Guillaume Gomez | -248/+910
2025-05-11 | Rollup merge of #140792 - Urgau:minimum-maximum-intrinsics, r=scottmcm,traviscross,tgross35 | León Orell Valerian Liehr | -0/+36

Use intrinsics for `{f16,f32,f64,f128}::{minimum,maximum}` operations

This PR creates intrinsics for `{f16,f32,f64,f128}::{minimum,maximum}` operations. This wasn't done when those operations were added as the LLVM support was too weak, but now that LLVM has libcalls for unsupported platforms we can finally use them.

Cranelift and GCC[^1] support are partial: Cranelift doesn't support `f16` and `f128`, while GCC doesn't support `f16`.

r? `@tgross35`

try-job: aarch64-gnu
try-job: dist-various-1
try-job: dist-various-2

[^1]: https://www.gnu.org/software///gnulib/manual/html_node/Functions-in-_003cmath_002eh_003e.html
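For context (this example is mine, not part of the PR), `minimum`/`maximum` differ from `min`/`max` in that they propagate NaN instead of ignoring it; on nightly they are exposed under the `float_minimum_maximum` feature:

```rust
#![feature(float_minimum_maximum)]

fn main() {
    // `max` ignores a NaN operand and returns the other value...
    assert_eq!(f64::NAN.max(1.0), 1.0);
    // ...while `maximum` propagates the NaN, which is what the new intrinsics implement.
    assert!(f64::NAN.maximum(1.0).is_nan());

    // `minimum`/`maximum` also treat -0.0 as smaller than +0.0.
    assert!(0.0f64.minimum(-0.0).is_sign_negative());
}
```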
2025-05-10 | Rollup merge of #140660 - RalfJung:more-order, r=WaffleLapkin | Matthias Krüger | -1/+0

remove 'unordered' atomic intrinsics

As their doc comment already indicates, these operations do not currently have a place in our memory model. The intrinsics were introduced to support a hack in compiler-builtins, but that hack recently got removed (see https://github.com/rust-lang/compiler-builtins/issues/788).
2025-05-09 | remove 'unordered' atomic intrinsics | Ralf Jung | -1/+0
2025-05-09 | Use intrinsics for `{f16,f32,f64,f128}::{minimum,maximum}` operations | Urgau | -0/+36
2025-05-05 | Rename Instance::new to Instance::new_raw and add a note that it is raw | Michael Goulet | -1/+1
2025-05-04 | Initial support for dynamically linked crates | Bryanskiy | -1/+5
2025-04-30 | Rollup merge of #134232 - bjorn3:naked_asm_improvements, r=wesleywiser | Matthias Krüger | -5/+5
Share the naked asm impl between cg_ssa and cg_clif

This was introduced in https://github.com/rust-lang/rust/pull/128004.
2025-04-27 | Implement the internal feature `cfg_target_has_reliable_f16_f128` | Trevor Gross | -9/+15

Support for `f16` and `f128` is varied across targets, backends, and backend versions. Eventually we would like to reach a point where all backends support these approximately equally, but until then we have to work around some of these nuances of support being observable.

Introduce the `cfg_target_has_reliable_f16_f128` internal feature, which provides the following new configuration gates:

* `cfg(target_has_reliable_f16)`
* `cfg(target_has_reliable_f16_math)`
* `cfg(target_has_reliable_f128)`
* `cfg(target_has_reliable_f128_math)`

`reliable_f16` and `reliable_f128` indicate that basic arithmetic for the type works correctly. The `_math` versions indicate that anything relying on `libm` works correctly, since sometimes this hits a separate class of codegen bugs. These options match configuration set by the build script at [1]. The logic for LLVM support is duplicated as-is from the same script. There are a few possible updates that will come as a follow up.

The config introduced here is not planned to ever become stable, it is only intended to replace the build scripts for `std` tests and `compiler-builtins` that don't have any way to configure based on the codegen backend.

MCP: https://github.com/rust-lang/compiler-team/issues/866
Closes: https://github.com/rust-lang/compiler-team/issues/866

[1]: https://github.com/rust-lang/rust/blob/555e1d0386f024a8359645c3217f4b3eae9be042/library/std/build.rs#L84-L186
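A minimal sketch of how a gate like this might be used in a nightly-only test (illustrative only; it assumes the cfg names listed above and relies on the unstable `f16` type plus the internal feature named in the commit title):

```rust
#![feature(f16, cfg_target_has_reliable_f16_f128)]

// Only compile this test where the backend is known to handle basic `f16`
// arithmetic reliably, per the cfg described above.
#[cfg(target_has_reliable_f16)]
#[test]
fn f16_basic_arithmetic() {
    let x: f16 = 1.5;
    let y: f16 = 2.5;
    assert_eq!(x + y, 4.0);
}
```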
2025-04-25 | Merge commit '4f83a4258deb99f3288a7122c0d5a78200931c61' into subtree-update_cg_gcc_2025-04-25 | Antoni Boucher | -39/+46
2025-04-24 | Suggest {to,from}_ne_bytes for transmutations between arrays and integers, etc | bendn | -6/+2
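For context (this example is mine, not from the commit), the suggested replacement avoids `transmute` entirely, since the standard library already exposes the byte-level conversions:

```rust
fn main() {
    let bytes: [u8; 4] = [0x78, 0x56, 0x34, 0x12];

    // Instead of `unsafe { std::mem::transmute::<[u8; 4], u32>(bytes) }`,
    // use the safe, explicit native-endian conversion:
    let n = u32::from_ne_bytes(bytes);

    // And the reverse direction, instead of transmuting `u32` to `[u8; 4]`:
    let back = n.to_ne_bytes();

    assert_eq!(back, bytes);
    println!("{n:#x}");
}
```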
2025-04-20 | Rollup merge of #137953 - RalfJung:simd-intrinsic-masks, r=WaffleLapkin | Chris Denton | -14/+11

simd intrinsics with mask: accept unsigned integer masks, and fix some of the errors

It's not clear at all why the mask would have to be signed, it is anyway interpreted bitwise. The backend should just make sure that works no matter the surface-level type; our LLVM backend already does this correctly. The note of "the mask may be widened, which only has the correct behavior for signed integers" explains... nothing? Why can't the code do the widening correctly? If necessary, just cast to the signed type first...

Also while we are at it, fix the errors. For simd_masked_load/store, the errors talked about the "third argument" but they meant the first argument (the mask is the first argument there). They also used the wrong type for `expected_element`.

I have extremely low confidence in the GCC part of this PR. See [discussion on Zulip](https://rust-lang.zulipchat.com/#narrow/channel/257879-project-portable-simd/topic/On.20the.20sign.20of.20masks)
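As a rough mental model of what the mask means here (my sketch, not the intrinsic's actual signature; it assumes each mask lane is all-zeros or all-ones, which is why the surface-level signedness is irrelevant):

```rust
// A scalar model of a masked load: for each lane, an all-ones mask element
// selects the value loaded from `src`, an all-zeros element keeps the
// passthrough value. Whether the lane type is i32 or u32 changes nothing,
// because only the bit pattern is inspected.
fn masked_load_model(mask: [u32; 4], src: &[f32; 4], passthrough: [f32; 4]) -> [f32; 4] {
    let mut out = passthrough;
    for lane in 0..4 {
        if mask[lane] == u32::MAX {
            out[lane] = src[lane];
        }
    }
    out
}

fn main() {
    let src = [1.0, 2.0, 3.0, 4.0];
    let passthrough = [0.0; 4];
    // Lanes 0 and 2 enabled (all-ones), lanes 1 and 3 disabled (all-zeros).
    let mask = [u32::MAX, 0, u32::MAX, 0];
    assert_eq!(masked_load_model(mask, &src, passthrough), [1.0, 0.0, 3.0, 0.0]);
}
```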
2025-04-20 | simd intrinsics with mask: accept unsigned integer masks | Ralf Jung | -14/+11
2025-04-19 | Fix import | Guillaume Gomez | -1/+2
2025-04-18 | Fix compilation error in GCC backend | Guillaume Gomez | -1/+1
2025-04-18 | Fix `rustc_codegen_gcc/tests/run/return-tuple.rs` test | Guillaume Gomez | -6/+0
2025-04-18 | Merge commit 'db1a31c243a649e1fe20f5466ba181da5be35c14' into subtree-update_cg_gcc_2025-04-18 | Guillaume Gomez | -1244/+962
2025-04-14 | Pass &mut self to codegen_global_asm | bjorn3 | -4/+4
2025-04-14 | Pass MonoItemData to MonoItem::define | bjorn3 | -2/+2
2025-04-07 | Prepend temp files with a string per invocation of rustc | Michael Goulet | -5/+21
2025-04-07 | Simplify temp path creation a bit | Michael Goulet | -8/+5
2025-04-04 | Rollup merge of #138949 - madsmtm:rename-to-darwin, r=WaffleLapkin | Matthias Krüger | -1/+1
Rename `is_like_osx` to `is_like_darwin`

Replace `is_like_osx` with `is_like_darwin`, which more closely describes reality (OS X is the pre-2016 name for macOS, and is by now quite outdated; Darwin is the overall name for the OS underlying Apple's macOS, iOS, etc.).

``@rustbot`` label O-apple

r? compiler
2025-03-28 | Auto merge of #138503 - bjorn3:string_merging, r=tmiasko | bors | -0/+1

Avoid wrapping constant allocations in packed structs when not necessary

This way LLVM will set the string merging flag if the alloc is a nul terminated string, reducing binary sizes.

try-job: armhf-gnu

2025-03-28 | Avoid wrapping constant allocations in packed structs when not necessary | bjorn3 | -0/+1
This way LLVM will set the string merging flag if the alloc is a nul terminated string, reducing binary sizes.
2025-03-25 | Rename `is_like_osx` to `is_like_darwin` | Mads Marquart | -1/+1
2025-03-17 | Remove implicit #[no_mangle] for #[rustc_std_internal_symbol] | bjorn3 | -6/+8
2025-03-12 | intrinsics: remove unnecessary leading underscore from argument names | Ralf Jung | -8/+8
2025-03-11 | Auto merge of #137586 - nnethercote:SetImpliedBits, r=bjorn3 | bors | -32/+38
Speed up target feature computation

The LLVM backend calls `LLVMRustHasFeature` twice for every feature. In short-running rustc invocations, this accounts for a surprising amount of work.

r? `@bjorn3`
2025-03-09 | Rollup merge of #138040 - thaliaarchi:use-prelude-size-of.compiler, r=compiler-errors | Matthias Krüger | -5/+1

compiler: Use `size_of` from the prelude instead of imported

Use `std::mem::{size_of, size_of_val, align_of, align_of_val}` from the prelude instead of importing or qualifying them. Apply this change across the compiler. These functions were added to all preludes in Rust 1.80.

r? ``@compiler-errors``
2025-03-07 | compiler: Use size_of from the prelude instead of imported | Thalia Archibald | -5/+1
Use `std::mem::{size_of, size_of_val, align_of, align_of_val}` from the prelude instead of importing or qualifying them. These functions were added to all preludes in Rust 1.80.
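A small example of the pattern being adopted (mine, for illustration): since Rust 1.80 these functions are in the prelude, so the `std::mem` import or path is unnecessary.

```rust
// No `use std::mem::size_of;` needed on Rust 1.80+: `size_of`, `size_of_val`,
// `align_of`, and `align_of_val` are available via the prelude.
fn main() {
    assert_eq!(size_of::<u64>(), 8);
    assert_eq!(align_of::<u32>(), 4);

    let xs = [0u16; 4];
    assert_eq!(size_of_val(&xs), 8);
}
```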