path: root/compiler/rustc_codegen_llvm/src/back
Age | Commit message | Author | Lines
2025-08-19 | Rollup merge of #145484 - Zalathar:archive-builder, r=bjorn3 | Stuart Cook | -177/+6
Remove `LlvmArchiveBuilder` and supporting code/bindings

Switching over to the newer Rust-based `ArArchiveBuilder` happened in rust-lang/rust#128936, a year ago. Per the comment in `new_archive_builder`, that seems like enough time to justify removing the older, unused `LlvmArchiveBuilder` implementation and its associated bindings.

Fixes rust-lang/rust#128955.
2025-08-19 | Rollup merge of #145432 - Zalathar:target-machine, r=wesleywiser | Stuart Cook | -8/+12
cg_llvm: Small cleanups to `owned_target_machine`

This PR contains a few tiny cleanups to the `owned_target_machine` code. Each individual commit should be fairly straightforward.
2025-08-16 | Remove `LlvmArchiveBuilder` and supporting code/bindings | Zalathar | -177/+6
2025-08-15 | Simplify the `args_cstr_buff` assertion | Zalathar | -5/+4
2025-08-15 | Avoid an unnecessary intermediate `&mut` reference | Zalathar | -1/+1
The `NonNull::as_mut` method returns a mut *reference*, rather than the mut *pointer* that is intended here.
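For illustration only (not the actual rustc code), the distinction the commit relies on looks roughly like this; the function and its use of the pointer are hypothetical stand-ins:

```rs
use std::ptr::NonNull;

// Hypothetical example: code that ultimately wants a raw `*mut u32`.
fn bump_through_raw_pointer(value: &mut u32) {
    let nonnull = NonNull::from(value);

    // `NonNull::as_mut` would hand back a `&mut u32` (a reference), which only
    // coerces back into a pointer afterwards. `NonNull::as_ptr` yields the raw
    // `*mut u32` directly, with no intermediate `&mut` reference.
    let ptr: *mut u32 = nonnull.as_ptr();

    // SAFETY: `ptr` came from a live `&mut u32`, so it is valid and unique here.
    unsafe { *ptr += 1 };
}
```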
2025-08-15 | Avoid an explicit cast from `*const c_uchar` to `*const c_char` | Zalathar | -2/+2
As noted in the `ffi` module docs, passing pointer/length byte strings from Rust to C++ is easier if we declare them as `*const c_uchar` on the Rust side, but `const char *` (possibly signed) on the C++ side. This is allowed because both pointer types are ABI-compatible, regardless of char signedness.
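A minimal sketch of that convention, with a hypothetical binding name (the real declarations live in the `llvm` FFI module of `rustc_codegen_llvm`):

```rs
use std::ffi::c_uchar;

unsafe extern "C" {
    // The C++ side may declare this as `void ExampleSetArgs(const char *args, size_t len)`;
    // `*const c_uchar` and `const char *` are ABI-compatible pointer types.
    fn ExampleSetArgs(args: *const c_uchar, args_len: usize);
}

fn set_args(args: &[u8]) {
    // No `*const u8` -> `*const c_char` cast is needed on the Rust side.
    unsafe { ExampleSetArgs(args.as_ptr(), args.len()) };
}
```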
2025-08-15 | Declare module `rustc_codegen_llvm::back` in the normal way | Zalathar | -0/+5
Declaring these submodules directly in `lib.rs` was needlessly confusing.
2025-08-15 | Rollup merge of #145004 - bjorn3:remove_unused_fields, r=WaffleLapkin | Stuart Cook | -5/+6
Couple of minor cleanups
2025-08-14 | Remove lto inline logic | Marcelo Domínguez | -27/+1
2025-08-08 | Remove bitcode_llvm_cmdline | bjorn3 | -5/+6
Embedded bitcode used to be necessary for shipping apps through the App Store on Apple platforms, but Xcode 15 stopped embedding LLVM bitcode and the App Store no longer accepts apps with embedded bitcode.
2025-07-28 | Auto merge of #144562 - matthiaskrgr:rollup-mlvn7qo, r=matthiaskrgr | bors | -46/+8
Rollup of 7 pull requests

Successful merges:
- rust-lang/rust#144072 (update `Atomic*::from_ptr` and `Atomic*::as_ptr` docs)
- rust-lang/rust#144151 (`tests/ui/issues/`: The Issues Strike Back [1/N])
- rust-lang/rust#144300 (Clippy fixes for miropt-test-tools)
- rust-lang/rust#144399 (Add a ratchet for moving all standard library tests to separate packages)
- rust-lang/rust#144472 (str: Mark unstable `round_char_boundary` feature functions as const)
- rust-lang/rust#144503 (Various refactors to the codegen coordinator code (part 3))
- rust-lang/rust#144530 (coverage: Infer `instances_used` from `pgo_func_name_var_map`)

r? `@ghost`
`@rustbot` modify labels: rollup
2025-07-28 | use let chains in ast, borrowck, codegen, const_eval | Kivooeo | -4/+4
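The rewrite is mechanical; as a hedged sketch (not the actual rustc code), nested `if let`s collapse into a single let chain:

```rs
// Hypothetical example of the shape of the change enabled by let chains (Rust 2024).
fn first_positive(values: &[i32]) -> Option<i32> {
    // Before:
    // if let Some(&first) = values.first() {
    //     if first > 0 {
    //         return Some(first);
    //     }
    // }
    // After:
    if let Some(&first) = values.first() && first > 0 {
        return Some(first);
    }
    None
}
```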
2025-07-26 | Remove support for -Zcombine-cgu | bjorn3 | -23/+0
Nobody seems to actually use this, while it still adds some extra complexity to the already rather complex codegen coordinator code. It is also not supported by any backend other than the LLVM backend.
2025-07-25 | Use the object crate rather than LLVM for extracting bitcode sections | bjorn3 | -23/+8
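A rough sketch of the approach using the `object` crate (not the actual rustc implementation; the `.llvmbc` section name is the ELF convention and differs on other object formats):

```rs
use object::{Object, ObjectSection};

// Parse an object file from raw bytes and pull out its embedded bitcode section, if any.
fn extract_bitcode(object_bytes: &[u8]) -> Option<Vec<u8>> {
    let file = object::File::parse(object_bytes).ok()?;
    let section = file.section_by_name(".llvmbc")?;
    Some(section.data().ok()?.to_vec())
}
```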
2025-07-24 | Auto merge of #144062 - bjorn3:lto_refactors2, r=davidtwco | bors | -91/+23
Various refactors to the LTO handling code (part 2)

Continuing from https://github.com/rust-lang/rust/pull/143388, this removes a bit of dead code and moves the LTO symbol export calculation from individual backends to cg_ssa.
2025-07-22 | Rollup merge of #142097 - ZuseZ4:offload-host1, r=oli-obk | 许杰友 Jieyou Xu (Joe) | -0/+7
gpu offload host code generation

r? ghost

This will generate most of the host-side code needed to use LLVM's offload feature. This first PR only handles automatic memory transfers to and from the device: if a user calls a kernel, we will copy inputs back and forth, but we won't do the actual kernel launch. Before merging, we will use LLVM's Info infrastructure to verify that the memcopies match what OpenMP offload generates in C++. `LIBOMPTARGET_INFO=-1 ./my_rust_binary` should print that a memcpy to and later from the device is happening.

A follow-up PR will generate the actual device-side kernel, which will then do computations on the GPU. A third PR will implement manual host2device and device2host functionality, but the goal is to minimize cases where a user has to overwrite our default handling due to performance issues.

I'm trying to get a full MVP out first, so this just recognizes GPU functions based on magic names. The final frontend will obviously move this over to use proper macros, like I'm already doing for the autodiff work. This work will also be compatible with std::autodiff, so one can differentiate GPU kernels.

Tracking:
- https://github.com/rust-lang/rust/issues/131513
2025-07-21 | Remove each_linked_rlib_for_lto from CodegenContext | bjorn3 | -4/+7
2025-07-21 | Move exported_symbols_for_lto out of CodegenContext | bjorn3 | -4/+8
2025-07-21 | Merge exported_symbols computation into exported_symbols_for_lto | bjorn3 | -6/+5
And move exported_symbols_for_lto call from backends to cg_ssa.
2025-07-21 | Move LTO symbol export calculation from backends to cg_ssa | bjorn3 | -77/+14
2025-07-21 | Merge modules and cached_modules for fat LTO | bjorn3 | -12/+1
The modules vec can already contain serialized modules and there is no need to distinguish between cached and non-cached cgus at LTO time.
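A simplified sketch of the resulting shape, with hypothetical types standing in for the real rustc ones:

```rs
// Hypothetical, simplified stand-ins for the actual rustc types: the point is
// that one input list can hold both freshly codegenned and cached/serialized
// modules, so fat LTO no longer needs a separate `cached_modules` parameter.
enum FatLtoInput {
    /// A module that was just codegenned and is still in memory.
    InMemory(Vec<u8>),
    /// A module read back from the incremental cache as raw bytes.
    Serialized { name: String, buffer: Vec<u8> },
}

fn run_fat_lto(modules: Vec<FatLtoInput>) {
    for module in modules {
        match module {
            FatLtoInput::InMemory(_bitcode) => { /* link the live module */ }
            FatLtoInput::Serialized { name: _, buffer: _ } => { /* parse, then link */ }
        }
    }
}
```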
2025-07-18 | gpu host code generation | Manuel Drehwald | -0/+1
2025-07-18 | add -Zoffload=Enable flag behind -Zunstable-options, to enable gpu (host) code generation | Manuel Drehwald | -0/+6
2025-07-18 | Pass wasm exception model to TargetOptions | Nikita Popov | -0/+6
This is no longer implied by -wasm-enable-eh.
2025-07-17 | Rollup merge of #143388 - bjorn3:lto_refactors, r=compiler-errors | León Orell Valerian Liehr | -11/+10
Various refactors to the LTO handling code

In particular, reducing the sharing of code paths between fat and thin LTO and making the fat LTO implementation more self-contained. This also moves some autodiff handling out of cg_ssa into cg_llvm, given that Enzyme only works with LLVM anyway and an implementation for another backend may do things entirely differently. This will also make it a bit easier to split LTO handling out of the coordinator thread main loop into a separate loop, which should reduce the complexity of the coordinator thread.
2025-07-14 | Avoid a bunch of unnecessary `unsafe` blocks in cg_llvm | Oli Scherer | -41/+36
2025-07-11 | Rollup merge of #143633 - dillona:noinline-assert, r=fee1-dead | Matthias Krüger | -1/+1
fix: correct assertion to check for 'noinline' attribute presence before removal
2025-07-10 | Make some "safe" llvm ops actually sound | Oli Scherer | -1/+1
2025-07-08 | fix: correct assertion to check for 'noinline' attribute presence before removal | Dillon Amburgey | -1/+1
2025-07-03 | Move dcx creation into WriteBackendMethods::codegen | bjorn3 | -1/+3
2025-07-03 | Remove LtoModuleCodegen | bjorn3 | -10/+7
Most uses of it contain either a fat or a thin LTO module. Only WorkItem::LTO could contain both, but splitting that enum variant doesn't complicate things much.
2025-06-25 | added PrintTAFn flag for autodiff | Karan Janthe | -1/+5
Signed-off-by: Karan Janthe <karanjanthe@gmail.com>
2025-05-28 | Mark all optimize methods and the codegen method as safe | bjorn3 | -3/+3
There is no safety contract, and I don't think any of them can cause UB in more ways than passing malicious source code to rustc can. While LtoModuleCodegen::optimize says that the returned ModuleCodegen points into the LTO module, the LTO module has already been dropped by the time this function returns; if the returned ModuleCodegen really did point into the LTO module, we would have seen crashes on every LTO compilation, which we don't. As such, the comment is outdated.
2025-05-11 | Add a safe wrapper for `LLVMAppendModuleInlineAsm` | Zalathar | -2/+2
This patch also changes the Rust-side declaration to take `*const c_uchar` instead of `*const c_char`, to avoid the need for `AsCCharPtr`.
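The wrapper pattern looks roughly like this (a hedged sketch, not the actual rustc binding; the module handle type is a stand-in):

```rs
use std::ffi::{c_uchar, c_void};

unsafe extern "C" {
    // Hypothetical re-declaration for illustration; the real binding lives in
    // `rustc_codegen_llvm`. Declared with `*const c_uchar` so callers can pass
    // a `&[u8]` directly, while the C side still sees `const char *`.
    fn LLVMAppendModuleInlineAsm(module: *mut c_void, asm: *const c_uchar, len: usize);
}

/// Wrapper that keeps the pointer/length pairing in one place: the bytes always
/// come from a live slice, so callers never build the raw pair by hand.
fn append_module_inline_asm(module: *mut c_void, asm: &[u8]) {
    unsafe { LLVMAppendModuleInlineAsm(module, asm.as_ptr(), asm.len()) };
}
```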
2025-05-04 | Initial support for dynamically linked crates | Bryanskiy | -1/+2
2025-04-28 | remove noinline attribute and add alwaysinline after AD pass | bit-aloo | -1/+27
2025-04-24 | Rollup merge of #139700 - EnzymeAD:autodiff-flags, r=oli-obk | Matthias Krüger | -19/+37
Autodiff flags

Interestingly, it seems that some other projects have conflicts with exactly the same LLVM optimization passes as autodiff. At least `LLVMRustOptimize` has exactly the flags that we need to disable problematic opt passes. This PR enables us to compile code where users differentiate two identical functions in the same module. This has been especially common in test cases, but it's not impossible to encounter in the wild.

It also enables two new flags for testing/debugging. I consider writing an MCP to upgrade PrintPasses to be a standalone -Z flag, since it is *not* the same as `-Z print-llvm-passes`, which IMHO gives less useful output. A discussion can be found here: https://rust-lang.zulipchat.com/#narrow/channel/187780-t-compiler.2Fllvm/topic/Print.20llvm.20passes.2E/near/511533038

Finally, it improves `PrintModBefore` and `PrintModAfter`. They used to work reliably, but now we just schedule Enzyme as part of an existing ModulePassManager (MPM). Since Enzyme is last in the MPM scheduling, PrintModBefore became very inaccurate: it used to print the input module that we gave to Enzyme, which was great for creating llvm-ir reproducers. However, lately the MPM would run the whole `default<O3>` pipeline, which heavily modifies the llvm module, before we pass it to Enzyme. That made it impossible to use the flag to create llvm-ir reproducers for Enzyme bugs. We now schedule a PrintModule pass just before Enzyme, solving this problem.

Based on the PrintPass output, it also _seems_ like changing `registerEnzymeAndPassPipeline(PB, true);` to `registerEnzymeAndPassPipeline(PB, false);` has no effect. In theory, the bool should tell Enzyme to schedule some helpful passes in the PassBuilder. However, since it doesn't do anything and I'm not 100% sure anymore whether we really need it, I'll just disable it for now and postpone investigations.

r? `@oli-obk`

closes #139471

Tracking:
- https://github.com/rust-lang/rust/issues/124509
2025-04-12 | update documentation | Manuel Drehwald | -0/+5
2025-04-12 | fix "could not find source function" error by preventing function merging before AD | Manuel Drehwald | -1/+4
2025-04-12 | fix LooseTypes flag and PrintMod behaviour, add debug helper | Manuel Drehwald | -18/+28
2025-04-07 | Prepend temp files with a string per invocation of rustc | Michael Goulet | -12/+45
2025-04-07 | Simplify temp path creation a bit | Michael Goulet | -16/+11
2025-04-06 | Remove LLVM 18 inline ASM span fallback | beetrees | -5/+2
2025-04-05 | Rollup merge of #137880 - EnzymeAD:autodiff-batching, r=oli-obk | Stuart Cook | -2/+10
Autodiff batching

Enzyme supports batching, which is especially known from the ML side when training neural networks. There we would normally have a training loop where in each iteration we pass in some data (e.g. an image) and a target vector. Based on how close we are with our prediction, we compute our loss and then use backpropagation to compute the gradients and update our weights. That's quite inefficient, so what you normally do is pass in a batch of 8/16/.. images and targets and compute the gradients for all of them at once, allowing better optimizations.

Enzyme supports batching in two ways. The first one (which I implemented here) just accepts a batch size, and then each Dual/Duplicated argument has not one but N shadow arguments. So instead of

```rs
for i in 0..100 {
    df(x[i], y[i], 1234);
}
```

you can now do

```rs
for i in (0..100).step_by(4) {
    df(x[i + 0], x[i + 1], x[i + 2], x[i + 3],
       y[i + 0], y[i + 1], y[i + 2], y[i + 3],
       1234);
}
```

which will give the same results but allows better compiler optimizations. See the testcase for details.

There is a second variant, where we can mark certain arguments, and instead of having to pass in N shadow arguments, Enzyme assumes that the argument is N times longer. I.e. instead of accepting 4 slices with 12 floats each, we would accept one slice with 48 floats. I'll implement this over the next days. I will also add more tests for both modes.

For anyone preferring a more interactive explanation, here's a video of Tim's LLVM dev talk, where he presents his work: https://www.youtube.com/watch?v=edvaLAL5RqU
I'll also add some other docs to the dev guide and user docs in another PR.

r? ghost

Tracking:
- https://github.com/rust-lang/rust/issues/124509
- https://github.com/rust-lang/rust/issues/135283
2025-04-04 | add new flag to print the module post-AD, before opts | Manuel Drehwald | -2/+10
2025-04-04 | Rollup merge of #138949 - madsmtm:rename-to-darwin, r=WaffleLapkin | Matthias Krüger | -3/+3
Rename `is_like_osx` to `is_like_darwin`

Replace `is_like_osx` with `is_like_darwin`, which more closely describes reality (OS X is the pre-2016 name for macOS, and is by now quite outdated; Darwin is the overall name for the OS underlying Apple's macOS, iOS, etc.).

`@rustbot` label O-apple

r? compiler
2025-03-25 | Rename `is_like_osx` to `is_like_darwin` | Mads Marquart | -3/+3
2025-03-25 | Reduce visibility of most items in `rustc_codegen_llvm` | Daniel Paoliello | -1/+1
2025-03-01 | Auto merge of #133250 - DianQK:embed-bitcode-pgo, r=nikic | bors | -55/+98
The embedded bitcode should always be prepared for LTO/ThinLTO

Fixes #115344. Fixes #117220.

There are currently two methods for generating the bitcode that is used for LTO. One involves using `-C linker-plugin-lto` to emit object files as bitcode, which is the typical setting used by cargo. The other is `-C embed-bitcode=yes`.

With `-C embed-bitcode=yes -C lto=no`, we run a complete non-LTO LLVM pipeline to obtain the bitcode, and then that bitcode is used for LTO. As a result, we run the Call Graph Profile Pass twice on the same module. This PR does something similar to LLVM's `buildFatLTODefaultPipeline`, obtaining the bitcode for embedding after running `buildThinLTOPreLinkDefaultPipeline`.

r? nikic
2025-02-28 | Rollup merge of #137017 - bjorn3:ignore_invalid_bitcode, r=oli-obk | 许杰友 Jieyou Xu (Joe) | -3/+22
Don't error when adding a staticlib with bitcode files compiled by newer LLVM

cc https://github.com/rust-lang/rust/issues/128955#issuecomment-2657811196