| field | value | timestamp |
|---|---|---|
| author | bors <bors@rust-lang.org> | 2024-05-29 11:57:13 +0000 |
| committer | bors <bors@rust-lang.org> | 2024-05-29 11:57:13 +0000 |
| commit | f2e1a3a80ae54734e1a3d306f31c2caebb05de9b | |
| tree | d2c0f2cc38fec93e1f792592bd48c2bd968c69da /compiler/rustc_abi/src/layout.rs | |
| parent | 4cf5723dbe471ef0a32857b968b91498551f5e38 | |
| parent | 37aeb75eb6c68dfee49e9ecdbd0c46e137b28bc7 | |
Auto merge of #125360 - RalfJung:packed-field-reorder, r=fmease
don't inhibit random field reordering on repr(packed(1))

`inhibit_struct_field_reordering_opt` being true means we exclude this type from random field shuffling. However, `packed(1)` types can still be shuffled! The logic was added in https://github.com/rust-lang/rust/pull/48528, since it's pointless to reorder fields in `packed(1)` types (there is no padding that could be saved) -- but that shouldn't inhibit `-Zrandomize-layout` (which did not exist at the time).

We could add an optimization elsewhere to not bother sorting the fields for `repr(packed)` types, but I don't think that's worth the effort.

This *does* change the behavior in that we may now reorder fields of `packed(1)` structs (e.g. if there are niches, we'll try to move them to the start/end, according to `NicheBias`). We were always allowed to do that, but so far we didn't. Quoting the [reference](https://doc.rust-lang.org/reference/type-layout.html):

> On their own, `align` and `packed` do not provide guarantees about the order of fields in the layout of a struct or the layout of an enum variant, although they may be combined with representations (such as `C`) which do provide such guarantees.
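To make the quoted guarantee concrete, here is a minimal runnable sketch contrasting bare `packed` with `C, packed` (the struct names are made up for illustration; `offset_of!` requires Rust 1.77+):

```rust
use std::mem::offset_of;

// Field order here is NOT guaranteed: the compiler may reorder the fields
// (and -Zrandomize-layout may shuffle them), even though packed(1) leaves
// no padding to save.
#[repr(packed)]
#[allow(dead_code)]
struct OnlyPacked {
    a: u8,
    b: u32,
}

// Adding `C` pins declaration order: `a` at offset 0, `b` at offset 1.
#[repr(C, packed)]
#[allow(dead_code)]
struct CPacked {
    a: u8,
    b: u32,
}

fn main() {
    // The OnlyPacked offsets are whatever this compiler version happened to
    // choose; only the CPacked offsets are stable guarantees.
    println!("OnlyPacked: a={}, b={}", offset_of!(OnlyPacked, a), offset_of!(OnlyPacked, b));
    println!("CPacked:    a={}, b={}", offset_of!(CPacked, a), offset_of!(CPacked, b));
}
```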
Diffstat (limited to 'compiler/rustc_abi/src/layout.rs')
 compiler/rustc_abi/src/layout.rs | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)
```diff
diff --git a/compiler/rustc_abi/src/layout.rs b/compiler/rustc_abi/src/layout.rs
index a95ef4c460f..9165b4f2df3 100644
--- a/compiler/rustc_abi/src/layout.rs
+++ b/compiler/rustc_abi/src/layout.rs
@@ -970,7 +970,7 @@ fn univariant<
     let mut align = if pack.is_some() { dl.i8_align } else { dl.aggregate_align };
     let mut max_repr_align = repr.align;
     let mut inverse_memory_index: IndexVec<u32, FieldIdx> = fields.indices().collect();
-    let optimize = !repr.inhibit_struct_field_reordering_opt();
+    let optimize = !repr.inhibit_struct_field_reordering();
     if optimize && fields.len() > 1 {
         let end = if let StructKind::MaybeUnsized = kind { fields.len() - 1 } else { fields.len() };
         let optimizing = &mut inverse_memory_index.raw[..end];
@@ -1007,13 +1007,15 @@ fn univariant<
         // Calculates a sort key to group fields by their alignment or possibly some
         // size-derived pseudo-alignment.
         let alignment_group_key = |layout: &F| {
+            // The two branches here return values that cannot be meaningfully compared with
+            // each other. However, we know that consistently for all executions of
+            // `alignment_group_key`, one or the other branch will be taken, so this is okay.
             if let Some(pack) = pack {
                 // Return the packed alignment in bytes.
                 layout.align.abi.min(pack).bytes()
             } else {
-                // Returns `log2(effective-align)`. This is ok since `pack` applies to all
-                // fields equally. The calculation assumes that size is an integer multiple of
-                // align, except for ZSTs.
+                // Returns `log2(effective-align)`. The calculation assumes that size is an
+                // integer multiple of align, except for ZSTs.
                 let align = layout.align.abi.bytes();
                 let size = layout.size.bytes();
                 let niche_size = layout.largest_niche.map(|n| n.available(dl)).unwrap_or(0);
```
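A hypothetical standalone sketch of the invariant the new comment describes -- none of this is rustc's actual code, only the two key regimes it names (packed alignment in bytes vs. `log2(effective-align)`):

```rust
// The closure's two regimes produce keys on different scales: bytes when
// packed, log2(bytes) otherwise. Within one struct `pack` is fixed, so a
// sort only ever compares keys from one regime.
fn sort_key(align_bytes: u64, pack: Option<u64>) -> u64 {
    match pack {
        // Packed regime: alignment capped at the pack value, in bytes.
        Some(pack) => align_bytes.min(pack),
        // Unpacked regime: log2 of the alignment, e.g. align 8 -> key 3.
        None => u64::from(align_bytes.trailing_zeros()),
    }
}

fn main() {
    let aligns = [1u64, 8, 4, 2];
    // Under packed(1) every field gets the same key: there is no padding to
    // save, which is why PR #48528 skipped reordering -- but shuffling for
    // -Zrandomize-layout is still legal and useful.
    let packed: Vec<_> = aligns.iter().map(|&a| sort_key(a, Some(1))).collect();
    let unpacked: Vec<_> = aligns.iter().map(|&a| sort_key(a, None)).collect();
    println!("packed(1) keys: {packed:?}"); // [1, 1, 1, 1]
    println!("unpacked keys:  {unpacked:?}"); // [0, 3, 2, 1]
}
```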
