| author | Yuki Okushi <huyuumi.dev@gmail.com> | 2020-10-11 03:19:07 +0900 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2020-10-11 03:19:07 +0900 |
| commit | c14c9bafcd984fba40114917334e60fac6939683 (patch) | |
| tree | fbfb907aff6d48e0432096d1841fd2e5bcc3c722 /src/test/codegen/src-hash-algorithm/src-hash-algorithm-md5.rs | |
| parent | 1b134430ef110f99f0a280c9cf604dd54275b543 (diff) | |
| parent | bd49ded308f7243d1ba3170ea1bd0d5855d0544b (diff) | |
Rollup merge of #77629 - Julian-Wollersberger:recomputeRawStrError, r=varkor
Cleanup of `eat_while()` in lexer

The size of a lexer `Token` was inflated by the largest `TokenKind` variants, `LiteralKind::RawStr` and `RawByteStr`, because:

* it used `usize` although `u32` is sufficient in rustc, since crates must be smaller than 4 GB,
* and it stored the 20-byte `RawStrError` enum for error reporting.

If a raw string is invalid, it now needs to be reparsed to get the `RawStrError` data, but that is a very cold code path.

Technically this breaks other tools that depend on `rustc_lexer`, because they are now also restricted to a maximum file size of 4 GB. But this shouldn't matter in practice, and `rustc_lexer` isn't stable anyway.

Can I also get a perf run?

Edit: This makes no difference in performance. The PR now only contains a small cleanup.
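The size effect described above can be sketched with a minimal, self-contained example. These are not the actual `rustc_lexer` definitions: the type names (`WideToken`, `NarrowToken`) and the 20-byte error payload are illustrative stand-ins for the real `TokenKind`/`RawStrError` layout.

```rust
use std::mem::size_of;

// Illustrative "before": the raw-string variant carries a usize hash count
// plus an inline ~20-byte error payload, inflating every token.
enum WideKind {
    RawStr { n_hashes: usize, err: Option<[u8; 20]> },
    Ident,
}

// Illustrative "after": u32 suffices (crates are < 4 GB), and error details
// are recomputed by reparsing the literal on the cold error path instead of
// being stored in every token.
enum NarrowKind {
    RawStr { n_hashes: u32 },
    Ident,
}

#[allow(dead_code)]
struct WideToken {
    kind: WideKind,
    len: usize,
}

#[allow(dead_code)]
struct NarrowToken {
    kind: NarrowKind,
    len: u32,
}

fn main() {
    // The exact byte counts depend on the target, but the narrow layout is
    // strictly smaller because both the length field and the largest enum
    // variant shrank.
    println!("wide token:   {} bytes", size_of::<WideToken>());
    println!("narrow token: {} bytes", size_of::<NarrowToken>());
    assert!(size_of::<NarrowToken>() < size_of::<WideToken>());
}
```

Since the lexer produces tokens in bulk, shrinking the per-token size pays off in cache behavior even though the error case now requires a second parse; the PR's premise is that invalid raw strings are rare enough for that trade to be free.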
Diffstat (limited to 'src/test/codegen/src-hash-algorithm/src-hash-algorithm-md5.rs')
0 files changed, 0 insertions, 0 deletions
