The indices are encoded as `u32`s in the range of invalid `char`s, so
that if any mapping fails to parse as a `char`, we know to use the
value as an index into the multi-table.
This avoids the second binary search in cases where a multi-`char`
mapping is needed.
Idea from @nikic
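A minimal sketch of the lookup, with hypothetical, abbreviated tables (the generated tables in `core` are much larger and the exact encoding may differ):

```rust
// Values that parse as a `char` are the mapping itself; values in the
// surrogate range (never a valid `char`) encode an index into the
// multi-table. Tables here are hypothetical and abbreviated.
static LOWER_SINGLE: &[(char, u32)] = &[('\u{130}', 0xD800), ('\u{391}', 0x3B1)];
static LOWER_MULTI: &[[char; 3]] = &[['i', '\u{307}', '\0']];

fn lookup_lower(c: char) -> [char; 3] {
    match LOWER_SINGLE.binary_search_by(|&(key, _)| key.cmp(&c)) {
        Ok(i) => {
            let raw = LOWER_SINGLE[i].1;
            match char::from_u32(raw) {
                // Common case: the value is the single-`char` mapping.
                Some(mapped) => [mapped, '\0', '\0'],
                // Not a valid `char`: it indexes the multi-table
                // directly, with no second binary search.
                None => LOWER_MULTI[(raw - 0xD800) as usize],
            }
        }
        // Not in the table: the char maps to itself.
        Err(_) => [c, '\0', '\0'],
    }
}
```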
|
|
The majority of `char` case replacements are single-`char` replacements,
so storing them as `[char; 3]` wastes a lot of space.
This commit splits the replacement tables for both `to_lower` and
`to_upper` into two separate tables: one with single-`char` mappings
and one with multi-`char` mappings.
This reduces the binary size of programs using all of these tables
by roughly 24K bytes.
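Roughly, the layout change (names and entries hypothetical):

```rust
// Before: every mapping paid for three `char`s of payload.
// static LOWERCASE_TABLE: &[(char, [char; 3])] = ...;

// After: the common case pays for one `char`, and only the rare
// multi-`char` mappings (like 'İ' -> "i\u{307}") live in a second table.
static LOWER_SINGLE: &[(char, char)] = &[('\u{391}', '\u{3B1}')]; // 'Α' -> 'α'
static LOWER_MULTI: &[(char, [char; 3])] = &[('\u{130}', ['i', '\u{307}', '\0'])];
```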
|
|
Since ASCII `char`s are already handled by a special case in the
`to_lower` and `to_upper` functions, there's no need to waste space on
them in the LUTs.
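For reference, a sketch of the special case that makes ASCII entries dead weight (using std as a stand-in for the table lookup):

```rust
fn to_lower_char(c: char) -> char {
    if c.is_ascii() {
        // Fast path: computed arithmetically, so the LUTs never see
        // ASCII and can drop those entries entirely.
        c.to_ascii_lowercase()
    } else {
        // Slow path: table lookup (std used here as a stand-in for
        // the generated LUTs).
        c.to_lowercase().next().unwrap_or(c)
    }
}
```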
|
|
Implements #101400.
|
|
Avoid cloning a collection only to iterate over it
`@rustbot` label: +C-cleanup
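The pattern in miniature (hypothetical example):

```rust
use std::collections::HashMap;

fn total(map: &HashMap<String, u32>) -> u32 {
    // Before: map.clone().into_iter().map(|(_, v)| v).sum()
    // clones every key and value just to read them once.
    // After: iterate by reference, no allocation.
    map.values().sum()
}
```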
|
|
Smaller improvements to tidy and the unicode generator
|
|
Reduce duplication by moving fetching logic into a dedicated function.
|
|
This updates the standard library's documentation to use the new syntax. The
documentation is worthwhile to update as it should be more idiomatic
(particularly for features like this, which are nice for users to get acquainted
with). The general codebase is likely more hassle than benefit to update: it'll
hurt git blame, and generally updates can be done by folks updating the code if
(and when) that makes things more readable with the new format.
A few places in the compiler and library code are updated as well (mostly
because they had already been done when this commit was first authored).
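Assuming the new syntax here is implicit format-args capture, the updates look like:

```rust
fn main() {
    let name = "world";
    // Old: positional argument.
    println!("Hello, {}!", name);
    // New: the identifier is captured directly from the surrounding scope.
    println!("Hello, {name}!");
}
```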
|
|
The "Alphabetic" property in Unicode 14 grew too big for the bitset
representation, panicking "cannot pack 264 into 8 bits". However, we
were already choosing the skiplist for that anyway, so this doesn't need
to be a hard failure. That panic is now a returned `Err`, and then in
`emit_codepoints` we automatically defer to skiplist.
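A hedged sketch of the new control flow (names and signatures hypothetical):

```rust
type RangeSet = [std::ops::Range<u32>];

fn try_encode_bitset(set: &RangeSet) -> Result<Vec<u8>, usize> {
    // The bitset encoding packs indices of deduplicated 64-bit chunks
    // into 8 bits; Unicode 14's Alphabetic set has 264 distinct chunks.
    let distinct_chunks = count_distinct_chunks(set);
    if distinct_chunks > 256 {
        // Previously a panic: "cannot pack 264 into 8 bits".
        return Err(distinct_chunks);
    }
    Ok(Vec::new()) // real encoding elided
}

fn emit_codepoints(set: &RangeSet) -> Vec<u8> {
    // Automatically defer to the skiplist when the bitset cannot fit.
    try_encode_bitset(set).unwrap_or_else(|_| encode_skiplist(set))
}

// Placeholder stubs for the sketch.
fn count_distinct_chunks(set: &RangeSet) -> usize { set.len() }
fn encode_skiplist(_set: &RangeSet) -> Vec<u8> { Vec::new() }
```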
|
|
Since RFC 3052 soft-deprecated the authors field anyway (hiding it from
crates.io and docs.rs, and making Cargo no longer add it by default),
and since it is not generally up-to-date or useful information, we
should remove it from crates in this repo.
|
|
clippy::useless_format, clippy::for_kv_map
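Both lints, illustrated on a toy example:

```rust
use std::collections::HashMap;

fn names(map: &HashMap<String, u32>) -> String {
    let mut out = String::new();
    // clippy::for_kv_map: `for (k, _) in map` when only keys are used
    // becomes `for k in map.keys()`.
    for k in map.keys() {
        out.push_str(k);
    }
    // clippy::useless_format: `format!("{}", out)` on a plain string
    // becomes `out.to_string()` (or here, just returning `out`).
    out
}
```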
|
|
Remove the `UnicodeVersion` struct containing
major, minor, and update fields and replace it with
a 3-tuple containing the version number.
As the value of each field is limited to 255,
use `u8` to store them.
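After the change, roughly (the version value shown is illustrative):

```rust
// Before:
// struct UnicodeVersion { major: u8, minor: u8, update: u8 }
// After: a plain 3-tuple; every component fits in a u8 (max 255).
pub const UNICODE_VERSION: (u8, u8, u8) = (14, 0, 0);
```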
|
|
In practice, for the two data sets that still use the bitset encoding (uppercase
and lowercase) this is not a significant win, so just drop it entirely. It costs
us about 5 bytes, and the complexity is nontrivial.
|
|
This arranges for the sparser sets (everything except lower- and
uppercase) to be encoded in a significantly smaller form. However, it is
also a performance trade-off (roughly 3x slower than the bitset
encoding). The 40% size reduction is deemed sufficiently important to
merit this performance loss, particularly as it is unlikely that this
code is hot anywhere (and if it is, paying the memory cost for a bitset
that directly represents the data seems worthwhile).
Alphabetic : 1599 bytes (- 937 bytes)
Case_Ignorable : 949 bytes (- 822 bytes)
Cased : 359 bytes (- 429 bytes)
Cc : 9 bytes (- 15 bytes)
Grapheme_Extend: 813 bytes (- 675 bytes)
Lowercase : 863 bytes
N : 419 bytes (- 619 bytes)
Uppercase : 776 bytes
White_Space : 37 bytes (- 46 bytes)
Total table sizes: 5824 bytes (-3543 bytes)
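A hedged sketch of the idea behind the skiplist-like encoding: store only the codepoints at which membership flips, and derive membership from the parity of the insertion point. (The real encoding further compresses the boundary list into `u8` offsets with `u32` escapes for long gaps, which is elided here.)

```rust
// Abbreviated boundary list for White_Space (illustrative, incomplete):
// membership turns on at each even index and off at each odd index.
static BOUNDARIES: &[u32] = &[
    0x09, 0x0E, // 0x09..=0x0D are white space
    0x20, 0x21, // 0x20 is white space
    0x85, 0x86, // NEL
    0xA0, 0xA1, // NBSP
];

fn lookup(c: char) -> bool {
    let i = match BOUNDARIES.binary_search(&(c as u32)) {
        Ok(i) => i + 1, // `c` is exactly a flip point, which includes it
        Err(i) => i,
    };
    // An odd number of flips at or before `c` means `c` is in a run.
    i % 2 == 1
}
```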
|
|
LLVM seems to at least sometimes optimize better when the length comes directly
from the `len()` of the array vs. an equivalent integer.
Also, this allows easier copy/pasting of the function into compiler explorer for
experimentation.
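In miniature (hypothetical table):

```rust
static CANONICAL: [u64; 32] = [0; 32];

fn in_canonical(idx: usize) -> bool {
    // Tying the bound to the array (rather than a separate `32`
    // constant) sometimes optimizes better, and keeps the function
    // self-contained for pasting into Compiler Explorer.
    idx < CANONICAL.len()
}
```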
|
|
We find that it is common for large ranges of chars to be false -- and that
means that it is plausibly common for us to ask about a word that is entirely
empty. Therefore, we should make sure that we do not need to rotate bits or
otherwise perform some operation to map to the zero word; canonicalize it first
if possible.
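A hedged sketch of that ordering choice:

```rust
fn canonical_words(words: &[u64]) -> Vec<u64> {
    // Seed with the zero word so lookups that land in large empty
    // ranges map to it directly, with no rotation or inversion.
    let mut canonical = vec![0u64];
    for &w in words {
        if !canonical.contains(&w) {
            canonical.push(w);
        }
    }
    canonical
}
```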
|
|
This optimizes slightly better.
Alphabetic : 2536 bytes
Case_Ignorable : 1771 bytes
Cased : 788 bytes
Cc : 24 bytes
Grapheme_Extend: 1488 bytes
Lowercase : 863 bytes
N : 1038 bytes
Uppercase : 776 bytes
White_Space : 83 bytes
Total table sizes: 9367 bytes (-18 bytes; 2 bytes per set)
|
|
This ensures that what we test is what we get for final results as well.
|
|
This saves fewer bytes, by far, and is likely not the best operator to
choose. But for now, it works; a better choice may arise later.
Alphabetic : 2538 bytes (- 84 bytes)
Case_Ignorable : 1773 bytes (- 30 bytes)
Cased : 790 bytes (- 18 bytes)
Cc : 26 bytes (- 6 bytes)
Grapheme_Extend: 1490 bytes (- 18 bytes)
Lowercase : 865 bytes (- 36 bytes)
N : 1040 bytes (- 24 bytes)
Uppercase : 778 bytes (- 60 bytes)
White_Space : 85 bytes (- 6 bytes)
Total table sizes: 9385 bytes (-282 bytes)
|
|
Previously, all words in the (deduplicated) bitset would be stored raw -- a full
64 bits (8 bytes). Now, those words that are equivalent to others through a
specific mapping are stored separately and "mapped" to the original when
loading; this shrinks the table sizes significantly, as each mapped word is
stored in 2 bytes (a 4x decrease from the previous).
The new encoding is also potentially non-optimal: the "mapped" byte is
frequently repeated, as in practice many mapped words use the same base word.
Currently we only support two forms of mapping: rotation and inversion. Note
that these are both guaranteed to map transitively if at all, and supporting
mappings for which this is not true may require a more interesting algorithm for
choosing the optimal pairing.
Updated sizes:
Alphabetic : 2622 bytes (- 414 bytes)
Case_Ignorable : 1803 bytes (- 330 bytes)
Cased : 808 bytes (- 126 bytes)
Cc : 32 bytes
Grapheme_Extend: 1508 bytes (- 252 bytes)
Lowercase : 901 bytes (- 84 bytes)
N : 1064 bytes (- 156 bytes)
Uppercase : 838 bytes (- 96 bytes)
White_Space : 91 bytes (- 6 bytes)
Total table sizes: 9667 bytes (-1464 bytes)
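A hedged sketch of decoding one mapped entry (field layout illustrative, not the generator's exact format):

```rust
// Canonical words are stored raw (8 bytes each); a mapped word is just
// 2 bytes: the index of its base word plus an operation selector.
static CANONICAL: &[u64] = &[0, 0x0000_FFFF_0000_0000];

fn decode_mapped(base_idx: u8, op: u8) -> u64 {
    let base = CANONICAL[base_idx as usize];
    match op {
        0 => !base,                      // inversion
        n => base.rotate_left(n as u32), // rotation by `n` bits
    }
}
```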
|
|
This avoids wasting a small amount of space for some of the data sets.
The chunk resizing is caused by, but not directly related to, the
changes in this commit.
Alphabetic : 3036 bytes
Case_Ignorable : 2133 bytes (- 3 bytes)
Cased : 934 bytes
Cc : 32 bytes
Grapheme_Extend: 1760 bytes (-14 bytes)
Lowercase : 985 bytes
N : 1220 bytes (- 5 bytes)
Uppercase : 934 bytes
White_Space : 97 bytes
Total table sizes: 11131 bytes (-22 bytes)
|
|
Currently the test file takes a while to compile -- 30 seconds or so -- but
since it's not going to be committed, and is just for local testing, that seems
fine.
|
|
Try chunk sizes between 1 and 64, selecting the one which minimizes the number
of bytes used. 16, the previous constant, turned out to be a rather good choice,
with 5/9 of the datasets still using it.
Alphabetic : 3036 bytes (- 19 bytes)
Case_Ignorable : 2136 bytes
Cased : 934 bytes
Cc : 32 bytes (- 11 bytes)
Grapheme_Extend: 1774 bytes
Lowercase : 985 bytes
N : 1225 bytes (- 41 bytes)
Uppercase : 934 bytes
White_Space : 97 bytes (- 43 bytes)
Total table sizes: 11153 bytes (-114 bytes)
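In outline, the search is a brute-force minimization (the size helper is a hypothetical placeholder):

```rust
fn best_chunk_size(set: &[std::ops::Range<u32>]) -> usize {
    // Try every chunk size and keep whichever encodes smallest.
    (1usize..=64)
        .min_by_key(|&size| encoded_bytes(set, size))
        .unwrap()
}

// Placeholder: the real byte-count computation is elided.
fn encoded_bytes(_set: &[std::ops::Range<u32>], size: usize) -> usize {
    size
}
```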
|
|
If the unicode-downloads folder already exists, we likely just fetched the data,
so don't make any further network requests. Unicode versions are released rarely
enough that this doesn't matter much in practice.
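Roughly (path and name hypothetical):

```rust
use std::path::Path;

fn fetch_unicode_data() {
    if Path::new("unicode-downloads").exists() {
        // Already fetched; skip all network requests.
        return;
    }
    // ... download the UCD files into unicode-downloads/ (elided)
}
```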