|
|
|
This replaces many instances of chars being cast to u8 with byte literals.
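For illustration, a minimal sketch of the substitution described (the variable names are hypothetical):
```
fn main() {
    let old_style = 'a' as u8; // what the changed code used to do
    let new_style = b'a';      // byte literal with the same value
    assert_eq!(old_style, new_style);
}
```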
|
|
This commit stabilizes the `std::sync::atomics` module, renaming it to
`std::sync::atomic` to match library precedent elsewhere, and tightening
up behavior around incorrect memory ordering annotations.
The vast majority of the module is now `stable`. However, the
`AtomicOption` type has been deprecated, since it is essentially unused
and is not truly a primitive atomic type. It will eventually be replaced
by a higher-level abstraction like MVars.
Due to deprecations, this is a:
[breaking-change]
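As a rough sketch of the renamed module path (written against today's `std::sync::atomic` API; the type names available at the time of this commit differed):
```
use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    let counter = AtomicUsize::new(0);
    counter.fetch_add(1, Ordering::SeqCst);
    assert_eq!(counter.load(Ordering::SeqCst), 1);
}
```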
|
|
|
|
Apparently the units are in milliseconds, not in seconds!
|
|
|
|
|
|
|
|
|
|
This was motivated by a desire to remove allocation in the common
pattern of
    let old = key.replace(None);
    do_something();
    key.replace(old);
This also switched the map representation from a Vec to a TreeMap. A Vec
may be reasonable if there are only a couple of TLD keys, but a TreeMap
provides better behavior as the number of keys increases.
Like the Vec, this TreeMap implementation does not shrink the container
when a value is removed. Unlike Vec, this TreeMap implementation cannot
reuse an empty node for a different key. Therefore any key that has been
inserted into the TLD at least once will continue to take up space in
the map until the task ends. The expectation is that the majority of
keys inserted into TLD will have a value for most of the rest of the
task's lifetime. If this assumption is wrong, there are two reasonable
ways to fix this that could be implemented in the future:
1. Provide an API call to either remove a specific key from the TLD and
destruct its node (e.g. `remove()`), or instead to explicitly clean
up all currently-empty nodes in the map (e.g. `compact()`). This is
simple, but requires the user to explicitly call it.
2. Keep track of the number of empty nodes in the map and when the map
is mutated (via `replace()`), if the number of empty nodes passes
some threshold, compact it automatically. Alternatively, whenever a
new key is inserted that hasn't been used before, compact the map at
that point.
---
Benchmarks:
I ran 3 benchmarks. tld_replace_none just replaces the tld key with None
repeatedly. tld_replace_some replaces it with Some repeatedly. And
tld_replace_none_some simulates the common behavior of replacing with
None, then replacing with the previous value again (which was a Some).
Old implementation:
test tld_replace_none ... bench: 20 ns/iter (+/- 0)
test tld_replace_none_some ... bench: 77 ns/iter (+/- 4)
test tld_replace_some ... bench: 57 ns/iter (+/- 2)
New implementation:
test tld_replace_none ... bench: 11 ns/iter (+/- 0)
test tld_replace_none_some ... bench: 23 ns/iter (+/- 0)
test tld_replace_some ... bench: 12 ns/iter (+/- 0)
|
|
Errors can be printed with `{}`; printing with `{:?}` does not work very
well.
This is not actually related to this PR, but it came up when running the
tests, and now is as good a time to fix it as any.
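A small sketch of the difference, using today's `std::io::Error` purely for illustration:
```
use std::io::{Error, ErrorKind};

fn main() {
    let err = Error::new(ErrorKind::NotFound, "file missing");
    println!("{}", err);   // Display: concise, human-readable message
    println!("{:?}", err); // Debug: structural dump, less readable
}
```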
|
|
I wanted to add an implementation of `Default` inside the bitflags macro, but `Default` isn't in the prelude, which means anyone who wants to use `bitflags!` would need to import it. That isn't very nice, so I've just implemented it for `FilePermission` instead.
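A hedged sketch of the approach: implement `Default` by hand for the flags type rather than inside the macro. The `FilePermission` below is a stand-in newtype for illustration, not the actual std type:
```
#[derive(Clone, Copy, PartialEq, Debug)]
struct FilePermission(u32); // stand-in for the real bitflags-generated type

impl Default for FilePermission {
    fn default() -> FilePermission { FilePermission(0) } // empty flag set
}

fn main() {
    assert_eq!(FilePermission::default(), FilePermission(0));
}
```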
|
|
|
|
In order to prevent users from having to manually implement Hash and Ord for
bitflags types, this commit derives these traits automatically.
This breaks code that has manually implemented any of these traits for types
created by the bitflags! macro. Change this code by removing implementations
of these traits.
[breaking-change]
|
|
std: rename MemWriter to SeekableMemWriter, add seekless MemWriter
Not all users of MemWriter need to seek, but having MemWriter seekable adds between 3-29% in overhead in certain circumstances. This fixes that performance gap by making a non-seekable MemWriter, and creating a new SeekableMemWriter for those circumstances when that functionality is actually needed.
```
test io::mem::test::bench_buf_reader ... bench: 682 ns/iter (+/- 85)
test io::mem::test::bench_buf_writer ... bench: 580 ns/iter (+/- 57)
test io::mem::test::bench_mem_reader ... bench: 793 ns/iter (+/- 99)
test io::mem::test::bench_mem_writer_001_0000 ... bench: 48 ns/iter (+/- 27)
test io::mem::test::bench_mem_writer_001_0010 ... bench: 65 ns/iter (+/- 27) = 153 MB/s
test io::mem::test::bench_mem_writer_001_0100 ... bench: 132 ns/iter (+/- 12) = 757 MB/s
test io::mem::test::bench_mem_writer_001_1000 ... bench: 802 ns/iter (+/- 151) = 1246 MB/s
test io::mem::test::bench_mem_writer_100_0000 ... bench: 481 ns/iter (+/- 28)
test io::mem::test::bench_mem_writer_100_0010 ... bench: 1957 ns/iter (+/- 126) = 510 MB/s
test io::mem::test::bench_mem_writer_100_0100 ... bench: 8222 ns/iter (+/- 434) = 1216 MB/s
test io::mem::test::bench_mem_writer_100_1000 ... bench: 82496 ns/iter (+/- 11191) = 1212 MB/s
test io::mem::test::bench_seekable_mem_writer_001_0000 ... bench: 48 ns/iter (+/- 2)
test io::mem::test::bench_seekable_mem_writer_001_0010 ... bench: 64 ns/iter (+/- 2) = 156 MB/s
test io::mem::test::bench_seekable_mem_writer_001_0100 ... bench: 129 ns/iter (+/- 7) = 775 MB/s
test io::mem::test::bench_seekable_mem_writer_001_1000 ... bench: 801 ns/iter (+/- 159) = 1248 MB/s
test io::mem::test::bench_seekable_mem_writer_100_0000 ... bench: 711 ns/iter (+/- 51)
test io::mem::test::bench_seekable_mem_writer_100_0010 ... bench: 2532 ns/iter (+/- 227) = 394 MB/s
test io::mem::test::bench_seekable_mem_writer_100_0100 ... bench: 8962 ns/iter (+/- 947) = 1115 MB/s
test io::mem::test::bench_seekable_mem_writer_100_1000 ... bench: 85086 ns/iter (+/- 11555) = 1175 MB/s
```
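A modern-std analogue of the split, assuming `Vec<u8>` in the `MemWriter` role and `Cursor<Vec<u8>>` in the `SeekableMemWriter` role (the original types predate both):
```
use std::io::{Cursor, Seek, SeekFrom, Write};

fn main() -> std::io::Result<()> {
    // Non-seekable, append-only in-memory writer.
    let mut plain: Vec<u8> = Vec::new();
    plain.write_all(b"hello")?;

    // Seekable in-memory writer, paying for position tracking.
    let mut seekable: Cursor<Vec<u8>> = Cursor::new(Vec::new());
    seekable.write_all(b"hello")?;
    seekable.seek(SeekFrom::Start(0))?;
    seekable.write_all(b"J")?; // overwrite the first byte
    assert_eq!(seekable.into_inner(), b"Jello");
    Ok(())
}
```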
|
|
|
|
[breaking-change]
|
|
|
|
|
|
This makes edge cases work properly in which the `Iterator` trait and/or
`Option` or its variants were not in scope.
This breaks code that looks like:
    struct MyStruct { ... }
    impl MyStruct {
        fn next(&mut self) -> Option<int> { ... }
    }
    for x in MyStruct { ... } { ... }
Change ad-hoc `next` methods like the above to implementations of the
`Iterator` trait. For example:
    impl Iterator<int> for MyStruct {
        fn next(&mut self) -> Option<int> { ... }
    }
Closes #15392.
[breaking-change]
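For completeness, a small runnable version of the required shape in current Rust (with an associated `Item` type instead of the old `Iterator<int>` parameter):
```
struct Counter { n: i32 }

impl Iterator for Counter {
    type Item = i32;
    fn next(&mut self) -> Option<i32> {
        if self.n < 3 { self.n += 1; Some(self.n) } else { None }
    }
}

fn main() {
    for x in (Counter { n: 0 }) {
        println!("{}", x); // prints 1, 2, 3
    }
}
```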
|
|
|
|
Implement for Vec, DList, RingBuf. Add MutableSeq to the prelude.
Since the collections traits are in the prelude most consumers of
these methods will continue to work without change.
[breaking-change]
|
|
|
|
|
|
- `width()` computes the displayed width of a string, ignoring the width of control characters.
- arguably we might do *something* else for control characters, but the question is, what?
- users who want to do something else can iterate over chars()
- `graphemes()` returns a `Graphemes` struct, which implements an iterator over the grapheme clusters of a &str.
- fully compliant with [UAX#29](http://www.unicode.org/reports/tr29/#Grapheme_Cluster_Boundaries)
- passes all [Unicode-supplied tests](http://www.unicode.org/reports/tr41/tr41-15.html#Tests29)
- added code to generate additional categories in `unicode.py`
- `Cn` aka `Not_Assigned`
- categories necessary for grapheme cluster breaking
- tidied up the exports from libunicode
- all exports are exposed through a module rather than directly at crate root.
- std::prelude imports UnicodeChar and UnicodeStrSlice from std::char and std::str rather than directly from libunicode
closes #7043
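As a point of reference for the `chars()` fallback mentioned above (the `graphemes()`/`width()` APIs themselves are not shown, since they are not part of today's std):
```
fn main() {
    let s = "héllo";
    for (byte_index, c) in s.char_indices() {
        println!("byte {}: {:?}", byte_index, c); // per-char, not per-grapheme
    }
}
```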
|
|
|
|
Use `String::from_utf8_lossy` instead
[breaking-change]
|
|
Use `String::from_utf8` instead
[breaking-change]
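A quick sketch of the two suggested replacements from the preceding two entries, using the current `String` API:
```
fn main() {
    let bytes = vec![0x66, 0x6f, 0x6f, 0xff]; // "foo" plus an invalid byte

    // Lossy conversion replaces invalid sequences with U+FFFD.
    let lossy = String::from_utf8_lossy(&bytes);
    assert_eq!(lossy, "foo\u{FFFD}");

    // Strict conversion returns a Result instead of panicking.
    let strict = String::from_utf8(b"foo".to_vec());
    assert!(strict.is_ok());
}
```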
|
|
- Graphemes and GraphemeIndices structs implement iterators over
grapheme clusters analogous to the Chars and CharOffsets for chars in
a string. Iterator and DoubleEndedIterator are available for both.
- tidied up the exports for libunicode. crate root exports are now moved
into more appropriate module locations:
- UnicodeStrSlice, Words, Graphemes, GraphemeIndices are in str module
- UnicodeChar exported from char instead of crate root
- canonical_combining_class is exported from str rather than crate root
Since libunicode's exports have changed, programs that previously relied
on the old export locations will need to change their `use` statements
to reflect the new ones. See above for more information on where the new
exports live.
closes #7043
[breaking-change]
|
|
Currently, when unlink() is invoked on a read-only file on Windows, the call
will fail. On Unix, however, the call will succeed. In order to have more
consistent behavior across platforms, this error is recognized on Windows and
the file is changed to read-write before removal is attempted.
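A hedged sketch of the described fallback using today's `std::fs` (the actual change lives in the old `std::io` implementation): clear the read-only bit and retry when removal fails. The file name is made up for the demo:
```
use std::fs;
use std::path::Path;

fn force_remove(path: &Path) -> std::io::Result<()> {
    match fs::remove_file(path) {
        Ok(()) => Ok(()),
        Err(_) => {
            // Likely a read-only file on Windows: make it writable and retry.
            let mut perms = fs::metadata(path)?.permissions();
            perms.set_readonly(false);
            fs::set_permissions(path, perms)?;
            fs::remove_file(path)
        }
    }
}

fn main() -> std::io::Result<()> {
    fs::write("readonly_demo.txt", b"x")?;
    let mut perms = fs::metadata("readonly_demo.txt")?.permissions();
    perms.set_readonly(true);
    fs::set_permissions("readonly_demo.txt", perms)?;
    force_remove(Path::new("readonly_demo.txt"))
}
```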
|
|
|
|
This PR is the outcome of the library stabilization meeting for the
`liballoc::owned` and `libcore::cell` modules.
Aside from the stability attributes, there are a few breaking changes:
* The `owned` module is now named `boxed`, to better represent its
contents. (`box` was unavailable, since it's a keyword.) This will
help avoid the misconception that `Box` plays a special role wrt
ownership.
* The `AnyOwnExt` extension trait is renamed to `BoxAny`, and its `move`
method is renamed to `downcast`, in both cases to improve clarity.
* The recently-added `AnySendOwnExt` extension trait is removed; it was
not being used and is unnecessary.
[breaking-change]
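The renamed `downcast` survives on boxed `Any` values in current Rust; a minimal sketch:
```
use std::any::Any;

fn main() {
    let boxed: Box<dyn Any> = Box::new(5i32);
    match boxed.downcast::<i32>() {
        Ok(n) => println!("it was an i32: {}", n),
        Err(_) => println!("it was something else"),
    }
}
```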
|
|
|
|
* Don't warn about `#[crate_name]` if `--crate-name` is specified
* Don't warn about non camel case identifiers on `#[repr(C)]` structs
* Switch `mode` to `mode_t` in libc.
|
|
The first condition is not needed and just prevents 0-length writes.
Fixes #15583
|
|
While I'm at it, export O_SYNC with the other flags that are exported.
Closes #15582
|
|
|
|
This commit changes the `io::process::Command` API to provide
fine-grained control over the environment:
* The `env` method now inserts/updates a key/value pair.
* The `env_remove` method removes a key from the environment.
* The old `env` method, which sets the entire environment in one shot,
is renamed to `env_set_all`. It can be used in conjunction with the
finer-grained methods. This renaming is a breaking change.
To support these new methods, the internal `env` representation for
`Command` has been changed to an optional `HashMap` holding owned
`CString`s (to support non-utf8 data). The `HashMap` is only
materialized if the environment is updated. The implementation does not
try hard to avoid allocation, since the cost of launching a process will
dwarf any allocation cost.
This patch also adds `PartialOrd`, `Eq`, and `Hash` implementations for
`CString`.
[breaking-change]
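A sketch of the fine-grained style with today's `std::process::Command` (which kept `env` and `env_remove`; `env_set_all` itself did not survive, `envs`/`env_clear` play that role now):
```
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Assumes a Unix-like system with a printenv binary on PATH.
    let output = Command::new("printenv")
        .env("GREETING", "hello")    // insert/update a single key
        .env_remove("UNWANTED_VAR")  // drop a single key
        .output()?;
    println!("{}", String::from_utf8_lossy(&output.stdout));
    Ok(())
}
```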
|
|
|
|
LLVM is currently not able to convert `Vec::extend` into a memcpy for `Copy` types, which causes methods like `Vec::push_all` to run about twice as slow as they should. This patch reuses the unsafe `Vec::clone` optimization to speed up all of the operations that clone a slice into a `Vec`.
before:
```
test vec::tests::bench_clone_from_0000_0000 ... bench: 12 ns/iter (+/- 2)
test vec::tests::bench_clone_from_0000_0010 ... bench: 125 ns/iter (+/- 4) = 80 MB/s
test vec::tests::bench_clone_from_0000_0100 ... bench: 360 ns/iter (+/- 33) = 277 MB/s
test vec::tests::bench_clone_from_0000_1000 ... bench: 2601 ns/iter (+/- 175) = 384 MB/s
test vec::tests::bench_clone_from_0010_0000 ... bench: 12 ns/iter (+/- 2)
test vec::tests::bench_clone_from_0010_0010 ... bench: 125 ns/iter (+/- 10) = 80 MB/s
test vec::tests::bench_clone_from_0010_0100 ... bench: 361 ns/iter (+/- 28) = 277 MB/s
test vec::tests::bench_clone_from_0100_0010 ... bench: 131 ns/iter (+/- 13) = 76 MB/s
test vec::tests::bench_clone_from_0100_0100 ... bench: 360 ns/iter (+/- 9) = 277 MB/s
test vec::tests::bench_clone_from_0100_1000 ... bench: 2575 ns/iter (+/- 168) = 388 MB/s
test vec::tests::bench_clone_from_1000_0100 ... bench: 356 ns/iter (+/- 20) = 280 MB/s
test vec::tests::bench_clone_from_1000_1000 ... bench: 2605 ns/iter (+/- 167) = 383 MB/s
test vec::tests::bench_from_slice_0000 ... bench: 11 ns/iter (+/- 0)
test vec::tests::bench_from_slice_0010 ... bench: 115 ns/iter (+/- 5) = 86 MB/s
test vec::tests::bench_from_slice_0100 ... bench: 309 ns/iter (+/- 170) = 323 MB/s
test vec::tests::bench_from_slice_1000 ... bench: 2065 ns/iter (+/- 198) = 484 MB/s
test vec::tests::bench_push_all_0000_0000 ... bench: 7 ns/iter (+/- 0)
test vec::tests::bench_push_all_0000_0010 ... bench: 79 ns/iter (+/- 7) = 126 MB/s
test vec::tests::bench_push_all_0000_0100 ... bench: 342 ns/iter (+/- 18) = 292 MB/s
test vec::tests::bench_push_all_0000_1000 ... bench: 2873 ns/iter (+/- 75) = 348 MB/s
test vec::tests::bench_push_all_0010_0010 ... bench: 154 ns/iter (+/- 8) = 64 MB/s
test vec::tests::bench_push_all_0100_0100 ... bench: 518 ns/iter (+/- 18) = 193 MB/s
test vec::tests::bench_push_all_1000_1000 ... bench: 4490 ns/iter (+/- 223) = 222 MB/s
```
after:
```
test vec::tests::bench_clone_from_0000_0000 ... bench: 12 ns/iter (+/- 1)
test vec::tests::bench_clone_from_0000_0010 ... bench: 123 ns/iter (+/- 5) = 81 MB/s
test vec::tests::bench_clone_from_0000_0100 ... bench: 367 ns/iter (+/- 23) = 272 MB/s
test vec::tests::bench_clone_from_0000_1000 ... bench: 2618 ns/iter (+/- 252) = 381 MB/s
test vec::tests::bench_clone_from_0010_0000 ... bench: 12 ns/iter (+/- 1)
test vec::tests::bench_clone_from_0010_0010 ... bench: 124 ns/iter (+/- 7) = 80 MB/s
test vec::tests::bench_clone_from_0010_0100 ... bench: 369 ns/iter (+/- 34) = 271 MB/s
test vec::tests::bench_clone_from_0100_0010 ... bench: 123 ns/iter (+/- 6) = 81 MB/s
test vec::tests::bench_clone_from_0100_0100 ... bench: 371 ns/iter (+/- 25) = 269 MB/s
test vec::tests::bench_clone_from_0100_1000 ... bench: 2713 ns/iter (+/- 532) = 368 MB/s
test vec::tests::bench_clone_from_1000_0100 ... bench: 369 ns/iter (+/- 14) = 271 MB/s
test vec::tests::bench_clone_from_1000_1000 ... bench: 2611 ns/iter (+/- 194) = 382 MB/s
test vec::tests::bench_from_slice_0000 ... bench: 7 ns/iter (+/- 0)
test vec::tests::bench_from_slice_0010 ... bench: 108 ns/iter (+/- 4) = 92 MB/s
test vec::tests::bench_from_slice_0100 ... bench: 235 ns/iter (+/- 24) = 425 MB/s
test vec::tests::bench_from_slice_1000 ... bench: 1318 ns/iter (+/- 96) = 758 MB/s
test vec::tests::bench_push_all_0000_0000 ... bench: 7 ns/iter (+/- 0)
test vec::tests::bench_push_all_0000_0010 ... bench: 70 ns/iter (+/- 4) = 142 MB/s
test vec::tests::bench_push_all_0000_0100 ... bench: 176 ns/iter (+/- 16) = 568 MB/s
test vec::tests::bench_push_all_0000_1000 ... bench: 1125 ns/iter (+/- 94) = 888 MB/s
test vec::tests::bench_push_all_0010_0010 ... bench: 159 ns/iter (+/- 15) = 62 MB/s
test vec::tests::bench_push_all_0100_0100 ... bench: 363 ns/iter (+/- 12) = 275 MB/s
test vec::tests::bench_push_all_1000_1000 ... bench: 2860 ns/iter (+/- 415) = 349 MB/s
```
This also includes extra benchmarks for `Vec` and `MemWriter`.
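To show the operations being measured, here is a small example in terms of today's names (`push_all` was eventually superseded by `extend_from_slice`):
```
fn main() {
    let src = [1u8, 2, 3, 4];

    // The push_all-style copy of a slice into a Vec.
    let mut v: Vec<u8> = Vec::new();
    v.extend_from_slice(&src);

    // clone_from reuses the destination's existing allocation when it can.
    let mut w = vec![9u8; 2];
    w.clone_from(&v);
    assert_eq!(w, v);
}
```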
|
|
Add libunicode; move unicode functions from core
- created new crate, libunicode, below libstd
- split `Char` trait into `Char` (libcore) and `UnicodeChar` (libunicode)
- Unicode-aware functions now live in libunicode
- `is_alphabetic`, `is_XID_start`, `is_XID_continue`, `is_lowercase`,
`is_uppercase`, `is_whitespace`, `is_alphanumeric`, `is_control`, `is_digit`,
`to_uppercase`, `to_lowercase`
- added `width` method in UnicodeChar trait
- determines printed width of character in columns, or None if it is a non-NULL control character
- takes a boolean argument indicating whether the present context is CJK or not (characters with 'A'mbiguous widths are double-wide in CJK contexts, single-wide otherwise)
- split `StrSlice` into `StrSlice` (libcore) and `UnicodeStrSlice` (libunicode)
- functionality formerly in `StrSlice` that relied upon Unicode functionality from `Char` is now in `UnicodeStrSlice`
- `words`, `is_whitespace`, `is_alphanumeric`, `trim`, `trim_left`, `trim_right`
- also moved `Words` type alias into libunicode because `words` method is in `UnicodeStrSlice`
- unified Unicode tables from libcollections, libcore, and libregex into libunicode
- updated `unicode.py` in `src/etc` to generate aforementioned tables
- generated new tables based on latest Unicode data
- added `UnicodeChar` and `UnicodeStrSlice` traits to prelude
- libunicode is now the collection point for the `std::char` module, combining the libunicode functionality with the `Char` functionality from libcore
- thus, moved doc comment for `char` from `core::char` to `unicode::char`
- libcollections remains the collection point for `std::str`
The Unicode-aware functions that previously lived in the `Char` and `StrSlice` traits are no longer available to programs that only use libcore. To regain use of these methods, include the libunicode crate and `use` the `UnicodeChar` and/or `UnicodeStrSlice` traits:
    extern crate unicode;
    use unicode::UnicodeChar;
    use unicode::UnicodeStrSlice;
    use unicode::Words; // if you want to use the words() method
NOTE: this does *not* impact programs that use libstd, since UnicodeChar and UnicodeStrSlice have been added to the prelude.
closes #15224
[breaking-change]
|
|
[breaking-change]
|
|
|
|
|
|
|
|
|
|
|
|
POSIX has recvfrom(2) and sendto(2), but their names do not seem suitable for Rust. We already renamed getpeername(2) and getsockname(2), so I think this makes sense.
Alternatively, `receive_from` would be fine. However, we have `.recv()`, so I chose `recv_from`.
What do you think? If this makes sense, should I provide deprecated `recvfrom` and `sendto` methods that just call the new ones, for compatibility?
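For what it's worth, today's `std::net::UdpSocket` did end up with these names; a small loopback sketch:
```
use std::net::UdpSocket;

fn main() -> std::io::Result<()> {
    let a = UdpSocket::bind("127.0.0.1:0")?;
    let b = UdpSocket::bind("127.0.0.1:0")?;

    a.send_to(b"ping", b.local_addr()?)?; // renamed from sendto(2)
    let mut buf = [0u8; 16];
    let (n, from) = b.recv_from(&mut buf)?; // renamed from recvfrom(2)
    println!("got {} bytes from {}", n, from);
    Ok(())
}
```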
|
|
Being able to index into the bytes of a string encourages
poor UTF-8 hygiene. To get a view of `&[u8]` from either
a `String` or `&str` slice, use the `as_bytes()` method.
Closes #12710.
[breaking-change]
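A one-line illustration of the suggested `as_bytes()` view:
```
fn main() {
    let s = String::from("héllo");
    let bytes: &[u8] = s.as_bytes(); // explicit byte view instead of indexing
    assert_eq!(bytes[0], b'h');
}
```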
|
|
We leave them for compatibility, but mark them as deprecated.
Signed-off-by: OGINO Masanori <masanori.ogino@gmail.com>
|