Closes #16945
---
Closes #12794
---
This unifies the `non_snake_case_functions` and `uppercase_variables` lints
into one lint, `non_snake_case`. It also now checks for non-snake-case modules.
This also extends the non-camel-case types lint to check type parameters, and
merges the `non_uppercase_pattern_statics` lint into the
`non_uppercase_statics` lint.
Because the `uppercase_variables` lint is now part of the `non_snake_case`
lint, all non-snake-case variables that start with lowercase characters (such
as `fooBar`) will now trigger the `non_snake_case` lint.
New code should be updated to use the new `non_snake_case` lint instead of the
previous `non_snake_case_functions` and `uppercase_variables` lints. All uses
of the `non_uppercase_pattern_statics` lint should be replaced with the
`non_uppercase_statics` lint. Any code that previously contained non-snake-case
module or variable names should be updated to use snake case names or disable
the `non_snake_case` lint. Any code with non-camel-case type parameters should
be changed to use camel case or disable the `non_camel_case_types` lint.
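For illustration, a minimal before/after sketch of the migration (the function
names are invented; the lint names are the ones above):

```rust
// Before this change, each old lint had to be allowed by name:
#[allow(non_snake_case_functions)]
fn oldStyleName() {}

// After, the unified lint covers functions, variables, and modules alike:
#[allow(non_snake_case)]
fn anotherOldName() {}

// Or simply rename to snake case and drop the attribute:
fn renamed_function() {}
```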
[breaking-change]
---
[breaking-change]
---
These are already coded as traits.
---
A quick and dirty fix for #15036 until we get serious decoder reform.
Right now it is impossible for a Decodable to signal a decode error,
for example if it has only finitely many allowed values, is a string
which must be encoded a certain way, needs a valid checksum, etc. For
example, in the libuuid implementation of Decodable, an Option is
unwrapped, so decoding a malformed UUID causes the task to fail.
Since this adds a method to the `Decoder` trait, all users will need
to update their implementations to add it. The strategy used for the
current implementations for JSON and EBML is to add a new entry to
the error enum `ApplicationError(String)` which stores the string
provided to `.error()`.
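As a schematic sketch of the shape of the change (a toy trait in modern Rust,
not the exact libserialize signatures): the decoder owns error construction,
so a Decodable can reject bad input instead of unwrapping:

```rust
// Toy stand-in for the Decoder trait; names and signatures are illustrative.
trait Decoder {
    type Error;
    fn read_str(&mut self) -> Result<String, Self::Error>;
    // The newly added hook: turns an application-level message into the
    // decoder's own error type (e.g. JSON's ApplicationError(String)).
    fn error(&mut self, msg: &str) -> Self::Error;
}

struct Version(u32);

// A type with finitely many allowed values can now fail cleanly:
fn decode_version<D: Decoder>(d: &mut D) -> Result<Version, D::Error> {
    let s = d.read_str()?;
    match s.parse::<u32>() {
        Ok(n) if n <= 3 => Ok(Version(n)),
        _ => Err(d.error("version must be an integer in 0..=3")),
    }
}
```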
[breaking-change]
---
Our implementation of ebml has diverged from the standard in order
to better serve the needs of the compiler, so it doesn't make much
sense to call what we have ebml anymore. Furthermore, our implementation
is pretty crufty, and should eventually be rewritten into a format
that better suits the needs of the compiler. This patch factors out
serialize::ebml into librbml, otherwise known as the Really Bad
Markup Language. This is a stopgap library that shouldn't be used
by end users, and will eventually be replaced by something better.
[breaking-change]
---
Not all users of MemWriter need to seek, but having MemWriter
seekable adds 3-29% overhead in certain circumstances. This fixes
that performance gap by making MemWriter non-seekable and adding a
new SeekableMemWriter for the circumstances where that functionality
is actually needed. (A sketch of the same split in today's std
follows the benchmark numbers below.)
```
test io::mem::test::bench_buf_reader ... bench: 682 ns/iter (+/- 85)
test io::mem::test::bench_buf_writer ... bench: 580 ns/iter (+/- 57)
test io::mem::test::bench_mem_reader ... bench: 793 ns/iter (+/- 99)
test io::mem::test::bench_mem_writer_001_0000 ... bench: 48 ns/iter (+/- 27)
test io::mem::test::bench_mem_writer_001_0010 ... bench: 65 ns/iter (+/- 27) = 153 MB/s
test io::mem::test::bench_mem_writer_001_0100 ... bench: 132 ns/iter (+/- 12) = 757 MB/s
test io::mem::test::bench_mem_writer_001_1000 ... bench: 802 ns/iter (+/- 151) = 1246 MB/s
test io::mem::test::bench_mem_writer_100_0000 ... bench: 481 ns/iter (+/- 28)
test io::mem::test::bench_mem_writer_100_0010 ... bench: 1957 ns/iter (+/- 126) = 510 MB/s
test io::mem::test::bench_mem_writer_100_0100 ... bench: 8222 ns/iter (+/- 434) = 1216 MB/s
test io::mem::test::bench_mem_writer_100_1000 ... bench: 82496 ns/iter (+/- 11191) = 1212 MB/s
test io::mem::test::bench_seekable_mem_writer_001_0000 ... bench: 48 ns/iter (+/- 2)
test io::mem::test::bench_seekable_mem_writer_001_0010 ... bench: 64 ns/iter (+/- 2) = 156 MB/s
test io::mem::test::bench_seekable_mem_writer_001_0100 ... bench: 129 ns/iter (+/- 7) = 775 MB/s
test io::mem::test::bench_seekable_mem_writer_001_1000 ... bench: 801 ns/iter (+/- 159) = 1248 MB/s
test io::mem::test::bench_seekable_mem_writer_100_0000 ... bench: 711 ns/iter (+/- 51)
test io::mem::test::bench_seekable_mem_writer_100_0010 ... bench: 2532 ns/iter (+/- 227) = 394 MB/s
test io::mem::test::bench_seekable_mem_writer_100_0100 ... bench: 8962 ns/iter (+/- 947) = 1115 MB/s
test io::mem::test::bench_seekable_mem_writer_100_1000 ... bench: 85086 ns/iter (+/- 11555) = 1175 MB/s
```
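MemWriter and SeekableMemWriter are long gone, but today's std expresses the
same split, so as a rough modern analogue (not the API this commit touched):
`Vec<u8>` is the append-only writer, and `Cursor<Vec<u8>>` adds the seek
cursor with its extra bookkeeping:

```rust
use std::io::{Cursor, Seek, SeekFrom, Write};

fn main() -> std::io::Result<()> {
    // Append-only: Vec<u8> implements Write with no position bookkeeping.
    let mut plain: Vec<u8> = Vec::new();
    plain.write_all(b"hello")?;
    assert_eq!(plain, b"hello".to_vec());

    // Seekable: Cursor tracks a position, allowing in-place overwrites.
    let mut seekable = Cursor::new(Vec::new());
    seekable.write_all(b"hello")?;
    seekable.seek(SeekFrom::Start(0))?;
    seekable.write_all(b"H")?;
    assert_eq!(seekable.into_inner(), b"Hello".to_vec());
    Ok(())
}
```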
[breaking-change]
---
Replaced by `string::raw::from_utf8`
[breaking-change]
---
I blame @ChrisMorgan for the hyphens.
---
The algorithm was already based on bytes internally.
Also use byte literals instead of casting u8 to char for matching.
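For instance, a hypothetical helper in the same spirit (not the actual code
touched by this commit): byte literals let the match work on `u8` directly,
with no u8-to-char casts.

```rust
// Hypothetical helper: b'…' byte literals match u8 values directly.
fn needs_json_escape(byte: u8) -> bool {
    match byte {
        b'"' | b'\\' | b'\x08' | b'\x0c' | b'\n' | b'\r' | b'\t' => true,
        _ => false,
    }
}
```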
---
- add one simple example of using the `ToJson` trait
- make example code more readable
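In the spirit of that addition, a small `ToJson` sketch in the later
rustc-serialize dialect of this API (field names invented; exact module paths
varied across versions):

```rust
extern crate rustc_serialize;

use std::collections::BTreeMap;
use rustc_serialize::json::{Json, ToJson};

struct Point { x: i64, y: i64 }

impl ToJson for Point {
    fn to_json(&self) -> Json {
        // Build a JSON object from the struct's fields.
        let mut obj = BTreeMap::new();
        obj.insert("x".to_string(), self.x.to_json());
        obj.insert("y".to_string(), self.y.to_json());
        Json::Object(obj)
    }
}

fn main() {
    let p = Point { x: 1, y: 2 };
    println!("{}", p.to_json()); // {"x":1,"y":2}
}
```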
---
Use `String::from_utf8` instead
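For reference, the replacement's modern shape (it validates the bytes and
returns a `Result` rather than failing):

```rust
fn main() {
    let bytes = vec![0xF0, 0x9F, 0xA6, 0x80]; // UTF-8 encoding of U+1F980
    match String::from_utf8(bytes) {
        Ok(s) => println!("decoded: {}", s),
        Err(e) => println!("not UTF-8: {}", e),
    }
}
```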
[breaking-change]
---
Closes #15544
---
[breaking-change]
---
I ran `make check` and everything went smoothly. I also tested `#[deriving(Decodable, Encodable)]` on a struct containing both `Cell<T>` and `RefCell<T>`, and everything now seems to work fine.
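The shape of what was tested, roughly (field names invented; `#[deriving]` is
the period spelling of today's `#[derive]`):

```rust
extern crate serialize;

use std::cell::{Cell, RefCell};

// Both interior-mutability wrappers now round-trip through the derives.
#[deriving(Decodable, Encodable)]
struct Config {
    retries: Cell<uint>,
    label: RefCell<String>,
}
```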
---
Updated PR with fixme and test
---
This speeds up json serialization by removing most of the allocations.
---
This significantly speeds up encoding json strings.
---
This was parsed by the parser but completely ignored; not even stored in
the AST!
This breaks code that looks like:

    static X: &'static [u8] = &'static [1, 2, 3];

Change this code to the shorter:

    static X: &'static [u8] = &[1, 2, 3];
Closes #15312.
[breaking-change]
---
Conflicts:
src/libstd/lib.rs
---
Fixed some errors, removed some code examples and added usage of the
`encode` and `decode` functions.
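A usage sketch of those two functions, in roughly period-appropriate form
(exact signatures changed across versions, so treat this as a schematic):

```rust
extern crate serialize;

use serialize::json;

#[deriving(Encodable, Decodable, PartialEq, Show)]
struct Point { x: int, y: int }

fn main() {
    let p = Point { x: 1, y: 2 };
    // Round-trip the value through its JSON representation.
    let encoded = json::encode(&p);                      // "{\"x\":1,\"y\":2}"
    let decoded: Point = json::decode(encoded.as_slice()).unwrap();
    assert!(p == decoded);
}
```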