|
|
|
This is one of the final steps needed to complete #9128. It still needs a little bit of polish before closing that issue, but it's in a pretty much "done" state now.
The idea here is that the entire event loop implementation using libuv is now housed in `librustuv` as a completely separate library. This library is then injected (via `extern mod rustuv`) into executable builds (similarly to how libstd is injected, tunable via `#[no_uv]`) to bring in the "rust blessed event loop implementation."
Codegen-wise, there is a new `event_loop_factory` language item which is tagged on a function with 0 arguments returning `~EventLoop`. This function's symbol is then inserted into the crate map for an executable crate, and if there is no definition of the `event_loop_factory` language item then the value is null.
What this means is that embedding rust as a library in another language just got a little harder. Libraries don't have crate maps, which means that there's no way to find the event loop implementation to spin up the runtime. That being said, it's always possible to build the runtime manually. This pull request also makes more runtime components public, which should probably be public anyway. This new public-ness should allow custom scheduler setups everywhere regardless of whether you follow the `rt::start` path.
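For illustration only, here is a minimal modern-Rust sketch of the factory idea: a zero-argument function returning the event loop as a boxed trait object, with a fallback when no factory was registered. The trait, type names, and `start` wiring below are invented, not the actual lang-item or crate-map machinery.

```rust
// Illustrative sketch, not the 2013 runtime's real types.
trait EventLoop {
    fn run(&mut self);
}

struct BasicLoop; // basic event loop with no I/O support
impl EventLoop for BasicLoop {
    fn run(&mut self) { /* drain callbacks only, no I/O */ }
}

struct UvLoop; // stand-in for the libuv-backed loop provided by librustuv
impl EventLoop for UvLoop {
    fn run(&mut self) { /* poll libuv here */ }
}

// Conceptually what the event_loop_factory lang item points at.
fn event_loop_factory() -> Box<dyn EventLoop> {
    Box::new(UvLoop)
}

// The runtime consults the (possibly empty) factory slot from the crate map.
fn start(factory: Option<fn() -> Box<dyn EventLoop>>) {
    let mut event_loop: Box<dyn EventLoop> = match factory {
        Some(f) => f(),
        None => Box::new(BasicLoop), // no factory registered: no-I/O fallback
    };
    event_loop.run();
}

fn main() {
    start(Some(event_loop_factory));
}
```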
|
|
There are a few reasons that this is a desirable move to take:
1. Proof of concept that a third party event loop is possible
2. Clear separation of responsibility between rt::io and the uv-backend
3. Enforce in the future that the event loop is "pluggable" and replaceable
Here's a quick summary of the points of this pull request which make this
possible:
* Two new lang items were introduced: event_loop, and event_loop_factory.
The idea of a "factory" is to define a function which can be called with no
arguments and will return the new event loop as a trait object. This factory
is emitted to the crate map when building an executable. The factory doesn't
have to exist, and when it doesn't, an empty slot is left in the crate map and
a basic event loop with no I/O support is provided to the runtime.
* When building an executable, the rustuv crate will be linked by default
(providing a default implementation of the event loop) via a similar method to
injecting a dependency on libstd. This is currently the only location where
the rustuv crate is ever linked.
* There is a new #[no_uv] attribute (implied by #[no_std]) which disables the
implicit linking to rustuv by default
Closes #5019
|
|
|
|
Closes #8811
|
|
In addition to being able to sleep the current task, timers should be able to
create ports which get notified after a period of time.
Closes #10014
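As a rough sketch of the idea in modern Rust (a thread plus a channel standing in for the runtime's timer and port types, which are not the actual API here):

```rust
use std::sync::mpsc::{channel, Receiver};
use std::thread;
use std::time::Duration;

// Hand back a receiving end that fires once after the given period.
fn oneshot(period: Duration) -> Receiver<()> {
    let (tx, rx) = channel();
    thread::spawn(move || {
        thread::sleep(period);
        let _ = tx.send(()); // notify the port after the timeout
    });
    rx
}

fn main() {
    let timeout = oneshot(Duration::from_millis(50));
    timeout.recv().unwrap(); // blocks until the timer fires
}
```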
|
|
Some code cleanup, sorting of import blocks
Removed std::unstable::UnsafeArc's use of Either
Added run-fail tests for the new FailWithCause impls
Changed future_result and try to return Result<(), ~Any>.
- Internally, there is an enum of possible fail messages passed around.
- In case of linked failure or a string message, the ~Any gets
lazily allocated in future_result's recv method.
- For that, future result now returns a wrapper around a Port.
- Moved and renamed task::TaskResult into rt::task::UnwindResult
and made it an internal enum.
- Introduced a replacement typedef `type TaskResult = Result<(), ~Any>`.
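A hedged modern-Rust sketch of the shape of this change; the `UnwindMessage` enum, its variants, and `to_task_result` are made up for illustration, and only the `Result<(), ~Any>`-style alias mirrors the text above:

```rust
use std::any::Any;

// Modern spelling of `type TaskResult = Result<(), ~Any>`.
type TaskResult = Result<(), Box<dyn Any + Send>>;

// Cheap internal enum passed around by the runtime (illustrative names).
enum UnwindMessage {
    Success,
    FailedWithString(&'static str),
    FailedLinked,
}

fn to_task_result(msg: UnwindMessage) -> TaskResult {
    match msg {
        UnwindMessage::Success => Ok(()),
        // the boxed Any is only allocated here, when the result is received
        UnwindMessage::FailedWithString(s) => Err(Box::new(s)),
        UnwindMessage::FailedLinked => Err(Box::new("linked failure")),
    }
}

fn main() {
    match to_task_result(UnwindMessage::FailedWithString("oops")) {
        Ok(()) => println!("task succeeded"),
        Err(cause) => {
            // downcast back to the concrete message, mirroring ~Any usage
            if let Some(msg) = cause.downcast_ref::<&'static str>() {
                println!("task failed: {}", msg);
            }
        }
    }
}
```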
|
|
I'm not entirely sure why this is happening, but the server task never sees
the second send from the client task, and this test will very reliably fail to
complete on windows.
|
|
Closes #8811
|
|
This optimizes the `home_for_io` code path by requiring fewer scheduler
operations in some situations.
When moving to your home scheduler, this no longer forces a context switch if
you're already on the home scheduler. Instead, the homing code now simply pins
you to your current scheduler (making it so you can't be stolen away). If you're
not on your home scheduler, then we context switch away, sending you to your
home scheduler.
When the I/O operation is done, then we also no longer forcibly trigger a
context switch. Instead, the action depends on whether the task is homed or
not. If a task does not have a home, then the task is re-flagged as not having a
home and no context switch is performed. If a task is homed to the current
scheduler, then we don't do anything, and if the task is homed to a foreign
scheduler, then it's sent along its merry way.
I verified that there are about a third as many `write` syscalls done in print
operations now. Libuv uses write to implement async handles, and the homing
before and after each I/O operation was triggering a write on these async
handles. Additionally, using the terrible benchmark of printing 10k times in a
loop, this drives the runtime from 0.6s down to 0.3s (yay!).
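To make the case analysis concrete, here is a condensed sketch; the `Home`/`Action` names and helper functions are invented for illustration, and the real scheduler code is considerably more involved:

```rust
type SchedId = usize;

enum Home {
    AnySched,          // task has no fixed home
    Sched(SchedId),    // task is homed to a particular scheduler
}

#[derive(Debug)]
enum Action {
    PinHere,           // already home: just pin, no context switch
    SwitchTo(SchedId), // not home: context switch to the home scheduler
    Nothing,           // after I/O, homed to the current scheduler
    Unpin,             // after I/O, task has no home: clear the pin flag
}

fn before_io(home: &Home, current: SchedId) -> Action {
    match *home {
        Home::Sched(id) if id == current => Action::PinHere,
        Home::Sched(id) => Action::SwitchTo(id),
        // treat the current scheduler as a temporary home for the I/O
        Home::AnySched => Action::PinHere,
    }
}

fn after_io(home: &Home, current: SchedId) -> Action {
    match *home {
        Home::AnySched => Action::Unpin,
        Home::Sched(id) if id == current => Action::Nothing,
        Home::Sched(id) => Action::SwitchTo(id), // send back to the foreign home
    }
}

fn main() {
    println!("{:?}", before_io(&Home::Sched(0), 0)); // PinHere: already home
    println!("{:?}", after_io(&Home::AnySched, 0));  // Unpin: clear temporary home
}
```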
|
|
In addition to being able to sleep the current task, timers should be able to
create ports which get notified after a period of time.
Closes #10014
|
|
This optimizes the `home_for_io` code path by requiring fewer scheduler
operations in some situations.
When moving to your home scheduler, this no longer forces a context switch if
you're already on the home scheduler. Instead, the homing code now simply pins
you to your current scheduler (making it so you can't be stolen away). If you're
not on your home scheduler, then we context switch away, sending you to your
home scheduler.
When the I/O operation is done, then we also no longer forcibly trigger a
context switch. Instead, the action depends on whether the task is homed or
not. If a task does not have a home, then the task is re-flagged as not having a
home and no context switch is performed. If a task is homed to the current
scheduler, then we don't do anything, and if the task is homed to a foreign
scheduler, then it's sent along its merry way.
I verified that there are about a third as many `write` syscalls done in print
operations now. Libuv uses write to implement async handles, and the homing
before and after each I/O operation was triggering a write on these async
handles. Additionally, using the terrible benchmark of printing 10k times in a
loop, this drives the runtime from 0.6s down to 0.3s (yay!).
|
|
Almost all languages provide some form of buffering of the stdout stream, and
this commit adds this feature for rust. A handle to stdout is lazily initialized
in the Task structure as a buffered owned Writer trait object. The buffer
behavior depends on where stdout is directed to. Like C, this line-buffers the
stream when the output goes to a terminal (flushes on newlines), and also like C
this uses a fixed-size buffer when output is not directed at a terminal.
We may decide the fixed-size buffering is overkill, but it certainly does reduce
write syscall counts when piping output elsewhere. This is a *huge* benefit to
any code using logging macros or the printing macros. Formatting emits calls to
`write` very frequently, and to have each of them backed by a write syscall was
very expensive.
In a local benchmark of printing 10000 lines of "what" to stdout, I got the
following timings:
when   | terminal | redirected
-------|----------|-----------
before | 0.575s   | 0.525s
after  | 0.197s   | 0.013s
C      | 0.019s   | 0.004s
I can also confirm that we're buffering the output appropriately in both
situations. We're still far slower than C, but I believe much of that has to do
with the "homing" that all tasks do; we're still performing an order of
magnitude more write syscalls than C does.
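A minimal sketch of that buffering policy in modern Rust, assuming the `is_tty` flag comes from an isatty()-style check that is elided here:

```rust
use std::io::{self, BufWriter, LineWriter, Write};

// Wrap raw stdout in a line buffer for terminals, a fixed-size block buffer otherwise.
fn buffered_stdout(is_tty: bool) -> Box<dyn Write> {
    let raw = io::stdout();
    if is_tty {
        Box::new(LineWriter::new(raw)) // flush on every newline
    } else {
        Box::new(BufWriter::with_capacity(64 * 1024, raw)) // flush when the buffer fills
    }
}

fn main() -> io::Result<()> {
    let mut out = buffered_stdout(false); // pretend stdout is redirected
    for _ in 0..10_000 {
        writeln!(out, "what")?; // buffered: far fewer write syscalls
    }
    out.flush()
}
```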
|
|
Almost all languages provide some form of buffering of the stdout stream, and
this commit adds this feature for rust. A handle to stdout is lazily initialized
in the Task structure as a buffered owned Writer trait object. The buffer
behavior depends on where stdout is directed to. Like C, this line-buffers the
stream when the output goes to a terminal (flushes on newlines), and also like C
this uses a fixed-size buffer when output is not directed at a terminal.
We may decide the fixed-size buffering is overkill, but it certainly does reduce
write syscall counts when piping output elsewhere. This is a *huge* benefit to
any code using logging macros or the printing macros. Formatting emits calls to
`write` very frequently, and to have each of them backed by a write syscall was
very expensive.
In a local benchmark of printing 10000 lines of "what" to stdout, I got the
following timings:
when   | terminal | redirected
-------|----------|-----------
before | 0.575s   | 0.525s
after  | 0.197s   | 0.013s
C      | 0.019s   | 0.004s
I can also confirm that we're buffering the output appropriately in both
situations. We're still far slower than C, but I believe much of that has to do
with the "homing" that all tasks do; we're still performing an order of
magnitude more write syscalls than C does.
|
|
This is a peculiar function to require event loops to implement, and it's only
used in one spot during tests right now. Instead, a possibly more robust timer
API should be used rather than requiring all event loops to implement a
curious-looking function.
|
|
|
|
|
|
descriptive names
easier-to-use api
reorganize and document
|
|
|
|
|
|
I was seeing a lot of weird behavior with stdin behaving as a tty, and it
doesn't really quite make sense, so this moves to using libuv's pipes
instead (which make more sense for stdin specifically).
This prevents piping input to rustc hanging forever.
|
|
When uv's TTY I/O is used for the stdio streams, the file descriptors are put
into a non-blocking mode. This means that other concurrent writes to the same
stream can fail with EAGAIN or EWOULDBLOCK. By moving all I/O to event-loop
I/O, we avoid this error.
There is one location which cannot move, which is the runtime's dumb_println
function. It was implemented to handle the EAGAIN and EWOULDBLOCK errors by
simply retrying again and again.
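The retry behavior is roughly the following; this is a sketch against any `Write` handle, whereas the real dumb_println works on a raw, possibly non-blocking file descriptor:

```rust
use std::io::{self, Write};

// Keep writing until the whole message is out, retrying on EAGAIN/EWOULDBLOCK.
fn write_retrying<W: Write>(out: &mut W, msg: &[u8]) -> io::Result<()> {
    let mut written = 0;
    while written < msg.len() {
        match out.write(&msg[written..]) {
            Ok(0) => return Err(io::ErrorKind::WriteZero.into()),
            Ok(n) => written += n,
            Err(ref e) if e.kind() == io::ErrorKind::WouldBlock => continue, // retry
            Err(ref e) if e.kind() == io::ErrorKind::Interrupted => continue,
            Err(e) => return Err(e),
        }
    }
    out.flush()
}

fn main() -> io::Result<()> {
    let mut err = io::stderr();
    write_retrying(&mut err, b"dumb_println-style output\n")
}
```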
|
|
There are no longer any remnants of typedefs, and everything is now built on
true trait objects.
|
|
|
|
This involved changing a fair amount of code, rooted in how we access the local
IoFactory instance. I added a helper method to the rtio module to access the
optional local IoFactory. This is different from before, when it was assumed
that a local IoFactory was *always* present. Now, a separate io_error is raised
when an IoFactory is not present but I/O is requested.
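One way to picture the new access pattern, as a modern-Rust sketch with invented names; `IoFactory`, `with_local_io`, and the thread-local slot are illustrative, not the runtime's real types:

```rust
use std::cell::RefCell;

trait IoFactory {
    fn timer(&mut self) -> Result<(), String>;
}

thread_local! {
    // the task-local factory slot; may legitimately be empty
    static LOCAL_IO: RefCell<Option<Box<dyn IoFactory>>> = RefCell::new(None);
}

// Callers get Some(result) if a factory exists, None otherwise.
fn with_local_io<R>(f: impl FnOnce(&mut dyn IoFactory) -> R) -> Option<R> {
    LOCAL_IO.with(|slot| slot.borrow_mut().as_mut().map(|io| f(&mut **io)))
}

fn main() {
    let result = with_local_io(|io| io.timer());
    if result.is_none() {
        println!("no local I/O factory: raise io_error instead of assuming one exists");
    }
}
```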
|
|
This removes the PathLike trait associated with this "support module". This is
yet another "container of bytes" trait, so I didn't want to duplicate what
already exists throughout libstd. In actuality, we're going to pass C strings
off to the libuv APIs, so the arguments are now bound with the `ToCStr`
trait instead.
Additionally, a layer of complexity was removed by immediately converting these
type-generic parameters into CStrings to get handed off to libuv apis.
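As an illustrative sketch of converting immediately at the boundary (modern Rust; `open_path` and the placeholder pointer use are made up, not libuv's signatures):

```rust
use std::ffi::CString;

fn open_path<P: Into<Vec<u8>>>(path: P) -> Result<(), std::ffi::NulError> {
    // Convert the generic argument to a CString once, right here, so the rest
    // of the code (and the C library) only ever deals with a concrete C string.
    let c_path = CString::new(path)?;
    let _raw = c_path.as_ptr(); // this is what a libuv call would receive
    Ok(())
}

fn main() {
    open_path("some/file.txt").unwrap();     // from a string
    open_path(b"raw/bytes".to_vec()).unwrap(); // from a byte vector
}
```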
|
|
This moves as many as I could over to ~Trait instead of ~Typedef. The only
remaining one is the IoFactoryObject which should be coming soon...
|
|
We get a little more functionality from libuv for these kinds of streams (things
like terminal dimensions), and it also appears to more gracefully handle the
stream being closed. Beforehand, if you used stdio and hit CTRL+d on a
process, libuv would continually return 0-length successful reads instead of
interpreting that the stream was closed.
I was hoping to be able to write tests for this, but currently the testing
infrastructure doesn't allow tests with a stdin and a stdout, but this has been
manually tested! (not that it means much)
|
|
This isn't necessary for creating processes (or at least not right now), and
it inherently attempts to expose implementation details.
|
|
This fills in the `hints` structure and exposes libuv's full functionality for
doing dns lookups.
|
|
|
|
|
|
|
|
Fixes #9958
|
|
|
|
|
|
|
|
Who doesn't like a massive renaming?
|
|
See #9605 for detailed information.
This also fixes two tests of #8811.
|
|
Add a new trait BytesContainer that is implemented for both byte vectors
and strings.
Convert Path::from_vec and ::from_str to one function, Path::new().
Remove all the _str-suffixed mutation methods (push, join, with_*,
set_*) and modify the non-suffixed versions to use BytesContainer.
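A rough modern-Rust sketch of what that trait enables; the trait and method names mirror the text above, but the bodies and the cut-down `Path` are invented for illustration:

```rust
// One trait over "things that can be viewed as bytes", implemented for both
// strings and byte slices, so a single Path::new() can accept either.
trait BytesContainer {
    fn container_as_bytes(&self) -> &[u8];
}

impl BytesContainer for str {
    fn container_as_bytes(&self) -> &[u8] { self.as_bytes() }
}

impl BytesContainer for [u8] {
    fn container_as_bytes(&self) -> &[u8] { self }
}

struct Path {
    repr: Vec<u8>, // paths are stored as raw bytes
}

impl Path {
    fn new<T: BytesContainer + ?Sized>(t: &T) -> Path {
        Path { repr: t.container_as_bytes().to_vec() }
    }
}

fn main() {
    let a = Path::new("foo/bar");       // from a string
    let b = Path::new(&b"foo/bar"[..]); // from a byte slice
    assert_eq!(a.repr, b.repr);
}
```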
|
|
Remove the old path.
Rename path2 to path.
Update all clients for the new path.
Also make some miscellaneous changes to the Path APIs to help the
adoption process.
|
|
|
|
|
|
cc #9605
|
|
|
|
Closes #8815.
|
|
|
|
|
|
Progress on #7981
|
|
This is a re-landing of #8645, except that the bindings are *not* being used to
power std::run just yet. Instead, this adds the bindings as standalone bindings
inside the rt::io::process module.
I made one major change from before, having to do with how pipes are
created/bound. It's much clearer now when you can read/write to a pipe, as
there's an explicit difference (different types) between an unbound and a bound
pipe. The process configuration now takes unbound pipes (and consumes ownership
of them), and will return corresponding pipe structures back if spawning is
successful (otherwise everything is destroyed normally).
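To illustrate the type-level distinction, here is a compact sketch in modern Rust with invented names; the real configuration has many more fields and the pipes wrap actual OS handles:

```rust
struct UnboundPipe; // created, but not yet wired to a child process
struct Pipe;        // usable for reading/writing after a successful spawn

struct ProcessConfig {
    stdin: Option<UnboundPipe>,
    stdout: Option<UnboundPipe>,
}

struct Process {
    stdin: Option<Pipe>,
    stdout: Option<Pipe>,
}

fn spawn(config: ProcessConfig) -> Result<Process, String> {
    // Ownership of the unbound pipes moves into spawn; on success they come
    // back as bound Pipe handles, on failure everything is dropped here.
    let _ = config;
    Ok(Process { stdin: Some(Pipe), stdout: Some(Pipe) })
}

fn main() {
    let config = ProcessConfig { stdin: Some(UnboundPipe), stdout: Some(UnboundPipe) };
    match spawn(config) {
        Ok(process) => {
            // only now do we hold pipes that can actually be read/written
            assert!(process.stdin.is_some() && process.stdout.is_some());
        }
        Err(e) => eprintln!("spawn failed: {}", e),
    }
}
```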
|