| | | |
|---|---|---|
| author | Alex Crichton <alex@alexcrichton.com> | 2014-02-10 22:48:45 -0800 |
| committer | Alex Crichton <alex@alexcrichton.com> | 2014-02-12 09:46:31 -0800 |
| commit | 2650b61505e5ed5ac3075451a73e64fd226f5b10 (patch) | |
| tree | 07ee98fa426de7952d7454c924490fda5595ba29 /src/rustllvm/ExecutionEngineWrapper.cpp | |
| parent | 4256d24a16600715aa46007450e6b3d076740711 (diff) | |
| download | rust-2650b61505e5ed5ac3075451a73e64fd226f5b10.tar.gz rust-2650b61505e5ed5ac3075451a73e64fd226f5b10.zip | |
Don't hit epoll unless a scheduler absolutely must
Currently, a scheduler will hit epoll() or kqueue() at the end of *every task*.
The reason is that the scheduler will context switch back to the scheduler task,
terminate the previous task, and then return from run_sched_once. In doing so,
the scheduler will poll for any active I/O.
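As a rough model of that old behavior, consider the sketch below. It is a minimal modern-Rust illustration, not the real runtime's code: the `Scheduler` struct and its fields are hypothetical, and the `io_polls` counter merely stands in for an `epoll()`/`kqueue()` call. Only the name `run_sched_once` mirrors the commit message.

```rust
use std::collections::VecDeque;

// Hypothetical model: every pass through run_sched_once ends by polling
// the event loop, even when the run queue is full of ready tasks.
struct Scheduler {
    run_queue: VecDeque<Box<dyn FnOnce()>>,
    io_polls: usize, // counts simulated epoll()/kqueue() calls
}

impl Scheduler {
    fn run_sched_once(&mut self) {
        if let Some(task) = self.run_queue.pop_front() {
            task(); // run one task to completion
        }
        // Old behavior: unconditionally check for I/O completion after
        // every task, standing in for a syscall to epoll()/kqueue().
        self.io_polls += 1;
    }
}

fn main() {
    let mut sched = Scheduler { run_queue: VecDeque::new(), io_polls: 0 };
    for _ in 0..1_000 {
        sched.run_queue.push_back(Box::new(|| {}));
    }
    while !sched.run_queue.is_empty() {
        sched.run_sched_once();
    }
    // One syscall-shaped poll per task, despite there being no I/O at all.
    println!("tasks run: 1000, io polls: {}", sched.io_polls);
}
```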
This shows up painfully in benchmarks that have no I/O at all. For example, this
benchmark:
```rust
for _ in range(0, 1000000) {
    spawn(proc() {});
}
```
In this benchmark, the scheduler is currently wasting a good chunk of its time
hitting epoll() when there's always active work to be done (run with
RUST_THREADS=1).
This patch uses the previous two commits to alter the scheduler's behavior to
only return from run_sched_once if no work could be found when trying really
really hard. If there is active I/O, this commit will perform the same as
before, falling back to epoll() to check for I/O completion (to not starve I/O
tasks).
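A sketch of the patched behavior, under the same hypothetical model as above (the struct, fields, and counters are illustrative, not the scheduler's actual API): stay inside the scheduling loop while local work remains, and only fall back to the event loop when no task can be found and there is outstanding I/O.

```rust
use std::collections::VecDeque;

// Hypothetical model of the patched loop: drain available work first,
// and only touch the (simulated) event loop when I/O is actually pending.
struct Scheduler {
    run_queue: VecDeque<Box<dyn FnOnce()>>,
    active_io: usize, // number of tasks blocked on I/O
    io_polls: usize,  // counts simulated epoll()/kqueue() calls
}

impl Scheduler {
    fn run_sched_once(&mut self) {
        // Try hard to find work before considering epoll().
        while let Some(task) = self.run_queue.pop_front() {
            task();
        }
        // Only poll when there is outstanding I/O, so I/O-bound tasks
        // are still serviced and never starved.
        if self.active_io > 0 {
            self.io_polls += 1; // stands in for epoll()/kqueue()
        }
    }
}

fn main() {
    let mut sched = Scheduler { run_queue: VecDeque::new(), active_io: 0, io_polls: 0 };
    for _ in 0..1_000 {
        sched.run_queue.push_back(Box::new(|| {}));
    }
    sched.run_sched_once();
    // With no active I/O, the scheduler never hits the event loop.
    println!("io polls with no active I/O: {}", sched.io_polls);
}
```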
In the benchmark above, I got the following numbers:

- 12.554s on today's master
- 3.861s with #12172 applied
- 2.261s with both this and #12172 applied
cc #8341
