At the same time, this purges all runtime support needed for statically
initialized mutexes, moving all users over to the new Mutex type instead.
Try to acquire the lock, succeeding only if it is not already held.
Uses TryEnterCriticalSection on Windows or pthread_mutex_trylock elsewhere.
Many changes to code structure are included:
- removed TIME_SLICE_IN_MS
- removed synchronized_indexed_list
- removed region_owned
- kernel_owned moved to kernel.h, task_owned moved to task.h
- global configs moved to rust_globals.h
- changed #pragma once to a standard include guard in rust_upcall.h
- got rid of memory.h
Conflicts:
src/rt/rust_sched_loop.cpp
src/rt/rust_shape.cpp
src/rt/rust_task.cpp
Adds back the ability to make assertions about locks, but only under the
--enable-debug configuration.
This reverts commit b247de64583e2ab527088813ba9192824554e801.
Conflicts:
src/rt/rust_sched_loop.cpp
This assert doesn't hold because it isn't made while holding the lock.
Assert that locks are not reentered on the same thread, unlocked by a
different thread, or deleted while locked.
If a CRITICAL_SECTION is not initialized with a spin count, it will
default to 0, even on multi-processor systems. MSDN suggests using
4000. On single-processor systems, the spin count parameter is ignored
and the critical section's spin count defaults to 0.
For Windows >= Vista, extra debug info is allocated for
CRITICAL_SECTIONs but not released in a timely manner. Consider using
InitializeCriticalSectionEx(CRITICAL_SECTION_NO_DEBUG_INFO).
The way I read the docs, having this be a manual-reset event means
that after the first time it's signalled, it stays that way until reset,
and we never, ever reset it.
This simplifies the check for thread ownership by removing the _locked flag
and just comparing against the thread ID of the last thread to take the lock.
If the running thread took the lock, _holding_thread will be equal to
pthread_self(); if _holding_thread is some other value, the running thread
does not hold the lock.
Setting a pthread_t to 0 like this is not portable, but it should work on
every platform we are likely to care about for the near future.
atomic reference counting of tasks.
appears to give us much better parallel performance.
Also commented out one more unsafe log and updated rust_kernel.cpp to compile under g++.
the minimum stack size to get the pfib benchmark to run without exhausting its address space on Windows.
RUST_THREADS=8, so we're probably fairly safe now. In the future we can relax the synchronization to get better performance.
the CPU.
with a new one inspired by ucontext. It works under Linux, OS X and Windows, and is Valgrind clean on Linux and OS X (provided the runtime is built with gcc).
This commit also moves yield and join to the standard library, as requested in #42. Join is currently a no-op though.
wait instead of yield.
primitives, switch rust_kernel to use a lock/signal pair and wait rather than spin.