- Apr 12, 2020
-
-
shuo authored
* tokio-io: make the write_i* helpers behave the same as write_all when poll_write returns Ok(0)

Fixes: #2329
Co-authored-by: lishuo <lishuo.03@bytedance.com>
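A minimal sketch of the convention being adopted, mirroring what `write_all` already does (this is not tokio's actual internals): a zero-length write while bytes remain is surfaced as a `WriteZero` error rather than retried forever.

```rust
use std::io;

// Sketch only: the `write_all` convention the write_i* helpers now share.
// A write of zero bytes while data remains becomes a `WriteZero` error.
fn check_write_result(n: usize, remaining: usize) -> io::Result<usize> {
    if n == 0 && remaining > 0 {
        Err(io::Error::new(
            io::ErrorKind::WriteZero,
            "failed to write whole buffer",
        ))
    } else {
        Ok(n)
    }
}
```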
-
Nikita Baksalyar authored
The streams documentation referred to the module-level 'split' doc, which is no longer there.
-
Max Inden authored
-
- Apr 09, 2020
-
-
Eliza Weisman authored
# 0.2.17 (April 9, 2020)

### Fixes
- rt: bug in work-stealing queue (#2387)

### Changes
- rt: threadpool uses logical CPU count instead of physical by default (#2391)

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
-
Sean McArthur authored
Some reasons to prefer the logical count as the default:

- Chips reporting many logical CPUs vs. physical, such as via hyperthreading, probably know better than we do what workload the CPUs can handle.
- The logical count (`num_cpus::get()`) takes scheduler affinity and cgroups CPU quotas into consideration, in case the user wants to limit the number of CPUs a process can use.

Closes #2269
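A small illustration of the two counts via the `num_cpus` crate the commit cites; the machine shape mentioned in the comment is assumed for the example.

```rust
// On a hyperthreaded 4-core machine, the logical count is typically 8; under
// a cgroups CPU quota, `num_cpus::get()` may report fewer still.
fn main() {
    println!("logical CPUs:  {}", num_cpus::get());
    println!("physical CPUs: {}", num_cpus::get_physical());
}
```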
-
Carl Lerche authored
Fixes a couple of bugs in the work-stealing queue introduced as part of #2315.

First, the cursor needs to be able to represent more values than the size of the buffer, in order to track whether `tail` is ahead of `head` or the two are identical. This bug resulted in the "overflow" path being taken before the buffer was full.

The second bug can happen when a queue is being stolen from concurrently with stealing into. In this case, it is possible for buffer slots to be overwritten before they are released by the stealer. This is less likely to happen in practice because the first bug prevented the queue from ever filling up 100%, but it could still happen. It triggered an assertion in `steal_into`. The bug slipped through due to a bug in loom not correctly catching the case; the loom bug is fixed as part of tokio-rs/loom#119.

Fixes: #2382
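A schematic sketch of the first invariant (not tokio's actual queue code; the capacity and cursor width here are illustrative): cursors must range over more values than the buffer holds so that `tail.wrapping_sub(head)` distinguishes empty from full.

```rust
const CAPACITY: usize = 64; // power of two, < 256, so u8 cursors have headroom
const MASK: u8 = (CAPACITY - 1) as u8;

fn len(head: u8, tail: u8) -> usize {
    tail.wrapping_sub(head) as usize
}

fn is_empty(head: u8, tail: u8) -> bool {
    len(head, tail) == 0
}

fn is_full(head: u8, tail: u8) -> bool {
    len(head, tail) == CAPACITY
}

fn slot(index: u8) -> usize {
    (index & MASK) as usize // physical index into the buffer
}

fn main() {
    let head: u8 = 250;
    let tail: u8 = head.wrapping_add(CAPACITY as u8);
    assert!(is_full(head, tail) && !is_empty(head, tail));
    assert_eq!(slot(head), slot(tail)); // same slot, yet full by cursor count
}
```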
-
- Apr 06, 2020
-
-
nasa authored
-
- Apr 04, 2020
-
-
Vojtech Kral authored
-
Alice Ryhl authored
Also updates `Empty` and `Pending` to be unconditionally `Send` and `Sync`.
-
Eliza Weisman authored
# 0.2.16 (April 3, 2020)

### Fixes
- sync: fix a regression where `Mutex`, `Semaphore`, and `RwLock` futures no longer implement `Sync` (#2375)
- fs: fix `fs::copy` not copying file permissions (#2354)

### Added
- time: added `deadline` method to `delay_queue::Expired` (#2300)
- io: added `StreamReader` (#2052)

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
-
- Apr 03, 2020
-
-
Eliza Weisman authored
Previously, the `Mutex::lock`, `RwLock::{read, write}`, and `Semaphore::acquire` futures in `tokio::sync` implemented `Send + Sync` automatically. This was by virtue of being implemented using a `poll_fn` that only closed over `Send + Sync` types. However, this broke in PR #2325, which rewrote those types using the new `batch_semaphore`. Now, they await an `Acquire` future, which contains a `Waiter`, which internally contains an `UnsafeCell`, and thus does not implement `Sync`. Since removing previously implemented traits breaks existing code, this inadvertently caused a breaking change.

There were tests ensuring that the `Mutex`, `RwLock`, and `Semaphore` types themselves were `Send + Sync`, but no tests that the _futures they return_ implemented those traits. I've fixed this by adding an explicit impl of `Sync` for the `batch_semaphore::Acquire` future. Since the `Waiter` type held by this struct is only accessed when borrowed mutably, it is safe for it to implement `Sync`. Additionally, I've added bounds checks for the affected `tokio::sync` types to ensure that returned futures continue to implement `Send + Sync` in the future, as sketched below.
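A sketch of that style of bounds check; `assert_send_sync` is a hypothetical helper name for this sketch.

```rust
// Compile-time assertion: the future returned by `Mutex::lock` must be
// `Send + Sync`. (`assert_send_sync` is a hypothetical helper, sketch only.)
fn assert_send_sync<T: Send + Sync>(_: &T) {}

fn check(mutex: &tokio::sync::Mutex<u32>) {
    let lock_future = mutex.lock();
    assert_send_sync(&lock_future); // fails to compile if `Sync` regresses
}
```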
-
nasa authored
-
- Apr 02, 2020
-
-
Alice Ryhl authored
Allow conversion from a stream of chunks of bytes to an `AsyncRead`.
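A hedged usage sketch; the constructor shape (`tokio::io::stream_reader`) is an assumption about the API as added in this release, not a confirmed signature.

```rust
use tokio::io::AsyncReadExt;

// Sketch: wrap a stream of io::Result<impl Buf> chunks into an AsyncRead.
async fn demo() -> std::io::Result<()> {
    let chunks = tokio::stream::iter(vec![
        std::io::Result::Ok(std::io::Cursor::new(b"hello ".to_vec())),
        Ok(std::io::Cursor::new(b"world".to_vec())),
    ]);
    let mut reader = tokio::io::stream_reader(chunks);
    let mut buf = Vec::new();
    reader.read_to_end(&mut buf).await?;
    assert_eq!(buf, b"hello world");
    Ok(())
}
```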
-
Alice Ryhl authored
-
MOZGIII authored
* Expose `time::delay_queue::Expired::deadline`
* Return the deadline by value
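A small usage sketch of the new accessor:

```rust
use tokio::time::delay_queue::Expired;
use tokio::time::Instant;

// Recover the original deadline, by value, from an expired DelayQueue entry.
fn log_expiry<T>(expired: &Expired<T>) {
    let deadline: Instant = expired.deadline();
    println!("entry expired; its deadline was {:?}", deadline);
}
```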
-
Kevin Leimkuhler authored
Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>
-
Benjamin Halsted authored
Enable testing of edge cases caused by I/O errors.
-
Benjamin Halsted authored
There is a gap in the examples for `Builder::num_skip()`: none shows how to move past unused bytes between the length field and the payload. A hedged sketch of such a frame follows.
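The sketch below assumes a frame with two unused padding bytes after a 2-byte length field; the layout is illustrative, not taken from the commit.

```rust
use tokio::io::AsyncRead;
use tokio_util::codec::length_delimited::Builder;
use tokio_util::codec::{FramedRead, LengthDelimitedCodec};

// Assumed frame layout: [2-byte length][2 unused bytes][payload].
// Skipping 4 bytes total (length field + padding) positions the decoder
// at the start of the payload.
fn framed<T: AsyncRead>(io: T) -> FramedRead<T, LengthDelimitedCodec> {
    Builder::new()
        .length_field_length(2)
        .num_skip(4) // 2-byte length field + 2 unused bytes
        .new_read(io)
}
```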
-
Lucio Franco authored
Signed-off-by: Lucio Franco <luciofranco14@gmail.com>
-
Jon Gjengset authored
Fixes #898.
-
Carl Lerche authored
The new queue uses `u8` to track offsets. Cursors are expected to wrap. An operation was performed with `+` instead of `wrapping_add`. This was not an _obvious_ issue before, as it is difficult to wrap a `usize` on 64-bit platforms, but wrapping a `u8` is trivial. The fix is to use `wrapping_add` instead of `+`. A new test is added that catches the issue. Fixes #2361
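The failure mode in miniature:

```rust
fn main() {
    let tail: u8 = 255;
    assert_eq!(tail.wrapping_add(1), 0); // intended cursor behavior: wrap to 0
    // let bad = tail + 1; // plain `+` panics in debug builds:
    //                     // "attempt to add with overflow"
}
```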
-
- Apr 01, 2020
-
- Mar 28, 2020
-
-
Carl Lerche authored
The work-stealing scheduler includes an optimization where each worker includes a single slot to store the **last** scheduled task. Tasks in the scheduler's LIFO slot are executed next. This improves throughput and reduces latency for message-passing patterns.

Previously, this optimization was susceptible to starving other tasks in certain cases: if two tasks ping-pong between each other without ever yielding, the worker would never execute other tasks.

An earlier PR (#2160) introduced a form of pre-emption. Each task is allocated a per-poll operation budget. Tokio resources will return ready until the budget is depleted, at which point Tokio resources will always return `Pending`.

This patch leverages the operation budget to limit the LIFO scheduler optimization. When executing tasks from the LIFO slot, the budget is **not** reset. Once the budget goes to zero, the task in the LIFO slot is pushed to the back of the queue.
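A schematic sketch of the policy (not tokio's scheduler code; the types here are illustrative):

```rust
use std::collections::VecDeque;

struct Task;

struct Worker {
    lifo_slot: Option<Task>,
    run_queue: VecDeque<Task>,
}

impl Worker {
    // Tasks taken from the LIFO slot share one budget that is *not* reset;
    // when it reaches zero, the LIFO task is demoted to the back of the queue.
    fn tick(&mut self, budget: &mut u32) {
        while let Some(task) = self.lifo_slot.take() {
            if *budget == 0 {
                self.run_queue.push_back(task); // stop favoring the hot pair
                return;
            }
            *budget -= 1;
            self.poll(task);
        }
    }

    fn poll(&mut self, _task: Task) {
        // Polling may place a newly woken task back into `self.lifo_slot`.
    }
}
```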
-
Alice Ryhl authored
-
- Mar 27, 2020
-
-
Eliza Weisman authored
## Motivation

When cancelling futures which are waiting to acquire semaphore permits, there is a possible dangling pointer if notified futures are dropped after the notified wakers have been split into a separate list. Because these futures' wait-queue nodes are no longer in the main list guarded by the lock, their `Drop` impls will complete immediately, and they may be dropped while still in the list of tasks to notify.

## Solution

This branch fixes this by popping from the wait list inside the lock. The wakers of popped nodes are temporarily stored in a stack array, so that they can be notified after the lock is released. Since the size of the stack array is fixed, we may in some cases have to loop multiple times, acquiring and releasing the lock, until all permits have been released. This may also have the possible side advantage of preventing a thread releasing a very large number of permits from starving other threads that need to enqueue waiters.

I've also added a loom test that can reliably reproduce a segfault on master, but passes on this branch (after a lot of iterations).

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
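A simplified sketch of the fix's shape (a bounded `Vec` stands in for the fixed stack array, and the wait list is reduced to a `Vec<Waker>`):

```rust
use std::sync::Mutex;
use std::task::Waker;

const BATCH: usize = 32; // stand-in for the stack array's fixed size

// Pop waiters while holding the lock, then wake them only after releasing it;
// loop until the list is drained so no waiter is left in limbo.
fn notify_waiters(list: &Mutex<Vec<Waker>>) {
    loop {
        let mut batch = Vec::with_capacity(BATCH);
        {
            let mut waiters = list.lock().unwrap();
            while batch.len() < BATCH {
                match waiters.pop() {
                    Some(waker) => batch.push(waker),
                    None => break,
                }
            }
        } // lock released here, before any waker runs

        if batch.is_empty() {
            return;
        }
        for waker in batch {
            waker.wake();
        }
    }
}
```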
-
-
Carl Lerche authored
-
Carl Lerche authored
Add test verifying that joining on a task consumes the caller's budget.
-
- Mar 26, 2020
-
-
Carl Lerche authored
Fixes a test from a PR that was written before the recent loom upgrade. A change in the details of how loom executes models caused the test to start failing. The fix is to reduce the number of iterations performed by the test.
-
Brian L. Troutwine authored
This commit improves the memory ordering in the implementation of Inner's increment function. The former code did a sequentially consistent load of self.num, then entered a loop with a sequentially consistent compare-and-swap on the same, bailing out with an Err only if the loaded value was MAX_TIMEOUTS. The use of SeqCst means that all threads must observe all relevant memory operations in the same order, implying synchronization between all CPUs.

This commit adjusts the implementation in two key ways. First, the initial load of self.num is now done with Relaxed ordering. Formerly, if two threads entered this code simultaneously, tokio required that one proceed before the other, negating their parallelism; now, either thread may proceed without coordination. Second, the SeqCst compare_and_swap is changed to a Release, Relaxed compare_exchange_weak. The first memory ordering refers to success: if the value is swapped, the load of that value for comparison is Relaxed and the store is Release. The second memory ordering refers to failure: if the value is not swapped, the load is Relaxed. The _weak variant may spuriously fail but generates better code.

These changes mean that more loop iterations than strictly necessary may be taken per call, but in exchange this operation gains greater parallelism, and energy consumption improves because CPUs don't have to coordinate as much.
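A sketch of the resulting pattern (the constant and function shape are illustrative, not the exact tokio code):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

const MAX_TIMEOUTS: usize = usize::max_value() >> 1; // placeholder bound

// Relaxed initial load, then a Release/Relaxed compare_exchange_weak loop that
// retries with the freshly observed value on contention or spurious failure.
fn increment(num: &AtomicUsize) -> Result<usize, ()> {
    let mut cur = num.load(Ordering::Relaxed);
    loop {
        if cur == MAX_TIMEOUTS {
            return Err(()); // at capacity
        }
        match num.compare_exchange_weak(cur, cur + 1, Ordering::Release, Ordering::Relaxed) {
            Ok(prev) => return Ok(prev + 1),
            Err(actual) => cur = actual, // lost the race or failed spuriously
        }
    }
}
```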
-
Christofer Nolander authored
When the queue was polled and yielded an index from the wheel, the delay until the next item was never updated. As a result, when one item was yielded from `poll_idx`, the following insert erroneously updated the delay to the instant of the inserted item. Fixes: #1700
-
Carl Lerche authored
Loom is having a big refresh to improve performance and tighten up the concurrency model. This diff tracks those changes. Included in the changes is the removal of `CausalCell` deferred checks, as they were technically undefined behavior in the C++11 memory model. To address this, the work-stealing queue is updated to avoid needing that behavior, by limiting the queue to a single concurrent stealer.
-
- Mar 25, 2020
-
-
Carl Lerche authored
-
- Mar 24, 2020
-
-
Tudor Sidea authored
The resume function was breaking the guarantee that a newly created `Instant` is never less than any previously created `Instant`. This alters the pause and resume functions so that they no longer break that guarantee: after resume, time continues from where it left off. A test proves that the advance function still works as expected, and additional tests cover the pause/advance/resume functions.
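A hedged sketch of such a test, using tokio's test-util clock (the test name is illustrative):

```rust
use std::time::Duration;

// An Instant taken after pause/advance/resume must not be earlier than one
// taken before; after resume, time continues from where it left off.
#[tokio::test]
async fn resumed_clock_stays_monotonic() {
    tokio::time::pause();
    let before = tokio::time::Instant::now();
    tokio::time::advance(Duration::from_secs(5)).await;
    tokio::time::resume();
    let after = tokio::time::Instant::now();
    assert!(after >= before);
}
```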
-
- Mar 23, 2020
-
-
Eliza Weisman authored
## Motivation

Many of Tokio's synchronization primitives (`RwLock`, `Mutex`, `Semaphore`, and the bounded MPSC channel) are based on the internal semaphore implementation, called `semaphore_ll`. This semaphore type provides a lower-level internal API than the public `Semaphore` type, and supports "batch" operations, where waiters may acquire more than one permit at a time, and batches of permits may be released back to the semaphore.

Currently, `semaphore_ll` uses an atomic singly-linked list for the waiter queue. The linked-list implementation is specific to the semaphore. This implementation therefore requires a heap allocation for every waiter in the queue. These allocations are owned by the semaphore, rather than by the task awaiting permits from the semaphore. Critically, they are only _deallocated_ when permits are released back to the semaphore, at which point it dequeues as many waiters from the front of the queue as can be satisfied with the released permits. If a task attempts to acquire permits from the semaphore and is cancelled (such as by timing out), its waiter node remains in the list until it is dequeued while releasing permits. In cases where large numbers of tasks are cancelled while waiting for permits, this results in extremely high memory use for the semaphore (see #2237).

## Solution

@Matthias247 has proposed that Tokio adopt the approach used in his `futures-intrusive` crate: using an _intrusive_ linked list to store the wakers of tasks waiting on a synchronization primitive. In an intrusive list, each list node is stored as part of the entry that node represents, rather than in a heap allocation that owns the entry. Because futures must be pinned in order to be polled, the necessary invariant of such a list (that entries may not move while in the list) may be upheld by making the waiter node `!Unpin`. In this approach, the waiter node can be stored inline in the future, rather than requiring a separate heap allocation, and cancelled futures may remove their nodes from the list.

This branch adds a new semaphore implementation that uses the intrusive list added to Tokio in #2210. The implementation is essentially a hybrid of the old `semaphore_ll` and the semaphore used in `futures-intrusive`: while a `Mutex` around the wait list is necessary, since the intrusive list is not thread-safe, the permit state is stored outside of the mutex and updated atomically. The mutex is acquired only when accessing the wait list; if a task can acquire sufficient permits without waiting, it does not need to acquire the lock. When releasing permits, we iterate over the wait list from the end of the queue until we run out of permits to release, and split off all the nodes that received enough permits to wake up into a separate list. Then, we can drain the new list and notify those wakers *after* releasing the lock. Because the split operation only modifies the pointers on the head node of the split-off list and the new tail node of the old list, it is O(1) and does not require an allocation to return a variable-length number of waiters to notify.

Because of the intrusive-list invariants, the API provided by the new `batch_semaphore` is somewhat different than that of `semaphore_ll`. In particular, the `Permit` type has been removed. This type was primarily intended to allow reusing a wait-list node allocated on the heap. Since the intrusive list means we can avoid heap-allocating waiters, this is no longer necessary. Instead, acquiring permits is done by polling an `Acquire` future returned by the `Semaphore` type. The use of a future here ensures that the waiter node is always pinned while waiting to acquire permits, and that a reference to the semaphore is available to remove the waiter if the future is cancelled.

Unfortunately, the current implementation of the bounded MPSC requires a `poll_acquire` operation, and has methods that call it while outside of a pinned context. Therefore, I've left the old `semaphore_ll` implementation in place to be used by the bounded MPSC, and updated the `Mutex`, `RwLock`, and `Semaphore` APIs to use the new implementation. Hopefully, a subsequent change can update the bounded MPSC to use the new semaphore as well.

Fixes #2237

Signed-off-by: Eliza Weisman <eliza@buoyant.io>
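A usage sketch of the future-based acquire path on the public `Semaphore`:

```rust
use tokio::sync::Semaphore;

// Awaiting `acquire` keeps the waiter node pinned for as long as it may sit
// in the intrusive wait list; dropping the future removes its node.
async fn limited(sem: &Semaphore) {
    let permit = sem.acquire().await;
    // ... do bounded work while holding the permit ...
    drop(permit); // permit returned to the semaphore here
}
```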
-
MarinPostma authored
Fixes: #2311
-
- Mar 21, 2020
-
-
Carl Lerche authored
Since the original shell runtime was implemented, utilities have been added to encapsulate `unsafe`. The shell runtime is now able to use those utilities and no longer includes its own `unsafe` code.
-
- Mar 19, 2020
-
-
Nikhil Benesch authored
-
- Mar 18, 2020
-
-
Nikhil Benesch authored
tokio-util uses `tokio::stream::StreamExt`, which was not introduced until tokio v0.2.5. The current dependency specification is incorrect, and breaks with `cargo update -Z minimal-versions`.
-
Daniel Müller authored
Add an `RwLock::into_inner` method that consumes the lock and returns the wrapped value. Fixes: #2320
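A usage sketch:

```rust
use tokio::sync::RwLock;

// Consuming the lock yields the wrapped value directly; ownership proves
// that no other references exist, so no locking is needed.
fn unwrap_lock() {
    let lock = RwLock::new(vec![1, 2, 3]);
    let data = lock.into_inner();
    assert_eq!(data, vec![1, 2, 3]);
}
```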
-