- Apr 21, 2020
-
Taiki Endo authored
-
damienrg authored
The link to tokio::main was relative to the tokio_macros crate in the source directory, which is why it worked in local builds of the documentation but not on docs.rs. Refs: #1473
-
- Apr 20, 2020
-
Jon Gjengset authored
This enables `block_in_place` to be used in more contexts. Specifically, it allows you to block whenever you are off the tokio runtime (e.g., if you are not using tokio, or are in a `spawn_blocking` closure), and in the threaded scheduler's `block_on`. Blocking in `LocalSet` and the basic scheduler's `block_on` is still disallowed. Fixes #2327. Fixes #2393.
-
Alice Ryhl authored
-
Alice Ryhl authored
Co-authored-by:
Eliza Weisman <eliza@buoyant.io>
-
Gardner Vickers authored
-
- Apr 19, 2020
-
Alice Ryhl authored
-
Alice Ryhl authored
-
- Apr 17, 2020
-
Lucio Franco authored
-
Nikolai Vazquez authored
Allows simply clicking on the PR number to view the corresponding changes.
-
- Apr 16, 2020
-
Jon Gjengset authored
-
- Apr 15, 2020
-
Carl Lerche authored
-
- Apr 13, 2020
-
xliiv authored
-
Taiki Endo authored
-
- Apr 12, 2020
-
Alice Ryhl authored
This does not count as a breaking change as it fixes a regression and a soundness bug.
-
xliiv authored
Included changes:
- all simple references like `<type>.<name>.html` for these types: enum, fn, struct, trait, type
- simple references for methods, like `struct.DelayQueue.html#method.poll`

Refs: #1473
-
shuo authored
* tokio-io: make `write_i*` behave the same as `write_all` when `poll_write` returns `Ok(0)` Fixes: #2329 Co-authored-by:
lishuo <lishuo.03@bytedance.com>
-
Nikita Baksalyar authored
The streams documentation referred to the module-level 'split' doc, which is no longer there.
-
Max Inden authored
-
- Apr 09, 2020
-
Eliza Weisman authored
# 0.2.17 (April 9, 2020)

### Fixes
- rt: bug in work-stealing queue (#2387)

### Changes
- rt: threadpool uses logical CPU count instead of physical by default (#2391)

Signed-off-by:
Eliza Weisman <eliza@buoyant.io>
-
Sean McArthur authored
Some reasons to prefer the logical count as the default:
- Chips reporting many logical CPUs vs physical, such as via hyperthreading, probably know better than us what workload the CPUs can handle.
- The logical count (`num_cpus::get()`) takes scheduler affinity and cgroup CPU quotas into consideration, in case the user wants to limit the number of CPUs a process can use.

Closes #2269
-
Carl Lerche authored
Fixes a couple of bugs in the work-stealing queue introduced as part of #2315.

First, the cursor needs to be able to represent more values than the size of the buffer, in order to track whether `tail` is ahead of `head` or they are identical. This bug resulted in the "overflow" path being taken before the buffer was full.

The second bug can happen when a queue is being stolen from concurrently with stealing into it. In this case, it is possible for buffer slots to be overwritten before they are released by the stealer. This is less likely to happen in practice because the first bug prevents the queue from filling up 100%, but it could still happen. It triggered an assertion in `steal_into`. This bug slipped through due to a bug in loom not correctly catching the case. The loom bug is fixed as part of tokio-rs/loom#119.

Fixes: #2382
-
- Apr 06, 2020
-
nasa authored
-
- Apr 04, 2020
-
Vojtech Kral authored
-
Alice Ryhl authored
Also updates Empty and Pending to be unconditionally Send and Sync.
-
Eliza Weisman authored
# 0.2.16 (April 3, 2020)

### Fixes
- sync: fix a regression where `Mutex`, `Semaphore`, and `RwLock` futures no longer implement `Sync` (#2375)
- fs: fix `fs::copy` not copying file permissions (#2354)

### Added
- time: added `deadline` method to `delay_queue::Expired` (#2300)
- io: added `StreamReader` (#2052)

Signed-off-by:
Eliza Weisman <eliza@buoyant.io>
-
- Apr 03, 2020
-
Eliza Weisman authored
Previously, the `Mutex::lock`, `RwLock::{read, write}`, and `Semaphore::acquire` futures in `tokio::sync` implemented `Send + Sync` automatically, by virtue of being implemented using a `poll_fn` that only closed over `Send + Sync` types. However, this broke in PR #2325, which rewrote those types using the new `batch_semaphore`. Now, they await an `Acquire` future, which contains a `Waiter`, which internally contains an `UnsafeCell` and thus does not implement `Sync`. Since removing previously implemented traits breaks existing code, this inadvertently caused a breaking change.

There were tests ensuring that the `Mutex`, `RwLock`, and `Semaphore` types themselves were `Send + Sync`, but no tests that the _futures they return_ implemented those traits. I've fixed this by adding an explicit impl of `Sync` for the `batch_semaphore::Acquire` future. Since the `Waiter` type held by this struct is only accessed when borrowed mutably, it is safe for it to implement `Sync`.

Additionally, I've added bounds checks for the affected `tokio::sync` types to ensure that returned futures continue to implement `Send + Sync` in the future.
-
nasa authored
-
- Apr 02, 2020
-
Alice Ryhl authored
Allow conversion from a stream of chunks of bytes to an `AsyncRead`.
-
Alice Ryhl authored
-
MOZGIII authored
* Expose time::delay_queue::Expired::deadline * Return by value
-
Kevin Leimkuhler authored
Signed-off-by:
Kevin Leimkuhler <kevin@kleimkuhler.com>
-
Benjamin Halsted authored
Enable testing of edge cases caused by io errors.
-
Benjamin Halsted authored
Fills a gap in the examples for `Builder::num_skip()` by showing how to move past unused bytes between the length field and the payload.
-
Lucio Franco authored
Signed-off-by:
Lucio Franco <luciofranco14@gmail.com>
-
Jon Gjengset authored
Fixes #898.
-
Carl Lerche authored
The new queue uses `u8` to track offsets. Cursors are expected to wrap, but one operation was performed with `+` instead of `wrapping_add`. This was not an _obvious_ issue before, as it is difficult to wrap a `usize` on 64-bit platforms, but wrapping a `u8` is trivial. The fix is to use `wrapping_add` instead of `+`. A new test is added that catches the issue. Fixes #2361
-
- Apr 01, 2020
-
- Mar 28, 2020
-
Carl Lerche authored
The work-stealing scheduler includes an optimization where each worker has a single slot to store the **last** scheduled task. Tasks in a scheduler's LIFO slot are executed next. This speeds up message-passing patterns and reduces their latency.

Previously, this optimization was susceptible to starving other tasks in certain cases: if two tasks ping-pong between each other without ever yielding, the worker would never execute other tasks.

An earlier PR (#2160) introduced a form of pre-emption. Each task is allocated a per-poll operation budget. Tokio resources return ready until the budget is depleted, at which point they always return `Pending`.

This patch leverages the operation budget to limit the LIFO scheduler optimization. When executing tasks from the LIFO slot, the budget is **not** reset. Once the budget reaches zero, the task in the LIFO slot is pushed to the back of the queue.
-