  1. Apr 03, 2020
    • sync: ensure Mutex, RwLock, and Semaphore futures are Send + Sync (#2375) · 1121a8eb
      Eliza Weisman authored
      Previously, the `Mutex::lock`, `RwLock::{read, write}`, and
      `Semaphore::acquire` futures in `tokio::sync` implemented `Send + Sync`
      automatically. This was by virtue of being implemented using a `poll_fn`
      that only closed over `Send + Sync` types. However, this broke in
      PR #2325, which rewrote those types using the new `batch_semaphore`.
      Now, they await an `Acquire` future, which contains a `Waiter`, which
      internally contains an `UnsafeCell`, and thus does not implement `Sync`.
      
      Since removing previously implemented traits breaks existing code, this
      inadvertently caused a breaking change. There were tests ensuring that
      the `Mutex`, `RwLock`, and `Semaphore` types themselves were `Send +
      Sync`, but no tests that the _futures they return_ implemented those
      traits.
      
      I've fixed this by adding an explicit impl of `Sync` for the
      `batch_semaphore::Acquire` future. Since the `Waiter` type held by this
      struct is only accessed when borrowed mutably, it is safe for it to
      implement `Sync`.
      
      Additionally, I've extended the bounds checks for the affected
      `tokio::sync` types to ensure that the returned futures continue to
      implement `Send + Sync` in the future.
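      
      As a rough sketch (with stand-in types and helper names; the actual
      change adds the `unsafe impl Sync` on `batch_semaphore::Acquire`),
      the shape of the fix and the added checks looks like this:
      
      ```rust
      use std::cell::UnsafeCell;
      
      // Hypothetical stand-in for the waiter state held by the future.
      struct Waiter {
          state: UnsafeCell<usize>,
      }
      
      // Hypothetical stand-in for the `Acquire` future.
      struct Acquire {
          waiter: Waiter,
      }
      
      // Safety: the `Waiter` is only accessed when the future is borrowed
      // mutably, so a shared `&Acquire` never touches the `UnsafeCell`.
      unsafe impl Sync for Acquire {}
      
      // Compile-time checks that the *returned futures*, not just the
      // sync types themselves, implement Send + Sync.
      fn check_send<T: Send>(_: &T) {}
      fn check_sync<T: Sync>(_: &T) {}
      
      fn bounds(mutex: &tokio::sync::Mutex<u32>) {
          let fut = mutex.lock();
          check_send(&fut);
          check_sync(&fut);
      }
      ```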
    • doc: Fix readme link (#2370) · 6fa40b6e
      nasa authored
  2. Mar 28, 2020
    • rt: cap fifo scheduler slot to avoid starvation (#2349) · caa7e180
      Carl Lerche authored
      The work-stealing scheduler includes an optimization where each worker
      includes a single slot to store the **last** scheduled task. Tasks in
      the scheduler's LIFO slot are executed next. This improves throughput
      and reduces latency for message-passing patterns.
      
      Previously, this optimization was susceptible to starving other tasks in
      certain cases. If two tasks ping-pong between each other without ever
      yielding, the worker would never execute other tasks.
      
      An earlier PR (#2160) introduced a form of preemption. Each task is
      allocated a per-poll operation budget. Tokio resources return ready
      until the budget is depleted, at which point they always return
      `Pending`.
      
      This patch leverages the operation budget to limit the LIFO scheduler
      optimization. When executing tasks from the LIFO slot, the budget is
      **not** reset. Once the budget goes to zero, the task in the LIFO slot
      is pushed to the back of the queue.
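      
      A minimal sketch of that policy, using hypothetical `Worker` and
      `Task` types rather than the scheduler's actual internals:
      
      ```rust
      use std::collections::VecDeque;
      
      struct Task;
      
      struct Worker {
          lifo_slot: Option<Task>,
          run_queue: VecDeque<Task>,
          budget: u32,
      }
      
      impl Worker {
          fn tick(&mut self) {
              if let Some(task) = self.lifo_slot.take() {
                  if self.budget == 0 {
                      // Budget exhausted: demote the LIFO task to the back
                      // of the queue so queued tasks are not starved.
                      self.run_queue.push_back(task);
                  } else {
                      // Run the LIFO task *without* resetting the budget,
                      // so two tasks ping-ponging through the slot
                      // eventually drain it and yield to the queue.
                      self.budget -= 1;
                      self.poll(task);
                  }
              } else if let Some(task) = self.run_queue.pop_front() {
                  // Tasks taken from the main queue get a fresh budget.
                  self.budget = 128;
                  self.poll(task);
              }
          }
      
          fn poll(&mut self, _task: Task) {
              // Drive the task once; it may place a newly woken task
              // into the LIFO slot.
          }
      }
      ```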
    • sync: fix notified link (#2351) · 7b2438e7
      Alice Ryhl authored
  3. Mar 27, 2020
    • sync: fix possible dangling pointer in semaphore (#2340) · 00725f68
      Eliza Weisman authored
      
      ## Motivation
      
      When cancelling futures which are waiting to acquire semaphore permits,
      there is a possible dangling pointer if notified futures are dropped
      after the notified wakers have been split into a separate list. Because
      these futures' wait queue nodes are no longer in the main list guarded
      by the lock, their `Drop` impls will complete immediately, and they may
      be dropped while still in the list of tasks to notify.
      
      ## Solution
      
      This branch fixes this by popping from the wait list inside the lock.
      The wakers of popped nodes are temporarily stored in a stack array,
      so that they can be notified after the lock is released. Since the
      size of the stack array is fixed, we may in some cases have to loop
      multiple times, acquiring and releasing the lock, until all permits
      have been released. This may also have the side benefit of preventing
      a thread that releases a very large number of permits from starving
      other threads that need to enqueue waiters.
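      
      A minimal sketch of the batching pattern, with a plain `Vec` of
      wakers standing in for the wait list and `BATCH` standing in for the
      fixed-size stack array:
      
      ```rust
      use std::sync::Mutex;
      use std::task::Waker;
      
      // Stands in for the fixed-size stack array of wakers.
      const BATCH: usize = 32;
      
      fn release(waiters: &Mutex<Vec<Waker>>, mut permits: usize) {
          loop {
              let mut batch = Vec::with_capacity(BATCH);
              {
                  // Pop waiters *inside* the lock, so a concurrently
                  // cancelled future can never be left dangling in a
                  // separate notify list.
                  let mut queue = waiters.lock().unwrap();
                  while permits > 0 && batch.len() < BATCH {
                      match queue.pop() {
                          Some(waker) => {
                              batch.push(waker);
                              permits -= 1;
                          }
                          None => {
                              permits = 0;
                              break;
                          }
                      }
                  }
              } // lock released here, before any waker is invoked
      
              if batch.is_empty() {
                  return;
              }
              for waker in batch {
                  waker.wake();
              }
              if permits == 0 {
                  return;
              }
              // Permits remain: re-acquire the lock for the next batch.
          }
      }
      ```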
      
      I've also added a loom test that can reliably reproduce a segfault
      on master, but passes on this branch (after a lot of iterations).
      
      Signed-off-by: Eliza Weisman <eliza@buoyant.io>
    • sync: broadcast, revert "Keep lock until sender notified" (#2348) · 5c71268b
      kalcutter authored
      This reverts commit 826fc21a.
      
      The reverted code was intentional: holding the lock while notifying
      is unnecessary. Also change the code to use an explicit `drop` so
      clippy's suggestion doesn't mislead readers.
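      
      A minimal sketch of the pattern being restored, using a plain mutex
      and condvar rather than the broadcast channel's internals:
      
      ```rust
      use std::sync::{Condvar, Mutex};
      
      fn send(queue: &Mutex<Vec<u32>>, notify: &Condvar, value: u32) {
          let mut guard = queue.lock().unwrap();
          guard.push(value);
          // Release the lock *before* notifying; a woken thread would
          // otherwise immediately block on the mutex we still hold. The
          // explicit `drop` documents the intent (and satisfies clippy).
          drop(guard);
          notify.notify_one();
      }
      ```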
    • fs: add coop test (#2344) · 8020b02b
      Carl Lerche authored
    • rt: add task join coop test (#2345) · 11acfbbe
      Carl Lerche authored
      Add test verifying that joining on a task consumes the caller's budget.
  4. Mar 26, 2020
    • timer: fix loom test (#2346) · f2005a78
      Carl Lerche authored
      Fixes a test from a PR that was written before the recent loom upgrade.
      A change in the details of how loom executes models caused the test
      to start failing. The fix is to reduce the number of iterations performed
      by the test.
    • timer: improve memory ordering in Inner's increment (#2107) · 3fb213a8
      Brian L. Troutwine authored
      This commit improves the memory ordering in the implementation of
      Inner's increment function. The former code did a sequentially
      consistent load of self.num, then entered a loop with a sequentially
      consistent compare-and-swap on the same value, bailing out with an
      Err only if the loaded value was MAX_TIMEOUTS. The use of SeqCst
      means that all threads must observe all relevant memory operations
      in the same order, implying synchronization between all CPUs.
      
      This commit adjusts the implementation in two key ways. First, the
      initial load of self.num is now done with Relaxed ordering. Formerly,
      if two threads entered this code simultaneously, tokio required that
      one proceed before the other, negating their parallelism; now, either
      thread may proceed without coordination. Second, the SeqCst
      compare_and_swap is changed to a compare_exchange_weak with Release
      ordering on success and Relaxed ordering on failure. On success, the
      load performed for the comparison is Relaxed and the store is
      Release; on failure, the load is Relaxed. The _weak variant may
      spuriously fail but generates better code.
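      
      A sketch of the adjusted loop, with assumed field and constant names
      rather than the exact Tokio code:
      
      ```rust
      use std::sync::atomic::{AtomicU64, Ordering};
      
      // Illustrative bound; the real MAX_TIMEOUTS value is not shown here.
      const MAX_TIMEOUTS: u64 = (1 << 61) - 1;
      
      fn increment(num: &AtomicU64) -> Result<u64, &'static str> {
          // Relaxed initial load: no cross-thread ordering is needed just
          // to pick a starting value for the CAS loop.
          let mut curr = num.load(Ordering::Relaxed);
          loop {
              if curr == MAX_TIMEOUTS {
                  return Err("max timeouts reached");
              }
              // Release on success, Relaxed on failure; the `_weak`
              // variant may fail spuriously but compiles to better code
              // inside a loop.
              match num.compare_exchange_weak(
                  curr,
                  curr + 1,
                  Ordering::Release,
                  Ordering::Relaxed,
              ) {
                  Ok(_) => return Ok(curr + 1),
                  Err(actual) => curr = actual,
              }
          }
      }
      ```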
      
      These changes mean that more loop iterations may be taken per call
      than strictly necessary, but in exchange the operation permits
      greater parallelism and improved energy consumption, since CPUs
      don't have to coordinate as much.
    • time: fix DelayQueue rewriting delay on insert after Poll::Ready (#2285) · 6cf1a5b6
      Christofer Nolander authored
      When the queue was polled and yielded an index from the wheel, the
      delay until the next item was never updated. As a result, after one
      item was yielded from `poll_idx`, the following insert erroneously
      updated the delay to the instant of the inserted item.
      
      Fixes: #1700
    • rt: track loom changes + tweak queue (#2315) · 1cb1e291
      Carl Lerche authored
      Loom is undergoing a big refresh to improve performance and tighten up the
      concurrency model. This diff tracks those changes.
      
      Included in the changes is the removal of `CausalCell` deferred checks.
      This is due to it technically being undefined behavior in the C++11
      memory model. To address this, the work-stealing queue is updated to
      avoid needing this behavior. This is done by limiting the queue to have
      one concurrent stealer.
  5. Mar 24, 2020
    • time: fix repeated pause/resume of time (#2253) · 57ba37c9
      Tudor Sidea authored
      The resume function broke the guarantee that an Instant, when
      created, is never less than any previously measured Instant.
      
      Altered the pause and resume functions such that they no longer
      break this guarantee. After resume, time continues from where it
      left off.
      
      Created a test to prove that the advance function still works as
      expected.
      
      Added additional tests for the pause/advance/resume functions.
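      
      A usage sketch of the guarantee being tested (requires tokio's
      `test-util` feature; the exact assertions are illustrative):
      
      ```rust
      use std::time::Duration;
      use tokio::time::{advance, pause, resume, Instant};
      
      #[tokio::test]
      async fn pause_advance_resume() {
          pause();
          let before = Instant::now();
          advance(Duration::from_secs(5)).await;
          let advanced = Instant::now();
          assert!(advanced >= before);
      
          resume();
          // Time continues from where it left off: an Instant measured
          // after resume is never less than one measured before it.
          assert!(Instant::now() >= advanced);
      }
      ```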
  6. Mar 23, 2020
    • sync: new internal semaphore based on intrusive lists (#2325) · acf8a7da
      Eliza Weisman authored
      
      ## Motivation
      
      Many of Tokio's synchronization primitives (`RwLock`, `Mutex`,
      `Semaphore`, and the bounded MPSC channel) are based on the internal
      semaphore implementation, called `semaphore_ll`. This semaphore type
      provides a lower-level internal API for the semaphore implementation
      than the public `Semaphore` type, and supports "batch" operations, where
      waiters may acquire more than one permit at a time, and batches of
      permits may be released back to the semaphore.
      
      Currently, `semaphore_ll` uses an atomic singly-linked list for the
      waiter queue. The linked list implementation is specific to the
      semaphore. This implementation therefore requires a heap allocation for
      every waiter in the queue. These allocations are owned by the semaphore,
      rather than by the task awaiting permits from the semaphore. Critically,
      they are only _deallocated_ when permits are released back to the
      semaphore, at which point it dequeues as many waiters from the front of
      the queue as can be satisfied with the released permits. If a task
      attempts to acquire permits from the semaphore and is cancelled (such
      as by timing out), its waiter node remains in the list until it is
      dequeued while releasing permits. In cases where large numbers of tasks
      are cancelled while waiting for permits, this results in extremely high
      memory use for the semaphore (see #2237).
      
      ## Solution
      
      @Matthias247 has proposed that Tokio adopt the approach used in his
      `futures-intrusive` crate: using an _intrusive_ linked list to store the
      wakers of tasks waiting on a synchronization primitive. In an intrusive
      list, each list node is stored as part of the entry that node
      represents, rather than in a heap allocation that owns the entry.
      Because futures must be pinned in order to be polled, the necessary
      invariant of such a list (that entries may not move while in the
      list) may be upheld by making the waiter node `!Unpin`. In this
      approach, the waiter node can be stored inline in the future, rather
      than requiring a separate heap allocation, and cancelled futures may
      remove their nodes from the list.
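      
      A hypothetical sketch of the shape of such an intrusive waiter
      (illustrative names and layout, not the actual implementation):
      
      ```rust
      use std::marker::PhantomPinned;
      use std::ptr::NonNull;
      use std::sync::atomic::AtomicUsize;
      use std::sync::Mutex;
      use std::task::Waker;
      
      // The list node lives *inside* the waiter, which lives inside the
      // future: no separate heap allocation per waiter.
      struct Waiter {
          prev: Option<NonNull<Waiter>>,
          next: Option<NonNull<Waiter>>,
          waker: Option<Waker>,
          needed: usize,       // permits this waiter still needs
          _pin: PhantomPinned, // the node must not move while linked
      }
      
      struct Semaphore {
          permits: AtomicUsize,                    // updated outside the lock
          waiters: Mutex<Option<NonNull<Waiter>>>, // head of the wait list
      }
      
      // The future returned by acquiring permits. Because futures must be
      // pinned to be polled, the embedded node cannot move while linked
      // into the wait list, and `Drop` can unlink it if the future is
      // cancelled.
      struct Acquire<'a> {
          semaphore: &'a Semaphore,
          waiter: Waiter,
      }
      ```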
      
      This branch adds a new semaphore implementation that uses the intrusive
      list added to Tokio in #2210. The implementation is essentially a hybrid
      of the old `semaphore_ll` and the semaphore used in `futures-intrusive`:
      while a `Mutex` around the wait list is necessary, since the intrusive
      list is not thread-safe, the permit state is stored outside of the mutex
      and updated atomically. 
      
      The mutex is acquired only when accessing the wait list — if a task 
      can acquire sufficient permits without waiting, it does not need to
      acquire the lock. When releasing permits, we iterate over the wait
      list from the end of the queue until we run out of permits to release,
      and split off all the nodes that received enough permits to wake up
      into a separate list. Then, we can drain the new list and notify those
      wakers *after* releasing the lock. Because the split operation only
      modifies the pointers on the head node of the split-off list and the
      new tail node of the old list, it is O(1) and does not require an
      allocation to return a variable-length list of waiters to notify.
      
      
      Because of the intrusive list invariants, the API provided by the new
      `batch_semaphore` is somewhat different than that of `semaphore_ll`. In
      particular, the `Permit` type has been removed. This type was
      primarily intended to allow the reuse of a wait list node allocated
      on the heap.
      Since the intrusive list means we can avoid heap-allocating waiters,
      this is no longer necessary. Instead, acquiring permits is done by
      polling an `Acquire` future returned by the `Semaphore` type. The use of
      a future here ensures that the waiter node is always pinned while
      waiting to acquire permits, and that a reference to the semaphore is
      available to remove the waiter if the future is cancelled.
      Unfortunately, the current implementation of the bounded MPSC requires a
      `poll_acquire` operation, and has methods that call it while outside of
      a pinned context. Therefore, I've left the old `semaphore_ll`
      implementation in place to be used by the bounded MPSC, and updated the
      `Mutex`, `RwLock`, and `Semaphore` APIs to use the new implementation.
      Hopefully, a subsequent change can update the bounded MPSC to use the
      new semaphore as well.
      
      Fixes #2237
      
      Signed-off-by: Eliza Weisman <eliza@buoyant.io>
    • io: impl `AsRawFd` / `AsRawHandle` for stdio (#2335) · 2258de51
      MarinPostma authored
      Fixes: #2311
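      
      A usage sketch of what the new impls enable (Unix shown; Windows
      would use `AsRawHandle` instead):
      
      ```rust
      use std::os::unix::io::AsRawFd;
      
      fn main() {
          // The stdio types in tokio::io now expose their raw descriptors.
          let stdout = tokio::io::stdout();
          println!("stdout fd: {}", stdout.as_raw_fd());
      }
      ```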
  7. Mar 21, 2020
    • rt: remove `unsafe` from shell runtime. (#2333) · dd27f1a2
      Carl Lerche authored
      Since the original shell runtime was implemented, utilities have been
      added to encapsulate `unsafe`. The shell runtime is now able to use
      those utilities and not include its own `unsafe` code.
  8. Mar 16, 2020
    • Add cooperative task yielding (#2160) · 06a4d895
      Jon Gjengset authored
      A single call to `poll` on a top-level task may potentially do a lot of
      work before it returns `Poll::Pending`. If a task runs for a long period
      of time without yielding back to the executor, it can starve other
      tasks waiting on that executor, or prevent it from driving underlying
      resources.
      See for example rust-lang/futures-rs#2047, rust-lang/futures-rs#1957,
      and rust-lang/futures-rs#869. Since Rust does not have a runtime, it is
      difficult to forcibly preempt a long-running task.
      
      Consider a future like this one:
      
      ```rust
      use tokio::stream::{Stream, StreamExt};
      
      async fn drop_all<I: Stream + Unpin>(mut input: I) {
          while let Some(_) = input.next().await {}
      }
      ```
      
      It may look harmless, but consider what happens under heavy load if the
      input stream is _always_ ready. If we spawn `drop_all`, the task will
      never yield, and will starve other tasks and resources on the same
      executor.
      
      This patch adds a `coop` module that provides an opt-in mechanism for
      futures to cooperate with the executor to avoid starvation. This
      alleviates the problem above:
      
      ```rust
      use tokio::stream::{Stream, StreamExt};
      
      async fn drop_all<I: Stream + Unpin>(mut input: I) {
          while let Some(_) = input.next().await {
              tokio::coop::proceed().await;
          }
      }
      ```
      
      The call to [`proceed`] will coordinate with the executor to make
      sure that control is yielded back every so often, so that other
      tasks can run.
      
      The implementation uses a thread-local counter that simply counts how
      many "cooperation points" we have passed since the task was first
      polled. Once the "budget" has been spent, any subsequent points will
      return `Poll::Pending`, eventually making the top-level task yield. When
      it finally does yield, the executor resets the budget before
      running the next task.
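      
      A minimal sketch of that mechanism with illustrative names; a real
      implementation must also wake the task's waker before returning
      `Poll::Pending`, or the task would never be polled again:
      
      ```rust
      use std::cell::Cell;
      use std::task::Poll;
      
      thread_local! {
          // How many cooperation points the current task may still pass.
          static BUDGET: Cell<u32> = Cell::new(0);
      }
      
      /// Called by the executor before polling a top-level task.
      fn reset_budget() {
          BUDGET.with(|b| b.set(128));
      }
      
      /// Called at each cooperation point inside a leaf future.
      fn poll_proceed() -> Poll<()> {
          BUDGET.with(|b| {
              let left = b.get();
              if left == 0 {
                  // Budget spent: force the task to yield back to the
                  // executor (the caller must arrange a wake-up).
                  Poll::Pending
              } else {
                  b.set(left - 1);
                  Poll::Ready(())
              }
          })
      }
      ```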
      
      The budget per task poll is currently hard-coded to 128. Eventually, we
      may want to make it dynamic as more cooperation points are added. The
      number 128 was chosen more or less arbitrarily to balance the cost of
      yielding unnecessarily against the time an executor may be "held up".
      
      At the moment, all the tokio leaf futures ("resources") call into coop,
      but external futures have no way of doing so. We probably want to
      continue limiting coop points to leaf futures in the future, but may
      want to also enable third-party leaf futures to cooperate to benefit the
      ecosystem as a whole. This is reflected in the methods marked as `pub`
      in `mod coop` (even though the module is only `pub(crate)`). We will
      likely also eventually want to expose `coop::limit`, which enables
      sub-executors and manual `impl Future` blocks to prevent a single
      sub-task from spending the entire poll budget.
      
      Benchmarks (see tokio-rs/tokio#2160) suggest that the overhead of `coop`
      is marginal.
  9. Mar 05, 2020
    • rt: cleanup and simplify scheduler (scheduler v2.5) (#2273) · a78b1c65
      Carl Lerche authored
      A refactor of the scheduler internals focusing on simplifying and
      reducing unsafety. There are no fundamental logic changes.
      
      * The state transitions of the core task component are refined and
      reduced.
      * `basic_scheduler` has most unsafety removed.
      * `local_set` has most unsafety removed.
      * `threaded_scheduler` limits most unsafety to its queue implementation.