  8. Apr 09, 2020
    • chore: prepare to release 0.2.17 (#2392) · 3137c6f0
      Eliza Weisman authored
      
      # 0.2.17 (April 9, 2020)
      
      ### Fixes
      - rt: bug in work-stealing queue (#2387) 
      
      ### Changes 
      - rt: threadpool uses logical CPU count instead of physical by default
        (#2391)
      
      
      Signed-off-by: Eliza Weisman <eliza@buoyant.io>
    • Use logical CPUs instead of physical by default (#2391) · d294c992
      Sean McArthur authored
      Some reasons to prefer logical count as the default:
      
      - Chips that report more logical CPUs than physical ones, such as via
      hyperthreading, probably know better than we do what workload the CPUs
      can handle.
      - The logical count (`num_cpus::get()`) takes scheduler affinity and
      cgroup CPU quotas into consideration, in case the user wants to limit
      the number of CPUs the process can use.
      
      Closes #2269
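The distinction above can be sketched in a few lines. This uses the standard library's `std::thread::available_parallelism` (which, like `num_cpus::get()`, respects scheduler affinity and, on Linux, cgroup CPU quotas); the helper name is mine, not tokio's:

```rust
use std::thread;

// A plausible default worker-thread count: the *logical* CPU count.
// Unlike a physical-core count, this reflects scheduler affinity masks
// and cgroup CPU quotas, so a container limited to 2 CPUs gets 2 workers.
// `default_worker_count` is a hypothetical helper, not tokio's API.
fn default_worker_count() -> usize {
    thread::available_parallelism()
        .map(|n| n.get())
        .unwrap_or(1) // fall back to a single worker if the query fails
}

fn main() {
    println!("default worker threads: {}", default_worker_count());
}
```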
    • rt: fix bug in work-stealing queue (#2387) · 58ba45a3
      Carl Lerche authored
      Fixes a couple of bugs in the work-stealing queue introduced as
      part of #2315. First, the cursor needs to be able to represent more
      values than the size of the buffer, so that it can track whether
      `tail` is ahead of `head` or the two are identical. This bug resulted
      in the "overflow" path being taken before the buffer was full.
      
      The second bug can happen when a queue is being stolen from
      concurrently with tasks being stolen into it. In this case, it is
      possible for buffer slots to be overwritten before they are released
      by the stealer. This is less likely to happen in practice because the
      first bug prevents the queue from filling up completely, but it could
      still happen. It triggered an assertion in `steal_into`. This bug
      slipped through because of a bug in loom that failed to catch the
      case; the loom bug is fixed as part of tokio-rs/loom#119.
      
      Fixes: #2382
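The first fix can be illustrated with a small sketch of the cursor scheme (the constants and helper names are mine, not tokio's): `head` and `tail` are wrapping `u16` counters that can represent more values than the buffer has slots, slot indices are derived by masking, and the *difference* of the raw counters distinguishes an empty queue from a full one.

```rust
const CAPACITY: u16 = 256; // number of buffer slots (power of two)
const MASK: u16 = CAPACITY - 1;

// Number of queued items: wrapping difference of the raw cursors.
fn len(head: u16, tail: u16) -> u16 {
    tail.wrapping_sub(head)
}

fn is_full(head: u16, tail: u16) -> bool {
    len(head, tail) == CAPACITY
}

// Slot index: only here is the cursor reduced modulo the buffer size.
fn slot(cursor: u16) -> usize {
    (cursor & MASK) as usize
}

fn main() {
    // If cursors were masked to the buffer size everywhere, a full queue
    // would be indistinguishable from an empty one: both have the same
    // slot index for head and tail.
    let head: u16 = 0;
    let tail: u16 = CAPACITY; // 256 pushes, no pops
    assert!(is_full(head, tail));
    assert_eq!(slot(tail), slot(head)); // same slot, different state
    // Wraparound of the u16 counters is harmless:
    assert_eq!(len(65_535, 1), 2);
}
```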
  11. Apr 03, 2020
    • sync: ensure Mutex, RwLock, and Semaphore futures are Send + Sync (#2375) · 1121a8eb
      Eliza Weisman authored
      Previously, the `Mutex::lock`, `RwLock::{read, write}`, and
      `Semaphore::acquire` futures in `tokio::sync` implemented `Send + Sync`
      automatically. This was by virtue of being implemented using a `poll_fn`
      that only closed over `Send + Sync` types. However, this broke in
      PR #2325, which rewrote those types using the new `batch_semaphore`.
      Now, they await an `Acquire` future, which contains a `Waiter`, which
      internally contains an `UnsafeCell`, and thus does not implement `Sync`.
      
      Since removing previously implemented traits breaks existing code,
      this inadvertently caused a breaking change. There were tests ensuring
      that
      the `Mutex`, `RwLock`, and `Semaphore` types themselves were `Send +
      Sync`, but no tests that the _futures they return_ implemented those
      traits.
      
      I've fixed this by adding an explicit impl of `Sync` for the
      `batch_semaphore::Acquire` future. Since the `Waiter` type held by this
      struct is only accessed when borrowed mutably, it is safe for it to
      implement `Sync`.
      
      Additionally, I've added bounds checks for the affected
      `tokio::sync` types to ensure that the returned futures continue to
      implement `Send + Sync` in the future.
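The shape of the fix can be sketched as follows. The `Acquire` stand-in and its field are illustrative, not tokio's actual code; the point is the explicit `unsafe impl Sync` plus the compile-time bounds check:

```rust
use std::cell::UnsafeCell;

// Stand-in for the `Acquire` future: owning an `UnsafeCell` means the
// compiler will NOT derive `Sync` automatically.
struct Acquire {
    waiter: UnsafeCell<u64>,
}

// Safe for the reason given in the commit message: the cell is only
// accessed through a mutable borrow of `Acquire`, which the borrow
// checker already makes exclusive. Illustrative sketch only.
unsafe impl Sync for Acquire {}

// The bounds-check pattern: a generic function that only compiles if its
// type argument implements the required traits.
fn assert_send_sync<T: Send + Sync>() {}

fn main() {
    // Fails to compile if `Acquire` ever stops being Send + Sync.
    assert_send_sync::<Acquire>();
    println!("Acquire is Send + Sync");
}
```

Because the check runs at compile time, a future refactor that reintroduces a `!Send` or `!Sync` field turns into a build error rather than a silent breaking change.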
    • doc: Fix readme link (#2370) · 6fa40b6e
      nasa authored
  14. Mar 28, 2020
    • rt: cap fifo scheduler slot to avoid starvation (#2349) · caa7e180
      Carl Lerche authored
      The work-stealing scheduler includes an optimization where each worker
      has a single slot that stores the **last** scheduled task. Tasks in
      the scheduler's LIFO slot are executed next. This speeds up
      message-passing patterns and reduces their latency.
      
      Previously, this optimization was susceptible to starving other tasks
      in certain cases. If two tasks ping-pong between each other without
      ever yielding, the worker would never execute other tasks.
      
      An earlier PR (#2160) introduced a form of pre-emption: each task is
      allocated a per-poll operation budget. Tokio resources return ready
      until the budget is depleted, at which point they always return
      `Pending`.
      
      This patch leverages the operation budget to limit the LIFO scheduler
      optimization. When executing tasks from the LIFO slot, the budget is
      **not** reset. Once the budget goes to zero, the task in the LIFO slot
      is pushed to the back of the queue.
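A minimal sketch of the scheduling loop described above (the `Task` alias, budget constant, and struct are hypothetical, not tokio's internals): running a task from the LIFO slot does *not* reset the budget, so a ping-ponging pair eventually exhausts it and the LIFO task is demoted to the back of the run queue.

```rust
use std::collections::VecDeque;

type Task = u32; // stand-in for a real task handle
const BUDGET: u32 = 128; // hypothetical per-poll operation budget

struct Worker {
    lifo_slot: Option<Task>,
    run_queue: VecDeque<Task>,
    budget: u32,
}

impl Worker {
    fn next_task(&mut self) -> Option<Task> {
        if let Some(task) = self.lifo_slot.take() {
            if self.budget == 0 {
                // Budget exhausted: push the LIFO task to the back of
                // the queue so other tasks get a turn.
                self.run_queue.push_back(task);
            } else {
                // Running from the LIFO slot consumes budget but does
                // NOT reset it.
                self.budget -= 1;
                return Some(task);
            }
        }
        // Picking a task from the run queue resets the budget.
        self.budget = BUDGET;
        self.run_queue.pop_front()
    }
}

fn main() {
    let mut w = Worker {
        lifo_slot: Some(1),
        run_queue: VecDeque::from([2, 3]),
        budget: 1,
    };
    assert_eq!(w.next_task(), Some(1)); // from LIFO slot; budget 1 -> 0
    w.lifo_slot = Some(1);              // task 1 reschedules itself
    assert_eq!(w.next_task(), Some(2)); // budget spent: queue runs instead
    assert_eq!(w.run_queue.back(), Some(&1)); // LIFO task was demoted
}
```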
    • sync: fix notified link (#2351) · 7b2438e7
      Alice Ryhl authored
  15. Mar 27, 2020
    • sync: fix possible dangling pointer in semaphore (#2340) · 00725f68
      Eliza Weisman authored
      
      ## Motivation
      
      When cancelling futures which are waiting to acquire semaphore permits,
      there is a possible dangling pointer if notified futures are dropped
      after the notified wakers have been split into a separate list. Because
      these futures' wait queue nodes are no longer in the main list guarded
      by the lock, their `Drop` impls will complete immediately, and they may
      be dropped while still in the list of tasks to notify.
      
      ## Solution
      
      This branch fixes this by popping from the wait list inside the lock.
      The wakers of popped nodes are temporarily stored in a stack array,
      so that they can be notified after the lock is released. Since the
      size of the stack array is fixed, we may in some cases have to loop
      multiple times, acquiring and releasing the lock, until all permits
      have been released. This may also have the side advantage of
      preventing a thread that releases a very large number of permits from
      starving other threads that need to enqueue waiters.
      
      I've also added a loom test that can reliably reproduce a segfault
      on master, but passes on this branch (after a lot of iterations).
      
      Signed-off-by: Eliza Weisman <eliza@buoyant.io>
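The shape of the fix can be sketched like this (names and the batch size are mine; waiters are modeled as boxed closures where the real code stores `Waker`s): waiters are popped from the wait list *while holding the lock*, into a small fixed-size batch, and only woken after the lock is released. If more waiters are ready than the batch holds, the loop reacquires the lock and goes around again.

```rust
use std::collections::VecDeque;
use std::sync::Mutex;

const WAKE_BATCH: usize = 8; // fixed-size batch, stands in for the stack array

fn release_all(wait_list: &Mutex<VecDeque<Box<dyn FnOnce() + Send>>>) {
    loop {
        let mut batch: Vec<Box<dyn FnOnce() + Send>> = Vec::with_capacity(WAKE_BATCH);
        {
            // Pop inside the critical section: once removed here, a
            // cancelled waiter can no longer be reached through the list.
            let mut list = wait_list.lock().unwrap();
            while batch.len() < WAKE_BATCH {
                match list.pop_front() {
                    Some(waiter) => batch.push(waiter),
                    None => break,
                }
            }
        } // lock released here, before any waiter runs
        if batch.is_empty() {
            return;
        }
        for wake in batch {
            wake(); // notify without holding the lock
        }
    }
}

fn main() {
    use std::sync::atomic::{AtomicUsize, Ordering};
    use std::sync::Arc;

    let woken = Arc::new(AtomicUsize::new(0));
    let list: Mutex<VecDeque<Box<dyn FnOnce() + Send>>> = Mutex::new(VecDeque::new());
    for _ in 0..20 {
        let woken = woken.clone();
        list.lock().unwrap().push_back(Box::new(move || {
            woken.fetch_add(1, Ordering::SeqCst);
        }));
    }
    release_all(&list); // 20 waiters, batch of 8: three lock round-trips
    assert_eq!(woken.load(Ordering::SeqCst), 20);
}
```

The bounded batch is also what yields the fairness side effect mentioned above: between batches the lock is free, so other threads can enqueue waiters.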
    • sync: broadcast, revert "Keep lock until sender notified" (#2348) · 5c71268b
      kalcutter authored
      This reverts commit 826fc21a.
      
      The code was intentional. Holding the lock while notifying is
      unnecessary. Also change the code to use `drop` so clippy doesn't
      confuse people against their will.
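The pattern the revert restores looks roughly like this (names are illustrative, not the broadcast channel's internals): release the lock explicitly with `drop` before notifying, so a woken thread doesn't immediately block on the mutex, and so the early release reads as intentional.

```rust
use std::sync::{Condvar, Mutex};

fn send(state: &Mutex<u32>, notify: &Condvar) {
    let mut guard = state.lock().unwrap();
    *guard += 1;
    drop(guard); // explicitly release the lock first; clearly intentional
    notify.notify_all(); // notify WITHOUT holding the lock
}

fn main() {
    let state = Mutex::new(0);
    let notify = Condvar::new();
    send(&state, &notify);
    send(&state, &notify);
    assert_eq!(*state.lock().unwrap(), 2);
}
```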
    • fs: add coop test (#2344) · 8020b02b
      Carl Lerche authored
    • rt: add task join coop test (#2345) · 11acfbbe
      Carl Lerche authored
      Add test verifying that joining on a task consumes the caller's budget.