This serves two purposes.
First, it makes repeatedly stopping then starting a peer cheaper.
Second, it prevents a data race that was observed when accessing the queues.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Before, the code attached a finalizer to an object that wasn't returned,
resulting in immediate garbage collection. Instead return the actual
pointer.
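A minimal sketch of the pattern (hypothetical names, not the real pool code):

    package pool // illustrative sketch only

    import "runtime"

    type QueueElem struct{ buf [1500]byte }

    func newElem(recycle func(*QueueElem)) *QueueElem {
        e := new(QueueElem)
        // The finalizer must be set on e, the pointer we return.
        // Setting it on a separate allocation that never escapes
        // leaves that object unreachable, so the GC collects it,
        // and runs its finalizer, almost immediately.
        runtime.SetFinalizer(e, recycle)
        return e
    }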
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
The immediate motivation for this change is an observed deadlock.
1. A goroutine calls peer.Stop. That calls peer.queue.Lock().
2. Another goroutine is in RoutineSequentialReceiver.
It receives an elem from peer.queue.inbound.
3. The peer.Stop goroutine calls close(peer.queue.inbound),
close(peer.queue.outbound), and peer.stopping.Wait().
It blocks waiting for RoutineSequentialReceiver
and RoutineSequentialSender to exit.
4. The RoutineSequentialReceiver goroutine calls peer.SendStagedPackets().
SendStagedPackets attempts peer.queue.RLock().
That blocks forever because the peer.Stop
goroutine holds a write lock on that mutex.
A background motivation for this change is that it can be expensive
to have a mutex in the hot code path of RoutineSequential*.
The mutex was necessary to avoid attempting to send elems on a closed channel.
This commit removes that danger by never closing the channel.
Instead, we send a sentinel nil value on the channel to indicate
to the receiver that it should exit.
The only problem with this approach is that if the receiver exits first,
we could write an elem into the channel that would never be received.
If it never gets received, it cannot get returned to the device pools.
To work around this, we use a finalizer. When the channel can be GC'd,
the finalizer drains any remaining elements from the channel and
restores them to the device pool.
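Sketched with hypothetical names, the pattern looks like this:

    package device // illustrative sketch only

    import "runtime"

    type elem struct{ /* packet buffers, counters, ... */ }

    type inboundQueue struct {
        c chan *elem
    }

    func newInboundQueue(recycle func(*elem)) *inboundQueue {
        q := &inboundQueue{c: make(chan *elem, 1024)}
        // The channel is never closed; senders signal shutdown by
        // sending a nil sentinel. Once q becomes unreachable, the
        // finalizer drains any stranded elems back to the pool.
        runtime.SetFinalizer(q, func(q *inboundQueue) {
            for {
                select {
                case e := <-q.c:
                    if e != nil {
                        recycle(e)
                    }
                default:
                    return
                }
            }
        })
        return q
    }

    // A receiver exits on the sentinel rather than on close:
    //
    //     for e := range q.c {
    //         if e == nil {
    //             return
    //         }
    //         process(e)
    //     }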
After that change, peer.queue.RWMutex no longer makes sense where it is.
It is only used to prevent concurrent calls to Start and Stop.
Move it to a more sensible location and make it a plain sync.Mutex.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
timersInit sets up the timers.
It need only be done once per peer.
timersStart does the work to prepare the timers
for a newly running peer. It needs to be done
every time a peer starts.
Separate the two and call them in the appropriate places.
This prevents data races on the peer's timers fields
when starting and stopping peers.
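Roughly, and with a much smaller timer set than the real code:

    package device // illustrative sketch only

    import "time"

    type Peer struct {
        timers struct {
            retransmitHandshake *time.Timer
            sendKeepalive       *time.Timer
            handshakeAttempts   uint32
        }
    }

    // timersInit runs exactly once, when the peer is created.
    func (peer *Peer) timersInit() {
        peer.timers.retransmitHandshake = time.AfterFunc(time.Hour, peer.retransmitHandshake)
        peer.timers.retransmitHandshake.Stop()
        peer.timers.sendKeepalive = time.AfterFunc(time.Hour, peer.sendKeepalive)
        peer.timers.sendKeepalive.Stop()
    }

    // timersStart runs on every Start; it only resets state, so a
    // stopped-and-restarted peer reuses the same timers instead of
    // re-creating them while old callbacks may still be in flight.
    func (peer *Peer) timersStart() {
        peer.timers.handshakeAttempts = 0
    }

    func (peer *Peer) retransmitHandshake() { /* resend handshake */ }
    func (peer *Peer) sendKeepalive()       { /* send keepalive */ }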
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
This commit simplifies device state management.
It creates a single unified state variable and documents its semantics.
It also makes state changes more atomic.
As an example of the sort of bug that occurred due to non-atomic state changes,
the following sequence of events used to occur approximately every 2.5 million test runs:
* RoutineTUNEventReader received an EventDown event.
* It called device.Down, which called device.setUpDown.
* That set device.state.changing, but did not yet attempt to lock device.state.Mutex.
* Test completion called device.Close.
* device.Close locked device.state.Mutex.
* device.Close blocked on a call to device.state.stopping.Wait.
* device.setUpDown then attempted to lock device.state.Mutex and blocked.
Deadlock results. setUpDown cannot progress because device.state.Mutex is locked.
Until setUpDown returns, RoutineTUNEventReader cannot call device.state.stopping.Done.
Until device.state.stopping.Done gets called, device.state.stopping.Wait is blocked.
As long as device.state.stopping.Wait is blocked, device.state.Mutex cannot be unlocked.
This commit fixes that deadlock by holding device.state.mu
when checking that the device is not closed.
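A rough sketch of the resulting shape (names approximate; the real
transitions also start and stop workers):

    package device // illustrative sketch only

    import (
        "sync"
        "sync/atomic"
    )

    type deviceState uint32

    const (
        deviceStateDown deviceState = iota
        deviceStateUp
        deviceStateClosed // terminal: a closed device never comes back up
    )

    type Device struct {
        state struct {
            mu    sync.Mutex // held for every state transition
            state uint32     // read atomically; written only with mu held
        }
    }

    func (device *Device) isClosed() bool {
        return deviceState(atomic.LoadUint32(&device.state.state)) == deviceStateClosed
    }

    // setUpDown holds mu across the whole transition, including the
    // closed check, so Close cannot interleave with it.
    func (device *Device) setUpDown(up bool) {
        device.state.mu.Lock()
        defer device.state.mu.Unlock()
        if device.isClosed() {
            return
        }
        next := deviceStateDown
        if up {
            next = deviceStateUp
        }
        atomic.StoreUint32(&device.state.state, uint32(next))
    }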
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
This moves to a simple queue with no routine processing it, to reduce
scheduler pressure.
This splits latency in half!
benchmark                old ns/op    new ns/op    delta
BenchmarkThroughput-16   2394         2364         -1.25%
BenchmarkLatency-16      259652       120810       -53.47%
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
This makes the IpcGet method much faster.
We also refactor the traversal API to use a callback so that we don't
need to allocate at all. To keep traversal allocation-free, we
self-mask on insertion, which in turn means that splitting an
intermediate node requires a copy of the bits.
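The callback traversal looks roughly like this (signature approximate):

    package allowedips // illustrative sketch only

    import "net"

    type Peer struct{}

    type AllowedIPs struct{ /* trie root, etc. */ }

    // EntriesForPeer walks the trie in place, handing each route to
    // cb without allocating a result slice; returning false from cb
    // stops the walk early.
    func (table *AllowedIPs) EntriesForPeer(peer *Peer, cb func(ip net.IP, cidr uint8) bool) {
        // trie traversal elided
    }

IpcGet can then stream each entry straight into its output writer
rather than building an intermediate slice.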
benchmark             old ns/op     new ns/op     delta
BenchmarkUAPIGet-16   3243          2659          -18.01%

benchmark             old allocs    new allocs    delta
BenchmarkUAPIGet-16   35            30            -14.29%

benchmark             old bytes     new bytes     delta
BenchmarkUAPIGet-16   1218          737           -39.49%
This benchmark is good, though it's only for a pair of peers, each with
only one allowed IP. As the number of peers and allowed IPs grows, the
delta expands considerably.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
There are very few cases, if any, in which a user wants only one of
these levels, so combine them into a single level.
While we're at it, reduce indirection on the loggers by using an empty
function rather than a nil function pointer. It's not like we have
retpolines anyway, and we were already calling through a function
pointer with a branch in front of it, so this seems like a net gain.
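As a sketch:

    package device // illustrative sketch only

    // DiscardLogf is an empty logging function. Pointing disabled
    // levels at it, instead of leaving them nil, turns a branch plus
    // an indirect call into a single unconditional indirect call.
    func DiscardLogf(format string, args ...interface{}) {}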
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
This commit overhauls wireguard-go's logging.
The primary, motivating change is to use a function instead
of a *log.Logger as the basic unit of logging.
Using functions provides a lot more flexibility for
people to bring their own logging system.
It also introduces logging helper methods on Device.
These reduce line noise at the call site.
They also allow for log functions to be nil;
when nil, instead of generating a log line and throwing it away,
we don't bother generating it at all.
This spares allocation and pointless work.
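In sketch form (the helper name here is hypothetical):

    package device // illustrative sketch only

    // The unit of logging is a plain function, so callers can plug
    // in anything from log.Printf to a structured logger.
    type Logger struct {
        Verbosef func(format string, args ...interface{})
        Errorf   func(format string, args ...interface{})
    }

    type Device struct{ log *Logger }

    // verbosef checks for nil before any formatting happens, so a
    // disabled level costs almost nothing at the call site.
    func (device *Device) verbosef(format string, args ...interface{}) {
        if device.log == nil || device.log.Verbosef == nil {
            return
        }
        device.log.Verbosef(format, args...)
    }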
This is a breaking change, although the fix required
of clients is fairly straightforward.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
One of the first rules of WaitGroups is that you call wg.Add
outside of a goroutine, not inside it. Fix this embarrassing mistake.
This prevents an extremely rare race condition (2 per 100,000 runs)
which could occur when attempting to start a new peer
concurrently with shutting down a device.
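The rule in miniature (generic example, not the actual device code):

    package main

    import "sync"

    func work() {}

    func main() {
        var wg sync.WaitGroup
        // Wrong (racy): the Wait below can run before this goroutine
        // gets scheduled and executes its wg.Add(1).
        //
        //     go func() { wg.Add(1); defer wg.Done(); work() }()
        //
        // Right: Add happens before the goroutine launches, so Wait
        // always observes it.
        wg.Add(1)
        go func() {
            defer wg.Done()
            work()
        }()
        wg.Wait()
    }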
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
This field was shifted by 2 bytes when making persistent keepalive into a u32.
Fix it by placing it after the aligned region.
Fixes: e739ff7 ("device: fix persistent_keepalive_interval data races")
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Found by the race detector and existing tests.
To avoid introducing a lock into this hot path,
calculate and cache whether any peers exist.
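A sketch of the cache (field names hypothetical):

    package device // illustrative sketch only

    import (
        "sync"
        "sync/atomic"
    )

    type NoisePublicKey [32]byte
    type Peer struct{}

    type Device struct {
        peers struct {
            sync.RWMutex
            keyMap map[NoisePublicKey]*Peer
            empty  int32 // 1 iff keyMap is empty; written with the lock held
        }
    }

    // Hot path: one atomic load, no lock.
    func (device *Device) peersEmpty() bool {
        return atomic.LoadInt32(&device.peers.empty) == 1
    }

    // Every code path that mutates keyMap refreshes the cache while
    // still holding the write lock.
    func (device *Device) refreshPeersEmpty() {
        e := int32(0)
        if len(device.peers.keyMap) == 0 {
            e = 1
        }
        atomic.StoreInt32(&device.peers.empty, e)
    }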
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Access keypair.sendNonce atomically.
Eliminate one unnecessary initialization to zero.
Mutate handshake.lastSentHandshake with the mutex held.
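For the nonce, the lock-free reservation looks like this (sketch):

    package device // illustrative sketch only

    import "sync/atomic"

    type Keypair struct {
        sendNonce uint64 // accessed only via the atomic package
    }

    // nextNonce reserves the next nonce lock-free; subtracting one
    // recovers the value from before the increment.
    func (kp *Keypair) nextNonce() uint64 {
        return atomic.AddUint64(&kp.sendNonce, 1) - 1
    }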
Co-authored-by: David Anderson <danderson@tailscale.com>
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
This is a similar treatment to the handling of the encryption
channel found a few commits ago: use the closing of the channel
to manage goroutine lifetime and shutdown.
It is considerably simpler because there is only a single writer.
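With a single writer the shape is simple (sketch):

    package device // illustrative sketch only

    type elem struct{}

    // Exactly one goroutine sends on queue, so it alone may close
    // the channel; the close doubles as the shutdown signal.
    func startConsumer(queue <-chan *elem, process func(*elem)) {
        go func() {
            // range drains any remaining elems, then exits once the
            // writer closes the channel.
            for e := range queue {
                process(e)
            }
        }()
    }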
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
When closing a device, packets that are in flight
can make it to SendBuffer, which then returns an error.
Those errors add noise but no light;
they do not reflect an actual problem.
Adding the synchronization required to prevent
this from occurring is currently expensive and error-prone.
Instead, quietly drop such packets rather than returning an error.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
In each case, the starting waitgroup did nothing but ensure
that the goroutine had launched.
Nothing downstream depends on the order in which goroutines launch,
and if the Go runtime scheduler is so broken that goroutines
don't get launched reasonably promptly, we have much deeper problems.
Given all that, simplify the code.
Passed a race-enabled stress test 25,000 times without failure.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Fix a "panic: send on closed channel" when removing a peer.
Signed-off-by: Haichao Liu <liuhaichao@bytedance.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Go's GC semantics might not always guarantee the safety of this, and the
race detector gets upset too, so instead we wrap this all in atomic
accessors.
Reported-by: David Anderson <danderson@tailscale.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
The sticky socket code stays in the device package for now,
as it reaches deeply into the peer list.
This is the first step in an effort to split some code out of
the very busy device package.
Signed-off-by: David Crawshaw <crawshaw@tailscale.com>
Adds a test that will fail consistently on 32-bit platforms if the
struct ever changes again to violate the rules. This is likely not
needed, because unaligned atomic access crashes reliably, but this test
will fail even when others accidentally pass due to lucky alignment.
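A sketch of such a test (field names hypothetical):

    package device // illustrative sketch only

    import (
        "testing"
        "unsafe"
    )

    type Peer struct {
        stats     struct{ txBytes, rxBytes uint64 } // accessed atomically
        isRunning int32
    }

    // 64-bit atomics require 8-byte alignment, but on 32-bit
    // platforms the compiler only guarantees 4. Checking offsets
    // directly fails deterministically, even when other tests get
    // lucky with allocation alignment.
    func TestPeerAtomicFieldsAligned(t *testing.T) {
        var p Peer
        if off := unsafe.Offsetof(p.stats); off%8 != 0 {
            t.Errorf("Peer.stats offset = %d; want a multiple of 8", off)
        }
    }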
Signed-off-by: David Anderson <danderson@tailscale.com>