@garmr-ulfr

No description provided.

nshalman and others added 24 commits January 7, 2025 08:53
Reviewed-by: James Tucker <[email protected]>
Upstream doesn't use GitHub Actions for CI, as its GitHub repository is simply a mirror.
Our workflows involve GitHub, so establish some basic CI jobs.

Updates tailscale/corp#28877

Signed-off-by: Jordan Whited <[email protected]>
Only bother updating the rxBytes counter once we've processed a whole
vector, since additions are atomic.
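
A rough sketch of the pattern (names here are illustrative, not the
actual receive path):

    // Accumulate sizes locally and publish once per vector, so the
    // atomic add happens once per batch instead of once per packet.
    var rxBytes atomic.Uint64 // "sync/atomic"

    func receiveVector(packets [][]byte) {
        var batchBytes uint64
        for _, pkt := range packets {
            batchBytes += uint64(len(pkt))
            // ... decrypt and hand off the packet ...
        }
        rxBytes.Add(batchBytes) // single atomic addition per vector
    }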

cherry picked from commit WireGuard/wireguard-go@542e565

Updates tailscale/corp#28879

Signed-off-by: Jason A. Donenfeld <[email protected]>
There is a possible deadlock in `device.Close()` when you try to close
the device very soon after its start. The problem is that two different
methods acquire the same locks in different order:

1. device.Close()
 - device.ipcMutex.Lock()
 - device.state.Lock()

2. device.changeState(deviceState)
 - device.state.Lock()
 - device.ipcMutex.Lock()
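
A simplified model of the inversion (illustrative types, not the actual
fix): once one goroutine holds ipcMutex and another holds state, each
blocks forever waiting for the other's lock. The cure is a single
consistent acquisition order across all paths.

    // Deadlocks when these run concurrently, because the two locks
    // are taken in opposite orders.
    type device struct {
        ipcMutex sync.RWMutex
        state    sync.Mutex
    }

    func (d *device) close() {
        d.ipcMutex.Lock() // first lock
        defer d.ipcMutex.Unlock()
        d.state.Lock() // blocks if changeState holds state
        defer d.state.Unlock()
    }

    func (d *device) changeState() {
        d.state.Lock() // first lock, reversed order
        defer d.state.Unlock()
        d.ipcMutex.Lock() // blocks if close holds ipcMutex
        defer d.ipcMutex.Unlock()
    }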

Reproducer:

    func TestDevice_deadlock(t *testing.T) {
    	d := randDevice(t)
    	d.Close()
    }

Problem:

    $ go clean -testcache && go test -race -timeout 3s -run TestDevice_deadlock ./device | grep -A 10 sync.runtime_SemacquireMutex
    sync.runtime_SemacquireMutex(0xc000117d20?, 0x94?, 0x0?)
            /usr/local/opt/go/libexec/src/runtime/sema.go:77 +0x25
    sync.(*Mutex).lockSlow(0xc000130518)
            /usr/local/opt/go/libexec/src/sync/mutex.go:171 +0x213
    sync.(*Mutex).Lock(0xc000130518)
            /usr/local/opt/go/libexec/src/sync/mutex.go:90 +0x55
    golang.zx2c4.com/wireguard/device.(*Device).Close(0xc000130500)
            /Users/martin.basovnik/git/basovnik/wireguard-go/device/device.go:373 +0xb6
    golang.zx2c4.com/wireguard/device.TestDevice_deadlock(0x0?)
            /Users/martin.basovnik/git/basovnik/wireguard-go/device/device_test.go:480 +0x2c
    testing.tRunner(0xc00014c000, 0x131d7b0)
    --
    sync.runtime_SemacquireMutex(0xc000130564?, 0x60?, 0xc000130548?)
            /usr/local/opt/go/libexec/src/runtime/sema.go:77 +0x25
    sync.(*Mutex).lockSlow(0xc000130750)
            /usr/local/opt/go/libexec/src/sync/mutex.go:171 +0x213
    sync.(*Mutex).Lock(0xc000130750)
            /usr/local/opt/go/libexec/src/sync/mutex.go:90 +0x55
    sync.(*RWMutex).Lock(0xc000130750)
            /usr/local/opt/go/libexec/src/sync/rwmutex.go:147 +0x45
    golang.zx2c4.com/wireguard/device.(*Device).upLocked(0xc000130500)
            /Users/martin.basovnik/git/basovnik/wireguard-go/device/device.go:179 +0x72
    golang.zx2c4.com/wireguard/device.(*Device).changeState(0xc000130500, 0x1)

cherry picked from commit WireGuard/wireguard-go@12269c2

Updates tailscale/corp#28879

Signed-off-by: Martin Basovnik <[email protected]>
Signed-off-by: Jason A. Donenfeld <[email protected]>
Reduce allocations by eliminating the byte reader in favour of
hand-rolled decoding, and by reusing message structs.

Synthetic benchmark:

    var msgSink MessageInitiation
    func BenchmarkMessageInitiationUnmarshal(b *testing.B) {
        packet := make([]byte, MessageInitiationSize)
        reader := bytes.NewReader(packet)
        err := binary.Read(reader, binary.LittleEndian, &msgSink)
        if err != nil {
            b.Fatal(err)
        }
        b.Run("binary.Read", func(b *testing.B) {
            b.ReportAllocs()
            for range b.N {
                reader := bytes.NewReader(packet)
                _ = binary.Read(reader, binary.LittleEndian, &msgSink)
            }
        })
        b.Run("unmarshal", func(b *testing.B) {
            b.ReportAllocs()
            for range b.N {
                _ = msgSink.unmarshal(packet)
            }
        })
    }

Results:

                                             │      -      │
                                             │   sec/op    │
    MessageInitiationUnmarshal/binary.Read-8   1.508µ ± 2%
    MessageInitiationUnmarshal/unmarshal-8     12.66n ± 2%

                                             │      -       │
                                             │     B/op     │
    MessageInitiationUnmarshal/binary.Read-8   208.0 ± 0%
    MessageInitiationUnmarshal/unmarshal-8     0.000 ± 0%

                                             │      -       │
                                             │  allocs/op   │
    MessageInitiationUnmarshal/binary.Read-8   2.000 ± 0%
    MessageInitiationUnmarshal/unmarshal-8     0.000 ± 0%
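
For reference, a hand-rolled decoder in this style might look like the
following sketch. The layout is simplified to three fields; the real
MessageInitiation carries more, and the names below are assumptions.

    // Reflection-free decoding: fixed offsets, no intermediate reader.
    const messageSize = 40

    type message struct {
        Type      uint32
        Sender    uint32
        Ephemeral [32]byte
    }

    func (msg *message) unmarshal(b []byte) error {
        if len(b) != messageSize {
            return errors.New("wrong packet size")
        }
        msg.Type = binary.LittleEndian.Uint32(b[0:4])
        msg.Sender = binary.LittleEndian.Uint32(b[4:8])
        copy(msg.Ephemeral[:], b[8:40])
        return nil
    }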

cherry picked from commit WireGuard/wireguard-go@9e7529c

Updates tailscale/corp#28879

Signed-off-by: Alexander Yastrebov <[email protected]>
Signed-off-by: Jason A. Donenfeld <[email protected]>
This is already enforced in receive.go, but if these unmarshallers are
to have error return values anyway, make them as explicit as possible.

cherry picked from commit WireGuard/wireguard-go@842888a

Updates tailscale/corp#28879

Signed-off-by: Jason A. Donenfeld <[email protected]>
Optimize message encoding by eliminating binary.Write (which internally
uses reflection) in favour of hand-rolled encoding.

This is a companion to 9e7529c.

Synthetic benchmark:

    var packetSink []byte
    func BenchmarkMessageInitiationMarshal(b *testing.B) {
        var msg MessageInitiation
        b.Run("binary.Write", func(b *testing.B) {
            b.ReportAllocs()
            for range b.N {
                var buf [MessageInitiationSize]byte
                writer := bytes.NewBuffer(buf[:0])
                _ = binary.Write(writer, binary.LittleEndian, msg)
                packetSink = writer.Bytes()
            }
        })
        b.Run("binary.Encode", func(b *testing.B) {
            b.ReportAllocs()
            for range b.N {
                packet := make([]byte, MessageInitiationSize)
                _, _ = binary.Encode(packet, binary.LittleEndian, msg)
                packetSink = packet
            }
        })
        b.Run("marshal", func(b *testing.B) {
            b.ReportAllocs()
            for range b.N {
                packet := make([]byte, MessageInitiationSize)
                _ = msg.marshal(packet)
                packetSink = packet
            }
        })
    }

Results:
                                             │      -      │
                                             │   sec/op    │
    MessageInitiationMarshal/binary.Write-8    1.337µ ± 0%
    MessageInitiationMarshal/binary.Encode-8   1.242µ ± 0%
    MessageInitiationMarshal/marshal-8         53.05n ± 1%

                                             │     -      │
                                             │    B/op    │
    MessageInitiationMarshal/binary.Write-8    368.0 ± 0%
    MessageInitiationMarshal/binary.Encode-8   160.0 ± 0%
    MessageInitiationMarshal/marshal-8         160.0 ± 0%

                                             │     -      │
                                             │ allocs/op  │
    MessageInitiationMarshal/binary.Write-8    3.000 ± 0%
    MessageInitiationMarshal/binary.Encode-8   1.000 ± 0%
    MessageInitiationMarshal/marshal-8         1.000 ± 0%
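
The encoder counterpart, using the same simplified sketch as the
decoder above:

    // Write fields directly into a caller-provided buffer; the only
    // allocation is the buffer itself.
    func (msg *message) marshal(b []byte) error {
        if len(b) != messageSize {
            return errors.New("wrong buffer size")
        }
        binary.LittleEndian.PutUint32(b[0:4], msg.Type)
        binary.LittleEndian.PutUint32(b[4:8], msg.Sender)
        copy(b[8:40], msg.Ephemeral[:])
        return nil
    }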

cherry picked from commit WireGuard/wireguard-go@264889f

Updates tailscale/corp#28879

Signed-off-by: Alexander Yastrebov <[email protected]>
Signed-off-by: Jason A. Donenfeld <[email protected]>
This enables a conn.Bind to bring its own encapsulating transport, e.g.
VXLAN/Geneve.
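
As an illustration of what such encapsulation involves, a sketch that
prepends a Geneve base header (RFC 8926) to an outgoing payload; the
surrounding Bind plumbing is assumed:

    // Prepend an 8-byte Geneve base header. Layout per RFC 8926:
    // byte 0: version (2 bits) and option length (6 bits), zero here;
    // byte 1: O/C flags and reserved bits, zero here;
    // bytes 2-3: protocol type; bytes 4-6: 24-bit VNI; byte 7: reserved.
    func encapGeneve(vni uint32, protocolType uint16, payload []byte) []byte {
        hdr := make([]byte, 8, 8+len(payload))
        binary.BigEndian.PutUint16(hdr[2:4], protocolType)
        hdr[4] = byte(vni >> 16)
        hdr[5] = byte(vni >> 8)
        hdr[6] = byte(vni)
        return append(hdr, payload...)
    }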

Updates tailscale/corp#27502

Signed-off-by: Jordan Whited <[email protected]>
It was previously suppressed if roaming was disabled for the peer.
Tailscale always disables roaming, as we explicitly configure
conn.Endpoint values for all peers.

This commit also modifies PeerAwareEndpoint usage such that wireguard-go
never uses/sets it as a Peer Endpoint value. In theory we (Tailscale)
always disable roaming, so we should always return early from
SetEndpointFromPacket(), but this acts as an extra footgun guard and
improves clarity around intended usage.
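
The early return in question looks roughly like this (abridged sketch;
field names are approximations of the upstream code):

    // When roaming is disabled, a packet's source address never
    // overwrites the explicitly configured endpoint.
    func (peer *Peer) SetEndpointFromPacket(endpoint conn.Endpoint) {
        if peer.disableRoaming {
            return
        }
        peer.endpoint.Lock()
        defer peer.endpoint.Unlock()
        peer.endpoint.val = endpoint
    }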

Updates tailscale/corp#27502
Updates tailscale/corp#29422
Updates tailscale/corp#30042

Signed-off-by: Jordan Whited <[email protected]>
To be implemented by [magicsock.lazyEndpoint], which is responsible for
triggering JIT peer configuration.

Updates tailscale/corp#20732
Updates tailscale/corp#30042

Signed-off-by: Jordan Whited <[email protected]>
Peer.SetEndpointFromPacket is not called per-packet. It is
guaranteed to be called at least once per packet batch.
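
In other words, callers can rely on batch-level granularity, roughly
(handleBatch and handlePacket are hypothetical names):

    // Sketch of the contract: the endpoint is refreshed once for the
    // batch, then every packet in the vector is processed.
    func handleBatch(peer *Peer, src conn.Endpoint, packets [][]byte) {
        peer.SetEndpointFromPacket(src) // at least once per batch
        for _, pkt := range packets {
            handlePacket(peer, pkt) // no per-packet endpoint update
        }
    }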

Updates tailscale/corp#30042
Updates tailscale/corp#20732

Signed-off-by: Jordan Whited <[email protected]>
(cherry picked from commit 7c2acad)
(cherry picked from commit a7bac17)
(cherry picked from commit 7a2f11c)