forked from SagerNet/wireguard-go
sync with sagernet/wireguard-go #8
Open
garmr-ulfr wants to merge 24 commits into getlantern:dev from SagerNet:dev
Conversation
Reviewed-by: James Tucker <[email protected]>
Upstream doesn't use GitHub Actions for CI, as GitHub is simply a mirror. Our workflows involve GitHub, so establish some basic CI jobs.

Updates tailscale/corp#28877
Signed-off-by: Jordan Whited <[email protected]>
Only bother updating the rxBytes counter once we've processed a whole vector, since additions are atomic.

cherry picked from commit WireGuard/wireguard-go@542e565
Updates tailscale/corp#28879
Signed-off-by: Jason A. Donenfeld <[email protected]>
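To illustrate the pattern, here is a minimal, self-contained Go sketch. The type and method names are hypothetical, not the actual wireguard-go receive path: the byte count is accumulated locally across the whole vector, then published with a single atomic add.

package main

import (
	"fmt"
	"sync/atomic"
)

type peer struct {
	rxBytes atomic.Uint64 // total received bytes, read concurrently elsewhere
}

// receiveVector processes one batch (vector) of packets and updates the
// shared counter exactly once, rather than once per packet.
func (p *peer) receiveVector(packets [][]byte) {
	var batchBytes uint64
	for _, pkt := range packets {
		// ... decrypt and route the packet ...
		batchBytes += uint64(len(pkt))
	}
	p.rxBytes.Add(batchBytes) // one atomic add per vector
}

func main() {
	var p peer
	p.receiveVector([][]byte{make([]byte, 1500), make([]byte, 60)})
	fmt.Println(p.rxBytes.Load()) // 1560
}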
There is a possible deadlock in `device.Close()` when you try to close
the device very soon after its start. The problem is that two different
methods acquire the same locks in opposite orders:

1. device.Close()
   - device.ipcMutex.Lock()
   - device.state.Lock()

2. device.changeState(deviceState)
   - device.state.Lock()
   - device.ipcMutex.Lock()
Reproducer:
func TestDevice_deadlock(t *testing.T) {
	d := randDevice(t)
	d.Close()
}
Problem:
$ go clean -testcache && go test -race -timeout 3s -run TestDevice_deadlock ./device | grep -A 10 sync.runtime_SemacquireMutex
sync.runtime_SemacquireMutex(0xc000117d20?, 0x94?, 0x0?)
/usr/local/opt/go/libexec/src/runtime/sema.go:77 +0x25
sync.(*Mutex).lockSlow(0xc000130518)
/usr/local/opt/go/libexec/src/sync/mutex.go:171 +0x213
sync.(*Mutex).Lock(0xc000130518)
/usr/local/opt/go/libexec/src/sync/mutex.go:90 +0x55
golang.zx2c4.com/wireguard/device.(*Device).Close(0xc000130500)
/Users/martin.basovnik/git/basovnik/wireguard-go/device/device.go:373 +0xb6
golang.zx2c4.com/wireguard/device.TestDevice_deadlock(0x0?)
/Users/martin.basovnik/git/basovnik/wireguard-go/device/device_test.go:480 +0x2c
testing.tRunner(0xc00014c000, 0x131d7b0)
--
sync.runtime_SemacquireMutex(0xc000130564?, 0x60?, 0xc000130548?)
/usr/local/opt/go/libexec/src/runtime/sema.go:77 +0x25
sync.(*Mutex).lockSlow(0xc000130750)
/usr/local/opt/go/libexec/src/sync/mutex.go:171 +0x213
sync.(*Mutex).Lock(0xc000130750)
/usr/local/opt/go/libexec/src/sync/mutex.go:90 +0x55
sync.(*RWMutex).Lock(0xc000130750)
/usr/local/opt/go/libexec/src/sync/rwmutex.go:147 +0x45
golang.zx2c4.com/wireguard/device.(*Device).upLocked(0xc000130500)
/Users/martin.basovnik/git/basovnik/wireguard-go/device/device.go:179 +0x72
golang.zx2c4.com/wireguard/device.(*Device).changeState(0xc000130500, 0x1)
cherry picked from commit WireGuard/wireguard-go@12269c2
Updates tailscale/corp#28879
Signed-off-by: Martin Basovnik <[email protected]>
Signed-off-by: Jason A. Donenfeld <[email protected]>
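The conventional fix for this class of bug is to impose a single global lock order. The minimal sketch below uses hypothetical names and does not reproduce the actual upstream fix: both methods take ipcMutex before state, so neither goroutine can hold one lock while waiting on the other.

package main

import "sync"

type device struct {
	ipcMutex sync.Mutex // rule: always acquired before state
	state    sync.Mutex
}

func (d *device) close() {
	d.ipcMutex.Lock()
	defer d.ipcMutex.Unlock()
	d.state.Lock()
	defer d.state.Unlock()
	// ... tear down the device ...
}

func (d *device) changeState() {
	// Same acquisition order as close(); the inversion shown in the
	// stack traces above is gone.
	d.ipcMutex.Lock()
	defer d.ipcMutex.Unlock()
	d.state.Lock()
	defer d.state.Unlock()
	// ... perform the state transition ...
}

func main() {
	d := &device{}
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		d.changeState()
	}()
	d.close()
	wg.Wait()
}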
Reduce allocations by eliminating the byte reader, hand-rolling the
decoding, and reusing message structs.

Synthetic benchmark:
var msgSink MessageInitiation

func BenchmarkMessageInitiationUnmarshal(b *testing.B) {
	packet := make([]byte, MessageInitiationSize)
	reader := bytes.NewReader(packet)
	err := binary.Read(reader, binary.LittleEndian, &msgSink)
	if err != nil {
		b.Fatal(err)
	}
	b.Run("binary.Read", func(b *testing.B) {
		b.ReportAllocs()
		for range b.N {
			reader := bytes.NewReader(packet)
			_ = binary.Read(reader, binary.LittleEndian, &msgSink)
		}
	})
	b.Run("unmarshal", func(b *testing.B) {
		b.ReportAllocs()
		for range b.N {
			_ = msgSink.unmarshal(packet)
		}
	})
}
Results:
                                           │   sec/op    │
MessageInitiationUnmarshal/binary.Read-8     1.508µ ± 2%
MessageInitiationUnmarshal/unmarshal-8       12.66n ± 2%

                                           │    B/op     │
MessageInitiationUnmarshal/binary.Read-8      208.0 ± 0%
MessageInitiationUnmarshal/unmarshal-8        0.000 ± 0%

                                           │  allocs/op  │
MessageInitiationUnmarshal/binary.Read-8      2.000 ± 0%
MessageInitiationUnmarshal/unmarshal-8        0.000 ± 0%
cherry picked from commit WireGuard/wireguard-go@9e7529c
Updates tailscale/corp#28879
Signed-off-by: Alexander Yastrebov <[email protected]>
Signed-off-by: Jason A. Donenfeld <[email protected]>
This is already enforced in receive.go, but if these unmarshallers are
to have error return values anyway, make the length checks as explicit
as possible.

cherry picked from commit WireGuard/wireguard-go@842888a
Updates tailscale/corp#28879
Signed-off-by: Jason A. Donenfeld <[email protected]>
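A hedged sketch of what such a hand-rolled, zero-allocation decoder with an explicit length check can look like. The field layout below is illustrative only, not the real MessageInitiation wire format.

package main

import (
	"encoding/binary"
	"errors"
)

type messageInitiation struct {
	Type      uint32
	Sender    uint32
	Ephemeral [32]byte
}

const messageInitiationSize = 4 + 4 + 32

// unmarshal decodes b with direct byte-order helpers: no reflection, no
// intermediate reader, and therefore no allocations. The length check is
// explicit even though callers may validate sizes earlier.
func (m *messageInitiation) unmarshal(b []byte) error {
	if len(b) != messageInitiationSize {
		return errors.New("incorrect message size")
	}
	m.Type = binary.LittleEndian.Uint32(b[0:4])
	m.Sender = binary.LittleEndian.Uint32(b[4:8])
	copy(m.Ephemeral[:], b[8:40])
	return nil
}

func main() {
	var m messageInitiation
	if err := m.unmarshal(make([]byte, messageInitiationSize)); err != nil {
		panic(err)
	}
}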
Optimize message encoding by eliminating binary.Write (which internally
uses reflection) in favour of hand-rolled encoding. This is a companion
to 9e7529c.

Synthetic benchmark:

var packetSink []byte

func BenchmarkMessageInitiationMarshal(b *testing.B) {
	var msg MessageInitiation
	b.Run("binary.Write", func(b *testing.B) {
		b.ReportAllocs()
		for range b.N {
			var buf [MessageInitiationSize]byte
			writer := bytes.NewBuffer(buf[:0])
			_ = binary.Write(writer, binary.LittleEndian, msg)
			packetSink = writer.Bytes()
		}
	})
	b.Run("binary.Encode", func(b *testing.B) {
		b.ReportAllocs()
		for range b.N {
			packet := make([]byte, MessageInitiationSize)
			_, _ = binary.Encode(packet, binary.LittleEndian, msg)
			packetSink = packet
		}
	})
	b.Run("marshal", func(b *testing.B) {
		b.ReportAllocs()
		for range b.N {
			packet := make([]byte, MessageInitiationSize)
			_ = msg.marshal(packet)
			packetSink = packet
		}
	})
}

Results:

                                           │   sec/op    │
MessageInitiationMarshal/binary.Write-8      1.337µ ± 0%
MessageInitiationMarshal/binary.Encode-8     1.242µ ± 0%
MessageInitiationMarshal/marshal-8           53.05n ± 1%

                                           │    B/op     │
MessageInitiationMarshal/binary.Write-8       368.0 ± 0%
MessageInitiationMarshal/binary.Encode-8      160.0 ± 0%
MessageInitiationMarshal/marshal-8            160.0 ± 0%

                                           │  allocs/op  │
MessageInitiationMarshal/binary.Write-8       3.000 ± 0%
MessageInitiationMarshal/binary.Encode-8      1.000 ± 0%
MessageInitiationMarshal/marshal-8            1.000 ± 0%

cherry picked from commit WireGuard/wireguard-go@264889f
Updates tailscale/corp#28879
Signed-off-by: Alexander Yastrebov <[email protected]>
Signed-off-by: Jason A. Donenfeld <[email protected]>
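The encoding direction mirrors the decoder sketch above, again with an illustrative field layout rather than the real wire format.

package main

import (
	"encoding/binary"
	"errors"
)

type messageInitiation struct {
	Type      uint32
	Sender    uint32
	Ephemeral [32]byte
}

const messageInitiationSize = 4 + 4 + 32

// marshal writes each field directly instead of going through
// binary.Write's reflection; the only allocation left is the output
// buffer made by the caller.
func (m *messageInitiation) marshal(b []byte) error {
	if len(b) != messageInitiationSize {
		return errors.New("incorrect message size")
	}
	binary.LittleEndian.PutUint32(b[0:4], m.Type)
	binary.LittleEndian.PutUint32(b[4:8], m.Sender)
	copy(b[8:40], m.Ephemeral[:])
	return nil
}

func main() {
	packet := make([]byte, messageInitiationSize)
	m := messageInitiation{Type: 1, Sender: 42}
	if err := m.marshal(packet); err != nil {
		panic(err)
	}
}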
This enables a conn.Bind to bring its own encapsulating transport, e.g.
VXLAN/Geneve.

Updates tailscale/corp#27502
Signed-off-by: Jordan Whited <[email protected]>
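As a rough picture of what "bring its own encapsulating transport" can mean, here is a hedged sketch. The sender interface is a simplified stand-in for conn.Bind (the real interface carries more methods), and the fixed header stands in for VXLAN/Geneve framing.

package main

// sender is a simplified stand-in for the send side of a conn.Bind.
type sender interface {
	Send(bufs [][]byte, endpoint string) error
}

// encapBind wraps an inner sender and prepends its own transport header
// to every outgoing packet, so encapsulation is handled entirely inside
// the Bind implementation rather than by the device.
type encapBind struct {
	inner  sender
	header []byte
}

func (b *encapBind) Send(bufs [][]byte, endpoint string) error {
	out := make([][]byte, len(bufs))
	for i, buf := range bufs {
		pkt := make([]byte, 0, len(b.header)+len(buf))
		pkt = append(pkt, b.header...)
		pkt = append(pkt, buf...)
		out[i] = pkt
	}
	return b.inner.Send(out, endpoint)
}

// discardSender is a no-op inner transport for demonstration.
type discardSender struct{}

func (discardSender) Send(bufs [][]byte, endpoint string) error { return nil }

func main() {
	b := &encapBind{inner: discardSender{}, header: []byte{0x00, 0x01}}
	_ = b.Send([][]byte{{0xde, 0xad}}, "192.0.2.1:51820")
}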
It was previously suppressed if roaming was disabled for the peer.
Tailscale always disables roaming, as we explicitly configure a
conn.Endpoint for all peers. This commit also modifies PeerAwareEndpoint
usage such that wireguard-go never uses or sets it as a Peer Endpoint
value. In theory we (Tailscale) always disable roaming, so we should
always return early from SetEndpointFromPacket(), but this acts as an
extra footgun guard and improves clarity around intended usage.

Updates tailscale/corp#27502
Updates tailscale/corp#29422
Updates tailscale/corp#30042
Signed-off-by: Jordan Whited <[email protected]>
To be implemented by [magicsock.lazyEndpoint], which is responsible for
triggering JIT peer configuration.

Updates tailscale/corp#20732
Updates tailscale/corp#30042
Signed-off-by: Jordan Whited <[email protected]>
Updates tailscale/corp#30364
Signed-off-by: Jordan Whited <[email protected]>
Peer.SetEndpointFromPacket is not called per-packet. It is guaranteed
to be called at least once per packet batch.

Updates tailscale/corp#30042
Updates tailscale/corp#20732
Signed-off-by: Jordan Whited <[email protected]>