From tmani at prosimo.io Thu Mar 10 01:00:48 2022
From: tmani at prosimo.io (Manikandan Thiagarajan)
Date: Wed, 9 Mar 2022 17:00:48 -0800
Subject: Wireguard Go C API Callbacks
Message-ID: <0BDD2370-8F67-43BD-9186-76FF85EE1FC1@prosimo.io>

Hi,

We are using the WireGuard Go C API to integrate with our packet tunnel network extension to forward traffic to the WG tunnel on macOS. Sometimes we encounter an issue where the tunnel stops responding. The log below shows one of the scenarios where we see this; the WireGuard log says "no buffer space available":

[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Failed to read packet from TUN device: route ip+net: sysctl: no buffer space available

My questions regarding this issue and the WG C API in general:

1. When do we hit this issue? How do we prevent it?
2. I think we need some kind of callback C API that notifies the callers of such errors. (A hypothetical sketch follows the log below.)
3. It would also be nice to have status-update callbacks such as tunnel established, handshake completed, handshake failed, tunnel file descriptor closed, and any other updates. We would like to handle these notifications and take actions such as re-creating the tunnels, updating our own UI, etc.

Thanks,
Mani

[Sun Mar 6 15:27:15 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: receive incoming v4 - started
[Sun Mar 6 15:27:21 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: event worker - stopped
replace_peers=true public_key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx endpoint=x.x.x.x:51820 persistent_keepalive_interval=0 replace_allowed_ips=true allowed_ip=10.50.19.176/32 public_key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx endpoint=x.x.x.x:51820 persistent_keepalive_interval=0 replace_allowed_ips=true allowed_ip=10.255.254.170/32 for interface 100.140.34.230/11
[Sun Mar 6 15:27:21 2022] 832, AgentPacketTunnelProvider.mm:286 [INFO]: Update tunnel 0 CCConfig: private_key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx replace_peers=true public_key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx endpoint=x.x.x.x:51820 persistent_keepalive_interval=0 replace_allowed_ips=true allowed_ip=10.50.19.176/32 public_key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx endpoint=x.x.x.x:51820 persistent_keepalive_interval=0 replace_allowed_ips=true allowed_ip=10.255.254.170/32
[Sun Mar 6 15:27:21 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: UAPI: Updating private key
[Sun Mar 6 15:27:21 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: UAPI: Removing all peers
[Sun Mar 6 15:27:21 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: peer(EBXr…oDk8) - Stopping
[Sun Mar 6 15:27:21 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: peer(EBXr…oDk8) - Routine: sequential sender - stopped
[Sun Mar 6 15:27:21 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: peer(EBXr…oDk8) - Routine: sequential receiver - stopped
[Sun Mar 6 15:27:21 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: peer(3wsi…8IXc) - Stopping
[Sun Mar 6 15:27:21 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: peer(3wsi…8IXc) - Routine: sequential receiver - stopped
[Sun Mar 6 15:27:21 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: peer(3wsi…8IXc) - Routine: sequential sender - stopped
[Sun Mar 6 15:27:21 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: peer(EBXr…oDk8) - Starting
[Sun Mar 6 15:27:21 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: peer(EBXr…oDk8) - Routine: sequential sender - started
[Sun Mar 6 15:27:21 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: peer(EBXr…oDk8) - UAPI: Created
[Sun Mar 6 15:27:21 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: peer(EBXr…oDk8) - Routine: sequential receiver - started
[Sun Mar 6 15:27:21 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: peer(EBXr…oDk8) - UAPI: Updating endpoint
[Sun Mar 6 15:27:21 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: peer(EBXr…oDk8) - UAPI: Updating persistent keepalive interval
[Sun Mar 6 15:27:21 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: peer(EBXr…oDk8) - UAPI: Removing all allowedips
[Sun Mar 6 15:27:21 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: peer(EBXr…oDk8) - UAPI: Adding allowedip
[Sun Mar 6 15:27:21 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: peer(3wsi…8IXc) - Starting
[Sun Mar 6 15:27:21 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: peer(3wsi…8IXc) - UAPI: Created
[Sun Mar 6 15:27:21 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: peer(3wsi…8IXc) - Routine: sequential receiver - started
[Sun Mar 6 15:27:21 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: peer(3wsi…8IXc) - Routine: sequential sender - started
[Sun Mar 6 15:27:21 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: peer(3wsi…8IXc) - UAPI: Updating endpoint
[Sun Mar 6 15:27:21 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: peer(3wsi…8IXc) - UAPI: Updating persistent keepalive interval
[Sun Mar 6 15:27:21 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: peer(3wsi…8IXc) - UAPI: Removing all allowedips
[Sun Mar 6 15:27:21 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: peer(3wsi…8IXc) - UAPI: Adding allowedip
[Sun Mar 6 15:27:21 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: receive incoming v4 - stopped
[Sun Mar 6 15:27:21 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: receive incoming v6 - stopped
[Sun Mar 6 15:27:21 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: UDP bind has been updated
[Sun Mar 6 15:27:21 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: receive incoming v6 - started
[Sun Mar 6 15:27:21 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: receive incoming v4 - started
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Failed to read packet from TUN device: route ip+net: sysctl: no buffer space available
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: TUN reader - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Device closing
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: receive incoming v4 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: receive incoming v6 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: peer(EBXr…oDk8) - Stopping
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: peer(EBXr…oDk8) - Routine: sequential sender - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: peer(EBXr…oDk8) - Routine: sequential receiver - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: peer(3wsi…8IXc) - Stopping
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: peer(3wsi…8IXc) - Routine: sequential sender - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: peer(3wsi…8IXc) - Routine: sequential receiver - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Device closed
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: decryption worker 2 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: decryption worker 11 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: decryption worker 6 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: decryption worker 10 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: handshake worker 7 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: decryption worker 4 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: handshake worker 10 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: decryption worker 5 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: decryption worker 12 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: handshake worker 2 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: decryption worker 7 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: decryption worker 8 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: decryption worker 1 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: decryption worker 9 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: handshake worker 9 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: handshake worker 5 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: handshake worker 1 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: decryption worker 3 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: handshake worker 11 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: handshake worker 8 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: handshake worker 12 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: handshake worker 4 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: handshake worker 3 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: handshake worker 6 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: encryption worker 5 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: encryption worker 9 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: encryption worker 6 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: encryption worker 11 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: encryption worker 12 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: encryption worker 3 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: encryption worker 10 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: encryption worker 4 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: encryption worker 1 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: encryption worker 7 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: encryption worker 8 - stopped
[Sun Mar 6 15:27:22 2022] 832, AgentPacketTunnelProvider.mm:41 [INFO]: Routine: encryption worker 2 - stopped
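To make the callback request in questions 2 and 3 concrete, here is one hypothetical shape such an interface could take. None of these names exist in the wireguard-go C API today; the registration call merely mirrors the style of the existing wgTurnOn/wgTurnOff entry points, and the whole thing is a sketch of the request, not an implementation:

```c
/* Hypothetical event-notification interface for the wireguard-go C API.
 * These names do not exist today; this only sketches the feature request. */
typedef enum {
    WG_EVENT_TUNNEL_ESTABLISHED,
    WG_EVENT_HANDSHAKE_COMPLETED,
    WG_EVENT_HANDSHAKE_FAILED,
    WG_EVENT_TUN_READ_ERROR,   /* e.g. "no buffer space available" above */
    WG_EVENT_DEVICE_CLOSED
} wg_event_t;

/* Called on the library's own goroutine/thread; 'detail' carries the
 * underlying error string, 'user_ctx' is the caller's opaque pointer. */
typedef void (*wg_event_cb)(wg_event_t event, const char *detail, void *user_ctx);

/* Hypothetical registration call, in the style of wgTurnOn/wgTurnOff. */
void wgSetEventCallback(int tunnel_handle, wg_event_cb cb, void *user_ctx);
```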

From hendrik at friedels.name Fri Mar 4 12:03:05 2022
From: hendrik at friedels.name (Hendrik Friedel)
Date: Fri, 04 Mar 2022 12:03:05 +0000
Subject: Wireguard and double NAT
Message-ID:

Hello,

I have a running server already serving several clients through a WireGuard tunnel. For this, port 51820/UDP is forwarded in my router to the server.

Now I have one client with a bit of a tricky setup: there are two routers doing NAT, and for one of them I have no control, i.e. I cannot set up port forwarding.

Is it still possible to use WireGuard to create a tunnel between my server and this tricky client? If so, is there anything special that I need to consider?

Best regards,
Hendrik

From dgeor8 at gmail.com Tue Mar 8 07:31:28 2022
From: dgeor8 at gmail.com (Dan George)
Date: Tue, 8 Mar 2022 18:31:28 +1100
Subject: [wireguard-linux-compat] fallthrough macro breaks pre-5.4 kernel compatibility?
Message-ID:

Hi all,

I just tried compiling the latest wireguard-linux-compat kernel module v1.0.20211208 for kernel 4.4 [1] and encountered the following error:

"net/wireguard/compat/siphash/siphash.c:112:36: error: 'fallthrough' undeclared (first use in this function)"

The previous version, v1.0.20210606, compiles and runs fine.

Some (inexpert) digging suggests that this is because a recent patch [2] modified siphash.c to use 'fallthrough', which is a Linux kernel compiler macro that was only added in kernel 5.4 [3]. In which case, presumably, either 'fallthrough' needs to be removed again, or the macro needs to be added somewhere in wireguard-linux-compat (a sketch of that option follows the footnotes below). I'm happy to submit a patch for either (although I would appreciate some guidance if it's the second option), or someone with more experience can. In my immediate case, removing all instances of 'fallthrough' from siphash.c has fixed compilation.

Thanks,
Dan.

[1] Specifically, in case this becomes relevant, cross-compiling for the Synology RT2600AC router (armv7l, kernel 4.4.60) from Ubuntu 20.04.4 x64 using gcc 4.9.3, according to the steps at https://gitlab.com/Kendek/syno-router-scripts/-/issues/3
[2] https://git.zx2c4.com/wireguard-linux-compat/commit/?id=ea6b8e7be5072553b37df4b0b8ee6e0a37134738
[3] https://github.com/torvalds/linux/commit/294f69e662d1570703e9b56e95be37a9fd3afba5
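As a sketch of the second option, the compat shim could look roughly like this. The guards are illustrative only, not the official wireguard-linux-compat fix; the `__has_attribute` fallback matters because compilers like the gcc 4.9.3 in [1] predate that builtin:

```c
/* Sketch of a possible 'fallthrough' shim for pre-5.4 kernels.
 * Illustrative only -- not the official wireguard-linux-compat change. */
#include <linux/version.h>

#if LINUX_VERSION_CODE < KERNEL_VERSION(5, 4, 0) && !defined(fallthrough)
#ifndef __has_attribute
#define __has_attribute(x) 0  /* e.g. gcc 4.9 predates __has_attribute */
#endif
#if __has_attribute(__fallthrough__)
#define fallthrough __attribute__((__fallthrough__))
#else
#define fallthrough do {} while (0)  /* plain no-op on old compilers */
#endif
#endif
```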

From john at gravitl.com Fri Mar 11 05:28:32 2022
From: john at gravitl.com (john at gravitl.com)
Date: Thu, 10 Mar 2022 23:28:32 -0600
Subject: [PATCH] tun/netstack: implement MaxHeaderLength
Message-ID: <20220311052832.86700-1-john@gravitl.com>

From: John Sahhar

Signed-off-by: John Sahhar
---
 tun/netstack/tun.go | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/tun/netstack/tun.go b/tun/netstack/tun.go
index 8b1bb7f..94b59f8 100644
--- a/tun/netstack/tun.go
+++ b/tun/netstack/tun.go
@@ -71,8 +71,11 @@ func (*endpoint) Capabilities() stack.LinkEndpointCapabilities {
 	return stack.CapabilityNone
 }
 
-func (*endpoint) MaxHeaderLength() uint16 {
-	return 0
+func (e *endpoint) MaxHeaderLength() uint16 {
+	if e.hasV6 && !e.hasV4 {
+		return 40
+	}
+	return 60
 }
 
 func (*endpoint) LinkAddress() tcpip.LinkAddress {
-- 
2.32.0

From stephen at networkplumber.org Mon Mar 14 17:05:48 2022
From: stephen at networkplumber.org (Stephen Hemminger)
Date: Mon, 14 Mar 2022 10:05:48 -0700
Subject: Fw: [Bug 215682] New: Long iperf test of wireguard interface causes kernel panic
Message-ID: <20220314100548.35026ad0@hermes.local>

Begin forwarded message:

Date: Mon, 14 Mar 2022 16:51:08 +0000
From: bugzilla-daemon at kernel.org
To: stephen at networkplumber.org
Subject: [Bug 215682] New: Long iperf test of wireguard interface causes kernel panic

https://bugzilla.kernel.org/show_bug.cgi?id=215682

Bug ID: 215682
Summary: Long iperf test of wireguard interface causes kernel panic
Product: Networking
Version: 2.5
Kernel Version: 5.15.27
Hardware: ARM
OS: Linux
Tree: Mainline
Status: NEW
Severity: normal
Priority: P1
Component: Other
Assignee: stephen at networkplumber.org
Reporter: alexey.kv at gmail.com
Regression: No

I have a setup of two Rock64 SBCs which I connected via LAN and created a wireguard interface on to do some benchmarking with iperf3.
After a long run (~3 days), the following kernel panic was observed on one instance:

[266706.637097] Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: netif_receive_skb_list_internal+0x2b0/0x2b0
[266706.638171] CPU: 3 PID: 9861 Comm: kworker/3:0 Tainted: G C 5.15.27-rockchip64 #trunk
[266706.638999] Hardware name: Pine64 Rock64 (DT)
[266706.639406] Workqueue: wg-crypt-wg0 wg_packet_decrypt_worker [wireguard]
[266706.640084] Call trace:
[266706.640316] dump_backtrace+0x0/0x200
[266706.640686] show_stack+0x18/0x28
[266706.640997] dump_stack_lvl+0x68/0x84
[266706.641348] dump_stack+0x18/0x34
[266706.641661] panic+0x164/0x35c
[266706.641955] __stack_chk_fail+0x3c/0x40
[266706.642309] netif_receive_skb_list+0x0/0x158
[266706.642711] gro_normal_list.part.159+0x20/0x40
[266706.643129] napi_complete_done+0xc0/0x1e8
[266706.643512] wg_packet_rx_poll+0x45c/0x8c0 [wireguard]
[266706.644000] __napi_poll+0x38/0x230
[266706.644324] net_rx_action+0x284/0x2c8
[266706.644675] _stext+0x160/0x3f8
[266706.644986] do_softirq+0xa8/0xb8
[266706.645297] __local_bh_enable_ip+0xac/0xb8
[266706.645682] _raw_spin_unlock_bh+0x34/0x60
[266706.646059] wg_packet_decrypt_worker+0x50/0x1a8 [wireguard]
[266706.646589] process_one_work+0x20c/0x4c8
[266706.646961] worker_thread+0x48/0x478
[266706.647300] kthread+0x138/0x150
[266706.647599] ret_from_fork+0x10/0x20
[266706.647931] SMP: stopping secondary CPUs
[266706.648297] Kernel Offset: disabled
[266706.648615] CPU features: 0x00001001,00000846
[266706.649009] Memory Limit: none
[266706.649293] ---[ end Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: netif_receive_skb_list_internal+0x2b0/0x2b0 ]---

--
You may reply to this email to add a comment.
You are receiving this mail because: You are the assignee for the bug.

From Jason at zx2c4.com Mon Mar 14 19:07:10 2022
From: Jason at zx2c4.com (Jason A. Donenfeld)
Date: Mon, 14 Mar 2022 13:07:10 -0600
Subject: [PATCH] tun/netstack: implement MaxHeaderLength
In-Reply-To: <20220311052832.86700-1-john@gravitl.com>
References: <20220311052832.86700-1-john@gravitl.com>

Can you describe what this is doing in a real commit message? And why is the v6 case smaller than v4?

From ashish.is at lostca.se Mon Mar 14 22:17:16 2022
From: ashish.is at lostca.se (Ashish SHUKLA)
Date: Mon, 14 Mar 2022 22:17:16 +0000
Subject: Wireguard and double NAT
Message-ID: <20220314221716.h6sfkapawd64ijv6@chateau.d.if>

On Fri, Mar 04, 2022 at 12:03:05PM +0000, Hendrik Friedel wrote:
> Hello,
>
> I have a running server already serving several clients through a WireGuard
> tunnel. For this, port 51820/UDP is forwarded in my router to the server.
>
> Now I have one client with a bit of a tricky setup: there are two routers
> doing NAT, and for one of them I have no control, i.e. I cannot set up
> port forwarding.
>
> Is it still possible to use WireGuard to create a tunnel between my server
> and this tricky client? If so, is there anything special that I need to
> consider?

IIUC, it's quite possible, as long as none of the routers are filtering packets, with the following constraints:

- the client has to initiate the connection

To increase the reliability of the connectivity:

- preferably do not use a fixed listen port on the client, especially if it's roaming or you fail over between gateways
- set up PersistentKeepalive

I have a similar setup with my internet connection (behind double NAT), and it works fine. (A minimal config sketch of these settings follows below.)

HTH
--
Ashish | GPG: F682 CDCC 39DC 0FEA E116 20B6 C746 CFA9 E74F A4B0
"Should I kill myself, or have a cup of coffee?" (Albert Camus)
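For illustration, the two recommendations above expressed as a minimal wg-quick client config; the keys, addresses, and endpoint are placeholders, not a drop-in configuration:

```
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.0.2/32
# No ListenPort line: the client binds an ephemeral source port, which
# copes better with roaming and with NAT mappings that change.

[Peer]
PublicKey = <server-public-key>
Endpoint = server.example.net:51820
AllowedIPs = 10.0.0.0/24
# Send a keepalive every 25 seconds so the mappings in both NAT routers
# stay open even when the tunnel is otherwise idle:
PersistentKeepalive = 25
```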

From Jason at zx2c4.com Wed Mar 16 04:39:08 2022
From: Jason at zx2c4.com (Jason A. Donenfeld)
Date: Tue, 15 Mar 2022 22:39:08 -0600
Subject: Need Help - When 2 WireGuard tunnels are active Windows BSOD occurs upon disconnecting/disabling Ethernet NIC

Hi Siva,

Can you send me the files in C:\windows\minidump and c:\windows\memory.dmp if it exists? This will let me figure out what has happened. These contain memory dumps of crashes, so you should probably send them to me directly rather than to the public mailing list.

Thanks for reporting the bug.

Jason

From houmie at gmail.com Wed Mar 16 13:28:30 2022
From: houmie at gmail.com (Houman)
Date: Wed, 16 Mar 2022 13:28:30 +0000
Subject: Is Wireguard supported on Mac Catalyst?
Message-ID:

Hello,

The WireGuard project currently supports both iOS and macOS natively. There is now also the option of enabling Mac Catalyst inside an iOS target to make it Mac-compatible, without having to rewrite the code in Mac's AppKit.

But does WireGuard support Catalyst? When I try to build the project with Catalyst enabled, I get this error:

ld: library not found for -lwg-go

I have been trying for hours now. It would be amazing to know whether that's possible or not.

Many Thanks,
Houman

From syzbot+fb57d2a7c4678481a495 at syzkaller.appspotmail.com Fri Mar 18 23:36:19 2022
From: syzbot+fb57d2a7c4678481a495 at syzkaller.appspotmail.com (syzbot)
Date: Fri, 18 Mar 2022 16:36:19 -0700
Subject: [syzbot] net-next test error: WARNING in __napi_schedule
Message-ID: <0000000000000eaff805da869d5b@google.com>

Hello,

syzbot found the following issue on:

HEAD commit:    e89600ebeeb1 af_vsock: SOCK_SEQPACKET broken buffer test
git tree:       net-next
console output: https://syzkaller.appspot.com/x/log.txt?x=134d43d5700000
kernel config:  https://syzkaller.appspot.com/x/.config?x=ef691629edb94d6a
dashboard link: https://syzkaller.appspot.com/bug?extid=fb57d2a7c4678481a495
compiler:       gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+fb57d2a7c4678481a495 at syzkaller.appspotmail.com

------------[ cut here ]------------
WARNING: CPU: 0 PID: 1133 at net/core/dev.c:4268 ____napi_schedule net/core/dev.c:4268 [inline]
WARNING: CPU: 0 PID: 1133 at net/core/dev.c:4268 __napi_schedule+0xe2/0x440 net/core/dev.c:5878
Modules linked in:
CPU: 0 PID: 1133 Comm: kworker/0:3 Not tainted 5.17.0-rc8-syzkaller-02525-ge89600ebeeb1 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: wg-crypt-wg0 wg_packet_decrypt_worker
RIP: 0010:____napi_schedule net/core/dev.c:4268 [inline]
RIP: 0010:__napi_schedule+0xe2/0x440 net/core/dev.c:5878
Code: 74 4a e8 31 16 47 fa 31 ff 65 44 8b 25 47 c5 d0 78 41 81 e4 00 ff 0f 00 44 89 e6 e8 98 19 47 fa 45 85 e4 75 07 e8 0e 16 47 fa <0f> 0b e8 07 16 47 fa 65 44 8b 25 5f cf d0 78 31 ff 44 89 e6 e8 75
RSP: 0018:ffffc900057d7c88 EFLAGS: 00010093
RAX: 0000000000000000 RBX: ffff88801e680748 RCX: 0000000000000000
RDX: ffff88801ccb0000 RSI: ffffffff8731aa92 RDI: 0000000000000003
RBP: 0000000000000200 R08: 0000000000000000 R09: 0000000000000001
R10: ffffffff8731aa88 R11: 0000000000000000 R12: 0000000000000000
R13: ffff8880b9c00000 R14: 000000000003adc0 R15: ffff88801e118ec0
FS:  0000000000000000(0000) GS:ffff8880b9c00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fdaa5c65300 CR3: 0000000070af4000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 napi_schedule include/linux/netdevice.h:465 [inline]
 wg_queue_enqueue_per_peer_rx drivers/net/wireguard/queueing.h:204 [inline]
 wg_packet_decrypt_worker+0x408/0x5d0 drivers/net/wireguard/receive.c:510
 process_one_work+0x9ac/0x1650 kernel/workqueue.c:2307
 worker_thread+0x657/0x1110 kernel/workqueue.c:2454
 kthread+0x2e9/0x3a0 kernel/kthread.c:377
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295

---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller at googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

From Jason at zx2c4.com Sat Mar 19 00:47:38 2022
From: Jason at zx2c4.com (Jason A. Donenfeld)
Date: Fri, 18 Mar 2022 18:47:38 -0600
Subject: [PATCH] net: remove lockdep asserts from ____napi_schedule()
Message-ID: <20220319004738.1068685-1-Jason@zx2c4.com>

This reverts commit fbd9a2ceba5c ("net: Add lockdep asserts to
____napi_schedule()."). While good in theory, in practice it causes
issues with various drivers, and so it can be revisited earlier in the
cycle where those drivers can be adjusted if needed.

Link: https://lore.kernel.org/netdev/20220317192145.g23wprums5iunx6c at sx1/
Link: https://lore.kernel.org/netdev/CAHmME9oHFzL6CYVh8nLGkNKOkMeWi2gmxs_f7S8PATWwc6uQsw at mail.gmail.com/
Link: https://lore.kernel.org/wireguard/0000000000000eaff805da869d5b at google.com/
Cc: Sebastian Andrzej Siewior
Cc: Jakub Kicinski
Cc: Saeed Mahameed
Cc: Eric Dumazet
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Signed-off-by: Jason A. Donenfeld
---
 include/linux/lockdep.h | 7 -------
 net/core/dev.c          | 5 +----
 2 files changed, 1 insertion(+), 11 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 0cc65d216701..467b94257105 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -329,12 +329,6 @@ extern void lock_unpin_lock(struct lockdep_map *lock, struct pin_cookie);
 #define lockdep_assert_none_held_once() \
 	lockdep_assert_once(!current->lockdep_depth)
 
-/*
- * Ensure that softirq is handled within the callchain and not delayed and
- * handled by chance.
- */
-#define lockdep_assert_softirq_will_run() \
-	lockdep_assert_once(hardirq_count() | softirq_count())
 
 #define lockdep_recursing(tsk) ((tsk)->lockdep_recursion)
 
@@ -420,7 +414,6 @@ extern int lockdep_is_held(const void *);
 #define lockdep_assert_held_read(l) do { (void)(l); } while (0)
 #define lockdep_assert_held_once(l) do { (void)(l); } while (0)
 #define lockdep_assert_none_held_once() do { } while (0)
-#define lockdep_assert_softirq_will_run() do { } while (0)
 
 #define lockdep_recursing(tsk) (0)
 
diff --git a/net/core/dev.c b/net/core/dev.c
index 8e0cc5f2020d..6cad39b73a8e 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4277,9 +4277,6 @@ static inline void ____napi_schedule(struct softnet_data *sd,
 {
 	struct task_struct *thread;
 
-	lockdep_assert_softirq_will_run();
-	lockdep_assert_irqs_disabled();
-
 	if (test_bit(NAPI_STATE_THREADED, &napi->state)) {
 		/* Paired with smp_mb__before_atomic() in
 		 * napi_enable()/dev_set_threaded().
@@ -4887,7 +4884,7 @@ int __netif_rx(struct sk_buff *skb)
 {
 	int ret;
 
-	lockdep_assert_softirq_will_run();
+	lockdep_assert_once(hardirq_count() | softirq_count());
 
 	trace_netif_rx_entry(skb);
 
 	ret = netif_rx_internal(skb);
-- 
2.35.1

From Jason at zx2c4.com Sat Mar 19 00:50:08 2022
From: Jason at zx2c4.com (Jason A. Donenfeld)
Date: Fri, 18 Mar 2022 18:50:08 -0600
Subject: [PATCH] net: remove lockdep asserts from ____napi_schedule()
In-Reply-To: <20220319004738.1068685-1-Jason@zx2c4.com>
References: <20220319004738.1068685-1-Jason@zx2c4.com>

Hi Jakub,

Er, I forgot to mark this as net-next, but as it's connected to the discussion we were just having, I think you get the idea. :)

Jason

From kuba at kernel.org Sat Mar 19 04:31:12 2022
From: kuba at kernel.org (Jakub Kicinski)
Date: Fri, 18 Mar 2022 21:31:12 -0700
Subject: [PATCH] net: remove lockdep asserts from ____napi_schedule()
In-Reply-To: <20220319004738.1068685-1-Jason@zx2c4.com>
References: <20220319004738.1068685-1-Jason@zx2c4.com>
Message-ID: <20220318213112.33289a68@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>

On Fri, 18 Mar 2022 18:50:08 -0600 Jason A. Donenfeld wrote:
> Hi Jakub,
>
> Er, I forgot to mark this as net-next, but as it's connected to the
> discussion we were just having, I think you get the idea. :)

Yup, patchwork bot figured it out, too. All good :)

From syzbot+6f21ac9e27fca7e97623 at syzkaller.appspotmail.com Sat Mar 19 08:16:24 2022
From: syzbot+6f21ac9e27fca7e97623 at syzkaller.appspotmail.com (syzbot)
Date: Sat, 19 Mar 2022 01:16:24 -0700
Subject: [syzbot] linux-next test error: WARNING in __napi_schedule
Message-ID: <000000000000110dee05da8de18a@google.com>

Hello,

syzbot found the following issue on:

HEAD commit:    6d72dda014a4 Add linux-next specific files for 20220318
git tree:       linux-next
console output: https://syzkaller.appspot.com/x/log.txt?x=124f5589700000
kernel config:  https://syzkaller.appspot.com/x/.config?x=5907d82c35688f04
dashboard link: https://syzkaller.appspot.com/bug?extid=6f21ac9e27fca7e97623
compiler:       gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+6f21ac9e27fca7e97623 at syzkaller.appspotmail.com

------------[ cut here ]------------
WARNING: CPU: 0 PID: 3612 at net/core/dev.c:4268 ____napi_schedule net/core/dev.c:4268 [inline]
WARNING: CPU: 0 PID: 3612 at net/core/dev.c:4268 __napi_schedule+0xe2/0x440 net/core/dev.c:5878
Modules linked in:
CPU: 0 PID: 3612 Comm: kworker/0:5 Not tainted 5.17.0-rc8-next-20220318-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: wg-crypt-wg0 wg_packet_decrypt_worker
RIP: 0010:____napi_schedule net/core/dev.c:4268 [inline]
RIP: 0010:__napi_schedule+0xe2/0x440 net/core/dev.c:5878
Code: 74 4a e8 11 61 3c fa 31 ff 65 44 8b 25 d7 27 c6 78 41 81 e4 00 ff 0f 00 44 89 e6 e8 18 63 3c fa 45 85 e4 75 07 e8 ee 60 3c fa <0f> 0b e8 e7 60 3c fa 65 44 8b 25 f7 31 c6 78 31 ff 44 89 e6 e8 f5
RSP: 0018:ffffc9000408fc78 EFLAGS: 00010093
RAX: 0000000000000000 RBX: ffff88807fa90748 RCX: 0000000000000000
RDX: ffff888019800000 RSI: ffffffff873c4802 RDI: 0000000000000003
RBP: 0000000000000200 R08: 0000000000000000 R09: 0000000000000001
R10: ffffffff873c47f8 R11: 0000000000000000 R12: 0000000000000000
R13: ffff8880b9c00000 R14: 000000000003b100 R15: ffff88801cf90ec0
FS:  0000000000000000(0000) GS:ffff8880b9c00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f998512c300 CR3: 00000000707f2000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 napi_schedule include/linux/netdevice.h:465 [inline]
 wg_queue_enqueue_per_peer_rx drivers/net/wireguard/queueing.h:204 [inline]
 wg_packet_decrypt_worker+0x408/0x5d0 drivers/net/wireguard/receive.c:510
 process_one_work+0x996/0x1610 kernel/workqueue.c:2289
 worker_thread+0x665/0x1080 kernel/workqueue.c:2436
 kthread+0x2e9/0x3a0 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298

---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller at googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

From bigeasy at linutronix.de Sat Mar 19 12:01:04 2022
From: bigeasy at linutronix.de (Sebastian Andrzej Siewior)
Date: Sat, 19 Mar 2022 13:01:04 +0100
Subject: [PATCH] net: remove lockdep asserts from ____napi_schedule()
In-Reply-To: <20220319004738.1068685-1-Jason@zx2c4.com>
References: <20220319004738.1068685-1-Jason@zx2c4.com>
On 2022-03-18 18:47:38 [-0600], Jason A. Donenfeld wrote:
> This reverts commit fbd9a2ceba5c ("net: Add lockdep asserts to
> ____napi_schedule()."). While good in theory, in practice it causes
> issues with various drivers, and so it can be revisited earlier in the
> cycle where those drivers can be adjusted if needed.

Do you plan to address the wireguard warning?

> > --- a/net/core/dev.c
> > +++ b/net/core/dev.c
> > @@ -4277,9 +4277,6 @@ static inline void ____napi_schedule(struct softnet_data *sd,
> > {
> > 	struct task_struct *thread;
> >
> > -	lockdep_assert_softirq_will_run();
> > -	lockdep_assert_irqs_disabled();

Could you please keep that lockdep_assert_irqs_disabled()? That is needed regardless of the upper one.

Sebastian

From Jason at zx2c4.com Mon Mar 21 07:24:35 2022
From: Jason at zx2c4.com (Jason A. Donenfeld)
Date: Mon, 21 Mar 2022 01:24:35 -0600
Subject: [PATCH] net: remove lockdep asserts from ____napi_schedule()
References: <20220319004738.1068685-1-Jason@zx2c4.com>

Hi Sebastian,

On Sat, Mar 19, 2022 at 6:01 AM Sebastian Andrzej Siewior wrote:
> On 2022-03-18 18:47:38 [-0600], Jason A. Donenfeld wrote:
> > This reverts commit fbd9a2ceba5c ("net: Add lockdep asserts to
> > ____napi_schedule()."). While good in theory, in practice it causes
> > issues with various drivers, and so it can be revisited earlier in the
> > cycle where those drivers can be adjusted if needed.
>
> Do you plan to address the wireguard warning?

It seemed to me like you had a lot of interesting ideas regarding packet batching and performance, and around when bh is enabled or not. I'm waiting for a patch from you on this, as I mentioned in my previous email. There is definitely a lot of interesting potential performance work here. I am curious to play around with it too, of course, but it sounded to me like you had very specific ideas. I'm not really sure how to determine how many packets to batch, except through empirical observation or some kind of crazy dql thing. Or maybe there's some optimal quantity due to the way napi works in the first place. Anyway, there's some research to do here.

> > -	lockdep_assert_softirq_will_run();
> > -	lockdep_assert_irqs_disabled();
>
> Could you please keep that lockdep_assert_irqs_disabled()? That is
> needed regardless of the upper one.

Feel free to send in a more specific revert if you think it's justifiable. I just sent in the thing that reverted the patch that caused the regression - the dumb brute approach.
Jason

From kuba at kernel.org Mon Mar 21 19:08:46 2022
From: kuba at kernel.org (Jakub Kicinski)
Date: Mon, 21 Mar 2022 12:08:46 -0700
Subject: [syzbot] net-next test error: WARNING in __napi_schedule
In-Reply-To: <0000000000000eaff805da869d5b@google.com>
References: <0000000000000eaff805da869d5b@google.com>
Message-ID: <20220321120846.4441e49a@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>

On Fri, 18 Mar 2022 16:36:19 -0700 syzbot wrote:
> Hello,
>
> syzbot found the following issue on:
>
> HEAD commit:    e89600ebeeb1 af_vsock: SOCK_SEQPACKET broken buffer test
> git tree:       net-next
> console output: https://syzkaller.appspot.com/x/log.txt?x=134d43d5700000
> kernel config:  https://syzkaller.appspot.com/x/.config?x=ef691629edb94d6a
> dashboard link: https://syzkaller.appspot.com/bug?extid=fb57d2a7c4678481a495
> compiler:       gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2

#syz fix: net: Revert the softirq will run annotation in ____napi_schedule().

From kuba at kernel.org Mon Mar 21 19:08:49 2022
From: kuba at kernel.org (Jakub Kicinski)
Date: Mon, 21 Mar 2022 12:08:49 -0700
Subject: [syzbot] linux-next test error: WARNING in __napi_schedule
In-Reply-To: <000000000000110dee05da8de18a@google.com>
References: <000000000000110dee05da8de18a@google.com>
Message-ID: <20220321120849.1c87c4a4@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>

On Sat, 19 Mar 2022 01:16:24 -0700 syzbot wrote:
> Hello,
>
> syzbot found the following issue on:
>
> HEAD commit:    6d72dda014a4 Add linux-next specific files for 20220318
> git tree:       linux-next
> console output: https://syzkaller.appspot.com/x/log.txt?x=124f5589700000
> kernel config:  https://syzkaller.appspot.com/x/.config?x=5907d82c35688f04
> dashboard link: https://syzkaller.appspot.com/bug?extid=6f21ac9e27fca7e97623
> compiler:       gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2

#syz fix: net: Revert the softirq will run annotation in ____napi_schedule().

From lugo at uw.edu Mon Mar 14 17:16:28 2022
From: lugo at uw.edu (Luki Goldschmidt)
Date: Mon, 14 Mar 2022 10:16:28 -0700
Subject: Performance on 10G / 1500 MTU link
Message-ID: <506c5c38-8451-7982-f2fe-d9a489e71752@ipd.uw.edu>

Hi,

Has anyone succeeded in saturating a 10Gb link with an MTU of 1500 using WireGuard?

On a LAN with 10Gb or 40Gb links, I'm getting 5-6 Gbps throughput with WireGuard (tunnel MTU 1420). Without WireGuard, I have no problem pushing 9.8 and 35 Gbps, respectively. When I increase the tunnel MTU to 8920, I can easily push 9.3 Gbps through WireGuard. I'm testing with iperf using a single transfer or multiple parallel transfers. I tried kernels 5.14 and 5.15, and a range of CPUs such as Intel E5-2680v4, E3-1270v6, and Xeon Silver 4114. The bottleneck seems to be linked to packets per second (see the rough arithmetic below).

I can't use jumbo frames over the WAN connection, so MTU 1500 (link) will have to be it, but I'd love to get the most out of the 10 Gbps connection. The WAN link latency is only ~1 ms, so it ought to be doable. Any tuning tips are appreciated.

Luki
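Some rough packet-rate arithmetic based on the figures above supports that reading (approximate, assuming one tunnel packet per MTU-sized payload):

```
6.0 Gbit/s at MTU 1420:  6.0e9 / (1420 bytes * 8 bits)  ~ 528,000 packets/s
9.3 Gbit/s at MTU 8920:  9.3e9 / (8920 bytes * 8 bits)  ~ 130,000 packets/s
```

The jumbo-MTU run moves about half again as much data with roughly a quarter of the packets, which is consistent with a per-packet rather than per-byte bottleneck.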

From mail at vvzvlad.xyz Wed Mar 16 21:10:21 2022
From: mail at vvzvlad.xyz (vvzvlad)
Date: Thu, 17 Mar 2022 00:10:21 +0300
Subject: Bug in the macOS client wireguard: disabling the tunnel from the context menu is impossible
Message-ID:

In your repository (https://git.zx2c4.com/wireguard-apple/) I have not found GitHub issues or anything similar where I could file an issue, so I am writing to the list, as advised by one of the committers.

I am actively using the WG client on my Apple devices, and while the iOS clients work fine, with the macOS versions I have problems from time to time, so I want to point them out to the developers. Here is, for example, one of the problems: https://www.dropbox.com/s/4bit2xem8g1wmk8/Screen%202022-03-13%20at%2017.59.43.mov?dl=0

If the "on demand" option is enabled, then after disconnecting the tunnel from the context menu or from the system settings, it immediately reconnects.

If the developers are interested, I could describe how to reproduce some of these kinds of problems and give some thoughts on usability (I compare it to Warp, which is very nice to use).

From laodeng99 at 163.com Thu Mar 17 22:58:55 2022
From: laodeng99 at 163.com (Lao Deng)
Date: Fri, 18 Mar 2022 06:58:55 +0800 (CST)
Subject: Introduce Navigation component to Wireguard-Android
Message-ID: <7fd66fe3.14e.17f9a1b030e.Coremail.laodeng99@163.com>

I am going to add some unrelated functions to Wireguard-Android. The final app should look like the demo of com.google.android.material.bottomnavigation.BottomNavigationView, which consists of a bottom navigation component and several independent fragments (or activities).

I am wondering how to introduce this navigation component to the wireguard-android project with the least modification :) Could you please shed some light on it?

Thank you for your attention :)

From fokin33 at gmail.com Fri Mar 18 11:18:53 2022
From: fokin33 at gmail.com (Fokin Denis)
Date: Fri, 18 Mar 2022 14:18:53 +0300
Subject: Too little timeout for NetworkWatcher
Message-ID:

Hello!

The timeout for NetworkWatcher is one minute now. That is too little on some PCs. For example, startup takes about 2 minutes here (I use a modified version with the timeout set to one hour):

2022-03-17 00:22:57.612: [TUN] [client1] Starting WireGuard/0.5.3 (Windows 6.1.7601; amd64)
2022-03-17 00:22:57.612: [TUN] [client1] Watching network interfaces
2022-03-17 00:22:57.612: [TUN] [client1] Resolving DNS names
2022-03-17 00:22:57.612: [TUN] [client1] Creating network adapter
2022-03-17 00:22:57.676: [TUN] [client1] Using existing driver 0.10
2022-03-17 00:22:57.680: [TUN] [client1] Creating adapter
2022-03-17 00:22:58.056: [TUN] [client1] Using WireGuardNT/0.10
2022-03-17 00:22:58.056: [TUN] [client1] Enabling firewall rules
2022-03-17 00:22:58.029: [TUN] [client1] Interface created
2022-03-17 00:22:58.059: [TUN] [client1] Dropping privileges
2022-03-17 00:22:58.060: [TUN] [client1] Setting interface configuration
2022-03-17 00:22:58.060: [TUN] [client1] Peer 1 created
2022-03-17 00:22:58.060: [TUN] [client1] Sending keepalive packet to peer 1 (167.99.208.250:51820)
2022-03-17 00:22:58.060: [TUN] [client1] Sending handshake initiation to peer 1 (167.99.208.250:51820)
2022-03-17 00:22:58.060: [TUN] [client1] Interface up
2022-03-17 00:22:58.107: [TUN] [client1] Receiving handshake response from peer 1 (167.99.208.250:51820)
2022-03-17 00:22:58.107: [TUN] [client1] Keypair 1 created for peer 1
2022-03-17 00:23:19.329: [TUN] [client1] Sending keepalive packet to peer 1 (167.99.208.250:51820)
2022-03-17 00:23:40.579: [TUN] [client1] Sending keepalive packet to peer 1 (167.99.208.250:51820)
2022-03-17 00:24:01.830: [TUN] [client1] Sending keepalive packet to peer 1 (167.99.208.250:51820)
2022-03-17 00:24:23.079: [TUN] [client1] Sending keepalive packet to peer 1 (167.99.208.250:51820)
2022-03-17 00:24:28.038: [TUN] [client1] Monitoring MTU of default v4 routes
2022-03-17 00:24:28.107: [TUN] [client1] Setting device v4 addresses
2022-03-17 00:24:28.239: [TUN] [client1] Monitoring MTU of default v6 routes
2022-03-17 00:24:28.239: [TUN] [client1] Startup complete
2022-03-17 00:24:28.239: [TUN] [client1] Setting device v6 addresses

Maybe set the timeout to 5 minutes, or make it a setting?

Thank you for the great VPN!

From k.seidel at q1.eu Fri Mar 18 12:34:26 2022
From: k.seidel at q1.eu (Kevin Seidel)
Date: Fri, 18 Mar 2022 13:34:26 +0100
Subject: [PATCH] Always show "Last Handshake" and "Rx / Tx Bytes"
Message-ID: <20220318123426.12414-1-k.seidel@q1.eu>

From: moogle19

---
 Sources/WireGuardApp/UI/TunnelViewModel.swift | 37 +++++++++----------
 1 file changed, 18 insertions(+), 19 deletions(-)

diff --git a/Sources/WireGuardApp/UI/TunnelViewModel.swift b/Sources/WireGuardApp/UI/TunnelViewModel.swift
index b65c8cc..1c578c5 100644
--- a/Sources/WireGuardApp/UI/TunnelViewModel.swift
+++ b/Sources/WireGuardApp/UI/TunnelViewModel.swift
@@ -309,7 +309,7 @@ class TunnelViewModel {
             scratchpad[.preSharedKey] = preSharedKey
         }
         if !config.allowedIPs.isEmpty {
-            scratchpad[.allowedIPs] = config.allowedIPs.map { $0.stringRepresentation }.joined(separator: ", ")
+            scratchpad[.allowedIPs] = config.allowedIPs.map(\.stringRepresentation).joined(separator: ", ")
         }
         if let endpoint = config.endpoint {
             scratchpad[.endpoint] = endpoint.stringRepresentation
@@ -317,15 +317,10 @@ class TunnelViewModel {
         if let persistentKeepAlive = config.persistentKeepAlive {
             scratchpad[.persistentKeepAlive] = String(persistentKeepAlive)
         }
-        if let rxBytes = config.rxBytes {
-            scratchpad[.rxBytes] = prettyBytes(rxBytes)
-        }
-        if let txBytes = config.txBytes {
-            scratchpad[.txBytes] = prettyBytes(txBytes)
-        }
-        if let lastHandshakeTime = config.lastHandshakeTime {
-            scratchpad[.lastHandshakeTime] = prettyTimeAgo(timestamp: lastHandshakeTime)
-        }
+        scratchpad[.rxBytes] = config.rxBytes?.prettyBytes() ?? "-"
+        scratchpad[.txBytes] = config.txBytes?.prettyBytes() ?? "-"
+        scratchpad[.lastHandshakeTime] = config.lastHandshakeTime?.prettyTimeAgo() ?? "-"
"-" + return scratchpad } @@ -624,30 +619,34 @@ class TunnelViewModel { } } -private func prettyBytes(_ bytes: UInt64) -> String { - switch bytes { +private extension UInt64 { + func prettyBytes() -> String { + switch self { case 0..<1024: - return "\(bytes) B" + return "\(self) B" case 1024 ..< (1024 * 1024): - return String(format: "%.2f", Double(bytes) / 1024) + " KiB" + return String(format: "%.2f", Double(self) / 1024) + " KiB" case 1024 ..< (1024 * 1024 * 1024): - return String(format: "%.2f", Double(bytes) / (1024 * 1024)) + " MiB" + return String(format: "%.2f", Double(self) / (1024 * 1024)) + " MiB" case 1024 ..< (1024 * 1024 * 1024 * 1024): - return String(format: "%.2f", Double(bytes) / (1024 * 1024 * 1024)) + " GiB" + return String(format: "%.2f", Double(self) / (1024 * 1024 * 1024)) + " GiB" default: - return String(format: "%.2f", Double(bytes) / (1024 * 1024 * 1024 * 1024)) + " TiB" + return String(format: "%.2f", Double(self) / (1024 * 1024 * 1024 * 1024)) + " TiB" } + } } -private func prettyTimeAgo(timestamp: Date) -> String { +private extension Date { + func prettyTimeAgo() -> String { let now = Date() - let timeInterval = Int64(now.timeIntervalSince(timestamp)) + let timeInterval = Int64(now.timeIntervalSince(self)) switch timeInterval { case ..<0: return tr("tunnelHandshakeTimestampSystemClockBackward") case 0: return tr("tunnelHandshakeTimestampNow") default: return tr(format: "tunnelHandshakeTimestampAgo (%@)", prettyTime(secondsLeft: timeInterval)) } + } } private func prettyTime(secondsLeft: Int64) -> String { -- 2.35.1 From brcisna at gmail.com Sat Mar 19 14:20:32 2022 From: brcisna at gmail.com (Barry Cisna) Date: Sat, 19 Mar 2022 09:20:32 -0500 Subject: Systemd-resolved problem Message-ID: Hello, Newbie to Wireguard, 1- Server VPS in Google Cloud Debian Bullseye, 2- Home Server Debian Bullseye , behind CGNAT. After many hours trying to get this to work,,here is situation, from 'client' server2 i can ping any external /internet ip address, i cannot web browse/ resolve names. Have tried multiple iptables examples. So now find that when i do an sudo wg-quick up systemd-resolved.service is in a 'degraded state'. As soon as i down the wg0 the systemd-resolved service is running normally again when doing a resolvectl with the wg0 interface up there are no dns servers on the wg0 interface From rgovostes at gmail.com Sat Mar 19 18:34:30 2022 From: rgovostes at gmail.com (Ryan Govostes) Date: Sat, 19 Mar 2022 14:34:30 -0400 Subject: macOS: WireGuard traffic sent over wrong interface Message-ID: I wasn't able to find a bug tracker so apologies if this is a known issue. I?m running macOS 12.3 with WireGuard 1.0.15 from the App Store. My WireGuard peer is on my corporate network, to which I am first connecting via Palo Alto Networks GlobalProtect. If I use `route get ` then macOS reports that it will send traffic to the peer via the GlobalProtect tunnel interface. And I can confirm this by sending UDP traffic from macOS to the peer server and monitoring it going over that interface using Wireshark. However, when I then turn on the WireGuard tunnel, it sends its traffic over en0, my Wi-Fi interface, over which the peer is not reachable. 
As a workaround, I can configure my endpoint as localhost and use socat to redirect the traffic over the correct interface:

socat -T 3600 udp-listen:51820,reuseaddr,fork udp::51820

Ryan

From alexey.ponkin at gmail.com Sun Mar 20 15:24:38 2022
From: alexey.ponkin at gmail.com (Alexey Ponkin)
Date: Sun, 20 Mar 2022 16:24:38 +0100
Subject: WireGuardKit iOS - Import package and usage of 'Shared' classes
Message-ID:

Hi guys,

I'm trying to use WireGuardKit in my iOS app. I imported the package as described here: https://github.com/WireGuard/wireguard-apple. Now I can use `PacketTunnelProvider` inside `WireGuardNetworkExtension`. Unfortunately, I can't use any classes and extensions from the `Shared` folder (https://github.com/WireGuard/wireguard-apple/tree/master/Sources/Shared). Is there any way to make them 'visible' to my project? I'm fairly new to Swift and iOS development. I would like, for instance, to reuse this extension (https://github.com/WireGuard/wireguard-apple/blob/master/Sources/Shared/Model/NETunnelProviderProtocol%2BExtension.swift) and maybe the `Keychain` wrapper class.

Thanks in advance for your help.

From and at mullvad.net Tue Mar 22 11:50:12 2022
From: and at mullvad.net (Andrej Mihajlov)
Date: Tue, 22 Mar 2022 12:50:12 +0100
Subject: WireGuardKit iOS - Import package and usage of 'Shared' classes
Message-ID: <49AF0CB9-0474-40D2-813B-4B1C22CB8161@mullvad.net>

Hi,

The source code under Sources/Shared is part of the WireGuard app. These files are checked out by SPM because both WireGuardKit and the WireGuard app share the same repository. However, these files aren't part of WireGuardKit and are thus not available for direct import via WireGuardKit.

Best,
Andrej

> On 20 Mar 2022, at 16:24, Alexey Ponkin wrote:
>
> Hi guys,
> I'm trying to use WireGuardKit in my iOS app. I imported the package as
> described here: https://github.com/WireGuard/wireguard-apple. Now I
> can use `PacketTunnelProvider` inside `WireGuardNetworkExtension`.
> Unfortunately, I can't use any classes and extensions from the
> `Shared` folder
> (https://github.com/WireGuard/wireguard-apple/tree/master/Sources/Shared).
> Is there any way to make them 'visible' to my project? I'm fairly new
> to Swift and iOS development. I would like, for instance, to reuse
> this extension (https://github.com/WireGuard/wireguard-apple/blob/master/Sources/Shared/Model/NETunnelProviderProtocol%2BExtension.swift)
> and maybe the `Keychain` wrapper class.
> Thanks in advance for your help.

From me at msfjarvis.dev Thu Mar 24 06:06:51 2022
From: me at msfjarvis.dev (Harsh Shandilya)
Date: Thu, 24 Mar 2022 06:06:51 +0000
Subject: Introduce Navigation component to Wireguard-Android
In-Reply-To: <7fd66fe3.14e.17f9a1b030e.Coremail.laodeng99@163.com>
References: <7fd66fe3.14e.17f9a1b030e.Coremail.laodeng99@163.com>

Hi,

On 2022-03-17 22:58, Lao Deng wrote:
> I am going to add some unrelated functions to Wireguard-Android. The
> final app should look like the demo of
> com.google.android.material.bottomnavigation.BottomNavigationView,
> which consists of a bottom navigation component and several
> independent fragments (or activities).
>
> I am wondering how to introduce this navigation component to the
> wireguard-android project with the least modification :) Could you
> please shed some light on it?
> Thank you for your attention :)
>
I'm a bit confused: is this change something you are proposing for the upstream project, or are you looking for assistance to implement it in a fork? I don't think we want a Bottom Navigation component in the app itself, since it adds no value to the app; the only new destination we could add with it would be a quicker entry into settings, which is not a high-priority destination requiring such a prominent navigation action.

If you want to add it to a fork, the only real help I can provide is that MainActivity inflates the main_activity.xml layout, which in turn handles inflating TunnelListFragment, so you'd want to make changes to MainActivity and its layout to add the component.

--
Harsh Shandilya

From houmie at gmail.com Sat Mar 26 12:48:51 2022
From: houmie at gmail.com (Houman)
Date: Sat, 26 Mar 2022 12:48:51 +0000
Subject: Is Wireguard supported on Mac Catalyst?
Message-ID:

Hi Jason,

I hope all is well. I was hoping to nudge you on this issue again to see if you could clarify it, please. Does WireGuard support Mac Catalyst (macOS 11.0) in the Apple repo? https://github.com/WireGuard/wireguard-apple

Mac Catalyst is supported natively in Xcode and allows turning an iOS app into a Mac app.

Many Thanks,
Houman

On Wed, 16 Mar 2022 at 13:28, Houman wrote:
>
> Hello,
>
> The WireGuard project currently supports both iOS and macOS natively.
> There is now also the option of enabling Mac Catalyst inside an iOS
> target to make it Mac-compatible, without having to rewrite the
> code in Mac's AppKit.
>
> But does WireGuard support Catalyst?
>
> When I try to build the project with Catalyst enabled, I get this error:
>
> ld: library not found for -lwg-go
>
> I have been trying for hours now. It would be amazing to know whether
> that's possible or not.
>
> Many Thanks,
> Houman

From Jason at zx2c4.com Tue Mar 29 16:29:55 2022
From: Jason at zx2c4.com (Jason A. Donenfeld)
Date: Tue, 29 Mar 2022 12:29:55 -0400
Subject: [PATCH net] wireguard: socket: fix memory leak in send6()
In-Reply-To: <20220329121552.661647-1-wanghai38@huawei.com>
References: <20220329121552.661647-1-wanghai38@huawei.com>

Applied, thanks for the patch.

From brcisna at gmail.com Wed Mar 23 20:08:03 2022
From: brcisna at gmail.com (Barry Cisna)
Date: Wed, 23 Mar 2022 20:08:03 -0000
Subject: WG IPV6 problem
Message-ID:

Hello All,

I have WireGuard set up with PEER1 on a Debian Bullseye machine residing on a Google Cloud instance, IPv4 only. PEER2 is the home Debian Bullseye server behind a CGNAT cellular provider.

After a learning curve, these connect fine, and PEER2 does get PEER1's public IPv4 address with a US Cellular hotspot connected to PEER2. But on the same PEER2, if I use a T-Mobile hotspot (again CGNAT), the internet still works fine and I can ping PEER1, but PEER2 does not get PEER1's public IP address. The only thing I see different is that the T-Mobile hotspot is set to IPv6 only in the GUI settings of the hotspot. I did note that in the Network Manager GUI settings of PEER2, IPv6 is set to 'Ignore'; I haven't yet tried changing that to 'Automatic'.

What would I change in the PEER2 Network Manager config to make this work with IPv6?
Thank You

From brcisna at gmail.com Fri Mar 25 23:12:00 2022
From: brcisna at gmail.com (Barry Cisna)
Date: Fri, 25 Mar 2022 23:12:00 -0000
Subject: one subnet not pingable
Message-ID:

Hello All,

Peer2 (client) - Debian Bullseye
wwan0 = 100.64.2.161/30 # cellular modem, CGNAT
bridge0 = ethernet & wifi interfaces, 192.168.67.1
wg0client2 = 192.168.67.2

Peer1 (server) - Google Cloud instance, Debian Bullseye, static IPv4 address
ens4 = 10.128.0.2
wg0 = 192.168.69.1

PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o ens4 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o ens4 -j MASQUERADE

On Peer2, clients connected to the LAN/bridge0, both wired and wireless, can web browse, but it is delayed, and DNS is not exactly right. Peer2 can ping Peer1 fine and gets responses on both interface addresses. If Peer1 pings Peer2 at 192.168.67.1, it returns an error ("no message ... something") and the reply comes from 192.168.69.1. If Peer1 pings Peer2 at wwan0, it gets a response.

So it seems wwan0 cannot hop to the bridge0 interface for some reason. I have tried for hours to create static routes that I think may work, and always get "route already exists". I tried a few iptables guesses on the client; no go.

Thanks

From erwan at rail.eu.org Sat Mar 26 20:27:22 2022
From: erwan at rail.eu.org (Erwan David)
Date: Sat, 26 Mar 2022 20:27:22 -0000
Subject: Choosing local IP address
Message-ID: <91765b65-5daa-c699-4a72-b59b0f6f9ebb@rail.eu.org>

Hello,

I have a WireGuard setup between my home router (and the home network behind it) and a distant FreeBSD server with several jails. I use IPv6 for transport, but I have a routing problem: when at home I need to ssh to the server, and if I use the server's main IPv6 address as the endpoint address (on the home router), I end up with traffic half outside the tunnel (from home to server) and half inside the tunnel (from server to home).

So I chose to add an IPv6 address to the server, route it outside the tunnel, and use it only for the tunnel. But I cannot tell WireGuard on the server to use this address, so I get packets from the main address, my router changes the endpoint address, and the tunnel does not work.

How can I tell WireGuard which IP address to use when sending the encrypted packets to the endpoint?

--
Erwan

From cf.natali at gmail.com Tue Mar 29 22:16:58 2022
From: cf.natali at gmail.com (Charles-François Natali)
Date: Tue, 29 Mar 2022 22:16:58 -0000
Subject: CPU round-robin and isolated cores
Message-ID:

Hi!

We've run into an issue where wireguard doesn't play nice with isolated cores (the `isolcpus` kernel parameter). Basically, we use `isolcpus` to isolate cores and explicitly bind our low-latency processes to those cores, in order to minimize latency due to the kernel and userspace.

It worked great until we started using wireguard. In particular, the problem seems to be the way work is allocated to the workqueues created here:
https://github.com/torvalds/linux/blob/ae085d7f9365de7da27ab5c0d16b12d51ea7fca9/drivers/net/wireguard/device.c#L335

I'm not familiar with the wireguard code at all so might be missing something, but looking at e.g.
https://github.com/torvalds/linux/blob/ae085d7f9365de7da27ab5c0d16b12d51ea7fca9/drivers/net/wireguard/receive.c#L575
and
https://github.com/torvalds/linux/blob/ae085d7f9365de7da27ab5c0d16b12d51ea7fca9/drivers/net/wireguard/queueing.h#L176
it seems that the RX path uses round-robin to dispatch the packets to all online CPUs, including isolated ones:

```
void wg_packet_receive(struct wg_device *wg, struct sk_buff *skb)
{
[...]
	/* Then we queue it up in the device queue, which consumes the
	 * packet as soon as it can.
	 */
	cpu = wg_cpumask_next_online(next_cpu);
	if (unlikely(ptr_ring_produce_bh(&device_queue->ring, skb)))
		return -EPIPE;
	queue_work_on(cpu, wq, &per_cpu_ptr(device_queue->worker, cpu)->work);
	return 0;
}
```

Where `wg_cpumask_next_online` is defined like this:

```
static inline int wg_cpumask_next_online(int *next)
{
	int cpu = *next;

	while (unlikely(!cpumask_test_cpu(cpu, cpu_online_mask)))
		cpu = cpumask_next(cpu, cpu_online_mask) % nr_cpumask_bits;
	*next = cpumask_next(cpu, cpu_online_mask) % nr_cpumask_bits;
	return cpu;
}
```

It's a problem for us because it causes significant latency. See e.g. this ftrace output showing a kworker - bound to an isolated core - spending over 240 usec inside wg_packet_decrypt_worker; we've seen much higher, up to 500 usec or even more:

```
kworker/47:1-2373323 [047] 243644.756405: funcgraph_entry: | process_one_work() {
kworker/47:1-2373323 [047] 243644.756406: funcgraph_entry: | wg_packet_decrypt_worker() {
[...]
kworker/47:1-2373323 [047] 243644.756647: funcgraph_exit: 0.591 us | }
kworker/47:1-2373323 [047] 243644.756647: funcgraph_exit: ! 242.655 us | }
```

If this were, for example, a physical NIC, typically what we'd do would be to set the IRQ affinity to avoid those isolated cores, which would also avoid running the corresponding softirqs on those cores, avoiding such latency. However, there currently seems to be no way to tell wireguard to avoid those cores.

I was wondering if it would make sense for wireguard to ignore isolated cores to avoid this kind of issue. As far as I can tell, it should be a matter of replacing usages of `cpu_online_mask` with `housekeeping_cpumask(HK_TYPE_DOMAIN)` or even `housekeeping_cpumask(HK_TYPE_DOMAIN | HK_TYPE_WQ)`.

We could potentially run with a patched kernel but would very much prefer an upstream fix, if that's acceptable.

Thanks in advance!

Charles
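Concretely, the replacement described above would give a helper along these lines, written against the 5.18-era HK_TYPE_* naming; this is an illustrative sketch, not a tested or upstream patch:

```c
#include <linux/sched/isolation.h>

/* Sketch: like wg_cpumask_next_online(), but restricted to housekeeping
 * CPUs, so cores isolated via isolcpus= are never handed crypto work.
 * Illustrative only -- not a tested or upstream change. */
static inline int wg_cpumask_next_housekeeping(int *next)
{
	const struct cpumask *mask = housekeeping_cpumask(HK_TYPE_DOMAIN);
	int cpu = *next;

	while (unlikely(!cpumask_test_cpu(cpu, mask)))
		cpu = cpumask_next(cpu, mask) % nr_cpumask_bits;
	*next = cpumask_next(cpu, mask) % nr_cpumask_bits;
	return cpu;
}
```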

From r.sander at heinlein-support.de Tue Mar 22 09:35:41 2022
From: r.sander at heinlein-support.de (Robert Sander)
Date: Tue, 22 Mar 2022 09:35:41 -0000
Subject: Systemd-resolved problem
Message-ID: <75feba5a-9289-0ceb-ef39-285bacdcc506@heinlein-support.de>

On 19.03.22 at 15:20, Barry Cisna wrote:
> when doing a resolvectl with the wg0 interface up there are no dns
> servers on the wg0 interface

Where should they come from? Do you have the default route through your WireGuard tunnel? Then systemd-resolved will likely not be able to reach the DNS resolvers any more. You will have to configure resolvers that are reachable through the WireGuard tunnel (e.g. 9.9.9.9 or similar).

Regards
--
Robert

From unquietwiki at gmail.com Sat Mar 19 09:04:44 2022
From: unquietwiki at gmail.com (Michael Adams)
Date: Sat, 19 Mar 2022 09:04:44 -0000
Subject: Anyone working on fixing IPv6 name resolution in the Windows client?
Message-ID:

I've been trying to make a report about this, and I'm not sure if I've successfully done so. It has apparently been the case for some time that the Windows WireGuard client will disregard AAAA DNS records in hostname lookup; specifying IPv6 addresses as a host does work.

Thank you all for working on an awesome VPN platform; I've found it to be incredibly useful.

Michael Adams
https://unquietwiki.com/

From wanghai38 at huawei.com Tue Mar 29 11:57:52 2022
From: wanghai38 at huawei.com (Wang Hai)
Date: Tue, 29 Mar 2022 11:57:52 -0000
Subject: [PATCH net] wireguard: socket: fix memory leak in send6()
Message-ID: <20220329121552.661647-1-wanghai38@huawei.com>

I got a memory leak report:

unreferenced object 0xffff8881191fc040 (size 232):
  comm "kworker/u17:0", pid 23193, jiffies 4295238848 (age 3464.870s)
  hex dump (first 32 bytes):
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
  backtrace:
    [] slab_post_alloc_hook+0x84/0x3b0
    [] kmem_cache_alloc_node+0x167/0x340
    [] __alloc_skb+0x1db/0x200
    [] wg_socket_send_buffer_to_peer+0x3d/0xc0
    [] wg_packet_send_handshake_initiation+0xfa/0x110
    [] wg_packet_handshake_send_worker+0x21/0x30
    [] process_one_work+0x2e8/0x770
    [] worker_thread+0x4a/0x4b0
    [] kthread+0x120/0x160
    [] ret_from_fork+0x1f/0x30

In wg_socket_send_buffer_as_reply_to_skb() and wg_socket_send_buffer_to_peer(), the semantics of send6() require it to free the skb. But when CONFIG_IPV6 is disabled, the kfree_skb() is missing. This patch adds it to fix the bug.

Fixes: e7096c131e51 ("net: WireGuard secure network tunnel")
Signed-off-by: Wang Hai
---
 drivers/net/wireguard/socket.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/wireguard/socket.c b/drivers/net/wireguard/socket.c
index 6f07b949cb81..467eef0e563b 100644
--- a/drivers/net/wireguard/socket.c
+++ b/drivers/net/wireguard/socket.c
@@ -160,6 +160,7 @@ static int send6(struct wg_device *wg, struct sk_buff *skb,
 	rcu_read_unlock_bh();
 	return ret;
 #else
+	kfree_skb(skb);
 	return -EAFNOSUPPORT;
 #endif
 }
-- 
2.25.1