From syzbot+57cb9d16a1b17521eb76 at syzkaller.appspotmail.com Fri Dec 1 15:31:23 2023
From: syzbot+57cb9d16a1b17521eb76 at syzkaller.appspotmail.com (syzbot)
Date: Fri, 01 Dec 2023 07:31:23 -0800
Subject: [syzbot] [wireguard?] KCSAN: data-race in wg_packet_handshake_receive_worker / wg_packet_rx_poll (6)
Message-ID: <00000000000000843f060b747650@google.com>

Hello,

syzbot found the following issue on:

HEAD commit:    d2da77f431ac Merge tag 'parisc-for-6.7-rc3' of git://git.k..
git tree:       upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=11294880e80000
kernel config:  https://syzkaller.appspot.com/x/.config?x=8c1151391aefc0c3
dashboard link: https://syzkaller.appspot.com/bug?extid=57cb9d16a1b17521eb76
compiler:       Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image:   https://storage.googleapis.com/syzbot-assets/0ebc29947781/disk-d2da77f4.raw.xz
vmlinux:      https://storage.googleapis.com/syzbot-assets/a82ec858fbee/vmlinux-d2da77f4.xz
kernel image: https://storage.googleapis.com/syzbot-assets/d45f2fa85085/bzImage-d2da77f4.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+57cb9d16a1b17521eb76 at syzkaller.appspotmail.com

==================================================================
BUG: KCSAN: data-race in wg_packet_handshake_receive_worker / wg_packet_rx_poll

read-write to 0xffff8881392abfa0 of 8 bytes by interrupt on cpu 1:
 update_rx_stats drivers/net/wireguard/receive.c:23 [inline]
 wg_packet_consume_data_done drivers/net/wireguard/receive.c:358 [inline]
 wg_packet_rx_poll+0xd35/0xf00 drivers/net/wireguard/receive.c:474
 __napi_poll+0x60/0x3b0 net/core/dev.c:6533
 napi_poll net/core/dev.c:6602 [inline]
 net_rx_action+0x32b/0x750 net/core/dev.c:6735
 __do_softirq+0xc4/0x279 kernel/softirq.c:553
 do_softirq+0x5e/0x90 kernel/softirq.c:454
 __local_bh_enable_ip+0x64/0x70 kernel/softirq.c:381
 __raw_spin_unlock_bh include/linux/spinlock_api_smp.h:167 [inline]
 _raw_spin_unlock_bh+0x36/0x40 kernel/locking/spinlock.c:210
 spin_unlock_bh include/linux/spinlock.h:396 [inline]
 ptr_ring_consume_bh include/linux/ptr_ring.h:367 [inline]
 wg_packet_handshake_receive_worker+0x184/0x5e0 drivers/net/wireguard/receive.c:212
 process_one_work kernel/workqueue.c:2630 [inline]
 process_scheduled_works+0x5b8/0xa30 kernel/workqueue.c:2703
 worker_thread+0x525/0x730 kernel/workqueue.c:2784
 kthread+0x1d7/0x210 kernel/kthread.c:388
 ret_from_fork+0x48/0x60 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:242

read-write to 0xffff8881392abfa0 of 8 bytes by task 22808 on cpu 0:
 update_rx_stats drivers/net/wireguard/receive.c:23 [inline]
 wg_receive_handshake_packet drivers/net/wireguard/receive.c:198 [inline]
 wg_packet_handshake_receive_worker+0x4b9/0x5e0 drivers/net/wireguard/receive.c:213
 process_one_work kernel/workqueue.c:2630 [inline]
 process_scheduled_works+0x5b8/0xa30 kernel/workqueue.c:2703
 worker_thread+0x525/0x730 kernel/workqueue.c:2784
 kthread+0x1d7/0x210 kernel/kthread.c:388
 ret_from_fork+0x48/0x60 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:242

value changed: 0x00000000000070b0 -> 0x00000000000070d0

Reported by Kernel Concurrency Sanitizer on:
CPU: 0 PID: 22808 Comm: kworker/0:4 Not tainted 6.7.0-rc2-syzkaller-00265-gd2da77f431ac #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/10/2023
Workqueue: wg-kex-wg2 wg_packet_handshake_receive_worker
==================================================================
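(A generic aside on how races like this are commonly silenced when the
counter tolerates occasional lost updates - a sketch only, not the actual
wireguard fix for this report:)

    /* Sketch: mark an intentionally racy statistics update so KCSAN
     * treats it as a known, tolerated data race. */
    static inline void stats_add_relaxed(u64 *counter, u64 delta)
    {
            WRITE_ONCE(*counter, READ_ONCE(*counter) + delta);
    }

The alternatives are taking a lock around update_rx_stats() or switching
to per-CPU counters; which is appropriate depends on whether exact totals
matter.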
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller at googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

From syzbot+liste63a02b3d759fb087a11 at syzkaller.appspotmail.com Sat Dec 2 14:45:22 2023
From: syzbot+liste63a02b3d759fb087a11 at syzkaller.appspotmail.com (syzbot)
Date: Sat, 02 Dec 2023 06:45:22 -0800
Subject: [syzbot] Monthly wireguard report (Dec 2023)
Message-ID: <0000000000003f7eb8060b87efb6@google.com>

Hello wireguard maintainers/developers,

This is a 31-day syzbot report for the wireguard subsystem.
All related reports/information can be found at:
https://syzkaller.appspot.com/upstream/s/wireguard

During the period, 1 new issue was detected and 1 was fixed.
In total, 4 issues are still open and 15 have been fixed so far.

Some of the still happening issues:

Ref Crashes Repro Title
<1> 818     No    KCSAN: data-race in wg_packet_send_staged_packets / wg_packet_send_staged_packets (3)
                  https://syzkaller.appspot.com/bug?extid=6ba34f16b98fe40daef1
<2> 594     No    KCSAN: data-race in wg_packet_decrypt_worker / wg_packet_rx_poll (2)
                  https://syzkaller.appspot.com/bug?extid=d1de830e4ecdaac83d89
<3> 3       No    KCSAN: data-race in wg_packet_handshake_receive_worker / wg_packet_rx_poll (6)
                  https://syzkaller.appspot.com/bug?extid=57cb9d16a1b17521eb76

---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller at googlegroups.com.

To disable reminders for individual bugs, reply with the following command:
#syz set no-reminders

To change bug's subsystems, reply with:
#syz set subsystems: new-subsystem

You may send multiple commands in a single email message.

From tj at kernel.org Mon Dec 4 18:07:07 2023
From: tj at kernel.org (Tejun Heo)
Date: Mon, 4 Dec 2023 08:07:07 -1000
Subject: Performance drop due to alloc_workqueue() misuse and recent change
In-Reply-To:
References:
Message-ID:

Hello,

On Mon, Dec 04, 2023 at 04:03:47PM +0000, Naohiro Aota wrote:
> Recently, commit 636b927eba5b ("workqueue: Make unbound workqueues to use
> per-cpu pool_workqueues") changed WQ_UNBOUND workqueue's behavior. It
> changed the meaning of alloc_workqueue()'s max_active from an upper limit
> imposed per NUMA node to a limit per CPU. As a result, a massive number of
> workers can be running at the same time, especially if the workqueue user
> thinks the max_active is a global limit.
>
> Actually, it is already written that it is a per-CPU limit in the
> documentation before the commit. However, several callers seem to misuse
> max_active, maybe thinking it is a global limit. It is an unexpected
> behavior change for them.

Right, and the behavior has been like that for a very long time and there
was no other way to achieve a reasonable level of concurrency, so the
current situation is expected.
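(To make the arithmetic concrete - a minimal sketch; the workqueue name
and the machine shape are illustrative, not taken from any of the callers
discussed in this thread:)

    /* Caller intends "at most N works in flight, system-wide". */
    struct workqueue_struct *wq =
            alloc_workqueue("example_wq", WQ_UNBOUND, num_online_cpus());

    /*
     * On a hypothetical 96-CPU machine:
     *   before 636b927eba5b, max_active bounded works per NUMA node:
     *     96 * 2 nodes = at most 192 concurrent works
     *   after 636b927eba5b, it bounds works per CPU:
     *     96 * 96 CPUs = up to 9216 concurrent works
     */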
> For example, these callers set max_active = num_online_cpus(), which is a
> suspicious limit to apply per CPU. This config means we can have nr_cpu *
> nr_cpu active tasks working at the same time.

Yeah, that sounds like a good indicator.

> fs/f2fs/data.c: sbi->post_read_wq = alloc_workqueue("f2fs_post_read_wq",
> fs/f2fs/data.c-                         WQ_UNBOUND | WQ_HIGHPRI,
> fs/f2fs/data.c-                         num_online_cpus());
>
> fs/crypto/crypto.c: fscrypt_read_workqueue = alloc_workqueue("fscrypt_read_queue",
> fs/crypto/crypto.c-                         WQ_UNBOUND | WQ_HIGHPRI,
> fs/crypto/crypto.c-                         num_online_cpus());
>
> fs/verity/verify.c: fsverity_read_workqueue = alloc_workqueue("fsverity_read_queue",
> fs/verity/verify.c-                         WQ_HIGHPRI,
> fs/verity/verify.c-                         num_online_cpus());
>
> drivers/crypto/hisilicon/qm.c: qm->wq = alloc_workqueue("%s", WQ_HIGHPRI | WQ_MEM_RECLAIM |
> drivers/crypto/hisilicon/qm.c-                         WQ_UNBOUND, num_online_cpus(),
> drivers/crypto/hisilicon/qm.c-                         pci_name(qm->pdev));
>
> block/blk-crypto-fallback.c: blk_crypto_wq = alloc_workqueue("blk_crypto_wq",
> block/blk-crypto-fallback.c-                         WQ_UNBOUND | WQ_HIGHPRI |
> block/blk-crypto-fallback.c-                         WQ_MEM_RECLAIM, num_online_cpus());
>
> drivers/md/dm-crypt.c: cc->crypt_queue = alloc_workqueue("kcryptd/%s",
> drivers/md/dm-crypt.c-                         WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM | WQ_UNBOUND,
> drivers/md/dm-crypt.c-                         num_online_cpus(), devname);

Most of these work items are CPU bound but not completely so. e.g.
kcryptd_crypt_write_continue() does wait_for_completion(), so setting
max_active to 1 likely isn't what they want either. They mostly want some
reasonable system-wide concurrency limit w.r.t. the CPU count while
keeping some level of flexibility in terms of task placement.

The previous max_active wasn't great for this because its meaning changed
depending on the number of nodes. Now, the meaning doesn't change but it's
not really useful for the above purpose. It's only useful for avoiding
melting the system completely.

One way to go about it is to declare that concurrency level management for
unbound workqueues is on users but that seems not ideal given many use
cases would want it anyway. Let me think it over but I think the right way
to go about it is going the other direction - ie. making max_active apply
to the whole system regardless of the number of nodes / ccx's / whatever.

> Furthermore, the change affects performance in a certain case.
>
> Btrfs creates several WQ_UNBOUND workqueues with a default max_active =
> min(NRCPUS + 2, 8). As my machine has 96 CPUs with NUMA disabled, this
> max_active config allows running over 700 active works. Before the
> commit, it is limited to 8 if NUMA is disabled or to 16 if there are two
> NUMA nodes.
>
> I reverted the workqueue code back to before the commit, and I ran the
> following fio command on RAID0 btrfs on 6 SSDs.
>
> fio --group_reporting --eta=always --eta-interval=30s --eta-newline=30s \
>     --rw=write --fallocate=none \
>     --direct=1 --ioengine=libaio --iodepth=32 \
>     --filesize=100G \
>     --blocksize=64k \
>     --time_based --runtime=300s \
>     --end_fsync=1 \
>     --directory=${MNT} \
>     --name=writer --numjobs=32
>
> By changing workqueue's max_active, the result varies.
>
> - wq max_active=8 (intended limit by btrfs?)
>   WRITE: bw=2495MiB/s (2616MB/s), 2495MiB/s-2495MiB/s (2616MB/s-2616MB/s), io=753GiB (808GB), run=308953-308953msec
> - wq max_active=16 (actual limit on 2 NUMA nodes setup)
>   WRITE: bw=1736MiB/s (1820MB/s), 1736MiB/s-1736MiB/s (1820MB/s-1820MB/s), io=670GiB (720GB), run=395532-395532msec
> - wq max_active=768 (simulating current limit)
>   WRITE: bw=1276MiB/s (1338MB/s), 1276MiB/s-1276MiB/s (1338MB/s-1338MB/s), io=375GiB (403GB), run=300984-300984msec
>
> The current performance is slower than the previous limit (max_active=16)
> by 27%, or it is 50% slower than the intended limit. The performance drop
> might be due to contention of the btrfs-endio-write works. Over 700
> kworker instances were created and 100 works are in the 'D' state
> competing for a lock.
>
> More specifically, I tested the same workload on the commit.
>
> - At commit 636b927eba5b ("workqueue: Make unbound workqueues to use per-cpu pool_workqueues")
>   WRITE: bw=1191MiB/s (1249MB/s), 1191MiB/s-1191MiB/s (1249MB/s-1249MB/s), io=350GiB (376GB), run=300714-300714msec
> - At the previous commit = 4cbfd3de73 ("workqueue: Call wq_update_unbound_numa() on all CPUs in NUMA node on CPU hotplug")
>   WRITE: bw=1747MiB/s (1832MB/s), 1747MiB/s-1747MiB/s (1832MB/s-1832MB/s), io=748GiB (803GB), run=438134-438134msec
>
> So, it is a 31.8% performance drop with the commit.
>
> In summary, we misuse max_active, considering it is a global limit. And,
> the recent commit introduced a huge performance drop in some cases. We
> need to review alloc_workqueue() usage to check if its max_active setting
> is proper or not.

Thanks a lot for the report. I think it's a lot more reasonable to assume
that max_active is global for unbound workqueues. The current workqueue
behavior is not very intuitive or useful. I'll try to find something more
reasonable. Thanks for the report and analysis. Much appreciated.

Thanks.

--
tejun

From max.schulze at online.de Mon Dec 11 10:52:09 2023
From: max.schulze at online.de (Max Schulze)
Date: Mon, 11 Dec 2023 11:52:09 +0100
Subject: wireguard.exe /ui using lots of RAM (>5GB)
Message-ID:

Hello,

today, I saw a long-running (possibly 90+ days) "wireguard.exe /ui"
process using 5 GB of RAM (nearly the limit of the machine). Normal
operation is around 14 MB.

WireGuard/0.5.3 (Windows 10.0.17763; amd64)
Using WireGuardNT/0.10

I did a wireguard.exe /dumplog, but it only shows "handshake/keepalive"
every two minutes like clockwork.

The WireGuard tray icon was possibly there (empty box) and did not react.
Starting wireguard.exe /ui from the command line did nothing. I killed the
process in Windows Explorer and restarted.

What can I do next time to gather more useful info?

Best,
M

From colin.williams.orcas at gmail.com Fri Dec 1 20:39:16 2023
From: colin.williams.orcas at gmail.com (Colin Williams)
Date: Fri, 01 Dec 2023 20:39:16 -0000
Subject: No mention of ip tables to setup VPN
Message-ID:

I set up WireGuard following the site. I did not create configuration
files. I just followed the example on
https://www.wireguard.com/quickstart/

I can ping between the hosts through wg via their interface IPs
10.0.0.1 / 10.0.0.2.

One host I wish to use as a VPN. Call it Host A.

I set `net.ipv4.ip_forward = 1` on Host A and checked it was set properly.
Then to set up the routing I followed the section "Overriding The Default
Route" in https://www.wireguard.com/netns/ on Host B.

After adding the routes as above, I can still ping each host via their IP
and am still connected to the other host via SSH.
But I lose my internet connection on Host B otherwise. I copied my wg
command outputs and config details below. Does anyone know what I'm doing
wrong?

In some examples I see folks using iptables, like setting
`iptables -t nat -A POSTROUTING -j MASQUERADE` on Host A. If it's likely
necessary, why don't I see a mention of this in the documentation on
wireguard.com?

Some errors I see:

PING google.com (142.250.69.206) 56(84) bytes of data.
From XXX (10.0.0.2) icmp_seq=1 Destination Host Unreachable
ping: sendmsg: Required key not available
From XXX (10.0.0.2) icmp_seq=2 Destination Host Unreachable
ping: sendmsg: Required key not available
From XXX (10.0.0.2) icmp_seq=3 Destination Host Unreachable
ping: sendmsg: Required key not available

../../../lib/isc/netmgr/uverr2result.c:98:isc___nm_uverr2result(): unable to convert libuv error code in udp_send_cb (../../../lib/isc/netmgr/udp.c:802) to isc_result: -126: Unknown system error -126
;; communications error to 1.1.1.1#53: timed out
../../../lib/isc/netmgr/uverr2result.c:98:isc___nm_uverr2result(): unable to convert libuv error code in udp_send_cb (../../../lib/isc/netmgr/udp.c:802) to isc_result: -126: Unknown system error -126
^C
[colin_williams at JT9M367J07 wg]$ ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Host A wg command output

interface: wg0
  public key: 5ZXlotq43t3g3qz97ZkXeSu75+E6UchzO5hj4=
  private key: (hidden)
  listening port: XXXXX

peer: 5mjkoeRw2e0IbPa2rontt5AvO8oJgCVBlJgqVil+1T4=
  endpoint: 203.45.131.16:33333
  allowed ips: 10.0.0.2/32
  latest handshake: 8 minutes, 4 seconds ago
  transfer: 27.48 KiB received, 33.24 KiB sent

Host B wg command output

interface: wg0
  public key: 5mjko3qg3g3qg35AvO8oJgCVBlJgqVil+1T4=
  private key: (hidden)
  listening port: 35052

peer: 5ZXlosrq6L+ZT+O5Bg1mz97ZkXeSu75+E6UchzO5hj4=
  endpoint: 203.4.11.174:38101
  allowed ips: 10.0.0.1/32
  latest handshake: 9 minutes, 9 seconds ago
  transfer: 26.73 KiB received, 30.51 KiB sent

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Routing table on Host B before additions. Everything works from Host A &&
B at this point:

default via 192.168.10.1 dev wlp1s0f0 proto dhcp src 192.168.10.177 metric 600
10.0.0.0/24 dev wg0 proto kernel scope link src 10.0.0.2
192.168.10.0/24 dev wlp1s0f0 proto kernel scope link src 192.168.10.177 metric 600

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Adding "Overriding The Default Route" from the doc at
https://www.wireguard.com/netns/ on Host B. After adding the route to
Host B, I can no longer access most internet resources from Host B.
However, Host B can still ping Host A and vice versa via IP address. The
errors shown above for Host B are after I set the routing table. Please
excuse if the route table looks funny. I think I am having trouble pasting
from my laptop.

Kernel IP routing table
Destination          Gateway   Genmask          Flags  Metric  Ref  Use  Iface
0.0.0.0              0.0.0.0   128.0.0.0        U      0       0    0    wg0
default              _gateway  0.0.0.0          UG     600     0    0    wlp1
10.0.0.0             0.0.0.0   255.255.255.0    U      0       0    0    wg0
128.0.0.0            0.0.0.0   128.0.0.0        U      0       0    0    wg0
192.168.10.0         0.0.0.0   255.255.255.0    U      600     0    0    wlp1
203.45.131.16:33333  _gateway  255.255.255.255  UGH    0       0    0    wlp1
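(For reference, the usual pattern those iptables examples implement is
source NAT on the exit host - a minimal sketch, where "eth0" as Host A's
internet-facing interface is an assumption, not taken from the outputs
above:)

    # On Host A: allow forwarding and masquerade traffic from the tunnel
    sysctl -w net.ipv4.ip_forward=1
    iptables -A FORWARD -i wg0 -j ACCEPT
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

Two things the errors above point at: "ping: sendmsg: Required key not
available" is what WireGuard returns when a packet's destination matches
no peer's allowed ips, so with "allowed ips: 10.0.0.1/32" on Host B a
default route into wg0 cannot work - the peer would need allowed ips
0.0.0.0/0 for this use. And without NAT on Host A, replies to Host B's
10.0.0.2 source address have no route back from the internet.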
From Naohiro.Aota at wdc.com Mon Dec 4 16:03:58 2023
From: Naohiro.Aota at wdc.com (Naohiro Aota)
Date: Mon, 04 Dec 2023 16:03:58 -0000
Subject: Performance drop due to alloc_workqueue() misuse and recent change
Message-ID:

Recently, commit 636b927eba5b ("workqueue: Make unbound workqueues to use
per-cpu pool_workqueues") changed WQ_UNBOUND workqueue's behavior. It
changed the meaning of alloc_workqueue()'s max_active from an upper limit
imposed per NUMA node to a limit per CPU. As a result, a massive number of
workers can be running at the same time, especially if the workqueue user
thinks the max_active is a global limit.

Actually, it is already written that it is a per-CPU limit in the
documentation before the commit. However, several callers seem to misuse
max_active, maybe thinking it is a global limit. It is an unexpected
behavior change for them.

For example, these callers set max_active = num_online_cpus(), which is a
suspicious limit to apply per CPU. This config means we can have nr_cpu *
nr_cpu active tasks working at the same time.

fs/f2fs/data.c: sbi->post_read_wq = alloc_workqueue("f2fs_post_read_wq",
fs/f2fs/data.c-                         WQ_UNBOUND | WQ_HIGHPRI,
fs/f2fs/data.c-                         num_online_cpus());

fs/crypto/crypto.c: fscrypt_read_workqueue = alloc_workqueue("fscrypt_read_queue",
fs/crypto/crypto.c-                         WQ_UNBOUND | WQ_HIGHPRI,
fs/crypto/crypto.c-                         num_online_cpus());

fs/verity/verify.c: fsverity_read_workqueue = alloc_workqueue("fsverity_read_queue",
fs/verity/verify.c-                         WQ_HIGHPRI,
fs/verity/verify.c-                         num_online_cpus());

drivers/crypto/hisilicon/qm.c: qm->wq = alloc_workqueue("%s", WQ_HIGHPRI | WQ_MEM_RECLAIM |
drivers/crypto/hisilicon/qm.c-                         WQ_UNBOUND, num_online_cpus(),
drivers/crypto/hisilicon/qm.c-                         pci_name(qm->pdev));

block/blk-crypto-fallback.c: blk_crypto_wq = alloc_workqueue("blk_crypto_wq",
block/blk-crypto-fallback.c-                         WQ_UNBOUND | WQ_HIGHPRI |
block/blk-crypto-fallback.c-                         WQ_MEM_RECLAIM, num_online_cpus());

drivers/md/dm-crypt.c: cc->crypt_queue = alloc_workqueue("kcryptd/%s",
drivers/md/dm-crypt.c-                         WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM | WQ_UNBOUND,
drivers/md/dm-crypt.c-                         num_online_cpus(), devname);

Furthermore, the change affects performance in a certain case.

Btrfs creates several WQ_UNBOUND workqueues with a default max_active =
min(NRCPUS + 2, 8). As my machine has 96 CPUs with NUMA disabled, this
max_active config allows running over 700 active works. Before the commit,
it is limited to 8 if NUMA is disabled or to 16 if there are two NUMA
nodes.

I reverted the workqueue code back to before the commit, and I ran the
following fio command on RAID0 btrfs on 6 SSDs.

fio --group_reporting --eta=always --eta-interval=30s --eta-newline=30s \
    --rw=write --fallocate=none \
    --direct=1 --ioengine=libaio --iodepth=32 \
    --filesize=100G \
    --blocksize=64k \
    --time_based --runtime=300s \
    --end_fsync=1 \
    --directory=${MNT} \
    --name=writer --numjobs=32

By changing workqueue's max_active, the result varies.

- wq max_active=8 (intended limit by btrfs?)
  WRITE: bw=2495MiB/s (2616MB/s), 2495MiB/s-2495MiB/s (2616MB/s-2616MB/s), io=753GiB (808GB), run=308953-308953msec
- wq max_active=16 (actual limit on 2 NUMA nodes setup)
  WRITE: bw=1736MiB/s (1820MB/s), 1736MiB/s-1736MiB/s (1820MB/s-1820MB/s), io=670GiB (720GB), run=395532-395532msec
- wq max_active=768 (simulating current limit)
  WRITE: bw=1276MiB/s (1338MB/s), 1276MiB/s-1276MiB/s (1338MB/s-1338MB/s), io=375GiB (403GB), run=300984-300984msec

The current performance is slower than the previous limit (max_active=16)
by 27%, or it is 50% slower than the intended limit. The performance drop
might be due to contention of the btrfs-endio-write works. Over 700
kworker instances were created and 100 works are in the 'D' state
competing for a lock.

More specifically, I tested the same workload on the commit.

- At commit 636b927eba5b ("workqueue: Make unbound workqueues to use per-cpu pool_workqueues")
  WRITE: bw=1191MiB/s (1249MB/s), 1191MiB/s-1191MiB/s (1249MB/s-1249MB/s), io=350GiB (376GB), run=300714-300714msec
- At the previous commit = 4cbfd3de73 ("workqueue: Call wq_update_unbound_numa() on all CPUs in NUMA node on CPU hotplug")
  WRITE: bw=1747MiB/s (1832MB/s), 1747MiB/s-1747MiB/s (1832MB/s-1832MB/s), io=748GiB (803GB), run=438134-438134msec

So, it is a 31.8% performance drop with the commit.

In summary, we misuse max_active, considering it is a global limit. And,
the recent commit introduced a huge performance drop in some cases. We
need to review alloc_workqueue() usage to check if its max_active setting
is proper or not.

From koenlekkerkerker at gmail.com Tue Dec 12 04:02:11 2023
From: koenlekkerkerker at gmail.com (Koen Lekkerkerker)
Date: Mon, 11 Dec 2023 20:02:11 -0800
Subject: Any plans to extend apple support to tvOS (Apple TV)?
Message-ID:

Hi all,

Since tvOS 17 (released Sep 2023), Apple has added SDK support to create
VPN tvOS apps (for Apple TV hardware). I was wondering if there are plans
to develop a WireGuard tvOS app. It should hopefully be able to reuse most
of the iOS app from the wireguard-apple repo... I'd mainly expect some
work on the GUI to make it usable on TV.

I'm happy to donate a little to support such plans. Either way: thanks for
all the awesome work on WireGuard. I've been happily using it for a couple
of years now and it works great!

Best,
Koen

From ahmet.karaahmetoglu at accenture.com Wed Dec 13 11:53:29 2023
From: ahmet.karaahmetoglu at accenture.com (Karaahmetoglu, Ahmet)
Date: Wed, 13 Dec 2023 11:53:29 +0000
Subject: [android] Device protected vs. user-credential protected storage, no tunnels before first unlock on modern Android?
Message-ID:

Dear WireGuard community,

It seems that for accessing tunnel configurations the different components
of wireguard-android only support accessing the user-credential protected
storage (/data/data/). This path is usually not available before first
unlock on modern Android, so WireGuard is not able to access its
configuration.

I was wondering if this is on purpose or if there are any plans on adding
support for device protected storage (/data/data_de/)? Actually, I would
assume that storing tunnel configurations there is essential for
always_on_vpn_lockdown to work - which seems to be supported by WireGuard
when looking at the Android VPN settings. But this can hardly be the case
- if I'm not mistaken. So, any hints/background information about the
situation is highly appreciated.
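(For context, a sketch of the platform API involved - this is Android's
documented device protected storage mechanism, not code from
wireguard-android, and the preferences file name is hypothetical:)

    import android.content.Context

    // DE (device protected) storage is available before first unlock,
    // unlike the default credential protected storage.
    fun deviceProtectedPrefs(context: Context) =
        context.createDeviceProtectedStorageContext()
            .getSharedPreferences("tunnels", Context.MODE_PRIVATE)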
Thank you very much in advance, and kind regards,
Ahmet Karaahmetoglu

From norbert at schmueller.de Tue Dec 19 13:12:27 2023
From: norbert at schmueller.de (Norbert Schmidt)
Date: Tue, 19 Dec 2023 14:12:27 +0100
Subject: Wireguard not working in conjunction with G Data Firewall
Message-ID:

Hello,

I've had problems using the WireGuard client 0.5.3 on Windows 10 with
G Data Total Security installed. The connection is working. A ping to a
remote host works too, and sometimes an SMB connection works, but HTTP or
HTTPS traffic is not coming through.

I wrote to the G Data support and received this answer, which I would like
to share with you (translated):

"There are fundamental incompatibilities between the G DATA Firewall and
the use of the WireGuard VPN protocol. We believe this is due to the
initialization of the WireGuard connection parameters. As a workaround, it
should be possible to temporarily disable the firewall, then establish the
VPN connection, and then re-enable the firewall."

I then asked for a fix and got the following reply:

"We understand your reaction, but the functionality you are referring to
is not a frequently requested change and the required modifications would
be extensive."

I believe this functionality is frequently needed, but if G Data is not
willing to change something, maybe the problem can be fixed on the
WireGuard client side...

Best regards
Norbert Schmidt

From polo-ru at yandex.ru Tue Dec 19 15:59:23 2023
From: polo-ru at yandex.ru (Roman Lesechko)
Date: Tue, 19 Dec 2023 18:59:23 +0300
Subject: [PATCH] Edited RU translation to avoid misunderstanding
Message-ID: <20231219155923.1892-1-polo-ru@yandex.ru>

I have edited the RU translations of "macToggleStatusButton" on the main
screen. In English, the button's caption is imperative: Activate/
Deactivate. The current Russian translation is wrong in terms of tenses.
For example, WireGuard is deactivated, but the button's caption says it is
already active.
Signed-off-by: Roman Lesechko
---
 Sources/WireGuardApp/ru.lproj/Localizable.strings | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/Sources/WireGuardApp/ru.lproj/Localizable.strings b/Sources/WireGuardApp/ru.lproj/Localizable.strings
index c05a566..ab148dc 100644
--- a/Sources/WireGuardApp/ru.lproj/Localizable.strings
+++ b/Sources/WireGuardApp/ru.lproj/Localizable.strings
@@ -55,9 +55,9 @@
 "tunnelStatusRestarting" = "??????????";
 "tunnelStatusWaiting" = "????????";
 
-"macToggleStatusButtonActivate" = "?????????";
+"macToggleStatusButtonActivate" = "??????????";
 "macToggleStatusButtonActivating" = "????????????";
-"macToggleStatusButtonDeactivate" = "????????";
+"macToggleStatusButtonDeactivate" = "?????????";
 "macToggleStatusButtonDeactivating" = "???????????";
 "macToggleStatusButtonReasserting" = "????????????????";
 "macToggleStatusButtonRestarting" = "???????????";
@@ -111,7 +111,7 @@
 "tunnelOnDemandAddMessageAddNewSSID" = "???????? ?????";
 
 "tunnelOnDemandKey" = "???????????????";
-"tunnelOnDemandOptionOff" = "?????????";
+"tunnelOnDemandOptionOff" = "?????????";
 "tunnelOnDemandOptionWiFiOnly" = "?????? Wi-Fi";
 "tunnelOnDemandOptionWiFiOrCellular" = "Wi-Fi ??? ??????? ????";
 "tunnelOnDemandOptionCellularOnly" = "?????? ??????? ????";
@@ -134,7 +134,7 @@
 "tunnelEditPlaceholderTextStronglyRecommended" = "???????????? ?????????????";
 "tunnelEditPlaceholderTextOff" = "?????????";
 
-"tunnelPeerPersistentKeepaliveValue (%@)" = "?????? %@ - ???????? ? ????????";
+"tunnelPeerPersistentKeepaliveValue (%@)" = "?????? %@ ??????";
 "tunnelHandshakeTimestampNow" = "??????";
 "tunnelHandshakeTimestampSystemClockBackward" = "(????????? ???? ?????????? ?????)";
 "tunnelHandshakeTimestampAgo (%@)" = "%@ ?????";
-- 
2.37.1 (Apple Git-137.1)

From tj at kernel.org Wed Dec 20 07:14:59 2023
From: tj at kernel.org (Tejun Heo)
Date: Tue, 19 Dec 2023 21:14:59 -1000
Subject: Performance drop due to alloc_workqueue() misuse and recent change
In-Reply-To:
References:
Message-ID:

Hello, again.

On Mon, Dec 04, 2023 at 04:03:47PM +0000, Naohiro Aota wrote:
...
> In summary, we misuse max_active, considering it is a global limit. And,
> the recent commit introduced a huge performance drop in some cases. We
> need to review alloc_workqueue() usage to check if its max_active setting
> is proper or not.

Can you please test the following branch?

 https://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git unbound-system-wide-max_active

Thanks.

--
tejun

From mjt at tls.msk.ru Wed Dec 20 07:59:40 2023
From: mjt at tls.msk.ru (Michael Tokarev)
Date: Wed, 20 Dec 2023 10:59:40 +0300
Subject: [PATCH] Edited RU translation to avoid misunderstanding
In-Reply-To: <20231219155923.1892-1-polo-ru@yandex.ru>
References: <20231219155923.1892-1-polo-ru@yandex.ru>
Message-ID: <70a5f5c7-2846-4f96-a567-e2a7d7a65c49@tls.msk.ru>

19.12.2023 18:59, Roman Lesechko :
> I have edited the RU translations of "macToggleStatusButton" on the main
> screen. In English, the button's caption is imperative: Activate/
> Deactivate. The current Russian translation is wrong in terms of tenses.
> For example, WireGuard is deactivated, but the button's caption says it
> is already active.
>
> Signed-off-by: Roman Lesechko

> -"tunnelPeerPersistentKeepaliveValue (%@)" = "?????? %@ - ???????? ? ????????";
> +"tunnelPeerPersistentKeepaliveValue (%@)" = "?????? %@ ??????";

Personally I would keep this one as it was. Yes, it is a bit ugly, but it
is better for, say, 2 or 3 seconds.
/mjt

From rm at romanrm.net Wed Dec 20 12:19:24 2023
From: rm at romanrm.net (Roman Mamedov)
Date: Wed, 20 Dec 2023 17:19:24 +0500
Subject: [PATCH] Edited RU translation to avoid misunderstanding
In-Reply-To: <70a5f5c7-2846-4f96-a567-e2a7d7a65c49@tls.msk.ru>
References: <20231219155923.1892-1-polo-ru@yandex.ru> <70a5f5c7-2846-4f96-a567-e2a7d7a65c49@tls.msk.ru>
Message-ID: <20231220171924.7f72351e@nvm>

On Wed, 20 Dec 2023 10:59:40 +0300
Michael Tokarev wrote:

> 19.12.2023 18:59, Roman Lesechko :
> > I have edited the RU translations of "macToggleStatusButton" on the
> > main screen. In English, the button's caption is imperative: Activate/
> > Deactivate. The current Russian translation is wrong in terms of
> > tenses. For example, WireGuard is deactivated, but the button's caption
> > says it is already active.
> >
> > Signed-off-by: Roman Lesechko
>
> > -"tunnelPeerPersistentKeepaliveValue (%@)" = "?????? %@ - ???????? ? ????????";
> > +"tunnelPeerPersistentKeepaliveValue (%@)" = "?????? %@ ??????";
>
> Personally I would keep this one as it was. Yes, it is a bit ugly, but it
> is better for, say, 2 or 3 seconds.

The old one is way too awkward and still wrong for when it's 1. To handle
the case of 2 or 3 it can be "?????? %@ ???." In this format it's
impossible to reach perfection (as in Russian the words before and after
the number also need to change inflection depending on the actual value).
But even as proposed in the patch, I'd say it is better than before.

--
With respect,
Roman

From dxld at darkboxed.org Wed Dec 20 15:04:19 2023
From: dxld at darkboxed.org (Daniel Gröber)
Date: Wed, 20 Dec 2023 16:04:19 +0100
Subject: [PATCH 1/1] wireguard-linux: add netlink multicast group for notifications on peer change
In-Reply-To: <2328185.xqv9EDfTUt@desktop>
References: <2328185.xqv9EDfTUt@desktop>
Message-ID: <20231220150419.zmtlrv3nhkvx2ymh@House.clients.dxld.at>

Hi Raphael, Linus,

Interesting patch, I've been meaning to get around to adding some change
notifications as well :) I have some notes:

- It seems to me that some more sharing with existing code constructing
  nlmsgs ought to be possible; your new code seems quite verbose.
- Why is the endpoint_monitor flag necessary? Other genlmsg_multicast
  callers don't seem to do anything like this.

Do you have corresponding wireguard-tools userspace patches for this yet?

--Daniel

PS: If you intend for this to get applied you may want to fix
scripts/checkpatch.pl warnings and add the people get_maintainers.pl spits
out to Cc.
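(For reference, the common shape of a genetlink notification, using the
usual listener check rather than a separate state flag - a sketch only;
the command, attribute, and group numbers are placeholders, not values
from the patch under discussion:)

    /* Sketch: build and send a multicast notification. */
    static int notify_peer_change(struct genl_family *family,
                                  struct net *net, const u8 pubkey[32])
    {
            struct sk_buff *skb;
            void *hdr;

            /* Cheap check; skip building the message when nobody listens. */
            if (!genl_has_listeners(family, net, 0 /* placeholder group */))
                    return 0;

            skb = genlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
            if (!skb)
                    return -ENOMEM;

            hdr = genlmsg_put(skb, 0, 0, family, 0, 0 /* placeholder cmd */);
            if (!hdr)
                    goto fail;
            if (nla_put(skb, 1 /* placeholder attr */, 32, pubkey))
                    goto fail;
            genlmsg_end(skb, hdr);

            return genlmsg_multicast_netns(family, net, skb, 0,
                                           0 /* placeholder group */, GFP_KERNEL);
    fail:
            nlmsg_free(skb);
            return -EMSGSIZE;
    }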
> -"macToggleStatusButtonActivate" = "?????????"; > +"macToggleStatusButtonActivate" = "??????????"; > -"macToggleStatusButtonDeactivate" = "????????"; > +"macToggleStatusButtonDeactivate" = "?????????"; > -"tunnelOnDemandOptionOff" = "?????????"; > +"tunnelOnDemandOptionOff" = "?????????"; Yes, these definitely needs to go in, as currently the meaning is just the opposite of reality. /mjt