[HN Gopher] Show HN: WireHole combines WireGuard, Pi-hole, and U...
       ___________________________________________________________________
        
       Show HN: WireHole combines WireGuard, Pi-hole, and Unbound with an
       easy UI
        
        WireHole is a unified docker-compose project that integrates
        WireGuard, Pi-hole, and Unbound, complete with a user interface.
        It is designed to let users quickly set up and manage either a
        full- or split-tunnel WireGuard VPN, with ad blocking provided by
        Pi-hole and DNS caching and privacy provided by Unbound. The
        intuitive UI makes deployment and ongoing management
        straightforward, providing a comprehensive VPN solution with
        added privacy features.
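         
        As a rough sketch of what the unified docker-compose packaging
        means in practice, bringing the stack up looks something like the
        following, run from a checkout of the repository (the "pihole"
        service name here is an assumption, not taken from the project;
        see its README for the exact steps):
         
            # bring up WireGuard, Pi-hole, Unbound and the UI as one stack
            docker compose up -d
         
            # list the running services
            docker compose ps
         
            # follow the logs of a single service, e.g. the Pi-hole one
            docker compose logs -f pihole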
        
       Author : byteknight
       Score  : 113 points
        Date   : 2023-10-27 22:23 UTC (1 day ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | josephcsible wrote:
       | I don't see a license.
        
         | byteknight wrote:
         | Added :)
        
           | josephcsible wrote:
           | You went with a proprietary one :(
        
             | byteknight wrote:
             | No - I had to inherit the licenses of the projects I used
             | within it :(
        
               | josephcsible wrote:
               | Ah, it is indeed wg-easy that's actually to blame.
        
       | josephcsible wrote:
       | This uses wg-easy, which isn't open source.
        
         | repelsteeltje wrote:
         | This wg-easy?
         | 
          | Definitely not an OSI-approved license, but it does look like
          | they made an attempt in the spirit of the GPL, no?
         | 
         | https://github.com/wg-easy/wg-easy/blob/master/LICENSE.md
         | 
         | > You may:
         | 
         | > - Use this software for yourself;
         | 
         | > - Use this software for a company;
         | 
         | > - Modify this software, as long as you:
         | 
         | > * Publish the changes on GitHub as an open-source & linked
         | fork;
         | 
         | > * Don't remove any links to the original project or donation
         | pages;
         | 
         | > You may not:
         | 
         | > - Use this software in a commercial product without a license
         | from the original author;
        
           | byteknight wrote:
            | This is accurate. I just recently added the GUI from wg-easy
            | as a revival of the project. If you want a fully open-source
            | version you can go back a couple of commits to before I
            | added the GUI.
        
           | josephcsible wrote:
           | Either there's a giant loophole in that license or it
           | prevents you from modifying wg-easy at all. In particular,
           | the prohibition on commercial use is clearly not open source,
           | so the only way you could comply with the requirement to
           | publish your changes in an open-source fork would be for your
           | fork to have a different license. If that is allowed, then
           | the giant loophole is that you could pick MIT, and then the
           | rest of the world could use your fork and ignore the
           | original's license. If that's not allowed, then there's no
           | way for you to comply with that requirement and so you can't
           | modify wg-easy at all.
        
             | byteknight wrote:
              | I think you're misunderstanding how licenses work. Since
              | WireHole is a conglomerate of a multitude of projects, I
              | am required to adopt the most restrictive of their
              | licenses.
              | 
              | I believe you're also thoroughly misunderstanding the
              | license terms that are present. The license says that you
              | can use it in a commercial setting; in a commercial
              | environment you just cannot resell the product.
              | 
              | This means that an enterprise can openly use it within
              | their organization; they just cannot sell it as a service
              | that they offer.
              | 
              | While this is not the license that I would have chosen for
              | a greenfield project, at the moment I am at the mercy of
              | the licenses in place for the projects that I am using.
              | Once I replace the UI with a proprietary one everything
              | will be fully open source the way it's intended.
        
               | josephcsible wrote:
               | Sorry, everywhere I said "this" there I meant wg-easy,
               | not WireHole. I just fixed it to clarify that.
               | 
               | > Once I replace the UI with a proprietary one everything
               | will be fully open source the way it's intended
               | 
               | Huh? Proprietary is basically the opposite of open
               | source.
        
               | ranguna wrote:
               | I'm guessing they meant "in-house".
        
               | byteknight wrote:
                | Apologies for the semantics. By proprietary I mean that
                | I will develop a new UI and have full rights to do with
                | the project what I choose, and what I choose is to fully
                | open source it.
        
               | mlfreeman wrote:
               | I would suggest replacing "proprietary" with "in-house"
               | then.
        
               | byteknight wrote:
               | Suggest as you wish. It's purely semantic and I've since
               | clarified :)
        
           | AshamedCaptain wrote:
           | "Spirit of the GPL" not really, and the terms you quoted
           | already make it incompatible with the GPL itself. Pretty
            | draconian if you ask me (GitHub???).
        
             | repelsteeltje wrote:
             | Draconian, perhaps. Or just clumsy.
             | 
              | I learned not to attribute to malice what can be
              | attributed to incompetence.
        
         | uneekname wrote:
         | oof, I've been using wg-easy and didn't realize the weird
         | license situation. I like it but the image doesn't get updated
         | as often as I'd like. I've been meaning to either build out an
         | alternative or at least rebuild wg-easy with the latest
         | packages
        
           | byteknight wrote:
           | My plan is to replace the UI with a fully open-source
           | version. This is part of the early revival.
        
             | uneekname wrote:
             | Awesome, let me know if/how I can help!
        
               | byteknight wrote:
               | Thanks!
        
         | deelowe wrote:
         | Huh? Yes it is.
        
           | byteknight wrote:
            | I believe OP is referring to OSI-approved licenses as being
            | open source. Wg-easy uses a simple but proprietary license.
        
       | amar0c wrote:
        | Does everything really need to be Docker these days? Especially
        | "network stuff". I mean, it really makes me want to go and grow
        | potatoes instead of doing any "IT".
        
         | toomuchtodo wrote:
         | It makes life so much easier. Time is non renewable, and if you
         | want to pull a project apart for whatever reason, you still
         | can.
         | 
         | "docker pull", deploy, and one can move on to the next
         | whatever. You can deploy this to a Synology NAS, a Raspberry
         | Pi, or Heroku with a few clicks (or even an appropriately
         | configured router that supports containers if you're not
         | running something providing this functionality natively).
         | 
         | (DevOps/infra monkey before moving to infosec, embrace the
         | container concept)
        
           | NexRebular wrote:
           | > It makes life so much easier.
           | 
           | If running an OS that supports docker...
        
             | byteknight wrote:
              | If you're running an OS that doesn't support Docker you
              | have a very esoteric use case.
        
           | xorcist wrote:
           | Let's not overstate things here. It may well look like
           | "docker pull", deploy, nothing, ok, how do I configure this
           | thing, oh goodie here's the uncommented yaml, deploy again,
           | strange error, headscratch, oh it's dependent on using the
           | .68.x network which I've already used elsewhere, let's rename
           | those docker networks, deploy again, what?, oh it must have
           | initialized a temporary password to the database when it
           | didn't come up, let's wipe it all clean and pull again
           | because I have no idea what kind of state is in those
           | persistent volumes, deploy, rats! forgot the network
            | renumbering, wipe clean, configure again, deploy again, yay!
           | 
           | Provided you already turned off everything that can interfere
           | with this stuff, including IPv6, any security like SELinux,
           | grsecurity and friends, and you let it administer your
           | netfilter firewall for you. Don't forget to check if you
           | accidentally exposed some redis instance to the public
           | Internet.
           | 
            | (And yes, I have embraced the concept and work daily with
            | similar things, albeit at a larger scale. Let's just not kid
            | ourselves that it's easier than it is, though. Just because
            | an out-of-the-box deploy goes sideways doesn't mean you are
            | dumb.)
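            | 
            | For what it's worth, the .68.x collision part can usually
            | be avoided by pinning the stack to an explicit subnet
            | instead of letting the daemon pick one; the network name
            | and range below are made up:
            | 
            |     # create a network on a range that doesn't clash with
            |     # anything already in use, then attach the stack to it
            |     docker network create --subnet 172.30.0.0/24 wirehole_net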
        
             | byteknight wrote:
             | To be fair none of those operations require a re-pull; not
             | a single one.
        
               | xorcist wrote:
               | That's the spirit!
        
               | byteknight wrote:
                | Not sure of the intention, but I still don't see how
                | debugging config in Docker is inherently different from
                | doing it natively.
        
             | RussianCow wrote:
             | Almost none of what you just mentioned has anything to do
             | with Docker, and you can easily have that much trouble just
             | running a binary. (In fact, I've found that many projects
             | have better documentation for their Docker image than for
             | running it natively.) Yes, there are some Docker-specific
             | things you sometimes have to debug (especially with
             | networking), but I've had far more trouble getting software
             | running natively on my machine due to mismatches in local
             | configuration, installed library versions, directory
             | conventions, etc vs what's expected. It's also far easier
             | to blow away all the containers and volumes and start over
             | with Docker; no need to hunt down that config file in an
             | obscure place that's still messing with the deployment.
        
           | vincentkriek wrote:
           | To add to this, for me it's not specifically about the ease
            | of setup, which isn't that much easier (although it's nice that
           | it's standardized). It's more about the teardown if it's not
           | something for you. Services can leave a lot of residuals in
           | the system, files in different places, unwanted dependencies,
           | changes in system configuration. Removing a docker container
           | is very clean, with the remaining stuff easily identifiable.
           | 
            | Makes trying new stuff way less troublesome.
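            | 
            | A full teardown is then roughly the following (assuming the
            | stack was started with compose; flags from memory):
            | 
            |     # stop the stack and delete its containers, networks
            |     # and named volumes
            |     docker compose down --volumes
            | 
            |     # optionally reclaim leftover images and build cache
            |     docker system prune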
        
           | magicalhippo wrote:
           | I upgraded my PiHole running on an Allwinner H3 SBC last
            | year. It wouldn't start; turned out some indirect dependency
           | wasn't compiled for the ARMv7 platform.
           | 
           | No worries, just specify the previous version in my launch
           | script, literally changing a couple of digits, and I'm back
           | up and running in seconds.
           | 
           | I'm sure I could get it done using apt, but it was literally
           | changing some numbers in a script and rerunning it.
           | 
           | As someone who just wants things to work, Docker has made
           | things significantly better.
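            | 
            | In other words the "fix" was just changing the pinned tag
            | in the launch script and rerunning it (the tag below is
            | illustrative, not the actual version involved):
            | 
            |     # remove the broken container, pin the previous image
            |     # tag, and start it again
            |     docker rm -f pihole
            |     docker run -d --name pihole -p 53:53/udp -p 80:80/tcp \
            |       pihole/pihole:2023.04.0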
        
         | byteknight wrote:
          | Can I ask why ease of deployment makes you want to turn away
          | from IT? The speed of deployment can't be beat.
          | 
          | Earnestly interested in your take.
        
           | amar0c wrote:
            | Can you easily debug stuff? Can you tail -f /var/fing/log
            | and see why X or Y does not work (without introducing
            | another container/whatever just for this)? I know I am in
            | the minority.. but the whole concept of "this runs X and
            | this runs Y but storage/data is over there, having nothing
            | to do with either X or Y" is F'd up.
            | 
            | Yeah, you can easily pull and run things, but you have no
            | idea how or what it does, and when things break the whole
            | idea is to pull it again and run.
            | 
            | I have nothing against containers.. real system ones (LXC,
            | for example).
        
             | byteknight wrote:
             | It seems there's a bit of a misunderstanding about how
             | containers work. Firstly, debugging in containers is not
             | inherently more difficult than on a traditional system. You
             | can indeed `tail -f /var/log/...` within a container just
             | as you would on the host system. Tools like Docker provide
             | commands like `docker exec` to run commands within a
             | running container, making debugging straightforward.
             | 
             | The concept of separating runtime (X or Y) from data
             | storage is not unique to containers; it's a best practice
             | in software design called separation of concerns. This
             | separation makes applications more modular, easier to
             | scale, and allows for better resource optimization.
             | 
             | The "pull it again and run" mentality is a simplification.
             | While containers do promote immutability, where if
             | something goes wrong you can restart from a known good
             | state, it's not the only way to troubleshoot issues. The
             | idea is to have a consistent environment, but it doesn't
             | prevent you from debugging or understanding the internals.
             | 
             | Lastly, while there are differences between application
             | containers (like Docker) and system containers (like LXC),
             | they both leverage Linux kernel features to provide
             | isolation. It's more about the use case and preference than
             | one being "real" and the other not.
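              | 
              | For example (the container name and log path below are
              | placeholders, not specific to this project):
              | 
              |     # open a shell inside a running container
              |     docker exec -it pihole /bin/bash
              | 
              |     # tail a file inside it without attaching a shell
              |     docker exec pihole tail -f /var/log/pihole.log
              | 
              |     # and the process's own stdout/stderr
              |     docker logs -f pihole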
        
               | tryauuum wrote:
                | I'm not the original poster, but _with default config_
                | logs are worse with Docker. Running `docker exec` to
                | check /var/log in a container is pointless; the
                | application writes to stdout. So you do `docker logs`.
                | 
                | And by default logs are stored in JSON format in a
                | single file per container, so grepping `docker logs`
                | feels slower than grepping a file. And the option to
                | read logs for the last n hours is incredibly slow -- I
                | think it reads the file from the beginning until it
                | reaches the desired timestamp.
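                | 
                | i.e. with the default json-file driver you end up with
                | something like this (paths and flags from memory):
                | 
                |     # stdout/stderr of the container, last two hours
                |     docker logs --since 2h <container>
                | 
                |     # the raw per-container file the daemon writes
                |     sudo ls /var/lib/docker/containers/<id>/
                |     # -> <id>-json.log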
        
             | tryauuum wrote:
              | You can tail -f the container logs, which are in
              | /var/lib/docker I think.
              | 
              | I recently came across a talk about running OpenStack in
              | Kubernetes, which sounded like a crazy idea: OpenStack
              | needs to do all kinds of things not allowed by default
              | for containers, e.g. create network interfaces and insert
              | kernel modules. But people still did it, and one of the
              | reasons was that it's easier to find someone with k8s
              | experience than with OpenStack experience. They also
              | liked the self-healing properties of k8s.
              | 
              | I don't know what the bottom line is.
        
             | more_corn wrote:
              |     docker logs -f containername
              |     docker exec -it containername /bin/sh
              | 
              | I'm by no means a docker evangelist, but it does work and
              | it simplifies deployment and management quite a bit.
        
           | ris wrote:
            | > The speed of deployment can't be beat.
           | 
           | The sound of someone who hasn't used Nix.
        
             | byteknight wrote:
             | You'd be correct.
        
             | RussianCow wrote:
              | What Nix provides in reproducibility and ease of
              | deployment, it certainly cancels out with poor
              | documentation and opaque error messages. I've been trying
             | to learn it for the past few weeks in my spare time for a
             | personal project, and I still struggle with basic things. I
             | love the idea but they really need to invest in better
             | docs, tutorials, and error messages.
        
           | dotnet00 wrote:
           | My personal biggest peeve is how Docker still doesn't play
           | well with a VPN running on the host. It's incredibly annoying
           | and an issue I frequently run into on my home setup.
           | 
            | It's crazy to me that people push it so much given this
            | issue; aren't VPNs even more common in corporate settings,
            | especially with remote work nowadays?
            | 
            | I find it easier to just spin up a full VM than deal with
            | Docker's sensitivities, and it feels a bit ridiculous to run
            | a VM and then set up Docker within it instead of just having
            | appropriate VM images.
        
             | byteknight wrote:
              | I think that has more to do with not understanding routing
              | and firewalls. VPNs usually have something called a kill
              | switch that force-tunnels all traffic to avoid leaks.
              | 
              | While I can see it does at times make certain things more
              | difficult, with the proper permissions, know-how and setup
              | there is nothing it cannot do.
        
               | dotnet00 wrote:
                | So we're back to where we started: just tinker "a
                | little" with the setup to try to make it work, exactly
                | the issue Docker claimed to be aimed at solving.
                | 
                | I tried running a Docker-based setup for a year on my
                | home server, thinking that using it for some time would
                | help me get over my instinctive revulsion towards
                | software that makes Docker the only way to use it, the
                | way that forcing myself to use Python had helped me get
                | over my disdain for it back during the early days of
                | the transition from 2 to 3. It didn't help at all; it
                | was still a pita to rely on. Went back to proper
                | installs, couldn't be happier.
        
               | byteknight wrote:
                | How is that any different from any other software?
                | Configuration and trial and error is the name of the
                | game no matter your stack...
        
         | notatoad wrote:
         | no, not everything has to be docker. for example, none of
         | wireguard, pihole, or unbound have to be docker. you are
         | welcome to install all those things yourself.
         | 
         | but the whole project here is to wrap up a bunch of other
         | projects in a way that makes them easy to install and configure
         | with minimal fuss. docker is perfect for that. if you want to
         | be fussy and complain about the tools other people choose, then
          | projects like this probably aren't of much interest to you.
        
         | api wrote:
         | If the Linux ecosystem could get its act together, standardize,
         | and consolidate all the totally needless and pointless
          | distribution fragmentation, we could challenge this.
         | 
         | Docker took off because there is no Linux. There are 50
         | different slightly incompatible OSes. So the best way to
         | distribute software is to basically tar up the entire
         | filesystem and distribute that. Dependency management has
         | failed because there's just too much sprawl.
         | 
         | One illustrative example: OpenSSL has divergent naming and
         | versioning schemes across different versions of distributions
         | that use the same Debian package manager. So you either build
         | your packages at least four or five times, Dockerize, or
          | statically link OpenSSL. That's just for dpkg-based distros
         | too! Then there is RPM, APK, and several others I can't recall
         | right now.
         | 
          | BTW Windows has a bit of the same disease and, being from one
          | company, has a lot less of an excuse. OS standardization and
         | dependency standardization is very hard to get right,
         | especially at scale.
         | 
         | Apple macOS is the only OS you can ship software for without
         | statically linking or bundling everything and be reasonably
         | sure it will work... as long as you are not going back more
         | than two or three versions.
        
           | amar0c wrote:
            | I have a feeling the whole Docker (or application container)
            | thing took off when "non-Linux people" (read: developers)
            | tried to be sysadmins too and failed.
            | 
            | The best thing since sliced bread is apps/software packed
            | into a single Go binary. It runs everywhere; you only need
            | to rsync/scp it to a million other places and it "acts"
            | (usually) like a normal Linux program/daemon.
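            | 
            | e.g. cross-compiling and shipping one is just the following
            | (the architecture, host and paths are made up):
            | 
            |     # build a self-contained linux/arm64 binary
            |     CGO_ENABLED=0 GOOS=linux GOARCH=arm64 go build -o myapp .
            | 
            |     # copy it over and run it like any other daemon
            |     scp myapp admin@my-server:/usr/local/bin/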
        
             | api wrote:
              | That's true, but IMHO that's an indictment of Linux, not
             | them. It's 2023 and there is no reason system
             | administration should be this hard unless you are doing
             | very unusual things.
             | 
             | The Go approach is just static linking. Rust often does the
             | same though it's not always the default like in Go, and you
             | can do the same with C and C++ for all but libc with a bit
             | of makefile hacking.
             | 
             | Statically linking the world is the alternative approach to
             | containers.
        
               | djbusby wrote:
                | One problem with SysAdmin stuff is that, like crypto,
                | we keep telling folks it's too hard and to just
                | outsource. While I think "don't roll your own crypto"
                | makes sense, we've done a disservice to the trade by
                | discouraging self-hosting and other ways to practice
                | the craft. Don't run your own infra, use AWS. Don't
                | host your own email, it's too hard, just use a
                | provider. Etc. Then a decade later... hey, how come
                | nobody is good at SysAdmin?
        
               | intelVISA wrote:
               | Most of the "don't do X it's too hard" is just $corp who
               | wants to sell their preferred solution trying to convince
               | you to buy their SaaS equivalent of a Bash script.
        
           | biorach wrote:
           | > Docker took off because there is no Linux. There are 50
           | different slightly incompatible OSes. So the best way to
           | distribute software is to basically tar up the entire
           | filesystem and distribute that. Dependency management has
           | failed because there's just too much sprawl.
           | 
           | That's not an accurate description of the main motivation for
           | Docker. It's a nice secondary benefit, sure.
        
             | api wrote:
             | What is it then? It's not a good security isolation tool.
             | It's not great at resource use isolation. Containers are
             | bulkier than packages.
        
               | xorcist wrote:
               | It used to be completely free hosting, that's one thing
               | that was great about it. Same thing made Sourceforge so
               | completely dominant that it took many years for projects
               | to move off it even after more suitable alternatives were
               | made available.
               | 
               | But the main use case was probably convenience. It's a
               | very quick way for Mac and Windows users to get a small
               | Linux VM up and running, and utilize the copious amount
               | of software written for it.
               | 
               | These days it's mostly standard, for better or worse.
                | There are a handful of vendor-independent ways to distribute
               | software but this works with most cloud vendors. Is it
               | good? Probably not, but few industry standards are.
        
             | byteknight wrote:
              | Not to be contradictory, but my understanding was that
              | this absolutely is the main motivation.
              | 
              | It was to solve the age-old "it runs on my machine".
             | 
             | Open to being wrong but when docker hit the scene I
             | remember that being touted left and right.
        
           | xorcist wrote:
            | There are several issues here which tend to get mixed up a
            | lot.
           | 
           | Yes, a dpkg is built for a distribution, and not only that
           | but a specific version of a distribution. So they tend to get
           | re-built a lot. But this is something buildhosts do. What you
           | upload is the package source.
           | 
           | If you want to distribute a package to work on "Linux" in
           | general, then you can't build it for a specific distribution.
           | Then you bundle all the shared libraries and other
           | dependencies. (Or make a static build, but for various
           | reasons this is less common.) Do not try to rely on the
           | naming scheme of openssl, or anything else really. This is
           | what most games do, and the firefox tarball, and most other
            | commercial software for Linux.
           | 
           | There are of course downsides to this. You have to build a
           | new package if your openssl has a security issue, for
           | example. But that's how most software is distributed on most
           | other operating systems, including Windows. This is also how
           | Docker images are built.
           | 
           | The alternative is to build packages for a specific
           | distribution and release, and as stated above, that takes a
           | bit of integration work.
           | 
           | There are issues with both alternatives, but they should not
           | be confused.
        
           | yjftsjthsd-h wrote:
           | > If the Linux ecosystem could get its act together,
           | standardize, and consolidate all the totally needless and
            | pointless distribution fragmentation, we could challenge this.
           | 
           | Maybe, but that will never happen because the ecosystem got
           | here by being open enough that people could be dissatisfied
           | with existing stuff and make their own thing, and to a
            | remarkable degree things _are_ intercompatible. It's always
           | been like this; just because there are 20 people working on
           | distro A and 20 people working on distro B doesn't mean
           | combining them would get 40 people working on distro AB. (In
           | practice, attempting it would probably result in the creation
           | of distros C-F as dissidents forked off.)
           | 
           | > Docker took off because there is no Linux. There are 50
           | different slightly incompatible OSes. So the best way to
           | distribute software is to basically tar up the entire
           | filesystem and distribute that. Dependency management has
           | failed because there's just too much sprawl.
           | 
           | I think I agree with you; part of the problem is that people
            | treat "Linux" as an OS, when it's a _piece_ that's used by
           | many OSs that appear similar in some ways.
           | 
           | > Apple macOS is the only OS you can ship software for
           | without statically linking or bundling everything and be
           | reasonably sure it will work... as long as you are not going
           | back more than two or three versions.
           | 
           | ...but then by the same exact logic as the previous point, I
           | think this falls apart; macOS isn't the only OS you can
           | target as a stable system. In fact, I would argue that there
           | are a _lot_ of OSs where you can target version N and have
           | your software work on N+1, N+2, and likely even more extreme
            | removes. Last I looked, for example, Google's GCP SDK
           | shipped a .deb that was built against Ubuntu 16.04
           | specifically because that let them build a single package
           | that worked on everything from that version forward. I have
           | personally transplanted programs from RHEL 5 to (CentOS) 7
           | and they just worked. Within a single OS, this is perfectly
           | doable.
        
         | soneil wrote:
         | It seems the canned deployment is the entire value-add here.
         | It's existing components that you can already deploy yourself
         | if you prefer.
         | 
         | I much prefer this over the old method of canned deployment
         | where you ran a script and prayed it didn't hose the host too
         | badly.
        
           | byteknight wrote:
           | You have absolutely hit the nail on the head.
           | 
           | My view is this:
           | 
            | There is a myriad of amazing tooling out there that the
            | everyday person could greatly benefit from in their day-to-
            | day life. A lot of it has a very high technical barrier to
            | entry. By simplifying this setup down to a simple Docker
            | compose file, I believe I have allowed the layperson to
            | play and experiment in the freedom of their own home with
            | technology they may otherwise only have been eyeing.
        
             | babyeater9000 wrote:
             | I completely agree and want to add that the readme file
             | does a good job of letting me know what this thing is and
             | why I should use it. I really appreciate when developers
             | take the time to be inclusive by writing for a less
             | technical audience. I will at least try it out and see what
             | it is all about. I have been looking to add more services
             | to my pihole.
        
       | A_No_Name_Mouse wrote:
       | > Navigate to http://{YOUR_SERVER_IP}:51821. Log in using the
       | admin password
       | 
       | Over http? Pretty YOLO...
        
         | 0x073 wrote:
          | I think it's mostly for an intranet setup. Most routers still
          | use http for their management UI, as it's complicated to set
          | up a working certificate, especially with only an IP.
        
           | A_No_Name_Mouse wrote:
            | You might be right. There's a link for deployment to Oracle
            | Cloud, but that seems to use a different way to log in.
        
             | byteknight wrote:
             | I should've stipulated more clearly and will do. Thank you.
        
         | Alifatisk wrote:
         | http for local networks should be fine, right?
        
           | thekashifmalik wrote:
           | It's okay but not ideal.
           | 
            | Without HTTPS, anyone connected to the WiFi can snoop on
            | the traffic.
           | 
           | Unfortunately my router, switches, AP and NAS don't support
           | HTTPS either :'(
        
             | Alifatisk wrote:
             | But if you think people are snooping on your network then
             | you've got a larger issue.
             | 
              | Of course, good security practice is never bad, and using
              | https whenever you can is always good.
        
               | getcrunk wrote:
               | You should always assume someone is snooping on your
               | network.
        
       | tristanb wrote:
        | Does this have any mDNS reflection?
        
         | ace2358 wrote:
          | Is that what is required so I can do my server.local and have
          | it work? I've struggled a lot with .local stuff with various
          | routers and port openings etc. I know that .local isn't a
          | standard or something and I'm meant to use something else.
          | I've never known what to google to fix it, though.
        
           | JamesSwift wrote:
            | .local is a standard. It's a part of mDNS (multicast DNS).
            | Don't use it for your own DNS records.
           | 
           | I'm not sure what exact issue you are having, but if you are
           | trying to resolve mDNS .local across internal networks then
           | you need to look up mDNS reflection. If you are trying to use
           | .local for your own DNS records then pick something else
           | (ideally using an actual registered TLD, so e.g. if you own
           | foo.com then you could use lan.foo.com for your internal
           | records).
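            | 
            | With a Pi-hole/Unbound setup like this one, that can be a
            | couple of local-data entries in Unbound; a sketch (the
            | domain, addresses and config path are examples only):
            | 
            |     cat <<'EOF' | sudo tee /etc/unbound/unbound.conf.d/lan.conf
            |     server:
            |         local-zone: "lan.foo.com." static
            |         local-data: "nas.lan.foo.com. IN A 192.168.1.10"
            |         local-data: "printer.lan.foo.com. IN A 192.168.1.11"
            |     EOF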
        
       | sthlmb wrote:
       | Ooh, this is definitely something to play around with tomorrow. A
       | split-tunnel on my phone would be nice!
        
         | byteknight wrote:
         | Yup! Now we're thinking alike. Split only DNS and bingo, native
         | ad blocking.
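          | 
          | Roughly, the client config ends up shaped like this (the
          | keys, addresses and Pi-hole IP below are placeholders):
          | 
          |     # split-tunnel client config: only DNS goes through the
          |     # tunnel, everything else stays local
          |     cat <<'EOF' > wg-dns-only.conf
          |     [Interface]
          |     PrivateKey = <client-private-key>
          |     Address = 10.6.0.2/32
          |     DNS = 10.2.0.100
          | 
          |     [Peer]
          |     PublicKey = <server-public-key>
          |     Endpoint = vpn.example.com:51820
          |     # route only the Pi-hole address, not 0.0.0.0/0
          |     AllowedIPs = 10.2.0.100/32
          |     EOF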
        
       | ThinkBeat wrote:
       | >* Publish the changes on GitHub as an open-source & linked fork;
       | 
        | Great, an open-source license that mandates the use of a
       | proprietary Microsoft product.
        
         | j45 wrote:
         | Doesn't seem exclusive, and could be posted elsewhere in
         | addition.
         | 
         | It might not be ideal or my choice but the alternative of no
         | choice at all would probably be more concerning.
        
           | byteknight wrote:
           | This is true and only true while the project uses wg-easy.
           | Once the new UI is done it will no longer be required.
        
             | j45 wrote:
             | Oh that's a great clarification, thanks!
        
       ___________________________________________________________________
       (page generated 2023-10-28 23:00 UTC)