Been using tinc for god knows how long... Perhaps 20 years? It's been fantastic. I really don't know why people wax lyrical about wireguard. tinc was doing it 20 years ago. UDP? Yeah. Encryption, mesh, proxy ARP, you name it. I've had countless installs and it's been the best VPN.
You even get a 'dot' graph of your current network status if you want to. When 'git' was invented, I put my /etc/tinc/*/ into git with the public keys, and installing a new host to the mesh is one 'git clone' away.
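The whole onboarding flow is roughly this (net name, repo URL, and service name are just examples from memory, so treat it as a sketch):

    # clone the shared config (hosts/ contains everyone's public keys)
    git clone git@example.com:tinc-config.git /etc/tinc
    # generate this host's key pair; the public key lands in hosts/<name>
    tincd -n mynet -K
    # share the new host file with the rest of the mesh
    cd /etc/tinc && git add mynet/hosts/newhost && git commit -m "add newhost" && git push
    # start it (or however your distro launches tincd)
    systemctl start tinc@mynet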
I used and still really adore tinc, it's been a joy to use.
I will say however that development appears to have fizzled out. I was really looking forward to some of the changes in 1.1, but it's been in pre-release for years now and doesn't seem to be any closer to being officially released.
I've largely replaced tinc with Nebula [0], but I still think fondly of it.
I am kind of curious if Tinc could leverage Wireguard for the encryption. Everything else is in user space, so I think it could just be wg vs. tun as a config option, with all the dynamic mesh routing done on top of wg, but it would be nice if the maintainer could chime in.
I (not a tinc maintainer, but a tinc contributor) have replied to this in the past [0]. Summary: it's not really possible without giving up some flexibility.
Tailscale gets pretty good performance using Wireguard in userspace, with some very clever use of the tun/tap device like supporting TCP segmentation offload. Does (or could) tinc use the same tricks?
Makes sense. Not sure how I missed your comment in the past, as I've always been curious if it could be done. Thank you for answering that. I'm fine with Tinc's performance for my use cases.
So what exactly does mesh VPN mean in this context? Because my need is to ensure that three portable devices (2 laptops, 1 android phone) always have access to the same private LAN and any services hosted by the devices on that LAN.
Today I use a hub-and-spoke design with wireguard to achieve this, because I never know where these devices will be so I can't guarantee that I can forward ports to them through a firewall.
Does tinc solve that port forwarding problem? I've read the intro in the manual but it only says tinc is a regular VPN.
From the features list: “Regardless of how you set up the tinc daemons to connect to each other, VPN traffic is always (if possible) sent directly to the destination, without going through intermediate hops.” and “As long as one node in the VPN allows incoming connections on a public IP address (even if it is a dynamic IP address), tinc will be able to do NAT traversal, allowing direct communication between peers.”
So yes, it gets around that problem if you have at least one publicly connectable node. How it does this if that node doesn't have a fixed address I've not looked into. Also, there are no doubt NAT arrangements that it can't punch through, meaning it'll have to fall back to something akin to your hub model, with nodes affected by such NAT talking to each other through another node.
Yes. I use it in production for that exact purpose. It only requires that one node is reachable via public IP address.
An advantage that tinc has over wireguard is that it can do local discovery and direct communication if your devices happen to be in the same local network.
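If I remember right, that's a one-line option in tinc.conf (sketch; net name is a placeholder):

    # /etc/tinc/mynet/tinc.conf
    LocalDiscovery = yes   # probe for peers on the local LAN and talk to them directly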
So it's no different from the setup I have today with a hub VPS then. Since my portable devices are always on the move, I must have a VPS to remain publicly accessible. Just like tinc.
"Mesh VPN" means that if two devices want to communicate, it will attempt to establish a direct VPN connection between them. E.g. if you have a central server X and 3 "mobile" devices A, B and C, it's enough that ABC can talk to X, they will automatically learn about each other and when e.g. A wants to talk to B it will attempt to establish a direct tunnel between them, coordinating through X, including attempting to punch holes through NAT if needed. If that fails, it falls back to sending traffic through X. (And X could be more than one device too)
HN has extreme recency bias. I've been on here forever and things definitely get reinvented over and over again, and if it wasn't built in the last six months it's not cool.
Lisp is old enough that only wizards still use it, basically. Cool kids talk about lisp but actually host react on kubernetes and think it’s a good thing. We poor folks in between just chug along with our bare metal esxis and a master-slave replicated Postgres because it gets the job done, not despite it.
Tinc is incredible, it has worked flawlessly for me for 6+ years with exactly 0 maintenance.
As trustworthy as it is, I am sadly on the hunt to replace it. Compared to wireguard, the throughput ain't great, and it takes way too much CPU on my low power nodes. I would pay good money for "tinc, but with wireguard transport" -- there's of course projects purporting to do this but I haven't found one I trust yet.
There's another dead comment saying the same thing, but take a look at Nebula. I set it up over a year ago and haven't really thought about it much since - it just works. The open source version doesn't have any fancy GUIs or anything but it's not very hard to deploy. Covers every OS that you'd probably care about too.
I haven't contributed to tinc in a while and I haven't contributed a transport mechanism, but I do know it is modular and supports more than TUN/TAP (for example PPP) -- so knowing this and having worked with the code-base in general I would be surprised if adding wireguard as a transport was more than a weekend project to get something working (with the drawbacks I mentioned here [0]).
WebVM runs x86 binaries in WASM on any browser w/ ("[CheerpX:] an x86-to-WebAssembly JIT compiler, a virtual block-based file system, and a Linux syscall emulator") and for external sockets there's Tailscale networking. https://webvm.io/
IIUC that means an SSH (and/or MoSH Mobile Shell) client in a WASM WebVM in a browser tab could connect to a (tailscale (wg)) VPN mesh? (And JupyterLite+WebVM could ssh over an in-browser VPN mesh)
Yes, with the wireguard implementation being very deeply intertwined with the rest of the VPN implementation, sometimes resulting in higher speeds than the in-kernel wireguard implementation.
I recently switched from Tinc to Wireguard (4 machines) due to simpler configuration and better support for road warriors. Transition was quite painless.
When you say 4 machines, do you mean a mesh between 4 machines? And it's not hub-and-spoke? I'm looking for a solution like that but because my portable devices can be anywhere in the world I had to use a hub-and-spoke setup where there is one central VPS that they can all connect through.
You can do a full non-hub mesh with Wireguard if 1) you can find a NAT hole punching method that works (you usually can), and 2) you have some means of passing peer information between them, which also means you need a way to discover your external IP and port. If you don't have a reliable way of getting the external IP and port for all of them, but one of them supports port forwarding, a basic dynamic DNS provider to get that one's dynamic external IP is enough: you can then learn the rest by hitting the first one.
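Once a peer's current external address is known (however you passed it around), updating the tunnel is a one-liner; something like this, with the key and endpoint obviously placeholders:

    # on peer A: point the existing tunnel at B's freshly discovered endpoint
    wg set wg0 peer "$B_PUBKEY" endpoint 203.0.113.7:51820 \
        allowed-ips 10.0.0.2/32 persistent-keepalive 25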
Note that "some means" of exchanging data here really is any way of communicating at all. Post an encoded string to a Mastodon server? Send you an e-mail that's automatically picked up?
Also, 3) if you have the energy to write and maintain some stateful thingy that manages this dynamic peer information you need to pass around. And while doable, a hack in bash won't cut it if you want reliability and the occasional introspection when things go wrong.
You could try https://nordvpn.com/meshnet/ - it's wireguard, cross platform and meshnet handles everything automatically for you. Also meshnet is free so if you don't want to use vpn you won't have to pay anything.
A hack in whatever language works just fine, and depending on your setup you may not need any hacks at all - e.g. if you have dynamic DNS and port forwarding set up for one of your peers. It's not a beginner option, but it's an option that is simple for most common setups if you know what you're doing.
It's full mesh with 3 fixed servers and one machine with a dynamic IP. Just configured all peerings in WG instead of Tinc. I don't need Tinc's mesh routing, so WG is sufficient for me.
You should give nebula a try. I've recently switched my private VPN setup from wireguard to nebula and am looking into using it for work. It has some really nice features (for our use case), so ymmv. But so far it's been fantastic and very easy to use.
That's exactly why I ended up not using tinc, in spite of really liking everything else about it: it might well be fine, but there seemed to be some doubt about the crypto. AFAIK it's never been properly audited, and I'm not personally qualified to assess its security properties. In contrast, wireguard is harder to set up and use, and has fewer features (intentionally), but it was specifically designed to use the best crypto primitives available and it's had a lot more eyes on it.
Yes. The version 1.0 protocol is bad. [1] Don't use it.
Every time tinc comes up I'm annoyed that 1.1 (which supports an improved protocol iirc and according to the doc you linked; it also has some quality-of-life improvements I put work into) hasn't been officially released, 15 years later.
One of the nicest things about tinc is how little attention it needs. It starts on boot, and no matter if the connection between two points drops, or one end gets a new address, or connects via IPv6 instead of IPv4, or restarts, the connection just always comes back up magically, without any futzing. There are many other tunneling methods that don't do this.
I used to provide a tunnel using tinc via a MIPS-based Cobalt RaQ. Throughput was surprisingly good, even on an old 250 MHz CPU, so even though I hear people talking about needing something faster, I can't imagine other tunneling methods being measurably faster, unless they're using weaker encryption. I'd benchmark it some time, but the slowest NanoPis that I use for tunnels these days can push many times more traffic through tinc than their Internet connections will allow. I'd be curious to see anyone else's comparisons, though.
Tinc can work at L2, which means it works like a switch: it can act like a cable between any nodes. It doesn't need an IP, and you can put it in a bridge. There is no known good replacement for this. (A minimal sketch of the switch-mode config follows the list of downsides below.)
The downsides are:
- single threaded (perf hits its limits around 10GbE)
- userspace (wg can work in-kernel)
- 1.1 is stable enough, but it can still crash, so be careful
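Here's the switch-mode sketch mentioned above (net, node, and bridge names are placeholders; written from memory, so untested):

    # /etc/tinc/mynet/tinc.conf
    Name = nodeA
    Mode = switch                 # forward Ethernet frames instead of routing IP

    # /etc/tinc/mynet/tinc-up
    #!/bin/sh
    ip link set "$INTERFACE" up
    ip link set "$INTERFACE" master br0   # enslave the tap device into an existing bridge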
I love using this to solve problems between bespoke network appliances. "Oh they need to be on the same layer 2, no prob *spins up tinc VMs." How can something so wrong feel so right!
Personally, I've been building my mesh network up over Yggdrasil[1]. A router can even hand out Ygg IPs to, resolve traffic for, and firewall off naive IoT devices (necessary if you route through the public mesh, which isn't the only way to set things up).
Hopefully the network segregation[1] feature makes it in at some point:
> A shared network key, included in the tree announcements, will ensure that two nodes that peer with what should be segregated networks will ignore each other. This should make it much easier to operate private Yggdrasil meshes and not worry that they will end up accidentally peered to the public network.
The current code we have for Yggdrasil v0.5 allows creating isolated networks using TLS roots. You can optionally provision the node identity as a TLS certificate and key and specify a root certificate, so that Yggdrasil will reject peerings with other nodes unless the peer presents a certificate from the same root. It should make it significantly easier to manage fleets of nodes under the same root whilst preventing accidental peerings to other networks.
Is it possible to use Yggdrasil in a way that it never gets connected to the public mesh and without knowing the public IP addresses of all the nodes in your private mesh?
Specifically, I'm wondering if it's possible to prevent outside nodes from completely establishing a connection to your nodes and therefore cause your private traffic to route over the public mesh.
That is, assuming that your Yggdrasil traffic has to route over the public Internet and is not contained within a private network itself.
The next version will make it much simpler to deploy isolated networks by using TLS roots to prevent accidental peerings.
It's also worth noting that Yggdrasil doesn't have the equivalent of "peer exchange" — only directly connected peers would ever find out your public IP address. Yggdrasil will not form new peerings automatically, with the single exception being multicast-discovered nodes on the same LAN.
> The next version will make it much simpler to deploy isolated networks by using TLS roots to prevent accidental peerings.
Is that PR #1038 [1]? Any info on how to use that feature and whether it works over multicast as well?
I noticed this PR uses SHA-1 for matching fingerprints. SHA-1 has been broken for 18 years now. Is it possible to use something more secure?
> It's also worth noting that Yggdrasil doesn't have the equivalent of "peer exchange" — only directly connected peers would ever find out your public IP address. Yggdrasil will not form new peerings automatically, with the single exception being multicast-discovered nodes on the same LAN.
Right, my worry is that having a server with a public IPv4 address and Yggdrasil running on an open port (so that my other nodes can connect to it) will allow someone else to connect to it (either on purpose or accidentally) and cause my traffic to route over their node(s) and/or the public mesh.
one killer feature tinc has is a poor man's anycast
you can assign an ip to any number of nodes and tinc will talk to the one with the lowest latency. i've used this to run globally distributed dns on a tinc network
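iirc it's just a matter of declaring the same subnet in more than one node's hosts file (tinc's Subnet syntax also takes an optional #weight suffix); roughly, with placeholder names and addresses:

    # /etc/tinc/mynet/hosts/dns-eu
    Subnet = 10.0.0.53/32#10

    # /etc/tinc/mynet/hosts/dns-us
    Subnet = 10.0.0.53/32#10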
> you can assign an ip to any number of nodes and tinc will talk to the one with the lowest latency. i've used this to run globally distributed dns on a tinc network
Can you (or someone) explain more in depth how that works, or how one could replicate that?
I've been doing that with keepalived for redundant dns. Are you saying I could put the dns vip as a secondary IP (keeping the primary node IP) in the tinc host config?
Tinc is a perfect tool to make a VPN mesh across different clouds/hosters. Been using it for 5 years. It's so much easier to support compared with the ipsec madness.
Switched from openvpn to tinc after the openvpn certificate expired after 10 years (the default duration in the creation script) and I lost connection to my family computers, so I had to drive a few hundred kilometers.
Sounds very similar to how Syncthing works. I wonder if the Syncthing discovery and NAT traversal could be combined with wireguard and the ease of tailscale, but as a distributed mesh with no headscale. And all the other things that tinc does.
I’d love to understand how this compares to Zerotier, Wireguard, Tailscale, Nebula, …
I use Zerotier because simplicity, cost, and iOS support matters more for me than speed, but I’m curious about alternatives (WG seemed much easier for me to screw up)
Another huge Tinc fan here. Used it in prod for 5 or so years before switching to zerotier for easier management as we grew. Tinc is rock solid and dead easy to configure.
Another tool to look at is vpncloud (https://github.com/dswd/vpncloud). It also builds a mesh network over UDP. Key setup is a bit easier, static keys are only used for authentication. Encryption keys are dynamically generated and replaced on a schedule.
I combine it with an ansible script to push out the (minimal) configuration to end nodes.
P.S. It is a Rust program, I compile it as a static binary, so my ansible script can push the binary out to any Linux distribution (that is x86_64) and it will run.
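For reference, the static build is just the usual musl-target dance (sketch, from memory):

    rustup target add x86_64-unknown-linux-musl
    cargo build --release --target x86_64-unknown-linux-musl
    # the binary in target/x86_64-unknown-linux-musl/release/ is what ansible pushes out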
Well, you have the Headscale implementation if you want to go full free software.
But, for starters, Tinc is Layer 2 and TS/HS (WireGuard in reality) is Layer 3. TS/HS configuration is centrally managed, you don't have to touch anything on the clients. I think Tailscale hole punching works better, but also it might depend on your situation (DERP servers are nice, but maybe you can just add a Tinc node).
These are HS features that I think are not available in Tinc:
Split DNS
Node registration Single-Sign-On / Pre authenticated key
Taildrop (File Sharing)
Access control lists
MagicDNS
Routing advertising (including exit nodes)
Ephemeral nodes
Embedded DERP server
But maybe you don't need that, and you rather have Layer 2 and use broadcast messages. So it depends on your needs.
> You even get a 'dot' graph of your current network status if you want to. When 'git' was invented, I put my /etc/tinc/*/ into git with the public keys, and installing a new host to the mesh is one 'git clone' away.
Most underrated open source software ever.