End-to-end encrypted external access with Tailscale and nginx

So you’re a nerd, you’re hosting services at home, and you want to access them externally.

The traditional, independent, tried-and-true method is to poke a hole in your firewall or create a port-forwarding mapping, then either set up dynamic DNS to follow your home’s dynamic IP address(es) or pay your ISP for a static allocation.

This still works fine, but there are a few things to consider:

  • IPv4 exhaustion means an increasing number of ISPs are being forced to implement CGNAT or other IPv6 transition mechanisms, which will prevent IPv4 port forwarding from working at all
  • If you change ISP, you’ll have to go through the rigmarole again (assuming it’s even possible): disabling CGNAT, ordering a static IP, getting their default firewall disabled, etc
  • If you are unfortunate enough to have an ISP that follows the discouraged practice of dynamic IPv6 prefix allocations, exposing services over IPv6 will range from annoying to impossible
  • If you have a backup (or primary) Internet connection over 5G/4G, there’s basically a 100% chance that it’ll be behind a firewall you can’t control, so external access will be impossible

These hurdles are among the reasons behind products like Cloudflare Tunnel, which in short creates a persistent outbound connection from your server to Cloudflare which external traffic can be proxied back through, getting around firewalls, NAT, etc.

I was using Cloudflare Tunnel until fairly recently, when Cloudflare made some extremely concerning executive decisions. I have since decided to work on minimising my use of their services, and Tunnel was one of the easiest products I could cross off.

Mapping out a solution

To start off, the services I want to expose are Home Assistant and my ADS-B receiver. These both run on Raspberry Pis on my home network.

The Pis already have appropriate Let’s Encrypt certificates and serve their services over HTTPS on domain names pointed at them via local overrides in the network’s DNS resolver.

I also already have a Linode cloud VPS running nginx, so my rough plan is to point the domains to my VPS, and then have some form of tunnel from the VPS back to the Pis.

The fun part! (setting up the tunnels)

The first tool I reached for was WireGuard, which you really should have a look at if you haven’t before. It’s described as a VPN, but I feel that really does it a disservice, conjuring images of being elbow-deep in a complicated config file. It’s essentially a lightweight, stateless, mutually authenticated encrypted transport with some routing sauce. The Conceptual Overview on their homepage does a good job of…overviewing the concepts 😉
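As a taste of how small a WireGuard setup can be, here’s a point-to-point config sketch (keys, addresses, and the port are all placeholders, not my actual setup):

```ini
# /etc/wireguard/wg0.conf on the VPS -- keys and IPs are placeholders
[Interface]
PrivateKey = <vps-private-key>
Address    = 10.88.0.1/24
ListenPort = 51820

# one [Peer] section per Pi
[Peer]
PublicKey  = <pi-public-key>
AllowedIPs = 10.88.0.2/32
```

The Pi side mirrors this, with an `Endpoint` line pointing at the VPS so it can dial out through NAT.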

With WireGuard, I was very quickly able to create tunnels between the Pis and the VPS. I then wanted to restrict access to only the necessary ports, so I gazed warily at ufw and iptables…thankfully though, I was reminded of a much nicer way to do this: Tailscale!

Tailscale is essentially WireGuard bundled with some very useful features, including access control, a central control plane and NAT traversal (which is a very interesting read if, like me, you thought a peer-to-peer connection on the modern Internet was near impossible). With it, you can set up a peer-to-peer network with very little configuration.

I started by installing it on the three devices and enrolling them using auth keys, tagging the VPS with linode and the Pis with external-https.
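That enrollment looks roughly like this (the auth key shown is a placeholder you generate in the Tailscale admin console):

```shell
# on the VPS -- install, then bring the node up pre-authenticated and tagged
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up --auth-key=tskey-auth-XXXXXXXXXXXX --advertise-tags=tag:linode

# on each Pi, same thing with the other tag
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up --auth-key=tskey-auth-XXXXXXXXXXXX --advertise-tags=tag:external-https
```

Tagged nodes are owned by the tag rather than a user, which is exactly what you want for headless servers.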

After enabling the MagicDNS feature, I was off to the races:

cameron@linode:~$ ping hasspi.tugzrida.github.beta.tailscale.net
PING hasspi.tugzrida.github.beta.tailscale.net 56 data bytes
64 bytes from fd7a:115c:a1e0:ab12:4843:cd96:6279:651b: icmp_seq=1 ttl=64 time=8.90 ms
64 bytes from fd7a:115c:a1e0:ab12:4843:cd96:6279:651b: icmp_seq=2 ttl=64 time=8.46 ms
64 bytes from fd7a:115c:a1e0:ab12:4843:cd96:6279:651b: icmp_seq=3 ttl=64 time=8.72 ms
^C
--- hasspi.tugzrida.github.beta.tailscale.net ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 8.463/8.693/8.899/0.178 ms

Now, I said this was going to make access control nicer, and it does! Here’s all that’s needed in the Tailscale dash to accomplish what I need:

{
    "acls": [
        {
            "action": "accept",
            "src"   : ["tag:linode"],
            "proto" : "tcp",
            "dst"   : ["tag:external-https:4433"]
        }
    ],

    "tagOwners": {
        "tag:linode"        : ["tugzrida@github"],
        "tag:external-https": ["tugzrida@github"]
    },

    "disableIPv4": true
}

You’ll notice I also disabled IPv4 inside my tailnet because I live in the future 😎

If all you want is external access on your own devices, then this is all you really need: put your laptop or phone in the place of the VPS in my setup and you’re set. But I want public external access, so I need to go a bit further.

The fun part, part 2 (setting up nginx)

From here, there are a number of ways you could proceed. The most common would be to set up an HTTPS server in nginx on the VPS and then proxy connections back to the origin over HTTPS. Even plaintext HTTP to the origin would be acceptable in this case, as the tunnel itself is encrypted. I won’t go over that here, as there are endless nginx guides which cover it, but it’s a perfectly valid solution.
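For reference, that conventional layer-7 setup is roughly the following sketch (hostname and certificate paths are illustrative; note TLS is terminated on the VPS, so the VPS needs its own certificate):

```nginx
# conventional reverse proxy on the VPS -- decrypts, then re-encrypts to the origin
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name hass.example.com;

    ssl_certificate     /etc/letsencrypt/live/hass.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/hass.example.com/privkey.pem;

    location / {
        proxy_pass https://hasspi.tugzrida.github.beta.tailscale.net:4433;
    }
}
```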

But there’s a less common way, and it’s how the titular end-to-end encryption is achieved: nginx’s stream and ssl_preread modules! They let you use the power of nginx one layer down, proxying TCP (or even UDP) rather than HTTP.

Seeing as my Pis already have certificates and are serving HTTPS, I’ve also chosen this route so I don’t need to double up on certificate management – the VPS doesn’t need certificates for the exposed services as it’s not decrypting the TLS, just passing it over to the Pis.

Here’s what my config looks like:

stream {
    resolver 127.0.0.53;

    server {
        listen 443;
        listen [::]:443;

        ssl_preread on;

        proxy_pass $upstream;
        proxy_protocol on;
    }

    map $ssl_preread_server_name $upstream {
        default              unix:/run/nginx-locally-terminated-https.sock;

        hass.example.com     hasspi.tugzrida.github.beta.tailscale.net:4433;

        adsb.tugzrida.xyz    adsbpi.tugzrida.github.beta.tailscale.net:4433;
        adsb-pf.tugzrida.xyz adsbpi.tugzrida.github.beta.tailscale.net:4433;
    }
}

There are two things to draw attention to here:

  • The stream module will completely take over port 443, so it will conflict with any existing nginx HTTPS servers. This is why the default stream route above is a unix socket. I can still run HTTPS servers on the VPS like this:
    server {
        listen unix:/run/nginx-locally-terminated-https.sock ssl http2 proxy_protocol;
        real_ip_header proxy_protocol;
        set_real_ip_from unix:;
        #...
    }
    
  • The resolver directive is crucial for nginx to be able to resolve the tailscale hostnames. In my case, it’s pointing to systemd-resolved, which will automatically route queries to MagicDNS. If you’re not using resolved, you’ll need to include a MagicDNS IP directly: fd7a:115c:a1e0::53 or 100.100.100.100
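For the non-resolved case, that’s a one-line change (note nginx wants IPv6 resolver addresses in brackets):

```nginx
# point nginx straight at MagicDNS instead of the local stub resolver
resolver [fd7a:115c:a1e0::53];
```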

The final step is the config on the Pis. I have nginx running on both of mine, and here’s the relevant config:

server {
    listen [::]:4433 ssl http2 proxy_protocol;
    set_real_ip_from insert_linode_tailscale_ip_here;
    real_ip_header proxy_protocol;
    #...
}

Once nginx is restarted, and the DNS records are pointed to the VPS, everything is good to go. Requests are HTTPS encrypted by the client device, routed through the VPS while still encrypted, sent over the tunnel to the origin server, and finally decrypted there.

I’ve also added a basic server to nginx on the VPS solely to redirect HTTP requests for the relevant hostnames to HTTPS.
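That redirect server is the one plain-HTTP piece on the VPS, something along these lines (hostnames as in the map above):

```nginx
# port 80 stays with the http module, so no unix socket tricks needed here
server {
    listen 80;
    listen [::]:80;
    server_name hass.example.com adsb.tugzrida.xyz adsb-pf.tugzrida.xyz;
    return 301 https://$host$request_uri;
}
```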

Tweaks

Because I’ve used nginx on both ends of the tunnel, and it supports the PROXY protocol, I’ve used it to pass the original client’s IP address through to the origin server. If you don’t care about preserving the client’s address (or your origin server software doesn’t support PROXY), it offers no other benefit, so you can leave it out.

A more common way to preserve the client address is the X-Forwarded-For or X-Real-IP HTTP headers. Those can also be used here, though not with the end-to-end encrypted, stream module solution. You’ll need to do a standard nginx HTTP proxy and add the appropriate header there. That method would also allow you to add HTTPS to an origin service that doesn’t support it.
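In that layer-7 setup, passing the client address is one proxy_set_header per header, as a sketch (hostname assumed from my setup):

```nginx
location / {
    # X-Forwarded-For appends to any value a client sent; X-Real-IP just overwrites
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Real-IP       $remote_addr;
    proxy_pass https://hasspi.tugzrida.github.beta.tailscale.net:4433;
}
```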

Additional services can also be exposed very easily. For example, a Prometheus exporter on my Home Assistant Pi is exposed to Prometheus on my VPS by adding the tailscale tag prometheus-exporter to the Pi and tweaking the ACL like so:

{
    "action": "accept",
    "src"   : ["tag:linode"],
    "proto" : "tcp",
    "dst"   : ["tag:external-https:4433", "tag:prometheus-exporter:9100"]
}

Prometheus is then simply pointed at hasspi.tugzrida.github.beta.tailscale.net:9100.
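On the Prometheus side, that’s just a scrape target in prometheus.yml (the job name is arbitrary):

```yaml
scrape_configs:
  - job_name: hasspi-node
    static_configs:
      - targets: ["hasspi.tugzrida.github.beta.tailscale.net:9100"]
```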

This way of exposing services is really quite flexible, so I hope I’ve sparked some ideas! If you discover another interesting or unusual use of tailscale or nginx, share it below!


Pssst, my referral link for Linode will give you a $100, 60 day credit, and give me $25 after your first payment of $25.
