Docker IPv6 networking, routing, and NDP proxying

Today I was working on migrating my RIPE Atlas probe from running natively on a Raspberry Pi to running on a different Pi inside Docker.

You don’t really need to know about the Atlas project to follow along here, though it is an interesting and useful project nonetheless.

The background

By default, a Docker container will be assigned an IPv4 address in some private (RFC1918) range, which the Docker daemon will then NAT to the host’s address. You can expose ports (essentially port forward) from containers to be visible at ports on the host’s IP.
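
As a concrete (and purely illustrative) example, publishing a container’s port 80 on the host’s port 8080 looks like this, with Docker’s NAT rules translating between the host address and the container’s private address (nginx here is just a placeholder image):

docker run -d -p 8080:80 nginx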

Also by default, Docker just doesn’t do IPv6, which after today has definitely been added to my list of Docker pet peeves! If you want to expose ports from containers over IPv6, then a current practice seems to be:

  • Create a Docker network with an IPv6 ULA (unique local address) range
  • Use the docker-ipv6nat container to NAT this to the host’s IPv6 address (a rough sketch follows this list).
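
As a sketch of that approach (not what I ended up using): the ULA prefix below is made up, and the exact flags docker-ipv6nat needs are best taken from its README rather than from me:

docker network create --ipv6 --subnet fd00:d0c:1::/64 v6nat-demo

docker run -d --name ipv6nat --privileged --network host \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  robbertkl/ipv6nat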

My cringing at the thought of NATv6 aside, NAT does work *okay* for exposing many services over both v4 and v6, and in both cases it lets you specify which address on the host to bind to.

My problem

The difference with my Atlas container, however, is that Atlas probes don’t actually need to expose any ports. They simply open an outbound connection to the RIPE servers and then conduct measurements as instructed.

Because of this, my objective was actually to specify an IPv6 address (in the range assigned by my ISP) for outgoing connections from the Atlas container. Neither the add-on NATv6 container nor, for that matter, the built-in NATv4 seems to have this capability.

In my specific circumstance, I don’t mind NATing IPv4 in Docker — I only have a single public v4 address so my router does NATv4 anyway. But I did want to set the outgoing IPv6 address for neatness, ease of identification, and separation of Atlas traffic.

A solution or two

In the end, I think there are probably two ways to accomplish my objective.

The first, which isn’t the one I ended up using, is to connect the container to a Docker network using the macvlan driver. This essentially makes the container appear as just another device on the network. It can receive DHCP and RAs directly from a router and set itself up however you configure it. The main reason I didn’t go this route is that I would’ve had to make my own Docker image and embed my chosen IPv6 address within the guest OS’s settings (or write a script to pull it from an environment variable). That’s kinda clunky and also goes against the whole Docker mojo of portability.
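
For completeness, creating such a macvlan network looks roughly like this; the parent interface, the v4 range, and the assumption that 2001:db8:0:1::/64 is a spare /64 out of my /56 are all just illustrative:

docker network create -d macvlan \
  -o parent=eth0 \
  --subnet 192.168.1.0/24 --gateway 192.168.1.1 \
  --ipv6 --subnet 2001:db8:0:1::/64 \
  lan-macvlan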

The second option I think is sneakily clever. First up, here are the relevant parts of my docker-compose.yaml, with generic IP addresses (assume 2001:db8::/56 is the range assigned to me by my ISP):

services:
  probe:
    # other container options...
    networks:
      probe-network:
        ipv6_address: 2001:db8::a:71a5

networks:
  probe-network:
    driver: bridge
    enable_ipv6: true
    ipam:
      driver: default
      config:
        - subnet: 2001:db8::a:71a5/125

This alone will establish Docker’s default IPv4+NAT setup, but will also give the container 2001:db8::a:71a5 and add the necessary routes on the host for the 2001:db8::a:71a5/125 range. As a side note, a /125 was the smallest range that worked for me, due to the address required for the interface on the host and other overheads.

At this point, the host should be able to ping the container at 2001:db8::a:71a5, but nothing beyond the host will work, and that’s simply because nothing else knows where to find that address on the local network.
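
A quick sanity check from the host might look like this (addresses as in the compose file above):

ping -6 -c 3 2001:db8::a:71a5      # the container should answer
ip -6 route | grep 2001:db8        # shows the /125 route Docker added via its bridge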

To fix this, first some sysctl options need to be set on the Docker host to allow it to route packets. Generally that means net.ipv6.conf.all.forwarding=1, plus net.ipv6.conf.INTERFACE.accept_ra=2 if the host itself relies on SLAAC. If the host uses systemd’s networkd, then adding IPForward=ipv6 to the interface’s config file has the same effect, just with a little better readability!
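
Persisted in a sysctl.d drop-in, with the networkd alternative alongside, that might look like this (eth0 is just a stand-in for the host’s LAN interface):

# /etc/sysctl.d/90-docker-ipv6.conf
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.eth0.accept_ra = 2

# or the equivalent in the interface's networkd file, e.g. /etc/systemd/network/20-eth0.network
[Network]
IPForward=ipv6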

The host will now be able to route packets to the container, but other devices still won’t know that the host is responsible for the container’s address. There are a few ways to fix this depending on your exact network setup, such as having the Docker host send out Router Advertisements, or adding the subnet to a router as a static route (the latter noteworthy for also being possible with IPv4 if desired).
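
For instance, the static-route option on a Linux-based router could look something like the line below, where 2001:db8::1:50 stands in for the Docker host’s LAN address, and the prefix is written with its host bits zeroed (2001:db8::a:71a0/125 is the aligned /125 that contains the container’s address):

ip -6 route add 2001:db8::a:71a0/125 via 2001:db8::1:50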

Both of those options would be great for an IPv6 range, but for individual addresses, particularly ones within the range of an existing network, there’s a simpler way…

NDP proxying

One of the neatest features of IPv6 I’ve discovered so far is NDP proxying. NDP is the IPv6 equivalent of IPv4’s ARP. They’re both essentially the protocols devices use to find the MAC address for a given IP address on their local network segment.
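
If you want to poke at the two side by side, both neighbour tables are visible with the ip tool:

ip -6 neigh show    # IPv6 neighbours, learned via NDP
ip -4 neigh show    # IPv4 neighbours, learned via ARP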

Adding the line IPv6ProxyNDPAddress=2001:db8::a:71a5 to the relevant networkd config on the Docker host will make it attract traffic from the local network bound for that address by way of Neighbour Advertisements, without actually assigning the address to an interface on the host. Once a packet arrives, the host will see that it matches the route for 2001:db8::a:71a5/125 and pass it to the Docker interface. If you’re not using systemd, you can instead control NDP proxying via the net.ipv6.conf.INTERFACE.proxy_ndp=1 sysctl option and the ip -6 neigh command.
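
Put together, the networkd version and the manual equivalent look roughly like this (eth0 again stands in for the host’s LAN interface):

# in the LAN interface's .network file, e.g. /etc/systemd/network/20-eth0.network
[Network]
IPv6ProxyNDPAddress=2001:db8::a:71a5

# or, without systemd:
sysctl -w net.ipv6.conf.eth0.proxy_ndp=1
ip -6 neigh add proxy 2001:db8::a:71a5 dev eth0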

NDP proxying works in my case as the address of the container is within the range of my LAN, so routes already exist for it. Something on the network just needs to stick up its hand and say “Hey! That’s me!”, which is pretty much exactly what NDP proxying makes the host do. If I wanted the container address to be outside of my LAN, then I’d have to go with the RA or static route option.

I should note that ARP proxying is also a thing, but from my very brief look at it, it doesn’t seem to be selectable like NDP proxying. At least from the systemd manual, it sounds like a method a router may use to attract all traffic from a network segment so it can route it itself, so definitely something that could brick a network if you’re not careful.

I’ll end by noting that I haven’t extensively tested or researched these solutions, so you should do so before implementing them in anything important 😛

If you discover any big drawbacks then let me know.

UPDATE: I’ve noticed that systemd has a bug where proxied addresses added through the IPv6ProxyNDPAddress option are sometimes dropped when an interface goes down and not re-added when it comes back up. This issue was fixed in systemd 248 and also backported to 247. On older versions of systemd, you can work around it with networkd-dispatcher and a script in /etc/networkd-dispatcher/configured.d like the one below (substituting your interface, of course):

#!/usr/bin/sh
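# Re-adds the proxy NDP entry each time networkd reports the interface as configured.
# Note: networkd-dispatcher only runs scripts in configured.d that are executable.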
ip -6 neigh add proxy 2001:db8::a:71a5 dev INTERFACE
