A few years ago the IPv6 christmas tree emerged as a festive (ab)use of IPv6’s vast address space. There have been a few variants on the concept since, but none of them seem to be active currently. I’ve wanted to play with IPv6 ranges on public cloud providers, so this seemed as good an excuse as any. Have a play with the canvas, or keep reading for the behind-the-scenes.
Update: The project has come to an end. Thanks for joining in!
If you’re interested, take a look at the source code, but be warned: it’s not well optimised or neat. If I were to run the canvas again, there are a number of improvements I’d want to make.
Coverage elsewhere: Jae’s Website, Evan Pratten
Routing the prefix
The base concept behind these previous projects has been that pinging a specific IPv6 address will change something according to parameters encoded in the IP address itself. For example, pinging 2001:db8::0405:ff:00:00 would change the pixel at coordinates (4, 5) to the colour #ff0000, or red.
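As a sketch of that encoding, building such an address in Python might look like the following. Note that the prefix::XXYY:RR:GG:BB layout here is my reading of the example address, not necessarily the canvas’s real scheme:

```python
import ipaddress

# Hypothetical /64 prefix; substitute the prefix actually routed to you
PREFIX = ipaddress.IPv6Address("2001:db8::")

def pixel_address(x, y, r, g, b):
    # Pack x and y into one 16-bit group, and each colour channel
    # into the low byte of its own group, within the low 64 bits
    suffix = (x << 56) | (y << 48) | (r << 32) | (g << 16) | b
    return ipaddress.IPv6Address(int(PREFIX) | suffix)

print(pixel_address(4, 5, 0xFF, 0x00, 0x00))  # → 2001:db8::405:ff:0:0
```

The output is the same address as 2001:db8::0405:ff:00:00, just with leading zeroes trimmed; painting that pixel is then simply `ping 2001:db8::405:ff:0:0`.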
To achieve that, first you need a range (aka prefix) of IPv6 addresses routed to the server where the project will run. The way this works varies between cloud providers.
I started with Vultr. They provide a 64-bit prefix (/64) to all VMs and state “You may use any IPs within the assigned prefix”, which sounded promising. What I discovered fairly quickly, however, is that this is provided as an “on-link” prefix, which works exactly the way your home Internet router does.
The way this works is that when a packet from the Internet arrives for an address within my prefix, say 2001:db8::1234, Vultr’s router will send a Neighbour Discovery Protocol (NDP) multicast message asking “Who is 2001:db8::1234?”
For the use case here, this is unhelpful, as I need to be able to receive pings on many addresses. There are ways to force NDP to reply to all queries, but they’re kludgy and inefficient.
So my next stop was Linode. They offer the choice of a /64 or a /56 and explicitly state that it will be routed to the VM. This means that Linode’s router already knows that my VM is responsible for 2001:db8::/64 and sends the packets directly to me.
Accepting packets
So we’ve got packets making it to my VM; however, the VM doesn’t know what to do with them. The default behaviour of most devices when they receive packets not addressed to them is, naturally, to ignore them.
This is easily solved with the addition of a “local” route on the VM, which tells the kernel that a given range of addresses belongs to the local machine.
With systemd’s networkd, and Linode’s managed configuration of it, this is easily accomplished by creating a file at /etc/systemd/network/05-eth0.network.d/canvasprefix.conf:
```ini
[Route]
Destination=2001:db8::/64
Type=local
```
Receiving pings
Seeing as the protocol ping uses, ICMP, is normally handled by the kernel with no user-space involvement at all, you might wonder how a normal program can receive ping packets.
The answer is actually very simple and is similar to accepting any other network connection:
```python
import socket

# Open a raw INET6 (IPv6) socket, set to the ICMPv6 protocol.
# Raw sockets require root or the CAP_NET_RAW capability.
sock = socket.socket(socket.AF_INET6, socket.SOCK_RAW, socket.IPPROTO_ICMPV6)
# Set the IPV6_RECVPKTINFO option, which will provide us with the
# *destination* address of packets as ancillary data
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVPKTINFO, 1)

while True:
    # Receive packets from the socket.
    # Keep only 1 byte of the packet body, and 32 bytes of the ancillary data
    body, ancdata, flags, addr = sock.recvmsg(1, 32)
    # The first byte of an ICMPv6 packet is its type.
    # We only care about pings (echo requests), which are type 0x80
    if body == b"\x80":
        # Dig the destination address out of the IPV6_PKTINFO ancillary data
        pingDst = ancdata[0][2][:16]
        ...
```
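From there, the pixel update can be recovered from the destination address. A minimal sketch, assuming the prefix::XXYY:RR:GG:BB layout implied by the example address earlier (the real code may well differ):

```python
import ipaddress

def decode_pixel(dst: bytes):
    """dst: 16-byte IPv6 destination address -> ((x, y), "#rrggbb")."""
    # Bytes 0-7 are the /64 prefix; the pixel data lives in bytes 8-15
    x, y = dst[8], dst[9]
    # Each colour channel sits in the low byte of its 16-bit group
    r, g, b = dst[11], dst[13], dst[15]
    return (x, y), f"#{r:02x}{g:02x}{b:02x}"

dst = ipaddress.IPv6Address("2001:db8::0405:ff:00:00").packed
print(decode_pixel(dst))  # → ((4, 5), '#ff0000')
```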
Drawing the canvas
From here, the implementation is fairly simple. We just need to keep track of what colour each pixel is, update it according to incoming pings, and send this information to the webpage that renders the canvas to viewers.
I decided to do this with MQTT, which is not really what it’s designed for, but it does all the things I needed ¯\_(ツ)_/¯
Initially, I had one MQTT topic per pixel, which on a canvas of 256×256 makes 65,536 topics. This, rather predictably, did not work too well: with approximately 200% network overhead on top of the pixel data, the canvas took around 20 seconds to load 😂
Moving to one topic per line (256 topics in total), the network overhead comes down to approximately 1%, and load times are 2-3 seconds. Much better.
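As an illustration of the per-line scheme (the topic name and payload format here are my own assumptions, not the project’s actual wire format), a row of 256 RGB pixels packs neatly into a single 768-byte message:

```python
def pack_row(row):
    """row: list of 256 (r, g, b) tuples -> 768-byte payload."""
    return bytes(channel for pixel in row for channel in pixel)

row = [(0xFF, 0x00, 0x00)] * 256   # one all-red row
payload = pack_row(row)
print(len(payload))  # → 768

# With a client such as paho-mqtt, each row could then be published
# retained, so new viewers receive the whole canvas on subscribe, e.g.:
#   client.publish(f"canvas/row/{y}", payload, retain=True)
```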
Running some numbers
To demonstrate the vastness of the IPv6 address space, I did some calculations.
For this project, I’m using a single /64 prefix, which means I have the remaining 64 bits of address available to use. I should note that a /64 is generally the smallest subnet size used in IPv6.
Standard 24-bit colour uses 8 bits each for red, green and blue, so 24 bits per pixel. That leaves 40 bits for coordinate information.
If we divide that by 1 million, we get megapixels:

\[\frac {2^{40}} {1{,}000{,}000} = 1{,}099{,}511.628\ldots\]

Over a million megapixels! The equivalent of 22,906.49 iPhone 14 Pros, which have a 48 MP main camera.
So with a single /64, again the smallest subnet size commonly used in IPv6, we could address every possible colour of every pixel in an image with the resolution of 23 thousand iPhone cameras. That’s…big 😳
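The arithmetic above is quick to verify:

```python
colour_bits = 24                      # 8 bits each for red, green, blue
coord_bits = 64 - colour_bits         # the rest of the /64's host bits
pixels = 2 ** coord_bits              # addressable pixels

print(round(pixels / 1_000_000, 3))   # → 1099511.628 megapixels
print(round(pixels / 48_000_000, 2))  # → 22906.49 iPhone 14 Pro cameras
```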
If you’re used to dealing with IPv4, that may seem extremely wasteful, and there was indeed much debate around this in the ’90s when IPv6 was being designed. The core of it is that IPv6 is not designed to save on addresses: there’s simply a near-infinite number of them. The overhead of a few extra bits in IP headers is minimal in the context of today’s massive amounts of Internet traffic, and is outweighed by management and privacy benefits.
And now that global IPv6 adoption is at 42%, musing on alternative solutions is basically pointless.