So one day in my windowless office building, it was decided through divine providence that our developers needed complete access to our test beds. (Our harrowing experience with network architecture is a tale for another day. Suffice it to say, for now all developers' machines are on a "WAN", while our testbeds are on individual "LANs", completely* isolated from each other.) The applications we develop communicate using multicast, so it would be quite convenient for developers to also send/receive those messages while, well... developing. "Ah!" I thought, "VPNs are the perfect solution." Mind you, I'm a lowly software developer myself, and had never so much as seen a VPN configuration file, but I understood the basics and the philosophy behind them.
Well, OpenVPN was the only solution available to us, for a variety of not-so-simple reasons, so I took up the task of soaking up as much information as I could about VPNs. All the cool kidz on the 'net were talking about TUN devices and route configuration, so I, just wanting to fit in, followed the same path and set up a basic client/server OpenVPN configuration. After tinkering with the configs for a bit, I came to the realization that (duh!) routing implies actually routing packets between different subnets. The WAN and LANs are different subnets, which is exactly why we couldn't use multicast to begin with. I smelled fish. Multicast (absent dedicated multicast routing) only works within a single network segment, so we needed a way to put our VPN clients into the same subnet as the testbeds. Bridged VPN to the rescue! OpenVPN has a convenient, yet lightly-advertised 'bridged' VPN mode, wherein the server assigns clients addresses on its own subnet, so they can virtually be part of the same network segment. This required setting up a virtual bridge NIC on the server outside of the VPN config, but I made quick work of the change and determined a subset of the LAN IPs that could be reserved for VPN clients. A quick hop, skip, and a jump, and I was up and running, with my WAN device happily talking to all the LAN machines, as if we were on the same switch. Victory! Just start up the multicast app.....ah, shoot. ¿Qué?
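For the curious, the server side ends up looking roughly like this. Everything here is a made-up example for illustration (br0, eth0, the 10.0.0.x range, and the config path are placeholders, not our actual setup):

```shell
# Create a bridge and enslave the physical LAN NIC to it; OpenVPN's tap0
# device gets added to the same bridge via its up script.
# (Interface names and addresses are examples.)
ip link add br0 type bridge
ip link set eth0 master br0
ip link set br0 up

# The LAN IP moves from the physical NIC to the bridge itself:
ip addr del 10.0.0.5/24 dev eth0
ip addr add 10.0.0.5/24 dev br0

# In the server config, server-bridge hands VPN clients addresses from a
# reserved slice of the LAN (gateway, netmask, pool start, pool end):
cat >> /etc/openvpn/server.conf <<'EOF'
dev tap0
server-bridge 10.0.0.5 255.255.255.0 10.0.0.200 10.0.0.250
EOF
```

The key design point is that the pool range must be carved out of the LAN's DHCP scope, or the DHCP server will happily hand those same addresses to physical machines.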
Quick break, if anyone needs to review bridged VPNs, here is a great summary. My bridge was set up properly, most everything seemed fine, but multicast messages were still not crossing the bridge. I had to go back to the basics. How does multicast work? Your machine sends out an IGMP membership report (a "join"), and network devices (namely, switches doing IGMP snooping) note that your machine wants that group's traffic. (Well, basically.) Time to fire up ye olde wiresharkke and see where this process was going haywire. Now, on my client machine, the VPN sets up a virtual TAP device, and routes all VPN traffic (i.e. traffic destined for the LAN subnet) through that TAP device. So I point wireshark at that interface, and see... nothing. Hmm. Why are the join requests not going across that device? I stumble into the system routing table, and notice that I'm routing all 224.0.0.0/4 messages (i.e. the entire multicast range) through my physical NIC. Well, that won't do one bit. I need it to go through the virtual TAP device so that it can be properly encapsulated into a VPN packet and placed on the server's LAN segment.
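To make the failure mode concrete, here is a minimal Python sketch of what a multicast app does under the hood when it joins a group. The group address is an arbitrary example and `make_membership_request` is just an illustrative helper, not anything from our codebase. The key point: when the interface is left as INADDR_ANY, the kernel consults the routing table entry for 224.0.0.0/4 to decide which NIC the IGMP join leaves on.

```python
import socket
import struct

def make_membership_request(group: str, iface_ip: str = "0.0.0.0") -> bytes:
    """Pack an ip_mreq struct: multicast group address + local interface address."""
    return struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton(iface_ip))

def join_group(sock: socket.socket, group: str, iface_ip: str = "0.0.0.0") -> None:
    # IP_ADD_MEMBERSHIP is what makes the kernel emit the IGMP membership report.
    # With iface_ip left at 0.0.0.0 (INADDR_ANY), the kernel picks the outgoing
    # interface from the routing table -- in my case, the physical NIC, not tap0.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    make_membership_request(group, iface_ip))
```

So an application *could* dodge the problem per-socket by passing the TAP device's address as `iface_ip`, but fixing the routing table fixes every app at once.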
Solution: ip route del 224.0.0.0/4; ip route add 224.0.0.0/4 dev tap0
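A quick way to verify the fix took, assuming your TAP device is tap0 (the group address below is an arbitrary example):

```shell
# The route lookup for a multicast destination should now name the TAP
# device, not the physical NIC. Look for "dev tap0" in the output:
ip route get 239.1.2.3
```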
Another, more elegant solution: edit the client config file and add the following line: redirect-gateway local. This will add or replace your default route so that everything is sent over the virtual TAP device.
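In context, the client config ends up looking something like this (the remote hostname and port are placeholders):

```
client
dev tap
proto udp
remote vpn.example.com 1194
# Add/replace the default route so everything, including
# 224.0.0.0/4 multicast, goes over the TAP device:
redirect-gateway local
```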
Simple enough, right? Well, the main benefit of bridged over routed VPNs is that the client ends up on the same network segment, and is thus able to use multicast. Yet people keep touting that no routing configuration is needed for bridged connections, so I feel this deserves more advertisement: make sure multicast messages are actually routed through the VPN interface. Thanks for coming to my TED talk.