GL.iNet OpenVPN client routing all traffic, ignoring pushed routes

If you’re using the rather excellent GL.iNet series of routers as VPN endpoints, you may find they have a “feature” which causes all traffic to be tunnelled through the OpenVPN connection, even if you only push smaller subnet routes.

root@router:~# traceroute 1.1.1.1
traceroute to 1.1.1.1 (1.1.1.1), 30 hops max, 38 byte packets
 1  10.254.254.17 (10.254.254.17)  78.267 ms  76.049 ms  77.880 ms
 2  10.34.56.1 (10.34.56.1)  82.636 ms  84.339 ms  80.361 ms
 3  *  *  * (81.82.83.84)  82.747 ms  122.769 ms  93.138 ms
 4  *  *  *
 5  *  *  *
 6  tcma-ic-3-0.network.myisp.net (62.63.64.65)  128.810 ms  157.364 ms  141.007 ms
 7  162.158.32.254 (162.158.32.254)  78.899 ms  144.601 ms  83.033 ms
 8  one.one.one.one (1.1.1.1)  80.166 ms  79.087 ms  76.484 ms

There is a script which runs on these devices that forces all internet traffic down the OpenVPN tunnel, no matter what settings you apply on the client web page or on the server side of things. The evidence is in the routing table: alongside the normal default route via eth0 (which now carries a higher, deprioritised metric), the tunnel has been given a pair of half-width routes, 0.0.0.0/1 and 128.0.0.0/1 via tun0, which are more specific than a true default route and so always win.

root@router:~# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         *               128.0.0.0       U     0      0        0 tun0
default         10.10.99.1      0.0.0.0         UG    10     0        0 eth0
10.10.19.0      *               255.255.255.0   U     0      0        0 br-lan
10.10.99.0      *               255.255.255.0   U     10     0        0 eth0
10.10.222.0     10.254.254.17   255.255.255.0   UG    0      0        0 tun0
10.254.254.17   *               255.255.255.255 UH    0      0        0 tun0
18.203.182.0    *               255.255.255.0   U     0      0        0 eth0
86.11.242.12    10.10.99.1      255.255.255.255 UGH   0      0        0 eth0
128.0.0.0       *               128.0.0.0       U     0      0        0 tun0
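
If you want to prove to yourself that those two tun0 entries with a genmask of 128.0.0.0 are the culprit, you can delete them by hand. This is a sketch only, using the interface names from the table above; the routes come straight back the next time the tunnel restarts, so the permanent fix follows below.

# temporarily remove the two half-width routes that hijack the default route
route del -net 0.0.0.0 netmask 128.0.0.0 dev tun0
route del -net 128.0.0.0 netmask 128.0.0.0 dev tun0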

After *much* searching around, I was eventually directed to this post on the GL.iNet forums, https://forum.gl-inet.com/t/openvpn-configuration-to-avoid-the-default-redirection-all-through-the-vpn/6519/5, which details the cause.

You have to edit two files. The first is /etc/init.d/startvpn

Just add a # in front of the line lan2wan_forwarding disable, which is in the ovpn_firewall_start() section.
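
If you would rather do that from the shell than in an editor, something along these lines should work. It is a sketch, assuming the line appears in the file exactly as named above; take a backup and check the result, since it will comment out every occurrence of that line.

cp /etc/init.d/startvpn /etc/init.d/startvpn.bak
# comment out the call that disables LAN-to-WAN forwarding
sed -i 's/^\([[:space:]]*\)lan2wan_forwarding disable/\1#lan2wan_forwarding disable/' /etc/init.d/startvpn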

The next file to edit is /etc/vpn.user

Just add a # at the start of every line between and including the if and the fi in the # Load default rules section.
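
The end result should look something like the fragment below. This is illustrative only, not the real contents of your file; the point is simply that everything from the if down to the fi ends up behind a #, while the rest of the file is left alone.

# Load default rules
#if [ ... ]; then
#    ...
#fi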

Finally, reboot or restart your OpenVPN service for the new rules to take effect.
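
From the shell, either of these will do; a sketch, assuming the standard OpenWrt init actions on the startvpn script edited above.

/etc/init.d/startvpn restart
# or, if in doubt, just restart the whole box
reboot

After a restart, you can see that the routing is back to how it should be: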

root@router:~# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         10.10.99.1      0.0.0.0         UG    10     0        0 eth0
10.10.19.0      *               255.255.255.0   U     0      0        0 br-lan
10.10.99.0      *               255.255.255.0   U     10     0        0 eth0
10.10.222.0     10.254.254.17   255.255.255.0   UG    0      0        0 tun0
10.254.254.17   *               255.255.255.255 UH    0      0        0 tun0
86.11.242.12    10.10.99.1      255.255.255.255 UGH   0      0        0 eth0

And a traceroute now heads straight out over the local connection:

root@router:~# traceroute 1.1.1.1
traceroute to 1.1.1.1 (1.1.1.1), 30 hops max, 38 byte packets
 1  10.254.254.17 (10.254.254.17)  0.733 ms  0.591 ms  0.426 ms
 2  *  *  *
 3  *  *  *
 4  31.32.33.31 (31.32.33.31)  9.383 ms  8.870 ms  31.55.185.180 (31.55.185.180)  8.968 ms
 5  core2-hu0-8-0-5.colindale.theotherisp.net (191.99.127.154)  9.055 ms  core1-hu0-6-0-6.colindale.theotherisp.net (211.121.192.0)  8.594 ms  core2-hu0-12-0-1.colindale.theotherisp.net (191.99.127.118)  8.984 ms
 6  core2-hu0-7-0-0.colindale.theotherisp.net (191.72.16.128)  8.900 ms  peer7-et-7-0-1.telehouse.theotherisp.net (101.159.252.92)  15.313 ms  peer7-et-3-1-4.telehouse.theotherisp.net (109.159.252.168)  9.227 ms
 7  *  109.159.253.95 (109.159.253.95)  10.421 ms  14.362 ms
 8  one.one.one.one (1.1.1.1)  9.379 ms  9.363 ms  9.062 ms

Note: I install nano on my devices, but obviously you can use vim or any other preferred text editor.
I also noted during my research that others running stock OpenWrt reported similar behaviour, but the fix may or may not be related to the above.

Also note that changing this may introduce an unexpected security issue: all traffic other than that destined for the routed subnets will now go out over the local internet connection rather than the VPN.

Finally, be VERY confident that this will fix your issue if your device is deployed remotely. Make sure you still have some other form of remote access to the device if the VPN breaks, or be prepared to travel to it. With the low cost of GL.iNet devices, it’s worth having a spare on hand to test against before deploying to live.

iPXE CloneZilla

CloneZilla is a Linux toolset that allows you to clone either a partition or a whole disk to another location: either a connected storage device, or remotely over the network. This is a great tool for imaging systems before you work on them, and it lets you keep a copy in case the worst should happen. It bundles a variety of tools to read most of the popular filesystems in use, falling back to dd to copy each disk sector if you’re using some obscure or proprietary filesystem. This is the FOSS alternative to Norton Ghost!

The great thing about CloneZilla is that it’s quick and easy to get it booting via iPXE, so it is worth investing a small amount of time in setting it up so that you have it ready to go should you need it.

These instructions are based on release clonezilla-live-20121217-quantal.iso, which seems to be versioned 2.0.1-15.

Download the ISO from the CloneZilla site.  Use 7zip or your favourite image opening tool to open the ISO.  You need to extract the following files:

  • vmlinuz
  • initrd.img
  • filesystem.squashfs

and put them onto your boot webserver.  In this example,  I have created a folder called CloneZilla.
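
If you have p7zip to hand, pulling the three files out of the ISO and into that folder can be done in one go. A sketch only: the live/ paths are where this particular release keeps the files, and /var/www/CloneZilla/ is an assumed document root, so adjust both to suit your webserver.

# extract just the three files we need, straight into the webserver folder
7z e clonezilla-live-20121217-quantal.iso live/vmlinuz live/initrd.img live/filesystem.squashfs -o/var/www/CloneZilla/

With the files in place, the iPXE menu entry looks like this: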

############ CloneZilla ############
:Clonezilla
echo Starting CloneZilla with default options
kernel http://boot.server/CloneZilla/vmlinuz
initrd http://boot.server/CloneZilla/initrd.img

imgargs vmlinuz boot=live config noswap nolocales edd=on nomodeset ocs_live_run="ocs-live-general" ocs_live_extra_param="" ocs_live_keymap="" ocs_live_batch="no" ocs_daemonon="ssh" usercrypted=Kb/VNchPYhuf6 ocs_lang="" vga=788 nosplash noprompt fetch=http://boot.server/CloneZilla/filesystem.squashfs
boot || goto failed
goto start


And that is really about it! You’ll notice we pass a few arguments which set various options. The most important is the ‘fetch=’ parameter, which tells the booted image where to download the main filesystem from. The other option I set was ‘usercrypted=’, which takes a hash generated with the Linux mkpasswd command and uses it to set the login password on boot; in this example it is iloveclonezilla.
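
Generating that hash for your own password is a one-liner on most Linux boxes. A sketch using the mkpasswd tool (from the whois package); the value handed to usercrypted= is a crypt()-style hash, like the 13-character traditional DES one shown above.

mkpasswd -m des iloveclonezilla
# prints something like Kb/VNchPYhuf6 (output varies, as the salt is random)

If you leave the password off the command line, mkpasswd will prompt for it instead, which keeps it out of your shell history.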

A really easy one this week, but one worth trying. I’m fighting to get BackTrack 5 booting over iPXE without using the ISO method, but this is proving troublesome. I think the image simply isn’t able to cope with being booted from an HTTP network source.

We Canna do it Captain, we don’t have the Power.

Last night, we experienced an area-wide power cut. Despite this being the modern 21st century, it is nothing unusual: they seem to occur 3-4 times per year, with an average length of about 20 minutes.

Many people in the area are fully prepared, with battery-powered lanterns, torches, and the traditional candles and matches to hand. But after you’ve provided yourself with a bit of light (when it happens at night), you find yourself sitting there, wondering what to do now that the TV, radio and computers have all gone off. And of course, whilst your broadband has almost certainly gone off, your smartphone falls back to the mobile network for its internet, which more often than not keeps working. Anyone using social networks via their phone can tweet or post to Facebook that the power has gone out, and start comparing notes as to how far it stretches. Most people wouldn’t bother to phone their power supplier to let them know about the outage, as I think the assumption is that they can detect it and start resolving it ASAP.

But do they?

The previous electricity distributor, Central Networks, used to have a map that you could click on to see where they knew there were faults. Western Power Distribution (http://goo.gl/JjuKU), the new incumbent, does not. The information used to be available to the suppliers, so where has it gone? Do they no longer have the systems to collect this data from the grid, or was it collated from their CRM systems? Maybe they feel that it is some kind of sensitive data so best not shared, or simply that the bosses do not think this information is valuable to their client base.
I’m singling out Western Power, but a quick search of the power companies listed by the National Grid (http://goo.gl/7XSpX) suggests that only Northern Powergrid (http://goo.gl/OrPvN) and Electricity North West (http://goo.gl/lvIwR) are willing and/or able to provide this information. And good on them too. Not only does ENWL show current unplanned outages, but also future planned work.

So my point is: if they can do it, why can’t other utility providers provide outage information? Or maybe it’s something that the National Grid could do. After all, they provide a live view of demand (http://goo.gl/A3tv6), but can they see deep enough into the local grids to spot the outages?

Power cuts happen; it’s a fact of life. But how long will it take suppliers to embrace the communication power of the Internet to get information out to their customers? For now, I guess we’ll just have to stick with searching Twitter for #PowerCut.