If you have visited here before, you will notice that the site has a new look. For the last couple of years this site has been more or less exclusively based around cycling. I’ve now broadened it out to include photo galleries of trips I have made to various parts of the world, and I’ve updated the look of the site to reflect this more diverse content.
The Travel pages and associated photographs have always been part of the site, but they were tucked away and consisted of static pages written for the original website. The stories of the trips have now gone, but the photographs (which have been re-edited and improved) remain.
In the process of updating the site I did a comprehensive audit of its security. Although there have been no breaches in the three years I have been hosting the site here at home, there are constant attempts to probe it and gain access. Up to now I have been relying on (a) the hardware firewall on my router, through which all traffic in and out of the network passes, and (b) locking down the file permissions on the server. The firewall/router, a Firebrick 2500, is excellent, but of course it has to let traffic through to the webserver and to the other services I run.
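For the file permissions, the idea is simply that Apache should be able to read the site but never write to it, and everyone else should see nothing at all. A minimal sketch of that sort of lockdown, assuming the Ubuntu defaults of a /var/www webroot and Apache running as www-data (the admin account name here is hypothetical):

    # webroot owned by the admin account, readable by Apache's group
    sudo chown -R adminuser:www-data /var/www
    # directories: owner full access, group read/traverse, others nothing
    sudo find /var/www -type d -exec chmod 750 {} \;
    # files: owner read/write, group read-only, others nothing
    sudo find /var/www -type f -exec chmod 640 {} \;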
I therefore felt that I needed to beef up the security on the server itself. The first line of defence is a software firewall, and on a Linux-based server that means ufw – the Uncomplicated Firewall – which, despite its name, is not particularly uncomplicated for a newbie! Of course, I have chosen to make things more complicated by running both IPv6 and IPv4. The network runs entirely on IPv6 internally, and IPv6 is the preferred protocol for external connections, but as relatively few networks use IPv6 the machines here fall back to IPv4 when they need to. As iptables does not yet integrate IPv4 and IPv6, this means maintaining separate firewall rules for the two protocols. Boy, was this a steep learning curve, and I’m not completely there yet.
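To give a rough idea of the baseline, here is the sort of ufw setup I mean – a sketch rather than my exact rules, with illustrative ports. With IPV6=yes set in /etc/default/ufw, each rule you add is written to both the iptables and ip6tables backends:

    # in /etc/default/ufw – make ufw manage ip6tables as well
    IPV6=yes

    # default stance: drop unsolicited inbound, allow outbound
    sudo ufw default deny incoming
    sudo ufw default allow outgoing

    # open only the services the server actually exposes
    sudo ufw allow 80/tcp     # webserver
    sudo ufw allow 22/tcp     # ssh

    sudo ufw enable
    sudo ufw status verbose   # lists the rules for both protocols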
The purpose of the firewall on the server is to filter the requests on the open ports and block known bad addresses. To achieve this I run fail2ban, a program that monitors the server log files for defined attacks on the services and automatically inserts rules into the firewall to block the offending addresses for a defined period. I have this set up to be pretty draconian – on some forms of attack it picks up the first attempt and bans that IP address for a year. There are downsides to this. Firstly, IP addresses are easy to spoof, so you are probably not banning the actual perpetrator. Secondly, some addresses, particularly those coming from proxies, are in a form that fail2ban cannot interpret and therefore cannot ban. Thirdly, fail2ban doesn’t do IPv6; at the moment that isn’t a big problem, as I see very few connections from outside the network using IPv6, and those that do appear are easily harvested from the logs. Even so, fail2ban picks up over 70% of attempts to scan or break into the system. For persistent attacks that are getting through, I scan the logs and insert the bans into iptables manually.
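By way of illustration, the draconian settings live in fail2ban’s jail configuration. A sketch of a jail for ssh, assuming the standard Ubuntu layout (the values are examples rather than my precise config):

    # /etc/fail2ban/jail.local – local overrides of jail.conf
    [ssh]
    enabled  = true
    port     = ssh
    filter   = sshd
    logpath  = /var/log/auth.log
    maxretry = 1           # ban on the very first failed attempt
    bantime  = 31536000    # one year, in seconds

A manual ban is then just a one-line insertion at the top of the INPUT chain, e.g. sudo iptables -I INPUT -s 203.0.113.45 -j DROP (that address is a documentation example, not a real offender).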
The next layer of security is the webserver itself. Like most websites this one runs on apache, with its behaviour controlled through directives in configuration files. While I have been using apache for some years, I have largely relied on the default configuration supplied with Ubuntu Server. Apache security is a whole (intimidating) subject area in itself; however, as this site is fairly simple I didn’t need to get deep into the various issues. Whilst researching this I came across Jeff Starr’s excellent website http://perishablepress.com/ which contains a wealth of information, and he shares all his code. On the strength of it I purchased his book .htaccess made easy, which is really a compilation of his website articles plus access to a forum and other resources. For me, it is well worthwhile to have all this content wrapped up in one source on the desktop. A couple of hours with the book and I felt I could make some real progress. I haven’t finished it yet; in particular, the sections on writing pages to monitor and report on the log files look really useful. The techniques described in the book have cut down considerably on the probes and attacks on the website.
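To give a flavour of the sort of thing involved – this is a generic sketch in the spirit of those techniques, not code taken from the book – a few lines of .htaccess go a long way, using the Apache 2.2-style access directives that were current at the time:

    # no directory listings, no version string on error pages
    Options -Indexes
    ServerSignature Off

    # deny direct requests for .htaccess/.htpasswd files
    <FilesMatch "^\.ht">
        Order allow,deny
        Deny from all
    </FilesMatch>

    # return 403 for query strings typical of injection probes
    RewriteEngine On
    RewriteCond %{QUERY_STRING} (\.\./|etc/passwd|boot\.ini) [NC]
    RewriteRule .* - [F,L]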
The upshot of all this work is, hopefully, a website that is more responsive and easier to get around. The site is fairly image-intensive and runs on a 1Mb uplink, so it is never going to be lightning fast, but traffic is low. I guess if traffic increased I would have to look at hosting it elsewhere. Telecoms in the area where I live are currently being upgraded, and sometime in the next 12 months I should have access to a 10Mb uplink, so maybe I’ll always be able to host the site here.