
How to protect a web server from hurricanes

We could all use a little levity in the computing world (especially if you lived in the path of Hurricane Harvey).

Aurich / Getty

HOUSTON – I had enough to worry about when Hurricane Harvey hit the Gulf Coast of Texas on the night of August 25 and delivered a Category 4 punch to the nearby town of Rockport. But I was simultaneously facing a different kind of storm: an unexpected surge of traffic hitting the Space City Weather Web server. It was the first of what would turn into several very long and hectic nights.

Space City Weather is a Houston-area weather blog and forecast site run by my colleague Eric Berger and his pal Matt Lanza (with contributing author Braniff Davis). A few months before Hurricane Harvey made a mess of us all over Texas – and after watching Eric and Matt struggle with commercial Web hosting companies during previous busy weather events – I offered to host SCW on my own private dedicated server (not the one in my closet; a real server in a real data center). After all, I thought, the box was sitting there heavily underutilized, running nothing but my own silly stuff. I'd had some experience self-hosting WordPress sites before, and my usual hosting strategy would handle SCW's expected traffic just fine. It would be fun!

But on this Friday night, with Harvey battering Rockport and forecasters predicting disaster for hundreds of miles of Texas coastline, SCW's 24-hour pageview counter had blown past the 800,000 mark and kept going. The number of unique visitors was north of 400,000 and climbing. The server was pushing out between 10 and 20 pages per second. The traffic storm had arrived.

It was the first time over the next few days – but not the last – that I stared at the rapidly climbing numbers and wondered whether I had forgotten something or messed something up. The heavy realization that (literally) millions of people were relying on Eric and Matt's forecasts to make life-and-death decisions – and that if the site went down, it would be my fault – was making me sick. Was the site prepared? Were my configuration choices (many made months or years earlier for very different needs) the right ones? Did the server have enough RAM? Would the load be more than it could handle?

Time-lapse video showing Buffalo Bayou, one of downtown Houston’s main flood control bayous, rising precipitously during the storm. The night of August 27 was particularly stressful for most Houstonians.

The wild blue yonder

On a normal day with calm skies, Space City Weather sees maybe 5,000 visitors who generate perhaps 10,000 page views, max. If there's a big thunderstorm on the horizon, the site can see double that as people check in for weather updates.

Hurricane Harvey drove traffic to around 100 times the normal rate for several days on end, peaking at 1.1 million page views on Sunday, August 27 (and that's not a cumulative figure – that's just for that one day). Every day between August 24 and 29, the site saw at least 300,000 views, and many days saw far more. Between Eric's first Harvey-related article on August 22 and the point a week later when the whole mess was finally over, Space City Weather served 4.3 million pages to 2.1 million unique visitors – and the only downtime was when I called the hosting company on August 25 to have the server re-cabled to a gigabit switch port for extra burst capacity.

Part of the site's resilience comes from its simplicity: SCW is a vanilla self-hosted WordPress site without much in the way of customization. There's nothing particularly special about the configuration – the only pieces of custom code are a few lines of PHP I copied from the WordPress Codex to display the site's logo on the WordPress login screen, and another bit that strips query strings from some static resources to improve the cache hit rate. The site runs an off-the-shelf theme. Including those two bits of custom code, the site uses a total of eight plugins, rather than the dozens that many WordPress sites end up loaded down with.
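For the curious, those two custom bits look something like the following. This is a minimal sketch assembled from the stock WordPress Codex examples rather than SCW's exact code; the scw_* function names and the logo path are hypothetical, but the hooks and filters are standard WordPress ones.

```php
<?php
// Sketch of the two custom snippets described above, assuming they live in a
// theme's functions.php. Function names and the logo path are placeholders.

// Show the site's logo on the wp-login.php screen instead of the WordPress logo.
function scw_custom_login_logo() {
    echo '<style type="text/css">
        #login h1 a {
            background-image: url(' . esc_url( get_stylesheet_directory_uri() . '/images/logo.png' ) . ');
            background-size: contain;
            width: 320px;
        }
    </style>';
}
add_action( 'login_enqueue_scripts', 'scw_custom_login_logo' );

// Strip the "?ver=x.y.z" query string from enqueued scripts and styles so that
// downstream caches see a single, cacheable URL for each static asset.
function scw_strip_version_query_string( $src ) {
    return remove_query_arg( 'ver', $src );
}
add_filter( 'script_loader_src', 'scw_strip_version_query_string', 15 );
add_filter( 'style_loader_src', 'scw_strip_version_query_string', 15 );
```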

I would love to play the shameless rock-star sysadmin and claim I tailored my hosting specifically around a prescient vision of SCW's worst-case traffic nightmare, but that would be a lie. Much of what kept SCW up was a basic philosophy of caching whatever can possibly be cached, in case traffic shows up unexpectedly. The SCW team typically doesn't post more than twice a day, and the site uses WordPress' built-in commenting system, so it is well suited to take advantage of as many layers of caching as possible – and my setup tries to cache everywhere.

Any sysadmin who has spent time deploying Web applications in a non-trivial environment can tell you that getting caching configured correctly is a pain; you can waste a lot of time chasing down edge cases and managing exclusions. Layering multiple tiers of cache into your stack brings extra headaches and configuration changes whenever you stand up a new site, and it can greatly complicate troubleshooting a deployment.

But in this case, that’s what saved the site from drowning in the tidal wave of traffic.

SCW statistics via the WordPress.com reporting tool, with the busiest day highlighted. This is what a traffic storm looks like.

How to throw a hurricane (server) party

From a hosting perspective, engineering a site to handle SCW's ordinary daily traffic load is easy – you can do it with a DigitalOcean droplet or a free Amazon AWS micro instance. But building in the capacity to serve a million page views a day takes a different mindset: you can't just throw that much traffic at an AWS micro instance and expect it to stay online.

Fortunately, when I offered to take over hosting SCW in July, I didn't have to worry much about the basics. A few years back, I wrote a series for Ars called Web Served, which walked readers through some basic (and not-so-basic) Web server setup tasks. The guide is woefully out of date, and I intend to refresh it; to facilitate that update and to give myself a big sandbox to play in, I leased a dedicated server from Liquid Web in Michigan at the end of 2016 and moved my little closet data center onto it.

The box happily served my personal domain and a few other things – namely The Chronicles of George and Fangs, an Elite Dangerous webcomic – but it was largely underutilized and mostly idle. With the exception of the occasional Reddit hug of death when The Chronicles of George got mentioned in the Tales From Tech Support subreddit, traffic rarely exceeded a few hundred page views per day on any of the sites.

Still, as someone who suffers from an overwhelming urge to tinker, I've spent the past few years messing around with and blogging about my adventures in Web hosting. To satisfy my own curiosity about how things would hold up, I had managed to whip up a reasonably Reddit-proof Web stack – again, not through requirements-based planning, but simply through the desire to have neat things to play with. By luck or by accident, that stack turned out to be just about right for absorbing hurricane traffic.

As Hurricane Harvey started making life difficult, here’s what Space City Weather was running on, sharing server space with my other sites:

The software stack behind Space City Weather (as well as everything else on the BigDinosaur.org Web server).

Lee Hutchinson

Every layer here played a role in keeping the site up. If you'd rather just have the "tl;dr" version of how to handle a lot of traffic on a single server, here it is:

  • Dedicated hardware server – no shared hosting
  • Gigabit uplink to handle burst traffic
  • Varnish cache with lots of RAM
  • WordPress installation with a lightweight plugin load
  • Out-of-the-box statistics via WordPress/Jetpack
  • Cloudflare as CDN
  • No heavy external dependencies (i.e., ad networks)

Now let’s get down to the details.

