WebDAV Woes with Nginx, Sabre/Dav

I’m in the process of moving my hosting to a new server,
because I wanted one that offers me more flexibility, and the ability to grow
the server and add resources to it during spikes in demand. I’ve chosen to go
with Vultr (I recorded a screencast about six weeks ago showing how easy it is
to set up a new server on their platform). I’ve also moved some non-essential
hosting duties to another provider altogether, CloudAtCost.

Anyway, this is not really my point.

One of the things on the server I’m going to be decommissioning
is a private WebDAV store. I don’t use it for much, just moving the occasional
file between computers and “publishing” my work Outlook calendar so that I can
subsequently synchronize it back to my Google calendar and get notifications on
my wrist. It’s the WebDAV server that I’ve been setting up this week.

Most of the stuff that I’m moving to new servers is being
moved as-is: this is not an exercise in updating things, it’s about making sure
I’m done with the old server by the time my lease on it expires. But there were
some things about the WebDAV share that I really wanted to update, so I took
the opportunity.

The main thing I wanted to achieve was to use my Windows
domain username and password
on the site. Most of my password-protected web
tools are already set up that way, but the WebDAV share was lagging behind.
Since this means I have to use “basic”
authentication instead of the “digest” authentication I’d previously set up,
this posed another problem. Windows’ built-in WebDAV client doesn’t allow
basic authentication over unencrypted connections (because that means the
password is sent in the clear), so I had an SSL certificate issued. Then I
found out that the Windows WebDAV client doesn’t support Server Name
Indication (SNI), which meant some additional reconfiguration. Since I
was doing that anyway, I figured I may as well take the opportunity to update
to the latest version of sabre/dav, which is the
PHP-based WebDAV server I use (I find it much easier to set this up than to use
the built-in WebDAV functionality on web server software, which I’ve never been
able to get working no matter which server software I’m using).
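To give a rough idea of the shape of this setup, a minimal nginx server block might look like the sketch below. The hostname, certificate paths, htpasswd file, and sabre/dav entry point (server.php under /var/www/dav) are all assumptions here and would need to match the actual installation:

```nginx
server {
    listen 443 ssl;
    server_name dav.example.com;  # hypothetical hostname

    # Certificate paths are placeholders
    ssl_certificate     /etc/ssl/certs/dav.example.com.crt;
    ssl_certificate_key /etc/ssl/private/dav.example.com.key;

    # Basic authentication over TLS; the htpasswd file is an assumption
    auth_basic           "WebDAV";
    auth_basic_user_file /etc/nginx/dav.htpasswd;

    # Hand every request to sabre/dav's front controller
    location / {
        rewrite ^ /server.php last;
    }

    location = /server.php {
        include        fastcgi_params;
        fastcgi_param  SCRIPT_FILENAME /var/www/dav/server.php;
        fastcgi_pass   unix:/run/php-fpm.sock;
    }
}
```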

I set all this up this week, tested it out by adding
it as a network location
on my personal and work laptops, and, once I was
satisfied it was all working well, pointed the domain name at the new server
and deleted the files from the old one.

Then I fired up Outlook, and hit the button to publish my calendar.

It didn’t work.

It ended up creating a file with the right name, but a size
of zero bytes. A quick Google search revealed there could be many reasons for
this, and since I’d made the rookie mistake of changing everything at once
I really didn’t know where to start, not to mention that by this time I’d
deleted the original files and so I couldn’t go backward. I tried everything,
with no success. I spent a good chunk of my day on Tuesday troubleshooting.

All along I’d been convinced that the issue was with sabre/dav.
After all, all the other server functionality was working, so what other
explanation could there be for the one bit of it that sabre/dav was responsible
for being non-functional?

After a few hours, though, I was pretty sure that I had it set
up correctly, and I was convinced that I’d found a bug in either sabre/dav or
nginx. I checked the nginx logs.

2015/06/23 16:24:41 [error] 18736#0: *33 client intended to send too large body: 1945486 bytes, client: 75.159.xxx.xxx, server: xxxxxx.jnf.me, request: "PUT /Calendars/Williams_Jason_Calendar.ics HTTP/1.1", host: "xxxxxx.jnf.me"

All the files I’d tested the share with were very small, but
my published calendar, with 30 days of history and 60 days of future events,
was 1.85 MB. The server was configured to accept uploads with a maximum size
of 1 MB (nginx’s default).

I added a single line to my nginx server configuration:

client_max_body_size 100m;

Done! It’s so obvious when you know how.
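For context, client_max_body_size can be set at the http, server, or location level in nginx; placed in the server block it applies to the whole virtual host. A sketch (the hostname is from the log above, everything else elided):

```nginx
server {
    server_name xxxxxx.jnf.me;

    # Allow request bodies up to 100 MB; the default is 1 MB,
    # which is what rejected the 1.85 MB calendar PUT
    client_max_body_size 100m;

    # ... rest of the server configuration ...
}
```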

Just a couple of days ago I wrote a little bit about how cloud servers are such a commodity item now, easily created and destroyed.

Today I wanted a server to test out a new tool, but I didn’t want to risk any impact to my existing production servers, so I created a new one on Vultr. The time from when I started to when I had a running server was just over a minute, and I recorded a screencast.

When I was done testing a couple of hours later I destroyed the server. Total cost to me for this exercise was about $0.02, or it would have been were it not for the fact that Vultr gave me a $5 account credit when I signed up.

It’s hardly riveting viewing, but it’s nevertheless amazing in its own way.

Server Commoditization

I’ve had a personal website of one description or another
for a long time now. For much of that time, the site was hosted by renting
space on someone else’s large server – so-called “shared hosting.”

The theoretical problem with this model was that the
server’s resources were shared between all its users, and if one user chewed
through a whole lot of them then that left fewer available for everyone else.
I’m not sure I ever actually experienced this (although I’m sure it really was
an issue for web hosting companies to contend with), but the problem I did come
across was that to protect against this kind of thing hosts often put policies
and configuration options in place that were very restrictive. Related to this
is the fact that server configuration options apply to everyone with space on
that server, and they’re not for individual users to control – a problem if you
want to do anything that deviates even slightly from the common case.

The alternative to shared webhosting would have been to rent
an entire server. This was – and still is – an expensive undertaking. It also
was – and still is – far more power than I need in order to host my website.
Sure, it’s possible to build a lower-powered (cheaper) server, but the act and
cost of putting it in a datacentre to open it up to the wider world mean that
it’s probably not a worthwhile exercise to do all that with low-cost hardware.

What seems to me like not very long ago, virtualization
technology took off and created a market for virtual private servers (VPSs).
This allowed server owners to divide their hardware up between users, but in
contrast to shared hosting each user gets something that’s functionally
indistinguishable from a real hardware computer. They can configure it however
they wish, and it comes with a guaranteed chunk of resources: heavy usage of
one of the virtual machines hosted on the server does not negatively impact the
performance of any of the others.

This is the model under which my website is currently
hosted. I’ve chosen a low-powered VPS because that’s all I need, but recently,
as my site has started to see more traffic, it occasionally sees spikes that
tax its limited memory and processing resources. I use CloudFlare as a service
to balance this out, mitigate threats, easily implement end-user caching
policies and generally improve speeds (particularly for those users that are
geographically far away from the server), but once my server’s resources are
maxed out there’s nothing I can do about it: my host has divided the server up
into VPSs of a predefined size, and doesn’t allow me to grow or shrink the
server along with my needs.

The new paradigm is an evolution of this. Instead of
dividing each bare-metal server up into predefined VPS chunks, each server is a
pool of resources within which VPSs of various sizes are automatically
provisioned according to customer requirements. Behind the scenes, technology
has grown to make this easier, especially when you scale the story up to more
than one bare-metal server. A pool of physical servers can also pool resources.
If a VPS hosted on one physical server needs to grow beyond the remaining
available resources of its host, it can be invisibly moved to another host
while it’s still running and then its resources expanded.

This new paradigm is the one I plan to move to. Led by the
likes of Amazon and Google and now followed in the marketplace
by lower-cost providers like DigitalOcean
and Vultr (likely to be my
provider of choice), servers have really become commodity items that can be
created and destroyed at will. You used to rent servers/hosting by the month or
year; now it’s by the minute or hour. It’s common for hosting companies to
provide an API that lets you automate the processes involved – if my server
detects that it’s seeing a lot of traffic and is running low on resources it
could – with the right script implemented – autonomously decide to grow itself,
or maybe spin up a sibling to carry half the load. When things settle down it
can shrink itself back down or destroy any additional servers it created.
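The provider-API calls themselves vary, but the decision logic behind that kind of script could be sketched roughly as below. The thresholds, and the idea that you would follow a "grow" decision with a provisioning call, are placeholder assumptions, not any real provider's API:

```python
# Sketch of an autoscaling decision. The thresholds are arbitrary
# assumptions; the grow/shrink actions they trigger (resize the VPS,
# spin up or destroy a sibling) would be provider-specific API calls.

def scaling_decision(cpu_pct: float, mem_pct: float,
                     grow_at: float = 80.0, shrink_at: float = 20.0) -> str:
    """Return 'grow', 'shrink', or 'hold' based on current load."""
    if cpu_pct >= grow_at or mem_pct >= grow_at:
        return "grow"      # e.g. resize upward or spin up a sibling
    if cpu_pct <= shrink_at and mem_pct <= shrink_at:
        return "shrink"    # e.g. resize downward or destroy extras
    return "hold"
```

A cron job or monitoring hook could call this every few minutes with live CPU and memory figures and act on the result.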

What a wonderful world we live in!