Using JavaScript to Identify Whether a Server Exists

Recently, for reasons I’m sure I’ll write about in the
future, I needed a way to use JavaScript to test whether either of two
web locations is accessible – my home intranet (which would mean the user is on
my network), or the corporate intranet of the company I work for (which
would mean the user is on my organization’s network). The page doing this test
is on the public web.

My solution was simple. Since neither
resource is publicly accessible, I put a small JavaScript file on each, then
use jQuery and AJAX to try to fetch it. If
that request succeeds, I know the user has access to whichever intranet site served
it, and my page can react accordingly.
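
For what it’s worth, the check itself is only a few lines of jQuery – something along these lines, where the URL and the checkSucceeded() callback are placeholders rather than my actual code:

    // Minimal sketch of the probe, assuming jQuery is already loaded.
    $.ajax({
        url: "http://intranet.example.com/check.js", // placeholder URL
        dataType: "script",
        cache: true,
        success: function () {
            // The file came back, so this intranet is reachable.
            checkSucceeded("home");
        },
        error: function () {
            // Not reachable; nothing to do.
        }
    });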

If neither request succeeds I don’t have to do
anything, and the user doesn’t see any errors unless they choose to take a look
in the browser console.

This all worked wonderfully until I enabled SSL on the page
that needs to run these tests, then it immediately fell apart.

Both requests fail, because a page served over HTTPS is
blocked from asynchronously fetching content over an insecure connection. That
makes sense, but it really throws a spanner into the works for me: neither my home
intranet nor my corporate intranet is available outside the confines of its safe
network, so neither supports HTTPS.

My first attempt at getting around this was to simply change
the URL prefix for each from http:// to https:// and see what happened. Neither
site supports that protocol, but is the error that comes back different for a
site which exists but can’t respond, vs. a site which doesn’t exist? It appears
so!

Sadly, my joy at having solved the problem was extremely
short-lived. The browser can tell the difference and reports as much in the
console, but JavaScript doesn’t have access to the error reported there.
As far as my code was concerned, both scenarios were still identical:
an HTTP response code of 0 and a worryingly generic status description of “error.”

We are getting closer to the solution I landed on, however.
The next thing I tried was specifying the port in the URL. I used the https://
prefix to avoid the “mixed content” error, but appended :80 after the hostname
to specify a port that the server was actually listening on.

This was what I was looking for. Neither server can respond
to an HTTPS request on port 80, but the server that doesn’t exist
returns an error immediately (with a status code of 0 and the generic “error”
as the descriptive text), while the server that is accessible simply doesn’t
respond. Eventually that request times out, with a status code of 0 but a status
description, crucially, of “timeout.”

From that, I built my imperfect but workable
solution. I fire a request off to each address, both of which are going to
fail. One fails immediately, which indicates the server doesn’t exist; the
other times out (which I can check for in my JavaScript), indicating that the
server exists, and I can react accordingly.
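
Pulled together, the detection ends up looking roughly like this (hostnames and callbacks are placeholders, and my real code is structured a little differently):

    // Request https://host:80/… and use the failure mode to tell the two cases apart:
    // an immediate "error" means the server doesn't exist; a "timeout" means it does.
    function probeServer(host, onExists, onMissing) {
        $.ajax({
            url: "https://" + host + ":80/check.js", // placeholder path
            timeout: 5000,
            error: function (jqXHR, textStatus) {
                if (textStatus === "timeout") {
                    onExists();  // the server accepted the connection but never answered
                } else {
                    onMissing(); // the request failed straight away
                }
            }
        });
    }

    probeServer("intranet.example.com", function () {
        // The user is on my home network; react accordingly.
    }, function () {
        // No access; do nothing.
    });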

It’s not a perfect solution. I set the timeout limit in my
code to five seconds, which means a “successful” result can’t possibly come
back in less time than that. I’d like to reduce that time, but when I
originally had it set at 2.5 seconds I was occasionally getting a
false-positive on my corporate network caused by, y’know, an actual timeout
from a request that took longer than that to return in an error state.

Nevertheless, if you have a use case like mine and you need
to test whether a server exists from the client’s perspective (i.e. the result
of doing the check server-side is irrelevant), I know of no other way. As for
me, I’m still on the lookout for a more elegant design. I’m next going to try
to figure out a reliable way to identify whether the user is connected to my home
or corporate network based on their IP address. That way I can do a quick
server-side check and return an immediate result.

It’s good to have this to fall back on, though, and for now
at least it appears to be working.

New Code Projects: Backblaze B2 Version Cleaner & VBA SharePoint List Library

It’s been a while since I’ve posted code of any description, but I’ve been working on a couple of things recently that I’m going to make publicly available on my GitLab page (and my mirror repository at code.jnf.me).

Backblaze B2 Version Cleaner

I wrote last week about transitioning my cloud backup to Backblaze’s B2 service, and I also mentioned a feature of it that’s nice but also slightly problematic to me: it keeps an unlimited version history of all files.

That’s good, because it gives me the ability to go back in time should I ever need to, but over time the size of this version history will add up – and I’m paying for that storage.

So, I’ve written a script that will remove old versions once a newer version of the same file has reached a certain (configurable) “safe age.”

For my purposes I use 30 days, so a month after I’ve overwritten or deleted a file the old version is discarded. If I haven’t seen fit to roll back the clock before then my chance is gone.
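
The core rule is simple enough to sketch. This isn’t the script itself, just the selection logic expressed in JavaScript, with a made-up shape for the version records:

    // Given every version of one file, sorted newest first, return the versions that
    // are safe to delete: those with a newer version already older than SAFE_AGE_DAYS.
    var SAFE_AGE_DAYS = 30;

    function versionsToDelete(versions, now) {
        // versions: [{ fileId: "...", uploadTimestamp: <ms since epoch> }, ...]
        var cutoff = (now || Date.now()) - SAFE_AGE_DAYS * 24 * 60 * 60 * 1000;
        var deletable = [];
        for (var i = 1; i < versions.length; i++) {
            // versions[i - 1] is the oldest version that is still newer than versions[i].
            if (versions[i - 1].uploadTimestamp <= cutoff) {
                deletable.push(versions[i]);
            }
        }
        return deletable;
    }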

Get the code here!

VBA SharePoint List Library

This one I created for work. Getting data from a SharePoint list into Excel is easy, but I needed to write Excel data to a list. I assumed there’d be a VBA function that did this for me, but as it turns out I was mistaken – so I wrote one!

At the time of writing this is in “proof of concept” stage. It works, but it’s too limited for primetime (it can only create new list items, not update existing ones, and each new item can only have a single field).

Out of necessity I’ll be developing this one pretty quickly though, so check back regularly! Once it’s more complete I’ll be opening it up to community contributions.

I have no plans to add functions that read from SharePoint to this library, but once I have the basic framework down that wouldn’t be too hard to add if you’re so inclined. Just make sure you contribute back!

Get the code here!

Raspberry Pi Whole Home Audio – The Conclusion?

Welcome to what is possibly the concluding post in my Raspberry Pi Whole Home
Audio Project series… or possibly not.

At the start of this journey my plan was to install mopidy on one of my
Raspberry Pis and use PulseAudio
to stream the output to the others. Along the way I ran into some challenges
stemming from buying the cheapest peripherals I could (and subsequently needing
to upgrade the WiFi adapters and power cables I first bought to better ones),
and my vision evolved as things progressed.

Instead of using mopidy, I switched to installing Kodi on each of the Pis, thanks to the OpenELEC Linux distribution that’s available for
several types of hardware, the Pi included.


Kodi, as a full-blown media centre system, might seem like an
odd choice for a headless device (i.e. something with no attached
display), but it’s the right choice for me for a few reasons.

  • I already have it installed on a couple of PCs
    in the house, attached to the TVs in the living room and the bedroom.
  • I already have a remote
    control app
    for it on my phone.
  • There are plugins for a bunch of stuff, such as this one for my favourite music streaming service. Well-written
    plugins integrate perfectly with the system and with the remote control app.
  • It has built-in support for acting as an AirPlay
    receiver.

For me, these things combine to provide the best of
both worlds. If I just want to play music from my library or from an internet
streaming service on one set of speakers, I fire up the remote app and
target the particular device I want to output from.

If I want to play the same thing on several (or all) of the
devices at the same time, I fire up TuneBlade
on my laptop and any sound that would usually come out of its speakers gets
redirected to all the AirPlay receivers.


When it works, it’s glorious. Having the same music playing
in sync on all the speakers in the apartment is awesome.

The problem is that it doesn’t always work. TuneBlade
includes a setting that lets you choose how much of a buffer you want. If you set
it too high the devices won’t synchronize, because it takes a slightly
different amount of time to fill the buffer on each of them. I have it set to
zero, which works amazingly well most of the time but leaves me especially
prone to blips in network connectivity and bandwidth. When those occur, things
get out of sync (which sounds terrible, because each set of speakers is not all
that far from its neighbours), and it can’t seem to recover automatically –
I have to manually disconnect and reconnect the affected player to get it back
in sync with its peers.

The bottom line then is that my setup is good, but not
perfect. It’s no Sonos.

The search for a perfect system will likely continue, but
for the time being I’m pretty content. I spent less than $100, and I have a
setup that would have cost me $5,000 from Sonos.

The Golden Ratio: Design’s Biggest Myth

The other day I watched a Criminal Minds episode where the BAU rescued some potential victims of a serial killer mathematician by using the golden ratio and the related Fibonacci sequence (or rather, by identifying and understanding the killer’s use of them).

It was an interesting episode. When I decided I wanted to read a little more about the golden ratio I found the article linked above, and that was an interesting read too.

I’ve used the golden ratio in design (indeed, if you’re reading this by visiting my site on a large-screened device, the proportions of the left and right columns match the golden ratio of roughly 1 : 1.618).

Is it more aesthetically pleasing than different proportions would be? That’s the problem with things like this that are said to affect us at a subconscious level: my conscious mind doesn’t know.


WebDAV Woes with Nginx, Sabre/Dav

I’m in the process of moving my hosting to a new server,
because I wanted one that offers me more flexibility, and the ability to grow
the server and add resources to it during spikes in demand. I’ve chosen to go
with Vultr (I recorded
a screencast
about six weeks ago showing how easy it is to set up a new
server on their platform). I’ve also moved some non-essential hosting duties to
another provider altogether, CloudAtCost.

Anyway, this is not really my point.

One of the things on the server I’m going to be decommissioning
is a private WebDAV store. I don’t use it for much, just moving the occasional
file between computers and “publishing” my work Outlook calendar so that I can subsequently
synchronize it back to my Google calendar and get notifications
on my wrist
. It’s the WebDAV server that I’ve been setting up this week.

Most of the stuff I’m moving to new servers is being
moved as-is: this is not an exercise in updating things, it’s about making sure
I’m done with the old server by the time my lease on it expires. There were
some things about the WebDAV share that I really wanted to update, though, so I took
the opportunity.

The main thing I wanted to achieve was to use my Windows
domain username and password on the site. Most of my password-protected web
tools are already set up that way, but the WebDAV share was lagging behind.
This posed another problem, because it means using “basic”
authentication instead of the “digest” authentication I previously had set up.
Windows’ built-in WebDAV client doesn’t allow
basic authentication over unencrypted connections (because that would mean sending the
password in the clear), so I had an SSL certificate issued. Then I
found out that the Windows WebDAV client doesn’t support Server Name
Indication, which meant some additional reconfiguration, and since I
was doing that I figured I may as well take the opportunity to update to the
latest version of sabre/dav, the
PHP-based WebDAV server I use (I find it much easier to set up than the
WebDAV functionality built into web server software, which I’ve never been
able to get working no matter which server software I’m using).

I set all this up this week, tested it out by adding
it as a network location
on my personal and work laptops, and, once I was
satisfied it was all working well, pointed the domain name at the new server
and deleted the files from the old one.

Then I fired up Outlook, and hit the button to publish my
calendar.

It didn’t work.

It ended up creating a file with the right name, but a size
of zero bytes. A quick Google search revealed there could be many reasons for this, and since I’d
made the rookie mistake of changing everything at once
I really didn’t know where to start – not to mention that by this time I’d
deleted the original files, so I couldn’t go backward. I tried everything,
with no success. I spent a good chunk of my Tuesday troubleshooting.

All along I’d been convinced that the issue was with sabre/dav.
After all, all the other server functionality was working, so what other
explanation could there be for the one piece sabre/dav was responsible
for being non-functional?

After a few hours, though, I was pretty sure I had it set
up correctly, and I was convinced I’d found a bug in either sabre/dav or nginx. I checked the nginx logs.

2015/06/23 16:24:41 [error] 18736#0: *33 client intended to send too large body: 1945486 bytes, client: 75.159.xxx.xxx, server: xxxxxx.jnf.me, request: "PUT /Calendars/Williams_Jason_Calendar.ics HTTP/1.1", host: "xxxxxx.jnf.me"

D’oh.

All the files I’d tested the share with were very small, but
my published calendar, with 30 days of history and 60 days of future events, was
1.85 MB. The server was configured to accept uploads with a maximum size of 1 MB.

I added a single line to my nginx server configuration:

client_max_body_size 100m;

Done! It’s so obvious when you know how.
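
For anyone hitting the same wall, the directive just needs to live inside the relevant server block (or in the http block to apply it server-wide) – something like this, with placeholder names rather than my actual config:

    server {
        listen 443 ssl;
        server_name dav.example.com;   # placeholder

        # Accept uploads of up to 100 MB (nginx's default limit is 1 MB)
        client_max_body_size 100m;

        # ... the rest of the WebDAV site configuration ...
    }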

Raspberry Pi Whole Home Audio Updates

It’s been a long time since I’ve written about my Raspberry Pi Whole Home Audio Project.

Simply put, that’s because I’ve hit a bit of a wall, and I’m especially busy with work right now so I haven’t been able to find the time to work my way around it.

The problem is that the USB WiFi adapters that I bought (for about $5 each) don’t perform well. They have signal strength issues, and while they do work and maintain a network connection, the poor signal strength means the connection isn’t fast enough to stream audio. There are plenty of other people out there having the same problem. You get what you pay for, I guess, and I need to buy replacement adapters.

I’m also considering a change in direction. My original plan was to install mopidy on one of the Pis and use PulseAudio to stream the output to the others.

I’m considering instead installing TuneBlade on one of my Windows PCs. TuneBlade takes all the audio output from that computer and streams it using Apple’s AirPlay protocol. I’d then install ShairPort on all the Pis to turn them into AirPlay receivers.

What do you guys think?

Just a couple of days ago I wrote a little bit about how cloud servers are such a commodity item now, easily created and destroyed.

Today I wanted a server to test out a new tool, but I didn’t want to risk any impact to my existing production servers. So I created a new one on Vultr. The time from starting to having a running server was just over a minute, and I recorded a screencast.

When I was done testing a couple of hours later I destroyed the server. Total cost to me for this exercise was about $0.02, or it would have been were it not for the fact that Vultr gave me a $5 account credit when I signed up.

It’s hardly riveting viewing, but it’s nevertheless amazing in its own way.

Server Commoditization

I’ve had a personal website of one description or another
for a long time now. For much of that time, the site was hosted by renting
space on someone else’s large server – so-called “shared hosting.”

The theoretical problem with this model was that the
server’s resources were shared between all its users, and if one user chewed
through a whole lot of them, that left fewer available for everyone else.
I’m not sure I ever actually experienced this (although I’m sure it really was
an issue for web hosting companies to contend with), but the problem I did come
across was that, to protect against this kind of thing, hosts often put very
restrictive policies and configuration options in place. Related to this
is the fact that server configuration applies to everyone with space on
that server, and isn’t for individual users to control – a problem if you
want to do anything that deviates even slightly from the common case.

The alternative to shared web hosting would have been to rent
an entire server. This was – and still is – an expensive undertaking. It also
was – and still is – far more power than I need in order to host my website.
Sure, it’s possible to build a lower-powered (cheaper) server, but the effort and
cost of putting it in a datacentre to open it up to the wider world mean that it’s
probably not a worthwhile exercise with low-cost hardware.

What seems to me like not very long ago, virtualization
technology took off and created a market for virtual private servers (VPSs).
This allowed server owners to divide their hardware up between users, but in
contrast to shared hosting each user gets something that’s functionally
indistinguishable from a real hardware computer. They can configure it however
they wish, and it comes with a guaranteed chunk of resources: heavy usage of
one of the virtual machines hosted on the server does not negatively impact the
performance of any of the others.

This is the model under which my website is currently
hosted. I’ve chosen a low-powered VPS because that’s all I need, but as my site
has started to see more traffic it occasionally gets spikes that tax its limited
memory and processing resources. I use CloudFlare as a service to balance this
out, mitigate threats, easily implement end-user caching policies and generally
improve speeds (particularly for users who are geographically far away
from the server), but once my server’s resources are maxed out there’s nothing I can
do about it: my host has divided the server up into VPSs of a predefined size,
and doesn’t allow me to grow or shrink my server along with my needs.

The new paradigm is an evolution of this. Instead of
dividing each bare-metal server up into predefined VPS chunks, each server is a
pool of resources within which VPSs of various sizes are automatically
provisioned according to customer requirements. Behind the scenes, technology
has grown to make this easier, especially when you scale the story up to more
than one bare-metal server. A pool of physical servers can also pool resources.
If a VPS hosted on one physical server needs to grow beyond the remaining
available resources of its host, it can be invisibly moved to another host
while it’s still running and then its resources expanded.

This new paradigm is the one I plan to move to. Led by the
likes of Amazon and Google and now followed in the marketplace
by lower-cost providers like DigitalOcean
and Vultr (likely to be my
provider of choice), servers have really become commodity items that can be
created and destroyed at will. You used to rent servers or hosting by the month or
year; now it’s by the minute or hour. It’s common for hosting companies to
provide an API that lets you automate the processes involved – if my server
detects that it’s seeing a lot of traffic and running low on resources, it
could (with the right script in place) autonomously decide to grow itself,
or maybe spin up a sibling to carry half the load. When things settle down it
can shrink itself back down, or destroy any additional servers it created.
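
To make that concrete, the kind of script I have in mind would be only a handful of lines – here’s a sketch in Node.js, where the provider API endpoint, parameters and thresholds are entirely hypothetical, not any real host’s API:

    // Illustrative only: checks local load and asks a made-up provider API for help.
    var os = require("os");
    var https = require("https");

    function checkAndScale() {
        var loadPerCpu = os.loadavg()[0] / os.cpus().length; // 1-minute load per CPU

        if (loadPerCpu > 0.8) {
            // Ask the (hypothetical) provider API to spin up a sibling server.
            var req = https.request({
                hostname: "api.example-host.com", // placeholder, not a real provider
                path: "/v1/servers",
                method: "POST",
                headers: {
                    "Authorization": "Bearer " + process.env.HOST_API_KEY,
                    "Content-Type": "application/json"
                }
            });
            req.end(JSON.stringify({ plan: "small", region: "same-as-me" }));
        }
    }

    setInterval(checkAndScale, 5 * 60 * 1000); // check every five minutes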

What a wonderful world we live in!

TorrentApp

I have a small app on my computer that I wrote myself. It’s
small and simple, and it’s the default application for opening BitTorrent files on our computers. When I
download one of these files the app takes it and moves it to a folder on
the server. This folder is watched by my torrent
client of choice, which runs on the server and immediately starts the
download when it sees the file.

The app then pops up a notification asking the user if
they want to be taken to the Deluge web interface to see the download
progress.
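
The real thing is a Visual Basic app (more on that in a moment), but the whole workflow is simple enough to sketch – here it is as a rough JavaScript/Node.js equivalent, with every path and URL a placeholder:

    // Sketch only: copy the downloaded .torrent into the watched folder, then open
    // the Deluge web UI (the real app asks first via a notification).
    var fs = require("fs");
    var path = require("path");
    var execFile = require("child_process").execFile;

    var WATCH_FOLDER = "\\\\server\\torrents\\watch"; // Deluge watches this folder
    var DELUGE_WEB_UI = "http://server:8112";

    // The OS passes the downloaded .torrent file as the first argument.
    var torrentFile = process.argv[2];
    var target = path.join(WATCH_FOLDER, path.basename(torrentFile));

    fs.copyFile(torrentFile, target, function (err) {
        if (err) throw err;
        fs.unlink(torrentFile, function () {
            // Open the Deluge web UI (Windows) so the user can watch the download.
            execFile("cmd", ["/c", "start", "", DELUGE_WEB_UI]);
        });
    });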

I rewrote the app about a year ago. The original version was
written in RealStudio, but the
location of the watched folder and the URL for Deluge’s web interface were
hard-coded in: a reasonable design decision given it was just a small app for
my use only, but still a poor one – when a change I made to my network
configuration required me to adjust those values, I no longer had a copy of
RealStudio available.

I wrote a new version in Visual
Basic 2010 Express
, and this time I did a little extra work to take the
configuration variables out of the source code and put them into an .ini file.

Why am I telling you all this?

Well, not that I think you’d need the app, but I have today
made the source code
(and the compiled executable, for good measure) publicly available on my brand
new GitLab account!

I’ve been using Git for a while (and I’ve written about it once
or twice
before), but I really haven’t been taking advantage of its feature set.

I’m working on something right now that’s big and complex,
and I value having version control and branches to work with. I already have
Git installed on my servers (both my home server and my public web server), but
I’ve downloaded a Windows Git client
to complement that setup and opened a GitLab account to use as an external
repository and a means to eventually make a finished product public.

Why have I chosen GitLab over the more ubiquitous GitHub? GitHub makes you pay to host a private
repository, and I want somewhere where I can both host code that’s a work in
progress (and not ready for public distribution) and distribute completed code
that’s ready for download, public review and maybe even improvement by the
wider community. GitLab gives me free private repositories for
partially-completed things that I can later make public once I’m ready to.

I’ve already created a couple of public repositories, mostly
to test the platform out, and TorrentApp is one of them.

So use it if it’s a tool that might be useful to you,
improve upon it if you have the expertise, and send me a merge request so I
can incorporate your changes into the code!